From 58bb071825a70a882d5a4159529a5800b18349e0 Mon Sep 17 00:00:00 2001 From: Eric Tune Date: Wed, 15 Oct 2014 08:30:02 -0700 Subject: Move developer documentation to docs/devel/ Fix links. --- collab.md | 37 ++++++++++++ development.md | 179 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++ flaky-tests.md | 52 +++++++++++++++++ releasing.dot | 113 ++++++++++++++++++++++++++++++++++++ releasing.md | 152 ++++++++++++++++++++++++++++++++++++++++++++++++ releasing.png | Bin 0 -> 30693 bytes releasing.svg | 113 ++++++++++++++++++++++++++++++++++++ 7 files changed, 646 insertions(+) create mode 100644 collab.md create mode 100644 development.md create mode 100644 flaky-tests.md create mode 100644 releasing.dot create mode 100644 releasing.md create mode 100644 releasing.png create mode 100644 releasing.svg diff --git a/collab.md b/collab.md new file mode 100644 index 00000000..c4644048 --- /dev/null +++ b/collab.md @@ -0,0 +1,37 @@ +# On Collaborative Development + +Kubernetes is open source, but many of the people working on it do so as their day job. In order to avoid forcing people to be "at work" effectively 24/7, we want to establish some semi-formal protocols around development. Hopefully these rules make things go more smoothly. If you find that this is not the case, please complain loudly. + +## Patches welcome + +First and foremost: as a potential contributor, your changes and ideas are welcome at any hour of the day or night, weekdays, weekends, and holidays. Please do not ever hesitate to ask a question or send a PR. + +## Timezones and calendars + +For the time being, most of the people working on this project are in the US and on Pacific time. Any times mentioned henceforth will refer to this timezone. Any references to "work days" will refer to the US calendar. + +## Code reviews + +All changes must be code reviewed. For non-maintainers this is obvious, since you can't commit anyway. 
But even for maintainers, we want all changes to get at least one review, preferably from someone who knows the areas the change touches. For non-trivial changes we may want two reviewers. The primary reviewer will make this decision and nominate a second reviewer, if needed. Except for trivial changes, PRs should sit for at least 2 hours to allow for wider review. + +Most PRs will find reviewers organically. If a maintainer intends to be the primary reviewer of a PR they should set themselves as the assignee on GitHub and say so in a reply to the PR. Only the primary reviewer of a change should actually do the merge, except in rare cases (e.g. they are unavailable in a reasonable timeframe). + +If a PR has gone 2 work days without an owner emerging, please poke the PR thread and ask for a reviewer to be assigned. + +Except for rare cases, such as trivial changes (e.g. typos, comments) or emergencies (e.g. broken builds), maintainers should not merge their own changes. + +Expect reviewers to request that you avoid [common Go style mistakes](https://code.google.com/p/go-wiki/wiki/CodeReviewComments) in your PRs. + +## Assigned reviews + +Maintainers can assign reviews to other maintainers, when appropriate. The assignee becomes the shepherd for that PR and is responsible for merging the PR once they are satisfied with it or else closing it. The assignee might request reviews from non-maintainers. + +## Merge hours + +Maintainers will do merges between the hours of 7:00 am Monday and 7:00 pm (19:00h) Friday. PRs that arrive over the weekend or on holidays will only be merged if there is a very good reason for it and if the code review requirements have been met. + +There may be discussion and even approvals granted outside of the above hours, but merges will generally be deferred. 
+ +## Holds + +Any maintainer or core contributor who wants to review a PR but does not have time immediately may put a hold on a PR simply by saying so on the PR discussion and offering an ETA measured in single-digit days at most. Any PR that has a hold shall not be merged until the person who requested the hold acks the review, withdraws their hold, or is overruled by a preponderance of maintainers. diff --git a/development.md b/development.md new file mode 100644 index 00000000..f750c611 --- /dev/null +++ b/development.md @@ -0,0 +1,179 @@ +# Development Guide + +# Releases and Official Builds + +Official releases are built in Docker containers. Details are [here](build/README.md). You can do simple builds and development with just a local Docker installation. If you want to build Go locally outside of Docker, please continue below. + +## Go development environment + +Kubernetes is written in the [Go](http://golang.org) programming language. If you haven't set up a Go development environment, please follow [these instructions](http://golang.org/doc/code.html) to install the go tool and set up a GOPATH. Ensure your version of Go is at least 1.3. + +## Put kubernetes into GOPATH + +We highly recommend putting the Kubernetes code into your GOPATH. For example, the following commands will download the Kubernetes code under the current user's GOPATH (assuming there is only one directory in GOPATH): + +``` +$ echo $GOPATH +/home/user/goproj +$ mkdir -p $GOPATH/src/github.com/GoogleCloudPlatform/ +$ cd $GOPATH/src/github.com/GoogleCloudPlatform/ +$ git clone git@github.com:GoogleCloudPlatform/kubernetes.git +``` + +The commands above will not work if there is more than one directory in ``$GOPATH``. + +(Obviously, clone your own fork of Kubernetes if you plan to do development.) + +## godep and dependency management + +Kubernetes uses [godep](https://github.com/tools/godep) to manage dependencies. 
It is not strictly required for building Kubernetes but it is required when managing dependencies under the Godeps/ tree, and is required by a number of the build and test scripts. Please make sure that ``godep`` is installed and in your ``$PATH``. + +### Installing godep +There are many ways to build and host Go binaries. Here is an easy way to get utilities like ```godep``` installed: + +1. Ensure that [mercurial](http://mercurial.selenic.com/wiki/Download) is installed on your system. (Some of godep's dependencies use the Mercurial +source control system.) Use ```apt-get install mercurial``` or ```yum install mercurial``` on Linux, or [brew.sh](http://brew.sh) on OS X, or download +directly from Mercurial. +2. Create a new GOPATH for your tools and install godep: +``` +export GOPATH=$HOME/go-tools +mkdir -p $GOPATH +go get github.com/tools/godep +``` + +3. Add $GOPATH/bin to your path. Typically you'd add this to your ~/.profile: +``` +export GOPATH=$HOME/go-tools +export PATH=$PATH:$GOPATH/bin +``` + +### Using godep +Here is a quick summary of `godep`. `godep` helps manage third-party dependencies by copying known versions into Godeps/_workspace. You can use `godep` in three ways: + +1. Use `godep` to call your `go` commands. For example: `godep go test ./...` +2. Use `godep` to modify your `$GOPATH` so that other tools know where to find the dependencies. Specifically: `export GOPATH=$GOPATH:$(godep path)` +3. Use `godep` to copy the saved versions of packages into your `$GOPATH`. This is done with `godep restore`. + +We recommend using options #1 or #2. + +## Hooks + +Before committing any changes, please link/copy these hooks into your .git +directory. This will keep you from accidentally committing non-gofmt'd go code. 
+ +``` +cd kubernetes +ln -s hooks/prepare-commit-msg .git/hooks/prepare-commit-msg +ln -s hooks/commit-msg .git/hooks/commit-msg +``` + +## Unit tests + +``` +cd kubernetes +hack/test-go.sh +``` + +Alternatively, you could also run: + +``` +cd kubernetes +godep go test ./... +``` + +If you only want to run unit tests in one package, you could run ``godep go test`` under the package directory. For example, the following commands will run all unit tests in package kubelet: + +``` +$ cd kubernetes # step into kubernetes' directory. +$ cd pkg/kubelet +$ godep go test +# some output from unit tests +PASS +ok github.com/GoogleCloudPlatform/kubernetes/pkg/kubelet 0.317s +``` + +## Coverage +``` +cd kubernetes +godep go tool cover -html=target/c.out +``` + +## Integration tests + +You need an etcd somewhere in your PATH. To install etcd, run: + +``` +cd kubernetes +hack/install-etcd.sh +sudo ln -s $(pwd)/third_party/etcd/bin/etcd /usr/bin/etcd +``` + +``` +cd kubernetes +hack/test-integration.sh +``` + +## End-to-End tests + +You can run an end-to-end test which will bring up a master and two minions, perform some tests, and then tear everything down. Make sure you have followed the getting started steps for your chosen cloud platform (which might involve changing the `KUBERNETES_PROVIDER` environment variable to something other than "gce"). +``` +cd kubernetes +hack/e2e-test.sh +``` + +Pressing control-C should result in an orderly shutdown but if something goes wrong and you still have some VMs running you can force a cleanup with the magical incantation: +``` +hack/e2e-test.sh 1 1 1 +``` + +## Testing out flaky tests +[Instructions here](docs/devel/flaky-tests.md) + +## Add/Update dependencies + +Kubernetes uses [godep](https://github.com/tools/godep) to manage dependencies. To add or update a package, please follow the instructions in [godep's documentation](https://github.com/tools/godep). 
+ +To add a new package ``foo/bar``: + +- Make sure the Kubernetes root directory is at $GOPATH/src/github.com/GoogleCloudPlatform/kubernetes +- Run ``godep restore`` to make sure you have all dependencies pulled. +- Download foo/bar into the first directory in GOPATH: ``go get foo/bar``. +- Change code in kubernetes to use ``foo/bar``. +- Run ``godep save ./...`` under kubernetes' root directory. + +To update a package ``foo/bar``: + +- Make sure the Kubernetes root directory is at $GOPATH/src/github.com/GoogleCloudPlatform/kubernetes +- Run ``godep restore`` to make sure you have all dependencies pulled. +- Update the package with ``go get -u foo/bar``. +- Change code in kubernetes accordingly if necessary. +- Run ``godep update foo/bar`` under kubernetes' root directory. + +## Keeping your development fork in sync + +One time after cloning your forked repo: + +``` +git remote add upstream https://github.com/GoogleCloudPlatform/kubernetes.git +``` + +Then each time you want to sync to upstream: + +``` +git fetch upstream +git rebase upstream/master +``` + +## Regenerating the API documentation + +``` +cd kubernetes/api +sudo docker build -t kubernetes/raml2html . +sudo docker run --name="docgen" kubernetes/raml2html +sudo docker cp docgen:/data/kubernetes.html . +``` + +View the API documentation using htmlpreview (works on your fork, too): +``` +http://htmlpreview.github.io/?https://github.com/GoogleCloudPlatform/kubernetes/blob/master/api/kubernetes.html +``` diff --git a/flaky-tests.md b/flaky-tests.md new file mode 100644 index 00000000..d2cc8fad --- /dev/null +++ b/flaky-tests.md @@ -0,0 +1,52 @@ +# Hunting flaky tests in Kubernetes +Sometimes unit tests are flaky. This means that due to (usually) race conditions, they will occasionally fail, even though most of the time they pass. + +We have a goal of 99.9% flake-free tests. This means that there is no more than one flake in one thousand runs of a test. 
+ +Running a test 1000 times on your own machine can be tedious and time-consuming. Fortunately, there is a better way to achieve this using Kubernetes. + +_Note: these instructions are mildly hacky for now; as we get run-once semantics and logging, they will get better_ + +There is a testing image ```brendanburns/flake``` on Docker Hub. We will use this image to test our fix. + +Create a replication controller with the following config: +```yaml +id: flakeController +desiredState: + replicas: 24 + replicaSelector: + name: flake + podTemplate: + desiredState: + manifest: + version: v1beta1 + id: "" + volumes: [] + containers: + - name: flake + image: brendanburns/flake + env: + - name: TEST_PACKAGE + value: pkg/tools + - name: REPO_SPEC + value: https://github.com/GoogleCloudPlatform/kubernetes + restartpolicy: {} + labels: + name: flake +labels: + name: flake +``` + +```./cluster/kubecfg.sh -c controller.yaml create replicaControllers``` + +This will spin up 24 instances of the test (matching ``replicas`` above). They will run to completion and exit, the kubelet will restart them, and eventually you will have sufficient +runs for your purposes and can stop the replication controller: + +```sh +./cluster/kubecfg.sh stop flakeController +./cluster/kubecfg.sh rm flakeController +``` + +Now examine the machines with ```docker ps -a``` and look for tasks that exited with non-zero exit codes (ignore those that exited -1, since that's what happens when you stop the replication controller). + +Happy flake hunting! diff --git a/releasing.dot b/releasing.dot new file mode 100644 index 00000000..fe8124c3 --- /dev/null +++ b/releasing.dot @@ -0,0 +1,113 @@ +// Build it with: +// $ dot -Tsvg releasing.dot >releasing.svg + +digraph tagged_release { + size = "5,5" + // Arrows go up. + rankdir = BT + subgraph left { + // Group the left nodes together. + ci012abc -> pr101 -> ci345cde -> pr102 + style = invis + } + subgraph right { + // Group the right nodes together. 
+ version_commit -> dev_commit + style = invis + } + { // Align the version commit and the info about it. + rank = same + // Align them with pr101 + pr101 + version_commit + // release_info shows the change in the commit. + release_info + } + { // Align the dev commit and the info about it. + rank = same + // Align them with 345cde + ci345cde + dev_commit + dev_info + } + // Join the nodes from subgraph left. + pr99 -> ci012abc + pr102 -> pr100 + // Do the version node. + pr99 -> version_commit + dev_commit -> pr100 + tag -> version_commit + pr99 [ + label = "Merge PR #99" + shape = box + fillcolor = "#ccccff" + style = "filled" + fontname = "Helvetica Neue, Helvetica, Segoe UI, Arial, freesans, sans-serif" + ]; + ci012abc [ + label = "012abc" + shape = circle + fillcolor = "#ffffcc" + style = "filled" + fontname = "Consolas, Liberation Mono, Menlo, Courier, monospace" + ]; + pr101 [ + label = "Merge PR #101" + shape = box + fillcolor = "#ccccff" + style = "filled" + fontname = "Helvetica Neue, Helvetica, Segoe UI, Arial, freesans, sans-serif" + ]; + ci345cde [ + label = "345cde" + shape = circle + fillcolor = "#ffffcc" + style = "filled" + fontname = "Consolas, Liberation Mono, Menlo, Courier, monospace" + ]; + pr102 [ + label = "Merge PR #102" + shape = box + fillcolor = "#ccccff" + style = "filled" + fontname = "Helvetica Neue, Helvetica, Segoe UI, Arial, freesans, sans-serif" + ]; + version_commit [ + label = "678fed" + shape = circle + fillcolor = "#ccffcc" + style = "filled" + fontname = "Consolas, Liberation Mono, Menlo, Courier, monospace" + ]; + dev_commit [ + label = "456dcb" + shape = circle + fillcolor = "#ffffcc" + style = "filled" + fontname = "Consolas, Liberation Mono, Menlo, Courier, monospace" + ]; + pr100 [ + label = "Merge PR #100" + shape = box + fillcolor = "#ccccff" + style = "filled" + fontname = "Helvetica Neue, Helvetica, Segoe UI, Arial, freesans, sans-serif" + ]; + release_info [ + label = "pkg/version/base.go:\ngitVersion = \"v0.5\";" + 
shape = none + fontname = "Helvetica Neue, Helvetica, Segoe UI, Arial, freesans, sans-serif" + ]; + dev_info [ + label = "pkg/version/base.go:\ngitVersion = \"v0.5-dev\";" + shape = none + fontname = "Helvetica Neue, Helvetica, Segoe UI, Arial, freesans, sans-serif" + ]; + tag [ + label = "$ git tag -a v0.5" + fillcolor = "#ffcccc" + style = "filled" + fontname = "Helvetica Neue, Helvetica, Segoe UI, Arial, freesans, sans-serif" + ]; +} + diff --git a/releasing.md b/releasing.md new file mode 100644 index 00000000..4cdf8827 --- /dev/null +++ b/releasing.md @@ -0,0 +1,152 @@ +# Releasing Kubernetes + +This document explains how to create a Kubernetes release (as in version) and +how the version information gets embedded into the built binaries. + +## Origin of the Sources + +Kubernetes may be built from a git tree (using `hack/build-go.sh`), from +a tarball (using either `hack/build-go.sh` or `go install`), or directly by +the Go native build system (using `go get`). + +When building from git, we want to be able to insert specific information about +the build tree at build time. In particular, we want to use the output of `git +describe` to generate the version of Kubernetes and the status of the build +tree (adding a `-dirty` suffix if the tree was modified). + +When building from a tarball or using the Go build system, we will not have +access to the information about the git tree, but we still want to be able to +tell whether this build corresponds to an exact release (e.g. v0.3) or is +between releases (e.g. at some point in development between v0.3 and v0.4). + +## Version Number Format + +In order to account for these use cases, there are some specific formats that +may end up representing the Kubernetes version. Here are a few examples: + +- **v0.5**: This is official version 0.5 and this version will only be used + when building from a clean git tree at the v0.5 git tag, or from a tree + extracted from the tarball corresponding to that specific release. 
+- **v0.5-15-g0123abcd4567**: This is the `git describe` output and it indicates + that we are 15 commits past the v0.5 release and that the SHA1 of the commit + where the binaries were built was `0123abcd4567`. It is only possible to have + this level of detail in the version information when building from git, not + when building from a tarball. +- **v0.5-15-g0123abcd4567-dirty** or **v0.5-dirty**: The extra `-dirty` suffix + means that the tree had local modifications or untracked files at the time of + the build, so there's no guarantee that the source code matches exactly the + state of the tree at the `0123abcd4567` commit or at the `v0.5` git tag + (resp.) +- **v0.5-dev**: This means we are building from a tarball or using `go get` or, + if we have a git tree, we are using `go install` directly, so it is not + possible to inject the git version into the build information. Additionally, + this is not an official release, so the `-dev` suffix indicates that the + version we are building is after `v0.5` but before `v0.6`. (There is actually + an exception where a commit with `v0.5-dev` is not present on `v0.6`, see + later for details.) + +## Injecting Version into Binaries + +In order to cover the different build cases, we start by providing information +that can be used when using only Go build tools or when we do not have the git +version information available. + +To be able to provide a meaningful version in those cases, we set the contents +of variables in a Go source file that will be used when no overrides are +present. + +We are using `pkg/version/base.go` as the source of versioning in the absence of +information from git. 
Here is a sample of that file's contents: + +``` + var ( + gitVersion string = "v0.4-dev" // version from git, output of $(git describe) + gitCommit string = "" // sha1 from git, output of $(git rev-parse HEAD) + ) +``` + +This means a build with `go install` or `go get` or a build from a tarball will +yield binaries that will identify themselves as `v0.4-dev` and will not be able +to provide you with a SHA1. + +To add the extra versioning information when building from git, the +`hack/build-go.sh` script will gather that information (using `git describe` and +`git rev-parse`) and then create a `-ldflags` string to pass to `go install` and +tell the Go linker to override the contents of those variables at build time. It +can, for instance, tell it to override `gitVersion` and set it to +`v0.4-13-g4567bcdef6789-dirty` and set `gitCommit` to `4567bcdef6789...` which +is the complete SHA1 of the (dirty) tree used at build time. + +## Handling Official Versions + +Handling official versions from git is easy: as long as there is an annotated +git tag pointing to a specific version, `git describe` will return that tag +exactly, which matches the idea of an official version (e.g. `v0.5`). + +Handling it on tarballs is a bit harder since the exact version string must be +present in `pkg/version/base.go` for it to get embedded into the binaries. But +simply creating a commit with `v0.5` on its own would mean that the commits +coming after it would also get the `v0.5` version when built from a tarball or `go +get`, while in fact they do not match `v0.5` (the one that was tagged) exactly. + +To handle that case, creating a new release should involve creating two adjacent +commits where the first of them will set the version to `v0.5` and the second +will set it to `v0.5-dev`. In that case, even in the presence of merges, there +will be a single commit where the exact `v0.5` version will be used and all +others around it will either have `v0.4-dev` or `v0.5-dev`. 
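The version strings discussed here are easy to tell apart mechanically, which can be handy in scripts around the build. A sketch of that classification (illustrative only; `describeKind` is a made-up helper, not code from the Kubernetes tree):

```go
package main

import (
	"fmt"
	"strings"
)

// describeKind classifies a `git describe`-style version string using the
// formats discussed above. The rules are an illustration, not release tooling.
func describeKind(v string) string {
	if strings.HasSuffix(v, "-dirty") {
		return "built from a locally modified tree"
	}
	if strings.HasSuffix(v, "-dev") {
		return "built without git information, between releases"
	}
	if strings.Contains(v, "-g") {
		return "built from git, after the last tag"
	}
	return "exact release"
}

func main() {
	versions := []string{"v0.5", "v0.5-15-g0123abcd4567", "v0.5-15-g0123abcd4567-dirty", "v0.5-dev"}
	for _, v := range versions {
		fmt.Printf("%s: %s\n", v, describeKind(v))
	}
}
```

Running it classifies `v0.5` as an exact release, the `-g` form as a post-tag git build, and the `-dirty` and `-dev` forms accordingly, mirroring the format list earlier in this document.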
+ +The diagram below illustrates it. + +![Diagram of git commits involved in the release](./releasing.png) + +After working on `v0.4-dev` and merging PR 99, we decide it is time to release +`v0.5`. So we start a new branch, create one commit to update +`pkg/version/base.go` to include `gitVersion = "v0.5"` and `git commit` it. + +We test it and make sure everything is working as expected. + +Before sending a PR for it, we create a second commit on that same branch, +updating `pkg/version/base.go` to include `gitVersion = "v0.5-dev"`. That will +ensure that further builds (from tarball or `go install`) on that tree will +always include the `-dev` suffix and will not have a `v0.5` version (since they +do not match the official `v0.5` exactly). + +We then send PR 100 with both commits in it. + +Once the PR is accepted, we can use `git tag -a` to create an annotated tag +*pointing to the one commit* that has `v0.5` in `pkg/version/base.go` and push +it to GitHub. (Unfortunately GitHub tags/releases are not annotated tags, so +this needs to be done from a git client and pushed to GitHub using SSH.) + +## Parallel Commits + +While we are working on releasing `v0.5`, other development takes place and +other PRs get merged. For instance, in the example above, PRs 101 and 102 get +merged to the master branch before the versioning PR gets merged. + +This is not a problem; it is only slightly inaccurate: checking out the tree +at commit `012abc`, at commit `345cde`, or at the merge commits of PRs 101 +or 102 will yield a version of `v0.4-dev`, *but* those commits are not present in +`v0.5`. + +In that sense, there is a small window in which commits get a +`v0.4-dev` or `v0.4-N-gXXX` label: they are indeed later than `v0.4`, +but they are not really before `v0.5`, since `v0.5` does not contain those +commits. + +Unfortunately, there is not much we can do about it. 
On the other hand, other +projects seem to live with that and it does not really become a large problem. + +As an example, Docker commit a327d9b91edf has a `v1.1.1-N-gXXX` label but it is +not present in Docker `v1.2.0`: + +``` + $ git describe a327d9b91edf + v1.1.1-822-ga327d9b91edf + + $ git log --oneline v1.2.0..a327d9b91edf + a327d9b91edf Fix data space reporting from Kb/Mb to KB/MB + + (Non-empty output here means the commit is not present on v1.2.0.) +``` + diff --git a/releasing.png b/releasing.png new file mode 100644 index 00000000..935628de Binary files /dev/null and b/releasing.png differ diff --git a/releasing.svg b/releasing.svg new file mode 100644 index 00000000..f703e6e2 --- /dev/null +++ b/releasing.svg @@ -0,0 +1,113 @@ + + + + + + +tagged_release + + +ci012abc + +012abc + + +pr101 + +Merge PR #101 + + +ci012abc->pr101 + + + + +ci345cde + +345cde + + +pr101->ci345cde + + + + +pr102 + +Merge PR #102 + + +ci345cde->pr102 + + + + +pr100 + +Merge PR #100 + + +pr102->pr100 + + + + +version_commit + +678fed + + +dev_commit + +456dcb + + +version_commit->dev_commit + + + + +dev_commit->pr100 + + + + +release_info +pkg/version/base.go: +gitVersion = "v0.5"; + + +dev_info +pkg/version/base.go: +gitVersion = "v0.5-dev"; + + +pr99 + +Merge PR #99 + + +pr99->ci012abc + + + + +pr99->version_commit + + + + +tag + +$ git tag -a v0.5 + + +tag->version_commit + + + + + -- cgit v1.2.3 From eff78d030052bcdce7269c392ac956d2fa6f6f4a Mon Sep 17 00:00:00 2001 From: Kouhei Ueno Date: Fri, 17 Oct 2014 19:45:12 +0900 Subject: Change git repo checkout https --- development.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/development.md b/development.md index f750c611..ccd64386 100644 --- a/development.md +++ b/development.md @@ -17,7 +17,7 @@ $ echo $GOPATH /home/user/goproj $ mkdir -p $GOPATH/src/github.com/GoogleCloudPlatform/ $ cd $GOPATH/src/github.com/GoogleCloudPlatform/ -$ git clone git@github.com:GoogleCloudPlatform/kubernetes.git +$ git clone 
https://github.com/GoogleCloudPlatform/kubernetes.git ``` The commands above will not work if there is more than one directory in ``$GOPATH``. -- cgit v1.2.3 From cf8e52e4286fc67c50a28432ce398ce2359ed527 Mon Sep 17 00:00:00 2001 From: Przemo Nowaczyk Date: Tue, 28 Oct 2014 20:57:15 +0100 Subject: small docs fixes --- development.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/development.md b/development.md index ccd64386..715ccb8f 100644 --- a/development.md +++ b/development.md @@ -62,9 +62,9 @@ Before committing any changes, please link/copy these hooks into your .git directory. This will keep you from accidentally committing non-gofmt'd go code. ``` -cd kubernetes -ln -s hooks/prepare-commit-msg .git/hooks/prepare-commit-msg -ln -s hooks/commit-msg .git/hooks/commit-msg +cd kubernetes/.git/hooks/ +ln -s ../../hooks/prepare-commit-msg . +ln -s ../../hooks/commit-msg . ``` ## Unit tests -- cgit v1.2.3 From 3306aecc13331d26f59b36e282278707769d1f90 Mon Sep 17 00:00:00 2001 From: Eric Tune Date: Thu, 16 Oct 2014 14:45:16 -0700 Subject: Separated user, dev, and design docs. Renamed: logging.md -> devel/logging.md Renamed: access.md -> design/access.md Renamed: identifiers.md -> design/identifiers.md Renamed: labels.md -> design/labels.md Renamed: namespaces.md -> design/namespaces.md Renamed: security.md -> design/security.md Renamed: networking.md -> design/networking.md Added abbreviated user-focused document in place of most moved docs. Added docs/README.md explaining how docs are organized. Added short, user-oriented documentation on labels. Added a glossary. Fixed up some links. --- logging.md | 26 ++++++++++++++++++++++++++ 1 file changed, 26 insertions(+) create mode 100644 logging.md diff --git a/logging.md b/logging.md new file mode 100644 index 00000000..9b6bfa2a --- /dev/null +++ b/logging.md @@ -0,0 +1,26 @@ +Logging Conventions +=================== + +The following are the conventions for which glog levels to use. 
glog is globally preferred to "log" for better runtime control. + +* glog.Errorf() - Always an error +* glog.Warningf() - Something unexpected, but probably not an error +* glog.Infof() has multiple levels: + * glog.V(0) - Generally useful for this to ALWAYS be visible to an operator + * Programmer errors + * Logging extra info about a panic + * CLI argument handling + * glog.V(1) - A reasonable default log level if you don't want verbosity. + * Information about config (listening on X, watching Y) + * Errors that repeat frequently and relate to conditions that can be corrected (pod detected as unhealthy) + * glog.V(2) - Useful steady-state information about the service and important log messages that may correlate to significant changes in the system. This is the recommended default log level for most systems. + * Logging HTTP requests and their exit code + * System state changing (killing pod) + * Controller state change events (starting pods) + * Scheduler log messages + * glog.V(3) - Extended information about changes + * More info about system state changes + * glog.V(4) - Debug level verbosity (for now) + * Logging in particularly thorny parts of code where you may want to come back later and check it + +As per the comments, the practical default level is V(2). Developers and QE environments may wish to run at V(3) or V(4). If you wish to change the log level, you can pass in `-v=X` where X is the desired maximum level to log. -- cgit v1.2.3 From e60fd03ae144559d597553de28f391c27ad50a4c Mon Sep 17 00:00:00 2001 From: Maria Nita Date: Tue, 11 Nov 2014 17:21:38 +0100 Subject: Update path to files in development doc --- development.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/development.md b/development.md index 715ccb8f..220c9371 100644 --- a/development.md +++ b/development.md @@ -2,7 +2,7 @@ # Releases and Official Builds -Official releases are built in Docker containers. Details are [here](build/README.md). 
You can do simple builds and development with just a local Docker installation. If you want to build Go locally outside of Docker, please continue below. +Official releases are built in Docker containers. Details are [here](../../build/README.md). You can do simple builds and development with just a local Docker installation. If you want to build Go locally outside of Docker, please continue below. ## Go development environment -- cgit v1.2.3 From fcaa1651e4ba4c6b73284acdd45e18c19ec74a5d Mon Sep 17 00:00:00 2001 From: Deyuan Deng Date: Sun, 2 Nov 2014 20:13:43 -0500 Subject: Fix DESIGN.md link, and etcd installation instruction. --- development.md | 13 +------------ 1 file changed, 1 insertion(+), 12 deletions(-) diff --git a/development.md b/development.md index 220c9371..38635ace 100644 --- a/development.md +++ b/development.md @@ -100,18 +100,7 @@ godep go tool cover -html=target/c.out ## Integration tests -You need an etcd somewhere in your PATH. To install etcd, run: - -``` -cd kubernetes -hack/travis/install-etcd.sh -sudo ln -s $(pwd)/third_party/etcd/bin/etcd /usr/bin/etcd -``` - -``` -cd kubernetes -hack/test-integration.sh -``` +You need an [etcd](https://github.com/coreos/etcd/releases/tag/v0.4.6) binary; please make sure it is installed and in your ``$PATH``. ## End-to-End tests -- cgit v1.2.3 From 397e99fc2120ecd698696eb98b0dbc3019874a72 Mon Sep 17 00:00:00 2001 From: Eric Tune Date: Mon, 17 Nov 2014 11:20:31 -0800 Subject: Update development.md It looks like magic incantation `hack/e2e-test.sh 1 1 1` is no longer supported. 
--- development.md | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/development.md b/development.md index 38635ace..3b831ef4 100644 --- a/development.md +++ b/development.md @@ -110,11 +110,13 @@ cd kubernetes hack/e2e-test.sh ``` -Pressing control-C should result in an orderly shutdown but if something goes wrong and you still have some VMs running you can force a cleanup with the magical incantation: +Pressing control-C should result in an orderly shutdown but if something goes wrong and you still have some VMs running you can force a cleanup with this command: ``` -hack/e2e-test.sh 1 1 1 +go run e2e.go --down ``` +See the flag definitions in `hack/e2e.go` for more options, such as reusing an existing cluster. + ## Testing out flaky tests [Instructions here](docs/devel/flaky-tests.md) -- cgit v1.2.3 From d935c1cbe379794974179ebebdd8dad1821035f4 Mon Sep 17 00:00:00 2001 From: goltermann Date: Mon, 1 Dec 2014 19:07:46 -0800 Subject: Adding docs for prioritization of issues. --- issues.md | 21 +++++++++++++++++++++ 1 file changed, 21 insertions(+) create mode 100644 issues.md diff --git a/issues.md b/issues.md new file mode 100644 index 00000000..491dba49 --- /dev/null +++ b/issues.md @@ -0,0 +1,21 @@ +GitHub Issues for the Kubernetes Project +======================================== + +A quick overview of how we will review and prioritize incoming issues at https://github.com/GoogleCloudPlatform/kubernetes/issues + +Priorities +---------- + +We will use GitHub issue labels for prioritization. The absence of a priority label means the bug has not been reviewed and prioritized yet. + +Priorities are "moment in time" labels, and what is low priority today could be high priority tomorrow, and vice versa. As we move to v1.0, we may decide certain bug fixes aren't actually needed yet, or that others really do need to be pulled in. + +Here we define the priorities up until v1.0. 
Once the Kubernetes project hits 1.0, we will revisit the scheme and update as appropriate. + +Definitions +----------- +* P0 - something broken for users, build broken, or critical security issue. Someone must drop everything and work on it. +* P1 - must fix for earliest possible OSS binary release (every two weeks) +* P2 - must fix for v1.0 release - will block the release +* P3 - post v1.0 +* untriaged - anything without a Priority/PX label will be considered untriaged \ No newline at end of file -- cgit v1.2.3 From 5390d1bf88c6255d8e94333420ed9124ca17231f Mon Sep 17 00:00:00 2001 From: goltermann Date: Tue, 2 Dec 2014 14:54:57 -0800 Subject: Create pull-requests.md --- pull-requests.md | 16 ++++++++++++++++ 1 file changed, 16 insertions(+) create mode 100644 pull-requests.md diff --git a/pull-requests.md b/pull-requests.md new file mode 100644 index 00000000..ed12b839 --- /dev/null +++ b/pull-requests.md @@ -0,0 +1,16 @@ +Pull Request Process +==================== + +An overview of how we will manage old or out-of-date pull requests. + +Process +------- + +We will close any pull requests older than two weeks. + +Exceptions can be made for PRs that have active review comments, or that are awaiting other dependent PRs. Closed pull requests are easy to recreate, and little work is lost by closing a pull request that subsequently needs to be reopened. + +We want to limit the total number of PRs in flight to: +* Maintain a clean project +* Remove old PRs that would be difficult to rebase as the underlying code has changed over time +* Encourage code velocity -- cgit v1.2.3 From e95c87269a04c1d9ac0ae1bd87fdfa1ae39184a7 Mon Sep 17 00:00:00 2001 From: Brendan Burns Date: Mon, 8 Dec 2014 16:01:35 -0800 Subject: Expand e2e instructions. 
--- development.md | 19 ++++++++++++++++++- 1 file changed, 18 insertions(+), 1 deletion(-) diff --git a/development.md b/development.md index 3b831ef4..14a4df87 100644 --- a/development.md +++ b/development.md @@ -115,7 +115,24 @@ Pressing control-C should result in an orderly shutdown but if something goes wr go run e2e.go --down ``` -See the flag definitions in `hack/e2e.go` for more options, such as reusing an existing cluster. +See the flag definitions in `hack/e2e.go` for more options, such as reusing an existing cluster. Here is an overview: + +```sh +# Create a fresh cluster. Deletes a cluster first, if it exists +go run e2e.go --up + +# Test if a cluster is up. +go run e2e.go --isup + +# Push code to an existing cluster +go run e2e.go --push + +# Run all tests +go run e2e.go --test + +# Run tests matching a glob. +go run e2e.go --tests=... +``` ## Testing out flaky tests [Instructions here](docs/devel/flaky-tests.md) -- cgit v1.2.3 From c6a38a5921afbff7ec0f0af5fc2e031bbeb8e69f Mon Sep 17 00:00:00 2001 From: Brendan Burns Date: Mon, 8 Dec 2014 19:48:02 -0800 Subject: address comments. --- development.md | 23 +++++++++++++++++++++ 1 file changed, 23 insertions(+) diff --git a/development.md b/development.md index 14a4df87..81695803 100644 --- a/development.md +++ b/development.md @@ -115,9 +115,13 @@ Pressing control-C should result in an orderly shutdown wr go run e2e.go --down ``` +### Flag options See the flag definitions in `hack/e2e.go` for more options, such as reusing an existing cluster. Here is an overview: ```sh +# Build binaries for testing +go run e2e.go --build + # Create a fresh cluster. Deletes a cluster first, if it exists go run e2e.go --up @@ -127,6 +131,9 @@ go run e2e.go --isup # Push code to an existing cluster go run e2e.go --push +# Push to an existing cluster, or bring up a cluster if it's down.
+go run e2e.go --pushup + # Run all tests go run e2e.go --test @@ -134,6 +141,22 @@ go run e2e.go --test # Run tests matching a glob. go run e2e.go --tests=... ``` +### Combining flags +```sh +# Flags can be combined, and their actions will take place in this order: +# -build, -push|-up|-pushup, -test|-tests=..., -down +# e.g.: +go run e2e.go -build -pushup -test -down + +# -v (verbose) can be added if you want streaming output instead of only +# seeing the output of failed commands. + +# -ctl can be used to quickly call kubectl against your e2e cluster. Useful for +# cleaning up after a failed test or viewing logs. +go run e2e.go -ctl='get events' +go run e2e.go -ctl='delete pod foobar' +``` + ## Testing out flaky tests [Instructions here](docs/devel/flaky-tests.md) -- cgit v1.2.3 From cfaed2e3d0677c44d39db16aa96d3a5c25bdbfbb Mon Sep 17 00:00:00 2001 From: MikeJeffrey Date: Fri, 12 Dec 2014 11:05:30 -0800 Subject: Create README.md in docs/devel --- README.md | 19 +++++++++++++++++++ 1 file changed, 19 insertions(+) create mode 100644 README.md diff --git a/README.md b/README.md new file mode 100644 index 00000000..82804564 --- /dev/null +++ b/README.md @@ -0,0 +1,19 @@ +# Developing Kubernetes + +Docs in this directory relate to developing Kubernetes. + +* **On Collaborative Development** ([collab.md](collab.md)): info on pull requests and code reviews. + +* **Development Guide** ([development.md](development.md)): Setting up your environment; tests. + +* **Hunting flaky tests** ([flaky-tests.md](flaky-tests.md)): We have a goal of 99.9% flake free tests. + Here's how to run your tests many times. + +* **GitHub Issues** ([issues.md](issues.md)): How incoming issues are reviewed and prioritized. + +* **Logging Conventions** ([logging.md](logging.md)): Glog levels. + +* **Pull Request Process** ([pull-requests.md](pull-requests.md)): When and why pull requests are closed.
+ +* **Releasing Kubernetes** ([releasing.md](releasing.md)): How to create a Kubernetes release (as in version) + and how the version information gets embedded into the built binaries. -- cgit v1.2.3 From 7cc008cd36abab4dfabc12ed2b31de94326e1256 Mon Sep 17 00:00:00 2001 From: Daniel Smith Date: Tue, 16 Dec 2014 14:36:39 -0800 Subject: fix godep instructions --- development.md | 46 +++++++++++++++++++++++++++++++++++++++++----- 1 file changed, 41 insertions(+), 5 deletions(-) diff --git a/development.md b/development.md index 81695803..c73f339c 100644 --- a/development.md +++ b/development.md @@ -48,13 +48,49 @@ export PATH=$PATH:$GOPATH/bin ``` ### Using godep -Here is a quick summary of `godep`. `godep` helps manage third party dependencies by copying known versions into Godeps/_workspace. You can use `godep` in three ways: +Here is a quick summary of `godep`. `godep` helps manage third party dependencies by copying known versions into Godeps/_workspace. Here is the recommended way to set up your system. There are other ways that may work, but this is the easiest one I know of. -1. Use `godep` to call your `go` commands. For example: `godep go test ./...` -2. Use `godep` to modify your `$GOPATH` so that other tools know where to find the dependencies. Specifically: `export GOPATH=$GOPATH:$(godep path)` -3. Use `godep` to copy the saved versions of packages into your `$GOPATH`. This is done with `godep restore`. +1. Devote a directory to this endeavor: -We recommend using options #1 or #2. +``` +export KPATH=$HOME/code/kubernetes +mkdir -p $KPATH/src/github.com/GoogleCloudPlatform/kubernetes +cd $KPATH/src/github.com/GoogleCloudPlatform/kubernetes +git clone https://path/to/your/fork . +# Or copy your existing local repo here. IMPORTANT: making a symlink doesn't work. +``` + +2. Set up your GOPATH. + +``` +# Option A: this will let your builds see packages that exist elsewhere on your system. 
+export GOPATH=$KPATH:$GOPATH +# Option B: This will *not* let your local builds see packages that exist elsewhere on your system. +export GOPATH=$KPATH +# Option B is recommended if you're going to mess with the dependencies. +``` + +3. Populate your new $GOPATH. + +``` +cd $KPATH/src/github.com/GoogleCloudPlatform/kubernetes +godep restore +``` + +4. To add a dependency, you can do ```go get path/to/dependency``` as usual. + +5. To package up a dependency, do + +``` +cd $KPATH/src/github.com/GoogleCloudPlatform/kubernetes +godep save ./... +# Sanity check that your Godeps.json file is ok by re-restoring: +godep restore +``` + +I (lavalamp) have sometimes found it expedient to manually fix the /Godeps/godeps.json file to minimize the changes. + +Please send dependency updates in separate commits within your PR, for easier reviewing. ## Hooks -- cgit v1.2.3 From c02a212f141e56d816f7c252dce911b52ac65d30 Mon Sep 17 00:00:00 2001 From: Sergey Evstifeev Date: Mon, 22 Dec 2014 15:53:32 +0100 Subject: Fix broken flaky-tests.md documentation link The link ended up pointing at ./docs/devel/docs/devel/flaky-tests.md instead of ./docs/devel/flaky-tests.md --- development.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/development.md b/development.md index c73f339c..8cccbcd9 100644 --- a/development.md +++ b/development.md @@ -194,7 +194,7 @@ go run e2e.go -ctl='delete pod foobar' ``` ## Testing out flaky tests -[Instructions here](docs/devel/flaky-tests.md) +[Instructions here](flaky-tests.md) -- cgit v1.2.3 From d45be03704bdc6a74ca4a43edbadace9cfd0c586 Mon Sep 17 00:00:00 2001 From: Alex Robinson Date: Tue, 6 Jan 2015 18:05:33 +0000 Subject: Minor doc/comment fixes that came up while reading through some code.
--- development.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/development.md b/development.md index 8cccbcd9..2b7476e8 100644 --- a/development.md +++ b/development.md @@ -161,6 +161,9 @@ go run e2e.go --build # Create a fresh cluster. Deletes a cluster first, if it exists go run e2e.go --up +# Create a fresh cluster at a specific release version. +go run e2e.go --up --version=0.7.0 + # Test if a cluster is up. go run e2e.go --isup -- cgit v1.2.3 From 3c3d2468b90ea9050b40a4ae97eae7b8d8c16c22 Mon Sep 17 00:00:00 2001 From: Alex Robinson Date: Fri, 9 Jan 2015 22:38:14 +0000 Subject: Update the doc on how to test for flakiness to actually work and to use kubectl. --- flaky-tests.md | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/flaky-tests.md b/flaky-tests.md index d2cc8fad..ccd32afb 100644 --- a/flaky-tests.md +++ b/flaky-tests.md @@ -12,6 +12,8 @@ There is a testing image ```brendanburns/flake``` up on the docker hub. We will Create a replication controller with the following config: ```yaml id: flakeController +kind: ReplicationController +apiVersion: v1beta1 desiredState: replicas: 24 replicaSelector: @@ -37,14 +39,14 @@ labels: name: flake ``` -```./cluster/kubecfg.sh -c controller.yaml create replicaControllers``` +```./cluster/kubectl.sh create -f controller.yaml``` This will spin up 100 instances of the test. 
They will run to completion, then exit, the kubelet will restart them, eventually you will have sufficient -runs for your purposes, and you can stop the replication controller: +runs for your purposes, and you can stop the replication controller by setting the ```replicas``` field to 0 and then running: ```sh -./cluster/kubecfg.sh stop flakeController -./cluster/kubecfg.sh rm flakeController +./cluster/kubectl.sh update -f controller.yaml +./cluster/kubectl.sh delete -f controller.yaml ``` Now examine the machines with ```docker ps -a``` and look for tasks that exited with non-zero exit codes (ignore those that exited -1, since that's what happens when you stop the replica controller) -- cgit v1.2.3 From 43612a093e26501be9cbea1150c4bae42925cf51 Mon Sep 17 00:00:00 2001 From: Deyuan Deng Date: Sat, 10 Jan 2015 17:24:20 -0500 Subject: Doc fixes --- development.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/development.md b/development.md index 2b7476e8..bd18b828 100644 --- a/development.md +++ b/development.md @@ -137,6 +137,10 @@ godep go tool cover -html=target/c.out ## Integration tests You need an [etcd](https://github.com/coreos/etcd/releases/tag/v0.4.6) in your path, please make sure it is installed and in your ``$PATH``. +``` +cd kubernetes +hack/test-integration.sh +``` ## End-to-End tests -- cgit v1.2.3 From ce93f7027812035c48e20a539ff9d74940f284bd Mon Sep 17 00:00:00 2001 From: Brendan Burns Date: Wed, 14 Jan 2015 21:27:13 -0800 Subject: Add a gendocs pre-submit hook. --- development.md | 11 ++--------- 1 file changed, 2 insertions(+), 9 deletions(-) diff --git a/development.md b/development.md index bd18b828..0e2f6fbf 100644 --- a/development.md +++ b/development.md @@ -238,16 +238,9 @@ git fetch upstream git rebase upstream/master ``` -## Regenerating the API documentation +## Regenerating the CLI documentation ``` -cd kubernetes/api -sudo docker build -t kubernetes/raml2html . 
-sudo docker run --name="docgen" kubernetes/raml2html -sudo docker cp docgen:/data/kubernetes.html . +hack/run-gendocs.sh ``` -View the API documentation using htmlpreview (works on your fork, too): -``` -http://htmlpreview.github.io/?https://github.com/GoogleCloudPlatform/kubernetes/blob/master/api/kubernetes.html -``` -- cgit v1.2.3 From 61d146fd7faae5047f02c46f8454a186a8d99daf Mon Sep 17 00:00:00 2001 From: Parth Oberoi Date: Tue, 20 Jan 2015 05:02:13 +0530 Subject: typo fixed --- collab.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/collab.md b/collab.md index c4644048..633b7682 100644 --- a/collab.md +++ b/collab.md @@ -12,7 +12,7 @@ For the time being, most of the people working on this project are in the US and ## Code reviews -All changes must be code reviewed. For non-maintainers this is obvious, since you can't commit anyway. But even for maintainers, we want all changes to get at least one review, preferably from someone who knows the areas the change touches. For non-trivial changes we may want two reviewers. The primary reviewer will make this decision and nominate a second reviewer, if needed. Except for trivial changes, PRs should sit for at least a 2 hours to allow for wider review. +All changes must be code reviewed. For non-maintainers this is obvious, since you can't commit anyway. But even for maintainers, we want all changes to get at least one review, preferably from someone who knows the areas the change touches. For non-trivial changes we may want two reviewers. The primary reviewer will make this decision and nominate a second reviewer, if needed. Except for trivial changes, PRs should sit for at least 2 hours to allow for wider review. Most PRs will find reviewers organically. If a maintainer intends to be the primary reviewer of a PR they should set themselves as the assignee on GitHub and say so in a reply to the PR. Only the primary reviewer of a change should actually do the merge, except in rare cases (e.g. 
they are unavailable in a reasonable timeframe). -- cgit v1.2.3 From 972ee6e91f7a4be56e06490a8d8eb8cca2bc8eeb Mon Sep 17 00:00:00 2001 From: Parth Oberoi Date: Wed, 21 Jan 2015 14:29:56 +0530 Subject: typo fixed ';' unexpected ';' after environment on line 7 --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 82804564..ab41448d 100644 --- a/README.md +++ b/README.md @@ -4,7 +4,7 @@ Docs in this directory relate to developing Kubernetes. * **On Collaborative Development** ([collab.md](collab.md)): info on pull requests and code reviews. -* **Development Guide** ([development.md](development.md)): Setting up your environment; tests. +* **Development Guide** ([development.md](development.md)): Setting up your environment tests. * **Hunting flaky tests** ([flaky-tests.md](flaky-tests.md)): We have a goal of 99.9% flake free tests. Here's how to run your tests many times. -- cgit v1.2.3 From d48dabc0cd457f54117b4fae601e0a0415a0fb6c Mon Sep 17 00:00:00 2001 From: Victor Marmol Date: Fri, 23 Jan 2015 15:51:01 -0800 Subject: Update developer docs to use hack/ for e2e. --- development.md | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/development.md b/development.md index 0e2f6fbf..f4770db3 100644 --- a/development.md +++ b/development.md @@ -152,7 +152,7 @@ hack/e2e-test.sh Pressing control-C should result in an orderly shutdown but if something goes wrong and you still have some VMs running you can force a cleanup with this command: ``` -go run e2e.go --down +go run hack/e2e.go --down ``` ### Flag options @@ -160,28 +160,28 @@ See the flag definitions in `hack/e2e.go` for more options, such as reusing an e ```sh # Build binaries for testing -go run e2e.go --build +go run hack/e2e.go --build # Create a fresh cluster. Deletes a cluster first, if it exists -go run e2e.go --up +go run hack/e2e.go --up # Create a fresh cluster at a specific release version. 
-go run e2e.go --up --version=0.7.0 +go run hack/e2e.go --up --version=0.7.0 # Test if a cluster is up. -go run e2e.go --isup +go run hack/e2e.go --isup # Push code to an existing cluster -go run e2e.go --push +go run hack/e2e.go --push # Push to an existing cluster, or bring up a cluster if it's down. -go run e2e.go --pushup +go run hack/e2e.go --pushup # Run all tests -go run e2e.go --test +go run hack/e2e.go --test # Run tests matching a glob. -go run e2e.go --tests=... +go run hack/e2e.go --tests=... ``` ### Combining flags -- cgit v1.2.3 From 84665f86076376dabf8ed2c69130197b168a42c1 Mon Sep 17 00:00:00 2001 From: Salvatore Dario Minonne Date: Fri, 30 Jan 2015 15:27:41 +0100 Subject: Fix dockerfile for etcd.2.0.0 --- development.md | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/development.md b/development.md index f4770db3..aa2878f1 100644 --- a/development.md +++ b/development.md @@ -136,7 +136,7 @@ ## Integration tests -You need an [etcd](https://github.com/coreos/etcd/releases/tag/v0.4.6) in your path, please make sure it is installed and in your ``$PATH``. +You need an [etcd](https://github.com/coreos/etcd/releases/tag/v2.0.0) in your path, please make sure it is installed and in your ``$PATH``. ``` cd kubernetes hack/test-integration.sh ``` @@ -243,4 +243,3 @@ git rebase upstream/master ``` hack/run-gendocs.sh ``` - -- cgit v1.2.3 From ea4c801002746183d77d291dc538cc1523d2ae59 Mon Sep 17 00:00:00 2001 From: kargakis Date: Tue, 3 Feb 2015 15:09:46 +0100 Subject: Add links to logging libraries in question --- logging.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/logging.md b/logging.md index 9b6bfa2a..82b6a0c8 100644 --- a/logging.md +++ b/logging.md @@ -1,7 +1,7 @@ Logging Conventions =================== -The following are the conventions for the glog levels to use. glog is globally preferred to "log" for better runtime control. +The following are the conventions for the glog levels to use.
[glog](http://godoc.org/github.com/golang/glog) is globally preferred to [log](http://golang.org/pkg/log/) for better runtime control. * glog.Errorf() - Always an error * glog.Warningf() - Something unexpected, but probably not an error -- cgit v1.2.3 From b133560996a739a9e5160147da264df28d0d7e95 Mon Sep 17 00:00:00 2001 From: Alex Robinson Date: Tue, 10 Feb 2015 09:35:11 +0000 Subject: Add steps to the development guide for how to use godep to update an existing dependency. Also change from the markdown numbered lists, which didn't work due to the intervening code blocks, to raw text numbered lists. --- development.md | 32 +++++++++++++++++--------------- 1 file changed, 17 insertions(+), 15 deletions(-) diff --git a/development.md b/development.md index aa2878f1..3d05f71f 100644 --- a/development.md +++ b/development.md @@ -31,17 +31,18 @@ Kubernetes uses [godep](https://github.com/tools/godep) to manage dependencies. ### Installing godep There are many ways to build and host go binaries. Here is an easy way to get utilities like ```godep``` installed: -1. Ensure that [mercurial](http://mercurial.selenic.com/wiki/Download) is installed on your system. (some of godep's dependencies use the mercurial +1) Ensure that [mercurial](http://mercurial.selenic.com/wiki/Download) is installed on your system. (some of godep's dependencies use the mercurial source control system). Use ```apt-get install mercurial``` or ```yum install mercurial``` on Linux, or [brew.sh](http://brew.sh) on OS X, or download directly from mercurial. + +2) Create a new GOPATH for your tools and install godep: ``` export GOPATH=$HOME/go-tools mkdir -p $GOPATH go get github.com/tools/godep ``` -3. Add $GOPATH/bin to your path. Typically you'd add this to your ~/.profile: +3) Add $GOPATH/bin to your path.
Typically you'd add this to your ~/.profile: ``` export GOPATH=$HOME/go-tools export PATH=$PATH:$GOPATH/bin @@ -50,8 +51,7 @@ export PATH=$PATH:$GOPATH/bin ### Using godep Here is a quick summary of `godep`. `godep` helps manage third party dependencies by copying known versions into Godeps/_workspace. Here is the recommended way to set up your system. There are other ways that may work, but this is the easiest one I know of. -1. Devote a directory to this endeavor: - +1) Devote a directory to this endeavor: ``` export KPATH=$HOME/code/kubernetes mkdir -p $KPATH/src/github.com/GoogleCloudPlatform/kubernetes @@ -60,8 +60,7 @@ git clone https://path/to/your/fork . # Or copy your existing local repo here. IMPORTANT: making a symlink doesn't work. ``` -2. Set up your GOPATH. - +2) Set up your GOPATH. ``` # Option A: this will let your builds see packages that exist elsewhere on your system. export GOPATH=$KPATH:$GOPATH @@ -70,24 +69,27 @@ export GOPATH=$KPATH # Option B is recommended if you're going to mess with the dependencies. ``` -3. Populate your new $GOPATH. - +3) Populate your new $GOPATH. ``` cd $KPATH/src/github.com/GoogleCloudPlatform/kubernetes godep restore ``` -4. To add a dependency, you can do ```go get path/to/dependency``` as usual. - -5. To package up a dependency, do - +4) Next, you can either add a new dependency or update an existing one. ``` +# To add a new dependency, run: cd $KPATH/src/github.com/GoogleCloudPlatform/kubernetes +go get path/to/dependency godep save ./... 
-# Sanity check that your Godeps.json file is ok by re-restoring: -godep restore + +# To update an existing dependency, do +cd $KPATH/src/github.com/GoogleCloudPlatform/kubernetes +go get -u path/to/dependency +godep update path/to/dependency ``` +5) Before sending your PR, it's a good idea to sanity check that your Godeps.json file is ok by re-restoring: ```godep restore``` + I (lavalamp) have sometimes found it expedient to manually fix the /Godeps/godeps.json file to minimize the changes. Please send dependency updates in separate commits within your PR, for easier reviewing. -- cgit v1.2.3 From 66a676bf589605da619b9b6ac6123491787df441 Mon Sep 17 00:00:00 2001 From: Alex Robinson Date: Wed, 11 Feb 2015 12:16:16 -0800 Subject: Fix bad config in flaky test documentation and add script to help check for flakes. --- flaky-tests.md | 24 ++++++++++++++++++------ 1 file changed, 18 insertions(+), 6 deletions(-) diff --git a/flaky-tests.md b/flaky-tests.md index ccd32afb..e352e110 100644 --- a/flaky-tests.md +++ b/flaky-tests.md @@ -11,7 +11,7 @@ There is a testing image ```brendanburns/flake``` up on the docker hub. We will Create a replication controller with the following config: ```yaml -id: flakeController +id: flakecontroller kind: ReplicationController apiVersion: v1beta1 desiredState: @@ -41,14 +41,26 @@ labels: ```./cluster/kubectl.sh create -f controller.yaml``` -This will spin up 100 instances of the test. They will run to completion, then exit, the kubelet will restart them, eventually you will have sufficient -runs for your purposes, and you can stop the replication controller by setting the ```replicas``` field to 0 and then running: +This will spin up 24 instances of the test. They will run to completion, then exit, and the kubelet will restart them, accumulating more and more runs of the test. +You can examine the recent runs of the test by calling ```docker ps -a``` and looking for tasks that exited with non-zero exit codes. 
Unfortunately, docker ps -a only keeps around the exit status of the last 15-20 containers with the same image, so you have to check them frequently. +You can use this script to automate checking for failures, assuming your cluster is running on GCE and has four nodes: ```sh -./cluster/kubectl.sh update -f controller.yaml -./cluster/kubectl.sh delete -f controller.yaml +echo "" > output.txt +for i in {1..4}; do + echo "Checking kubernetes-minion-${i}" + echo "kubernetes-minion-${i}:" >> output.txt + gcloud compute ssh "kubernetes-minion-${i}" --command="sudo docker ps -a" >> output.txt +done +grep "Exited ([^0])" output.txt ``` -Now examine the machines with ```docker ps -a``` and look for tasks that exited with non-zero exit codes (ignore those that exited -1, since that's what happens when you stop the replica controller) +Eventually you will have sufficient runs for your purposes. At that point you can stop and delete the replication controller by running: + +```sh +./cluster/kubectl.sh stop replicationcontroller flakecontroller +``` + +If you do a final check for flakes with ```docker ps -a```, ignore tasks that exited -1, since that's what happens when you stop the replication controller. Happy flake hunting! -- cgit v1.2.3 From 83e0629cedcf0fa400034ab154c72c2543b03d5f Mon Sep 17 00:00:00 2001 From: Marek Grabowski Date: Sat, 14 Feb 2015 00:11:38 +0100 Subject: Added instruction for profiling apiserver --- README.md | 2 ++ profiling.md | 30 ++++++++++++++++++++++++++++++ 2 files changed, 32 insertions(+) create mode 100644 profiling.md diff --git a/README.md b/README.md index ab41448d..bf398e9f 100644 --- a/README.md +++ b/README.md @@ -17,3 +17,5 @@ Docs in this directory relate to developing Kubernetes. * **Releasing Kubernetes** ([releasing.md](releasing.md)): How to create a Kubernetes release (as in version) and how the version information gets embedded into the built binaries. 
+ +* **Profiling Kubernetes** ([profiling.md](profiling.md)): How to plug the Go pprof profiler into Kubernetes. diff --git a/profiling.md b/profiling.md new file mode 100644 index 00000000..68d1cc24 --- /dev/null +++ b/profiling.md @@ -0,0 +1,30 @@ +# Profiling Kubernetes + +This document explains how to plug in the profiler and how to profile Kubernetes services. + +## Profiling library + +Go comes with the built-in 'net/http/pprof' profiling library and profiling web service. The service works by binding the debug/pprof/ subtree on a running webserver to the profiler. Reading from subpages of debug/pprof returns pprof-formatted profiles of the running binary. The output can be processed offline by the tool of choice, or used as an input to the handy 'go tool pprof', which can graphically represent the result. + +## Adding profiling to the APIserver + +TL;DR: Add lines: +``` + m.mux.HandleFunc("/debug/pprof/", pprof.Index) + m.mux.HandleFunc("/debug/pprof/profile", pprof.Profile) + m.mux.HandleFunc("/debug/pprof/symbol", pprof.Symbol) +``` +to the init(c *Config) method in 'pkg/master/master.go' and import the 'net/http/pprof' package. + +In most use cases it's enough to do 'import _ net/http/pprof', which automatically registers handlers in the default http.Server. A slight inconvenience is that the APIserver uses the default server for intra-cluster communication, so plugging the profiler into it is not really useful. In 'pkg/master/server/server.go', more servers are created and started as separate goroutines. The one that is usually serving external traffic is secureServer. The handler for this traffic is defined in 'pkg/master/master.go' and stored in the Handler variable. It is created from an HTTP multiplexer, so the only thing that needs to be done is to add the profiler handler functions to this multiplexer. This is exactly what the lines after the TL;DR do.
+ +## Connecting to the profiler +Even with the profiler running, I found it not really straightforward to use 'go tool pprof' with it. The problem is that, at least for dev purposes, the certificates generated for the APIserver are not signed by anyone trusted, and because secureServer serves only secure traffic it isn't straightforward to connect to the service. The best workaround I found is to create an ssh tunnel from the open unsecured port on kubernetes_master to some external server, and use this server as a proxy. To save everyone looking for correct ssh flags, it is done by running: +``` + ssh kubernetes_master -L:localhost:8080 +``` +or an analogous one for your cloud provider. Afterwards you can, e.g., run +``` +go tool pprof http://localhost:/debug/pprof/profile +``` +to get a 30 sec. CPU profile. -- cgit v1.2.3 From 3cb657fac08b3c302cb244066c69e73469ccafd1 Mon Sep 17 00:00:00 2001 From: Zach Loafman Date: Wed, 18 Feb 2015 07:51:36 -0800 Subject: Document current ways to run a single e2e --- development.md | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/development.md b/development.md index 3d05f71f..302f4af8 100644 --- a/development.md +++ b/development.md @@ -182,8 +182,11 @@ go run hack/e2e.go --pushup # Run all tests go run hack/e2e.go --test -# Run tests matching a glob. -go run hack/e2e.go --tests=... +# Run tests matching the regex "Pods.*env" +go run hack/e2e.go -v -test --test_args="--ginkgo.focus=Pods.*env" + +# Alternately, if you have the e2e cluster up and no desire to see the event stream, you can run ginkgo-e2e.sh directly: +hack/ginkgo-e2e.sh --ginkgo.focus=Pods.*env ``` ### Combining flags -- cgit v1.2.3 From d20061eeff1b9cfa0774b9259143ca7f7c859791 Mon Sep 17 00:00:00 2001 From: Alex Robinson Date: Wed, 18 Feb 2015 22:04:56 +0000 Subject: Combine the two documentation sections on how to use godeps.
--- development.md | 32 +++++++------------------------- 1 file changed, 7 insertions(+), 25 deletions(-) diff --git a/development.md b/development.md index 302f4af8..67ef5916 100644 --- a/development.md +++ b/development.md @@ -49,7 +49,7 @@ export PATH=$PATH:$GOPATH/bin ``` ### Using godep -Here is a quick summary of `godep`. `godep` helps manage third party dependencies by copying known versions into Godeps/_workspace. Here is the recommended way to set up your system. There are other ways that may work, but this is the easiest one I know of. +Here's a quick walkthrough of one way to use godeps to add or update a Kubernetes dependency into Godeps/_workspace. For more details, please see the instructions in [godep's documentation](https://github.com/tools/godep). 1) Devote a directory to this endeavor: ``` @@ -69,7 +69,7 @@ export GOPATH=$KPATH # Option B is recommended if you're going to mess with the dependencies. ``` -3) Populate your new $GOPATH. +3) Populate your new GOPATH. ``` cd $KPATH/src/github.com/GoogleCloudPlatform/kubernetes godep restore @@ -77,20 +77,22 @@ godep restore 4) Next, you can either add a new dependency or update an existing one. ``` -# To add a new dependency, run: +# To add a new dependency, do: cd $KPATH/src/github.com/GoogleCloudPlatform/kubernetes go get path/to/dependency +# Change code in Kubernetes to use the dependency. godep save ./... -# To update an existing dependency, do +# To update an existing dependency, do: cd $KPATH/src/github.com/GoogleCloudPlatform/kubernetes go get -u path/to/dependency +# Change code in Kubernetes accordingly if necessary. godep update path/to/dependency ``` 5) Before sending your PR, it's a good idea to sanity check that your Godeps.json file is ok by re-restoring: ```godep restore``` -I (lavalamp) have sometimes found it expedient to manually fix the /Godeps/godeps.json file to minimize the changes. +It is sometimes expedient to manually fix the /Godeps/godeps.json file to minimize the changes. 
Please send dependency updates in separate commits within your PR, for easier reviewing. @@ -208,26 +210,6 @@ go run e2e.go -ctl='delete pod foobar' ``` ## Testing out flaky tests [Instructions here](flaky-tests.md) -## Add/Update dependencies - -Kubernetes uses [godep](https://github.com/tools/godep) to manage dependencies. To add or update a package, please follow the instructions on [godep's document](https://github.com/tools/godep). - -To add a new package ``foo/bar``: - -- Make sure the kubernetes' root directory is in $GOPATH/github.com/GoogleCloudPlatform/kubernetes -- Run ``godep restore`` to make sure you have all dependencies pulled. -- Download foo/bar into the first directory in GOPATH: ``go get foo/bar``. - -Change code in kubernetes to use ``foo/bar``. -- Run ``godep save ./...`` under kubernetes' root directory. - -To update a package ``foo/bar``: - -- Make sure the kubernetes' root directory is in $GOPATH/github.com/GoogleCloudPlatform/kubernetes -- Run ``godep restore`` to make sure you have all dependencies pulled. -- Update the package with ``go get -u foo/bar``. -- Change code in kubernetes accordingly if necessary. -- Run ``godep update foo/bar`` under kubernetes' root directory. - ## Keeping your development fork in sync One time after cloning your forked repo: -- cgit v1.2.3 From 4ee6432fbc1cadf071dc71634391b4ceacadc425 Mon Sep 17 00:00:00 2001 From: gmarek Date: Thu, 19 Feb 2015 14:50:54 +0100 Subject: Add info about contention profiling to profiling.md --- profiling.md | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/profiling.md b/profiling.md index 68d1cc24..142ef11e 100644 --- a/profiling.md +++ b/profiling.md @@ -23,8 +23,12 @@ Even when running profiler I found not really straightforward to use 'go tool pp ``` ssh kubernetes_master -L:localhost:8080 ``` -or an analogous one for your cloud provider. Afterwards you can, e.g., run +or an analogous one for your cloud provider. Afterwards you can, e.g.,
run ``` go tool pprof http://localhost:/debug/pprof/profile ``` to get 30 sec. CPU profile. + +## Contention profiling + +To enable contention profiling you need to add line ```rt.SetBlockProfileRate(1)``` to ones added before (```rt``` stands for ```runtime``` in ```master.go```). This enables 'debug/pprof/block' subpage, which can be used as an input to ```go tool pprof```. -- cgit v1.2.3 From 0e7dfbc995f979400dae9c740fe8fa8ea6203c49 Mon Sep 17 00:00:00 2001 From: Jeff Grafton Date: Thu, 19 Feb 2015 18:40:28 -0800 Subject: Update development doc on how to generate code coverage reports. --- development.md | 19 ++++++++++++++++++- 1 file changed, 18 insertions(+), 1 deletion(-) diff --git a/development.md b/development.md index 67ef5916..615b4d55 100644 --- a/development.md +++ b/development.md @@ -133,11 +133,28 @@ ok github.com/GoogleCloudPlatform/kubernetes/pkg/kubelet 0.317s ``` ## Coverage + +Currently, collecting coverage is only supported for the Go unit tests. + +To run all unit tests and generate an HTML coverage report, run the following: + +``` +cd kubernetes +KUBE_COVER=y hack/test-go.sh +``` + +At the end of the run, an HTML report will be generated with the path printed to stdout. + +To run tests and collect coverage in only one package, pass its relative path under the `kubernetes` directory as an argument, for example: ``` cd kubernetes -godep go tool cover -html=target/c.out +KUBE_COVER=y hack/test-go.sh pkg/kubectl ``` +Multiple arguments can be passed, in which case the coverage results will be combined for all tests run. + +Coverage results for the project can also be viewed on [Coveralls](https://coveralls.io/r/GoogleCloudPlatform/kubernetes), and are continuously updated as commits are merged. Additionally, all pull requests which spawn a Travis build will report unit test coverage results to Coveralls.
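To give a feel for what the coverage machinery produces, here is a sketch using a fabricated cover profile (the file path and counts are invented); real profiles come out of the `KUBE_COVER=y` run above, but the format - a `mode:` header followed by `location numStatements hitCount` lines - is the same, and a one-liner can summarize it:

```shell
# Fabricated cover profile for illustration; KUBE_COVER=y generates real ones.
cat > /tmp/c.out <<'EOF'
mode: set
pkg/kubectl/cmd.go:10.2,12.4 3 1
pkg/kubectl/cmd.go:14.2,16.4 2 0
EOF
# Percentage of statements with a nonzero hit count:
awk 'NR>1 { total += $2; if ($3 > 0) covered += $2 }
     END  { printf "%.0f%%\n", 100*covered/total }' /tmp/c.out
# -> 60%
```

The HTML report is the same data rendered per-line; the summary number is just this ratio computed over every recorded statement block.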
+ ## Integration tests You need an [etcd](https://github.com/coreos/etcd/releases/tag/v2.0.0) in your path, please make sure it is installed and in your ``$PATH``. -- cgit v1.2.3 From a65c29ed5cd9092522b0b6e4791984e2dabe5bce Mon Sep 17 00:00:00 2001 From: gmarek Date: Fri, 20 Feb 2015 09:39:13 +0100 Subject: apply comments --- profiling.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/profiling.md b/profiling.md index 142ef11e..1e14b5c4 100644 --- a/profiling.md +++ b/profiling.md @@ -31,4 +31,4 @@ to get 30 sec. CPU profile. ## Contention profiling -To enable contention profiling you need to add line ```rt.SetBlockProfileRate(1)``` to ones added before (```rt``` stands for ```runtime``` in ```master.go```). This enables 'debug/pprof/block' subpage, which can be used as an input to ```go tool pprof```. +To enable contention profiling you need to add line ```rt.SetBlockProfileRate(1)``` in addition to ```m.mux.HandleFunc(...)``` added before (```rt``` stands for ```runtime``` in ```master.go```). This enables 'debug/pprof/block' subpage, which can be used as an input to ```go tool pprof```. -- cgit v1.2.3 From 021e5a3ec46c27cde7bba6fbae539cb4ba048a21 Mon Sep 17 00:00:00 2001 From: Eric Tune Date: Tue, 3 Mar 2015 14:29:39 -0800 Subject: Added a doc with coding advice.
--- coding-conventions.md | 7 +++++++ 1 file changed, 7 insertions(+) create mode 100644 coding-conventions.md diff --git a/coding-conventions.md b/coding-conventions.md new file mode 100644 index 00000000..3d493803 --- /dev/null +++ b/coding-conventions.md @@ -0,0 +1,7 @@ +Coding style advice for contributors + - Bash + - https://google-styleguide.googlecode.com/svn/trunk/shell.xml + - Go + - https://github.com/golang/go/wiki/CodeReviewComments + - https://gist.github.com/lavalamp/4bd23295a9f32706a48f + -- cgit v1.2.3 From 9f5ea46527fa41d04ef4226931730e4c61b6bf22 Mon Sep 17 00:00:00 2001 From: Quinton Hoole Date: Wed, 4 Mar 2015 17:03:55 -0800 Subject: Add documentation about the Kubernetes Github Flow. Added an animation (and a link to it) detailing the standard Kubernetes Github Flow. --- development.md | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/development.md b/development.md index 615b4d55..a20834e9 100644 --- a/development.md +++ b/development.md @@ -8,7 +8,7 @@ Official releases are built in Docker containers. Details are [here](../../buil Kubernetes is written in [Go](http://golang.org) programming language. If you haven't set up Go development environment, please follow [this instruction](http://golang.org/doc/code.html) to install go tool and set up GOPATH. Ensure your version of Go is at least 1.3. -## Put kubernetes into GOPATH +## Clone kubernetes into GOPATH We highly recommend to put kubernetes' code into your GOPATH. For example, the following commands will download kubernetes' code under the current user's GOPATH (Assuming there's only one directory in GOPATH.): @@ -22,7 +22,9 @@ $ git clone https://github.com/GoogleCloudPlatform/kubernetes.git The commands above will not work if there are more than one directory in ``$GOPATH``. -(Obviously, clone your own fork of Kubernetes if you plan to do development.) 
+If you plan to do development, read about the +[Kubernetes Github Flow](https://docs.google.com/a/google.com/presentation/d/1WDGN_ggq1Ae3eeQmbSCMyUG1UhhRH6UZTy0pePq09Xo/pub?start=false&loop=false&delayms=3000), +and then clone your own fork of Kubernetes as described there. ## godep and dependency management -- cgit v1.2.3 From 46a8a0873c60de634831e14cc294b7e390e09326 Mon Sep 17 00:00:00 2001 From: Quinton Hoole Date: Thu, 5 Mar 2015 11:23:03 -0800 Subject: Make slides visible to the public, fix a typo. Moved to account quintonh@gmail.com to make it visible to the public without any login. Correct "push request" to "pull request". --- development.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/development.md b/development.md index a20834e9..7eccbcc8 100644 --- a/development.md +++ b/development.md @@ -23,7 +23,7 @@ $ git clone https://github.com/GoogleCloudPlatform/kubernetes.git The commands above will not work if there are more than one directory in ``$GOPATH``. If you plan to do development, read about the -[Kubernetes Github Flow](https://docs.google.com/a/google.com/presentation/d/1WDGN_ggq1Ae3eeQmbSCMyUG1UhhRH6UZTy0pePq09Xo/pub?start=false&loop=false&delayms=3000), +[Kubernetes Github Flow](https://docs.google.com/presentation/d/1HVxKSnvlc2WJJq8b9KCYtact5ZRrzDzkWgKEfm0QO_o/pub?start=false&loop=false&delayms=3000), and then clone your own fork of Kubernetes as described there. 
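The fork-based flow usually boils down to cloning your fork into the GOPATH location above and adding the main repo as an `upstream` remote. A sketch of the resulting layout (offline here - `git init` stands in for the network clone, and the username is hypothetical):

```shell
# Sketch of the fork layout; in real use you would `git clone` your fork instead.
user=alice                      # hypothetical GitHub username - use your own
: "${GOPATH:=$HOME/go}"
mkdir -p "$GOPATH/src/github.com/GoogleCloudPlatform/kubernetes"
cd "$GOPATH/src/github.com/GoogleCloudPlatform/kubernetes"
git init -q .
git remote add origin "https://github.com/$user/kubernetes.git"
git remote add upstream https://github.com/GoogleCloudPlatform/kubernetes.git
git remote -v
```

With that layout, fetching `upstream` and rebasing your branch on `upstream/master` keeps your fork in sync while PRs are in flight.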
## godep and dependency management -- cgit v1.2.3 From 2b45ccdae8899976b76378c0aa3216fc258f2c14 Mon Sep 17 00:00:00 2001 From: Tim Hockin Date: Mon, 9 Mar 2015 21:38:51 -0700 Subject: Add a doc on making PRs easier to review --- faster_reviews.md | 177 ++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 177 insertions(+) create mode 100644 faster_reviews.md diff --git a/faster_reviews.md b/faster_reviews.md new file mode 100644 index 00000000..142ac946 --- /dev/null +++ b/faster_reviews.md @@ -0,0 +1,177 @@ +# How to get faster PR reviews + +Most of what is written here is not at all specific to Kubernetes, but it bears +being written down in the hope that it will occasionally remind people of "best +practices" around code reviews. + +You've just had a brilliant idea on how to make Kubernetes better. Let's call +that idea "FeatureX". Feature X is not even that complicated. You have a +pretty good idea of how to implement it. You jump in and implement it, fixing a +bunch of stuff along the way. You send your PR - this is awesome! And it sits. +And sits. A week goes by and nobody reviews it. Finally someone offers a few +comments, which you fix up and wait for more review. And you wait. Another +week or two goes by. This is horrible. + +What went wrong? One particular problem that comes up frequently is this - your +PR is too big to review. You've touched 39 files and have 8657 insertions. +When your would-be reviewers pull up the diffs they run away - this PR is going +to take 4 hours to review and they don't have 4 hours right now. They'll get to it +later, just as soon as they have more free time (ha!). + +Let's talk about how to avoid this. + +## 1. Don't build a cathedral in one PR + +Are you sure FeatureX is something the Kubernetes team wants or will accept, or +that it is implemented to fit with other changes in flight? Are you willing to +bet a few days or weeks of work on it? 
If you have any doubt at all about the +usefulness of your feature or the design - make a proposal doc or a sketch PR +or both. Write or code up just enough to express the idea and the design and +why you made those choices, then get feedback on this. Now, when we ask you to +change a bunch of facets of the design, you don't have to re-write it all. + +## 2. Smaller diffs are exponentially better + +Small PRs get reviewed faster and are more likely to be correct than big ones. +Let's face it - attention wanes over time. If your PR takes 60 minutes to +review, I almost guarantee that the reviewer's eye for details is not as keen in +the last 30 minutes as it was in the first. This leads to multiple rounds of +review when one might have sufficed. In some cases the review is delayed in its +entirety by the need for a large contiguous block of time to sit and read your +code. + +Whenever possible, break up your PRs into multiple commits. Making a series of +discrete commits is a powerful way to express the evolution of an idea or the +different ideas that make up a single feature. There's a balance to be struck, +obviously. If your commits are too small they become more cumbersome to deal +with. Strive to group logically distinct ideas into commits. + +For example, if you found that FeatureX needed some "prefactoring" to fit in, +make a commit that JUST does that prefactoring. Then make a new commit for +FeatureX. Don't lump unrelated things together just because you didn't think +about prefactoring. If you need to, fork a new branch, do the prefactoring +there and send a PR for that. If you can explain why you are doing seemingly +no-op work ("it makes the FeatureX change easier, I promise") we'll probably be +OK with it. + +Obviously, a PR with 25 commits is still very cumbersome to review, so use +common sense. + +## 3. 
Multiple small PRs are often better than multiple commits + +If you can extract whole ideas from your PR and send those as PRs of their own, +you can avoid the painful problem of continually rebasing. Kubernetes is a +fast-moving codebase - lock in your changes ASAP, and make merges be someone +else's problem. + +Obviously, we want every PR to be useful on its own, so you'll have to use +common sense in deciding what can be a PR vs what should be a commit in a larger +PR. Rule of thumb - if this commit or set of commits is directly related to +FeatureX and nothing else, it should probably be part of the FeatureX PR. If +you can plausibly imagine someone finding value in this commit outside of +FeatureX, try it as a PR. + +Don't worry about flooding us with PRs. We'd rather have 100 small, obvious PRs +than 10 unreviewable monoliths. + +## 4. Don't rename, reformat, comment, etc in the same PR + +Often, as you are implementing FeatureX, you find things that are just wrong. +Bad comments, poorly named functions, bad structure, weak type-safety. You +should absolutely fix those things (or at least file issues, please) - but not +in this PR. See the above points - break unrelated changes out into different +PRs or commits. Otherwise your diff will have WAY too many changes, and your +reviewer won't see the forest because of all the trees. + +## 5. Comments matter + +Read up on GoDoc - follow those general rules. If you're writing code and you +think there is any possible chance that someone might not understand why you did +something (or that you won't remember what you yourself did), comment it. If +you think there's something pretty obvious that we could follow up on, add a +TODO. Many code-review comments are about this exact issue. + +## 6. Tests are almost always required + +Nothing is more frustrating than doing a review, only to find that the tests are +inadequate or even entirely absent. Very few PRs can touch code and NOT touch +tests.
If you don't know how to test FeatureX - ask! We'll be happy to help +you design things for easy testing or to suggest appropriate test cases. + +## 7. Look for opportunities to generify + +If you find yourself writing something that touches a lot of modules, think hard +about the dependencies you are introducing between packages. Can some of what +you're doing be made more generic and moved up and out of the FeatureX package? +Do you need to use a function or type from an otherwise unrelated package? If +so, promote! We have places specifically for hosting more generic code. + +Likewise if FeatureX is similar in form to FeatureW which was checked in last +month and it happens to exactly duplicate some tricky stuff from FeatureW, +consider prefactoring core logic out and using it in both FeatureW and FeatureX. +But do that in a different commit or PR, please. + +## 8. Fix feedback in a new commit + +Your reviewer has finally sent you some feedback on FeatureX. You make a bunch +of changes and ... what? You could patch those into your commits with git +"squash" or "fixup" logic. But that makes your changes hard to verify. Unless +your whole PR is pretty trivial, you should instead put your fixups into a new +commit and re-push. Your reviewer can then look at that commit on its own - so +much faster to review than starting over. + +We might still ask you to squash commits at the very end, for the sake of a clean +history. + +## 9. KISS, YAGNI, MVP, etc + +Sometimes we need to remind each other of core tenets of software design - Keep +It Simple, You Aren't Gonna Need It, Minimum Viable Product, and so on. Adding +features "because we might need it later" is antithetical to software that +ships. Add the things you need NOW and (ideally) leave room for things you +might need later - but don't implement them now. + +## 10. Push back + +We understand that it is hard to imagine, but sometimes we make mistakes. It's +OK to push back on changes requested during a review.
If you have a good reason +for doing something a certain way, you are absolutely allowed to debate the +merits of a requested change. You might be overruled, but you might also +prevail. We're mostly pretty reasonable people. Mostly. + +## 11. I'm still getting stalled - help?! + +So, you've done all that and you still aren't getting any PR love? Here are some +things you can do that might help kick a stalled process along: + + * Make sure that your PR has an assigned reviewer (assignee in GitHub). If + this is not the case, reply to the PR comment stream asking for one to be + assigned. + + * Ping the assignee (@username) on the PR comment stream asking for an + estimate of when they can get to it. + + * Ping the assignee by email (many of us have email addresses that are well + published or are the same as our GitHub handle @google.com or @redhat.com). + +If you think you have fixed all the issues in a round of review, and you haven't +heard back, you should ping the reviewer (assignee) on the comment stream with a +"please take another look" (PTAL) or similar comment indicating you are done and +you think it is ready for re-review. In fact, this is probably a good habit for +all PRs. + +One phenomenon of open-source projects (where anyone can comment on any issue) +is the dog-pile - your PR gets so many comments from so many people it becomes +hard to follow. In this situation you can ask the primary reviewer +(assignee) whether they want you to fork a new PR to clear out all the comments. +Remember: you don't HAVE to fix every issue raised by every person who feels +like commenting, but you should at least answer reasonable comments with an +explanation. + +## Final: Use common sense + +Obviously, none of these points are hard rules. There is no document that can +take the place of common sense and good taste. Use your best judgement, but put +a bit of thought into how your work can be made easier to review.
If you do +these things your PRs will flow much more easily. + -- cgit v1.2.3 From 472bf52e671efb9ef69fc5de2776bd2a7ea1cb8a Mon Sep 17 00:00:00 2001 From: Phaneendra Chiruvella Date: Sun, 15 Mar 2015 22:20:26 +0530 Subject: update link to common golang style mistakes --- collab.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/collab.md b/collab.md index 633b7682..f9f12e25 100644 --- a/collab.md +++ b/collab.md @@ -20,7 +20,7 @@ If a PR has gone 2 work days without an owner emerging, please poke the PR threa Except for rare cases, such as trivial changes (e.g. typos, comments) or emergencies (e.g. broken builds), maintainers should not merge their own changes. -Expect reviewers to request that you avoid [common go style mistakes](https://code.google.com/p/go-wiki/wiki/CodeReviewComments) in your PRs. +Expect reviewers to request that you avoid [common go style mistakes](https://github.com/golang/go/wiki/CodeReviewComments) in your PRs. ## Assigned reviews -- cgit v1.2.3 From d2499d4bdc149cfc2744cf3d9a54ee7be8c4841e Mon Sep 17 00:00:00 2001 From: Tim Hockin Date: Fri, 13 Mar 2015 13:06:20 -0700 Subject: Add a doc explaining how to make API changes Covers compatibility, internal API, versioned APIs, tests, fuzzer, semantic deep equal, etc. I wrote this as I worked on the next big multi-port service change. --- api_changes.md | 289 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 289 insertions(+) create mode 100644 api_changes.md diff --git a/api_changes.md b/api_changes.md new file mode 100644 index 00000000..c1005278 --- /dev/null +++ b/api_changes.md @@ -0,0 +1,289 @@ +# So you want to change the API? + +The Kubernetes API has two major components - the internal structures and +the versioned APIs. The versioned APIs are intended to be stable, while the +internal structures are implemented to best reflect the needs of the Kubernetes +code itself. 
+ +What this means for API changes is that you have to be somewhat thoughtful in +how you approach changes, and that you have to touch a number of pieces to make +a complete change. This document aims to guide you through the process, though +not all API changes will need all of these steps. + +## Operational overview + +It's important to have a high level understanding of the API system used in +Kubernetes in order to navigate the rest of this document. + +As mentioned above, the internal representation of an API object is decoupled +from any one API version. This provides a lot of freedom to evolve the code, +but it requires robust infrastructure to convert between representations. There +are multiple steps in processing an API operation - even something as simple as +a GET involves a great deal of machinery. + +The conversion process is logically a "star" with the internal form at the +center. Every versioned API can be converted to the internal form (and +vice-versa), but versioned APIs do not convert to other versioned APIs directly. +This sounds like a heavy process, but in reality we don't intend to keep more +than a small number of versions alive at once. While all of the Kubernetes code +operates on the internal structures, they are always converted to a versioned +form before being written to storage (disk or etcd) or being sent over a wire. +Clients should consume and operate on the versioned APIs exclusively. + +To demonstrate the general process, let's walk through a (hypothetical) example: + + 1. A user POSTs a `Pod` object to `/api/v7beta1/...` + 2. The JSON is unmarshalled into a `v7beta1.Pod` structure + 3. Default values are applied to the `v7beta1.Pod` + 4. The `v7beta1.Pod` is converted to an `api.Pod` structure + 5. The `api.Pod` is validated, and any errors are returned to the user + 6. The `api.Pod` is converted to a `v6.Pod` (because v6 is the latest stable + version) + 7. 
The `v6.Pod` is marshalled into JSON and written to etcd + +Now that we have the `Pod` object stored, a user can GET that object in any +supported api version. For example: + + 1. A user GETs the `Pod` from `/api/v5/...` + 2. The JSON is read from etcd and unmarshalled into a `v6.Pod` structure + 3. Default values are applied to the `v6.Pod` + 4. The `v6.Pod` is converted to an `api.Pod` structure + 5. The `api.Pod` is converted to a `v5.Pod` structure + 6. The `v5.Pod` is marshalled into JSON and sent to the user + +The implication of this process is that API changes must be done carefully and +backward-compatibly. + +## On compatibility + +Before talking about how to make API changes, it is worthwhile to clarify what +we mean by API compatibility. An API change is considered backward-compatible +if it: + * adds new functionality that is not required for correct behavior + * does not change existing semantics + * does not change existing defaults + +Put another way: + +1. Any API call (e.g. a structure POSTed to a REST endpoint) that worked before + your change must work the same after your change. +2. Any API call that uses your change must not cause problems (e.g. crash or + degrade behavior) when issued against servers that do not include your change. +3. It must be possible to round-trip your change (convert to different API + versions and back) with no loss of information. + +If your change does not meet these criteria, it is not considered strictly +compatible. There are times when this might be OK, but mostly we want changes +that meet this definition. If you think you need to break compatibility, you +should talk to the Kubernetes team first. + +Let's consider some examples. In a hypothetical API (assume we're at version +v6), the `Frobber` struct looks something like this: + +```go +// API v6. +type Frobber struct { + Height int `json:"height"` + Param string `json:"param"` +} +``` + +You want to add a new `Width` field. 
It is generally safe to add new fields +without changing the API version, so you can simply change it to: + +```go +// Still API v6. +type Frobber struct { + Height int `json:"height"` + Width int `json:"width"` + Param string `json:"param"` +} +``` + +The onus is on you to define a sane default value for `Width` such that rule #1 +above is true - API calls and stored objects that used to work must continue to +work. + +For your next change you want to allow multiple `Param` values. You can not +simply change `Param string` to `Params []string` (without creating a whole new +API version) - that fails rules #1 and #2. You can instead do something like: + +```go +// Still API v6, but kind of clumsy. +type Frobber struct { + Height int `json:"height"` + Width int `json:"width"` + Param string `json:"param"` // the first param + ExtraParams []string `json:"params"` // additional params +} +``` + +Now you can satisfy the rules: API calls that provide the old style `Param` +will still work, while servers that don't understand `ExtraParams` can ignore +it. This is somewhat unsatisfying as an API, but it is strictly compatible. + +Part of the reason for versioning APIs and for using internal structs that are +distinct from any one version is to handle growth like this. The internal +representation can be implemented as: + +```go +// Internal, soon to be v7beta1. +type Frobber struct { + Height int + Width int + Params []string +} +``` + +The code that converts to/from versioned APIs can decode this into the somewhat +uglier (but compatible!) structures. Eventually, a new API version, let's call +it v7beta1, will be forked and it can use the clean internal structure. + +We've seen how to satisfy rules #1 and #2. Rule #3 means that you can not +extend one versioned API without also extending the others. 
For example, an +API call might POST an object in API v7beta1 format, which uses the cleaner +`Params` field, but the API server might store that object in trusty old v6 +form (since v7beta1 is "beta"). When the user reads the object back in the +v7beta1 API it would be unacceptable to have lost all but `Params[0]`. This +means that, even though it is ugly, a compatible change must be made to the v6 +API. + +As another interesting example, enumerated values provide a unique challenge. +Adding a new value to an enumerated set is *not* a compatible change. Clients +which assume they know how to handle all possible values of a given field will +not be able to handle the new values. However, removing a value from an +enumerated set *can* be a compatible change, if handled properly (treat the +removed value as deprecated but allowed). + +## Changing versioned APIs + +For most changes, you will probably find it easiest to change the versioned +APIs first. This forces you to think about how to make your change in a +compatible way. Rather than doing each step in every version, it's usually +easier to do each versioned API one at a time, or to do all of one version +before starting "all the rest". + +### Edit types.go + +The struct definitions for each API are in `pkg/api/<version>/types.go`. Edit +those files to reflect the change you want to make. Note that all non-inline +fields in versioned APIs must have description tags - these are used to generate +documentation. + +### Edit defaults.go + +If your change includes new fields for which you will need default values, you +need to add cases to `pkg/api/<version>/defaults.go`. Of course, since you +have added code, you have to add a test: `pkg/api/<version>/defaults_test.go`. + +Don't forget to run the tests! + +### Edit conversion.go + +Given that you have not yet changed the internal structs, this might feel +premature, and that's because it is. You don't yet have anything to convert to +or from. We will revisit this in the "internal" section.
If you're doing this +all in a different order (i.e. you started with the internal structs), then you +should jump to that topic below. In the very rare case that you are making an +incompatible change you might or might not want to do this now, but you will +have to do more later. The files you want are +`pkg/api/<version>/conversion.go` and `pkg/api/<version>/conversion_test.go`. + +## Changing the internal structures + +Now it is time to change the internal structs so your versioned changes can be +used. + +### Edit types.go + +Similar to the versioned APIs, the definitions for the internal structs are in +`pkg/api/types.go`. Edit those files to reflect the change you want to make. +Keep in mind that the internal structs must be able to express *all* of the +versioned APIs. + +### Edit validation.go + +Most changes made to the internal structs need some form of input validation. +Validation is currently done on internal objects in +`pkg/api/validation/validation.go`. This validation is one of the first +opportunities we have to make a great user experience - good error messages and +thorough validation help ensure that users are giving you what you expect and, +when they don't, that they know why and how to fix it. Think hard about the +contents of `string` fields, the bounds of `int` fields and the +requiredness/optionalness of fields. + +Of course, code needs tests - `pkg/api/validation/validation_test.go`. + +### Edit version conversions + +At this point you have both the versioned API changes and the internal +structure changes done. If there are any notable differences - field names, +types, structural change in particular - you must add some logic to convert +versioned APIs to and from the internal representation. If you see errors from +the `serialization_test`, it may indicate the need for explicit conversions. + +The conversion code resides with each versioned API - +`pkg/api/<version>/conversion.go`.
Unsurprisingly, this also requires you to +add tests to `pkg/api/<version>/conversion_test.go`. + +### Update the fuzzer + +Part of our testing regimen for APIs is to "fuzz" (fill with random values) API +objects and then convert them to and from the different API versions. This is +a great way of exposing places where you lost information or made bad +assumptions. If you have added any fields which need very careful formatting +(the test does not run validation) or if you have made assumptions such as +"this slice will always have at least 1 element", you may get an error or even +a panic from the `serialization_test`. If so, look at the diff it produces (or +the backtrace in case of a panic) and figure out what you forgot. Encode that +into the fuzzer's custom fuzz functions. + +The fuzzer can be found in `pkg/api/testing/fuzzer.go`. + +### Update the semantic comparisons + +VERY VERY rarely is this needed, but when it hits, it hurts. In some rare +cases we end up with objects (e.g. resource quantities) that have morally +equivalent values with different bitwise representations (e.g. value 10 with a +base-2 formatter is the same as value 0 with a base-10 formatter). The only way +Go knows how to do deep-equality is through field-by-field bitwise comparisons. +This is a problem for us. + +The first thing you should do is try not to do that. If you really can't avoid +this, I'd like to introduce you to our semantic DeepEqual routine. It supports +custom overrides for specific types - you can find that in `pkg/api/helpers.go`. + +There's one other time when you might have to touch this: unexported fields. +You see, while Go's `reflect` package is allowed to touch unexported fields, us +mere mortals are not - this includes semantic DeepEqual. Fortunately, most of +our API objects are "dumb structs" all the way down - all fields are exported +(start with a capital letter) and there are no unexported fields.
But sometimes +you want to include an object in our API that does have unexported fields +somewhere in it (for example, `time.Time` has unexported fields). If this hits +you, you may have to touch the semantic DeepEqual customization functions. + +## Implement your change + +Now you have the API all changed - go implement whatever it is that you're +doing! + +## Write end-to-end tests + +This is, sadly, still sort of painful. Talk to us and we'll try to help you +figure out the best way to make sure your cool feature keeps working forever. + +## Examples and docs + +At last, your change is done, all unit tests pass, e2e passes, you're done, +right? Actually, no. You just changed the API. If you are touching an +existing facet of the API, you have to try *really* hard to make sure that +*all* the examples and docs are updated. There's no easy way to do this, due +in part to JSON and YAML silently dropping unknown fields. You're clever - +you'll figure it out. Put `grep` or `ack` to good use. + +If you added functionality, you should consider documenting it and/or writing +an example to illustrate your change. + +## Adding new REST objects + +TODO(smarterclayton): write this. -- cgit v1.2.3 From 9786c3c7634b5b6a54fcf9258660ae36c694d8d2 Mon Sep 17 00:00:00 2001 From: Yu-Ju Hong Date: Tue, 17 Mar 2015 12:30:47 -0700 Subject: Add -v to `go run hack/e2e.go -ctl` commands --- development.md | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/development.md b/development.md index 7eccbcc8..ef7c7ce8 100644 --- a/development.md +++ b/development.md @@ -215,15 +215,16 @@ hack/ginkgo-e2e.sh --ginkgo.focus=Pods.*env # Flags can be combined, and their actions will take place in this order: # -build, -push|-up|-pushup, -test|-tests=..., -down # e.g.: -go run e2e.go -build -pushup -test -down +go run hack/e2e.go -build -pushup -test -down + # -v (verbose) can be added if you want streaming output instead of only # seeing the output of failed commands.
# -ctl can be used to quickly call kubectl against your e2e cluster. Useful for -# cleaning up after a failed test or viewing logs. -go run e2e.go -ctl='get events' -go run e2e.go -ctl='delete pod foobar' +# cleaning up after a failed test or viewing logs. Use -v to avoid suppressing +# kubectl output. +go run hack/e2e.go -v -ctl='get events' +go run hack/e2e.go -v -ctl='delete pod foobar' ``` ## Testing out flaky tests -- cgit v1.2.3 From 3fe373e83ae17bf4a84c6632b91be7ad61f7b97b Mon Sep 17 00:00:00 2001 From: Rohit Jnagal Date: Fri, 13 Mar 2015 00:30:32 +0000 Subject: Update vagrant documentation to use get.k8s.io for setup. --- developer-guides/vagrant.md | 321 ++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 321 insertions(+) create mode 100644 developer-guides/vagrant.md diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md new file mode 100644 index 00000000..47236381 --- /dev/null +++ b/developer-guides/vagrant.md @@ -0,0 +1,321 @@ +## Getting started with Vagrant + +Running kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/develop on your local machine (Linux, Mac OS X). + +### Prerequisites +1. Install latest version >= 1.6.2 of vagrant from http://www.vagrantup.com/downloads.html +2. Install latest version of VirtualBox from https://www.virtualbox.org/wiki/Downloads +3. Get or build a [binary release](../../getting-started-guides/binary_release.md) + +### Setup + +By default, the Vagrant setup will create a single kubernetes-master and 1 kubernetes-minion. Each VM will take 1 GB, so make sure you have at least 2GB to 4GB of free memory (plus appropriate free disk space). To start your local cluster, open a shell and run: + +``` +cd kubernetes + +export KUBERNETES_PROVIDER=vagrant +cluster/kube-up.sh +``` + +The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster management scripts which variant to use.
If you forget to set this, the assumption is you are running on Google Compute Engine. + +Vagrant will provision each machine in the cluster with all the necessary components to run Kubernetes. The initial setup can take a few minutes to complete on each machine. + +By default, each VM in the cluster is running Fedora, and all of the Kubernetes services are installed into systemd. + +To access the master or any minion: + +``` +vagrant ssh master +vagrant ssh minion-1 +``` + +If you are running more than one minion, you can access the others by: + +``` +vagrant ssh minion-2 +vagrant ssh minion-3 +``` + +To view the service status and/or logs on the kubernetes-master: +``` +vagrant ssh master +[vagrant@kubernetes-master ~] $ sudo systemctl status kube-apiserver +[vagrant@kubernetes-master ~] $ sudo journalctl -r -u kube-apiserver + +[vagrant@kubernetes-master ~] $ sudo systemctl status kube-controller-manager +[vagrant@kubernetes-master ~] $ sudo journalctl -r -u kube-controller-manager + +[vagrant@kubernetes-master ~] $ sudo systemctl status etcd +[vagrant@kubernetes-master ~] $ sudo systemctl status nginx +``` + +To view the services on any of the kubernetes-minion(s): +``` +vagrant ssh minion-1 +[vagrant@kubernetes-minion-1] $ sudo systemctl status docker +[vagrant@kubernetes-minion-1] $ sudo journalctl -r -u docker +[vagrant@kubernetes-minion-1] $ sudo systemctl status kubelet +[vagrant@kubernetes-minion-1] $ sudo journalctl -r -u kubelet +``` + +### Interacting with your Kubernetes cluster with Vagrant. + +With your Kubernetes cluster up, you can manage the nodes in your cluster with the regular Vagrant commands. 
+
+To push updates to new Kubernetes code after making source changes:
+```
+cluster/kube-push.sh
+```
+
+To stop and then restart the cluster:
+```
+vagrant halt
+cluster/kube-up.sh
+```
+
+To destroy the cluster:
+```
+vagrant destroy
+```
+
+Once your Vagrant machines are up and provisioned, the first thing to do is to check that you can use the `kubectl.sh` script.
+
+You may need to build the binaries first; you can do this with `make`.
+
+```
+$ ./cluster/kubectl.sh get minions
+
+NAME LABELS
+10.245.1.4
+10.245.1.5
+10.245.1.3
+
+```
+
+### Interacting with your Kubernetes cluster with the `kube-*` scripts.
+
+As an alternative to the Vagrant commands, you can also use the `cluster/kube-*.sh` scripts to interact with the Vagrant-based provider just like any other hosting platform for Kubernetes.
+
+All of these commands assume you have set `KUBERNETES_PROVIDER` appropriately:
+
+```
+export KUBERNETES_PROVIDER=vagrant
+```
+
+Bring up a Vagrant cluster:
+
+```
+cluster/kube-up.sh
+```
+
+Destroy the Vagrant cluster:
+
+```
+cluster/kube-down.sh
+```
+
+Update the Vagrant cluster after you make changes (only works when building your own releases locally):
+
+```
+cluster/kube-push.sh
+```
+
+Interact with the cluster:
+
+```
+cluster/kubectl.sh
+```
+
+### Authenticating with your master
+
+When using the vagrant provider in Kubernetes, the `cluster/kubectl.sh` script will cache your credentials in a `~/.kubernetes_vagrant_auth` file so you will not be prompted for them in the future.
+
+```
+cat ~/.kubernetes_vagrant_auth
+{ "User": "vagrant",
+  "Password": "vagrant",
+  "CAFile": "/home/k8s_user/.kubernetes.vagrant.ca.crt",
+  "CertFile": "/home/k8s_user/.kubecfg.vagrant.crt",
+  "KeyFile": "/home/k8s_user/.kubecfg.vagrant.key"
+}
+```
+
+You should now be set to use the `cluster/kubectl.sh` script. For example, try to list the minions that you have started with:
+
+```
+cluster/kubectl.sh get minions
+```
+
+### Running containers
+
+Your cluster is running; you can list the minions in it:
+
+```
+$ cluster/kubectl.sh get minions
+
+NAME LABELS
+10.245.2.4
+10.245.2.3
+10.245.2.2
+
+```
+
+Now start running some containers!
+
+You can now use any of the cluster/kube-*.sh commands to interact with your VMs.
+Before starting a container, there will be no pods, services, or replication controllers.
+
+```
+$ cluster/kubectl.sh get pods
+NAME IMAGE(S) HOST LABELS STATUS
+
+$ cluster/kubectl.sh get services
+NAME LABELS SELECTOR IP PORT
+
+$ cluster/kubectl.sh get replicationControllers
+NAME IMAGE(S SELECTOR REPLICAS
+```
+
+Start a container running nginx with a replication controller and three replicas
+
+```
+$ cluster/kubectl.sh run-container my-nginx --image=dockerfile/nginx --replicas=3 --port=80
+```
+
+When listing the pods, you will see that three containers have been started and are in Waiting state:
+
+```
+$ cluster/kubectl.sh get pods
+NAME IMAGE(S) HOST LABELS STATUS
+781191ff-3ffe-11e4-9036-0800279696e1 dockerfile/nginx 10.245.2.4/10.245.2.4 name=myNginx Waiting
+7813c8bd-3ffe-11e4-9036-0800279696e1 dockerfile/nginx 10.245.2.2/10.245.2.2 name=myNginx Waiting
+78140853-3ffe-11e4-9036-0800279696e1 dockerfile/nginx 10.245.2.3/10.245.2.3 name=myNginx Waiting
+```
+
+You need to wait for the provisioning to complete, you can monitor the minions by doing:
+
+```
+$ sudo salt '*minion-1' cmd.run 'docker images'
+kubernetes-minion-1:
+ REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
+ 96864a7d2df3 26 hours ago 204.4 MB
+ google/cadvisor latest e0575e677c50 13 days ago 12.64 MB
+ kubernetes/pause latest 6c4579af347b 8 weeks ago 239.8 kB
+```
+
+Once the docker image for nginx has been downloaded, the container will start and you can list it:
+
+```
+$ sudo salt '*minion-1' cmd.run 'docker ps'
+kubernetes-minion-1:
+ CONTAINER ID IMAGE COMMAND
CREATED STATUS PORTS NAMES + dbe79bf6e25b dockerfile/nginx:latest "nginx" 21 seconds ago Up 19 seconds k8s--mynginx.8c5b8a3a--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--fcfa837f + fa0e29c94501 kubernetes/pause:latest "/pause" 8 minutes ago Up 8 minutes 0.0.0.0:8080->80/tcp k8s--net.a90e7ce4--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--baf5b21b + aa2ee3ed844a google/cadvisor:latest "/usr/bin/cadvisor - 38 minutes ago Up 38 minutes k8s--cadvisor.9e90d182--cadvisor_-_agent.file--4626b3a2 + 65a3a926f357 kubernetes/pause:latest "/pause" 39 minutes ago Up 39 minutes 0.0.0.0:4194->8080/tcp k8s--net.c5ba7f0e--cadvisor_-_agent.file--342fd561 +``` + +Going back to listing the pods, services and replicationControllers, you now have: + +``` +$ cluster/kubectl.sh get pods +NAME IMAGE(S) HOST LABELS STATUS +781191ff-3ffe-11e4-9036-0800279696e1 dockerfile/nginx 10.245.2.4/10.245.2.4 name=myNginx Running +7813c8bd-3ffe-11e4-9036-0800279696e1 dockerfile/nginx 10.245.2.2/10.245.2.2 name=myNginx Running +78140853-3ffe-11e4-9036-0800279696e1 dockerfile/nginx 10.245.2.3/10.245.2.3 name=myNginx Running + +$ cluster/kubectl.sh get services +NAME LABELS SELECTOR IP PORT + +$ cluster/kubectl.sh get replicationControllers +NAME IMAGE(S SELECTOR REPLICAS +myNginx dockerfile/nginx name=my-nginx 3 +``` + +We did not start any services, hence there are none listed. But we see three replicas displayed properly. +Check the [guestbook](../../examples/guestbook/README.md) application to learn how to create a service. 
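Until you get there, a rough sketch of what a service definition for these pods might look like can be useful. The file below is purely illustrative: the `id`/`port`/`selector` field names follow the v1beta1-era API and may not match your Kubernetes version, so treat the guestbook example as the authoritative reference.

```shell
# Hypothetical sketch: write a v1beta1-style service definition that selects
# the name=myNginx pods created above. Field names are illustrative only.
cat > /tmp/my-nginx-service.json <<'EOF'
{
  "id": "my-nginx",
  "kind": "Service",
  "apiVersion": "v1beta1",
  "port": 8000,
  "containerPort": 80,
  "selector": { "name": "myNginx" }
}
EOF
echo "wrote /tmp/my-nginx-service.json"

# With the cluster up, you would then create the service with:
# cluster/kubectl.sh create -f /tmp/my-nginx-service.json
```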
+
+You can already play with resizing the replicas with:
+
+```
+$ cluster/kubectl.sh resize rc my-nginx --replicas=2
+$ cluster/kubectl.sh get pods
+NAME IMAGE(S) HOST LABELS STATUS
+7813c8bd-3ffe-11e4-9036-0800279696e1 dockerfile/nginx 10.245.2.2/10.245.2.2 name=myNginx Running
+78140853-3ffe-11e4-9036-0800279696e1 dockerfile/nginx 10.245.2.3/10.245.2.3 name=myNginx Running
+```
+
+Congratulations!
+
+### Testing
+
+The following will run all of the end-to-end testing scenarios, assuming you set your environment in cluster/kube-env.sh:
+
+```
+NUM_MINIONS=3 hack/e2e-test.sh
+```
+
+### Troubleshooting
+
+#### I keep downloading the same (large) box all the time!
+
+By default, the Vagrantfile will download the box from S3. You can change this (and cache the box locally) by providing an alternate URL when calling `kube-up.sh`:
+
+```bash
+export KUBERNETES_BOX_URL=path_of_your_kuber_box
+export KUBERNETES_PROVIDER=vagrant
+cluster/kube-up.sh
+```
+
+
+#### I just created the cluster, but I am getting authorization errors!
+
+You probably have an incorrect ~/.kubernetes_vagrant_auth file for the cluster you are attempting to contact.
+
+```
+rm ~/.kubernetes_vagrant_auth
+```
+
+After using kubectl.sh, make sure that the correct credentials are set:
+
+```
+cat ~/.kubernetes_vagrant_auth
+{
+  "User": "vagrant",
+  "Password": "vagrant"
+}
+```
+
+#### I just created the cluster, but I do not see my container running!
+
+If this is your first time creating the cluster, the kubelet on each minion schedules a number of `docker pull` requests to fetch prerequisite images. This can take some time and as a result may delay your initial pod getting provisioned.
+
+#### I changed Kubernetes code, but it's not running!
+
+Are you sure there was no build error? After running `$ vagrant provision`, scroll up and ensure that each Salt state was completed successfully on each box in the cluster.
+It's very likely that you'll see a build error due to an error in your source files!
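Rather than scrolling through the provision output by hand, you can grep a captured log for failed Salt states. This is only a sketch: it assumes Salt's `Result: False` marker for failed states (the exact output format can vary by Salt version), and it demonstrates on a tiny fabricated log rather than real provision output.

```shell
# Fabricated sample log, standing in for: vagrant provision 2>&1 | tee /tmp/provision.log
cat > /tmp/provision.log <<'EOF'
          ID: kubelet
      Result: True
          ID: kube-apiserver
      Result: False
EOF

# Salt prints "Result: False" for each state that failed.
if grep -q 'Result: False' /tmp/provision.log; then
  echo "some Salt states failed"
else
  echo "all Salt states succeeded"
fi
```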
+
+#### I have brought Vagrant up but the minions won't validate!
+
+Are you sure you built a release first? Did you install `net-tools`? For more clues, log in to one of the minions (`vagrant ssh minion-1`) and inspect the Salt minion log (`sudo cat /var/log/salt/minion`).
+
+#### I want to change the number of minions!
+
+You can control the number of minions that are instantiated via the environment variable `NUM_MINIONS` on your host machine. If you plan to work with replicas, we strongly encourage you to work with enough minions to satisfy your largest intended replica size. If you do not plan to work with replicas, you can save some system resources by running with a single minion. You do this by setting `NUM_MINIONS` to 1 like so:
+
+```
+export NUM_MINIONS=1
+```
+
+#### I want my VMs to have more memory!
+
+You can control the memory allotted to virtual machines with the `KUBERNETES_MEMORY` environment variable.
+Just set it to the number of megabytes you would like the machines to have. For example:
+
+```
+export KUBERNETES_MEMORY=2048
+```
+
+#### I ran vagrant suspend and nothing works!
+`vagrant suspend` seems to mess up the network. It's not supported at this time.
--
cgit v1.2.3


From 8a901730fe7348d9bb207233f51a9713b77791b2 Mon Sep 17 00:00:00 2001
From: Adam Dymitruk
Date: Mon, 23 Mar 2015 23:51:46 -0700
Subject: Better wording for clean up.

Encouraging squashing by default leads to important history being lost. People new to different git flows may be doing themselves and the project a disservice without knowing.

---
 faster_reviews.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/faster_reviews.md b/faster_reviews.md
index 142ac946..a2d00465 100644
--- a/faster_reviews.md
+++ b/faster_reviews.md
@@ -120,8 +120,8 @@ your whole PR is pretty trivial, you should instead put your fixups into a new
commit and re-push. Your reviewer can then look at that commit on its own - so
much faster to review than starting over.
-We might still ask you to squash commits at the very end, for the sake of a clean -history. +We might still ask you to clean up your commits at the very end, for the sake +of a more readable history. ## 8. KISS, YAGNI, MVP, etc -- cgit v1.2.3 From 4d946b3353672a2b27cde2aed92b0ab7abbd7c10 Mon Sep 17 00:00:00 2001 From: Rohit Jnagal Date: Wed, 25 Mar 2015 17:54:23 +0000 Subject: Add a pointer to kubernetes-dev to API changes doc. --- api_changes.md | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/api_changes.md b/api_changes.md index c1005278..be02e16c 100644 --- a/api_changes.md +++ b/api_changes.md @@ -284,6 +284,12 @@ you'll figure it out. Put `grep` or `ack` to good use. If you added functionality, you should consider documenting it and/or writing an example to illustrate your change. +## Incompatible API changes +If your change is going to be backward incompatible or might be a breaking change for API +consumers, please send an announcement to `kubernetes-dev@googlegroups.com` before +the change gets in. If you are unsure, ask. Also make sure that the change gets documented in +`CHANGELOG.md` for the next release. + ## Adding new REST objects TODO(smarterclayton): write this. -- cgit v1.2.3 From 636062818feee072bf5eac1636bff2df1f9e4848 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Ant=C3=B3nio=20Meireles?= Date: Mon, 30 Mar 2015 14:42:20 +0100 Subject: remove remaining references to containerized cadvisor. MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit since GoogleCloudPlatform/kubernetes#5308 got merged cadvisor facilities are built-in in kubelet, so time to update the 'screenshots'... 
Signed-off-by: António Meireles --- developer-guides/vagrant.md | 3 --- 1 file changed, 3 deletions(-) diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md index 47236381..8e439009 100644 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -201,7 +201,6 @@ $ sudo salt '*minion-1' cmd.run 'docker images' kubernetes-minion-1: REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE 96864a7d2df3 26 hours ago 204.4 MB - google/cadvisor latest e0575e677c50 13 days ago 12.64 MB kubernetes/pause latest 6c4579af347b 8 weeks ago 239.8 kB ``` @@ -213,8 +212,6 @@ kubernetes-minion-1: CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES dbe79bf6e25b dockerfile/nginx:latest "nginx" 21 seconds ago Up 19 seconds k8s--mynginx.8c5b8a3a--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--fcfa837f fa0e29c94501 kubernetes/pause:latest "/pause" 8 minutes ago Up 8 minutes 0.0.0.0:8080->80/tcp k8s--net.a90e7ce4--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--baf5b21b - aa2ee3ed844a google/cadvisor:latest "/usr/bin/cadvisor - 38 minutes ago Up 38 minutes k8s--cadvisor.9e90d182--cadvisor_-_agent.file--4626b3a2 - 65a3a926f357 kubernetes/pause:latest "/pause" 39 minutes ago Up 39 minutes 0.0.0.0:4194->8080/tcp k8s--net.c5ba7f0e--cadvisor_-_agent.file--342fd561 ``` Going back to listing the pods, services and replicationControllers, you now have: -- cgit v1.2.3 From d60aa36171ee57c3a2d0b02a8285c5f0e6107e9f Mon Sep 17 00:00:00 2001 From: Eric Tune Date: Fri, 6 Mar 2015 09:54:46 -0800 Subject: Proposed guidelines for new Getting-started-guides. # *** ERROR: *** docs are out of sync between cli and markdown # run hack/run-gendocs.sh > docs/kubectl.md to regenerate # # Your commit will be aborted unless you regenerate docs. 
COMMIT_BLOCKED_ON_GENDOCS
---
 development.md | 11 +++++
 writing-a-getting-started-guide.md | 99 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 110 insertions(+)
 create mode 100644 writing-a-getting-started-guide.md

diff --git a/development.md b/development.md
index ef7c7ce8..7972eef6 100644
--- a/development.md
+++ b/development.md
@@ -227,6 +227,17 @@ go run hack/e2e.go -v -ctl='get events'
go run hack/e2e.go -v -ctl='delete pod foobar'
```

+## Conformance testing
+End-to-end testing, as described above, is for [development
+distributions](../../docs/devel/writing-a-getting-started-guide.md). A conformance test is used on
+a [versioned distro](../../docs/devel/writing-a-getting-started-guide.md).
+
+The conformance test runs a subset of the e2e-tests against a manually-created cluster. It does not
+require support for up/push/down and other operations. To run a conformance test, you need to know the
+IP of the master for your cluster and the authorization arguments to use. The conformance test is
+intended to run against a cluster at a specific binary release of Kubernetes.
+See [conformance-test.sh](../../hack/conformance-test.sh).
+
## Testing out flaky tests
[Instructions here](flaky-tests.md)

diff --git a/writing-a-getting-started-guide.md b/writing-a-getting-started-guide.md
new file mode 100644
index 00000000..7c837351
--- /dev/null
+++ b/writing-a-getting-started-guide.md
@@ -0,0 +1,99 @@
+# Writing a Getting Started Guide
+This page gives some advice for anyone planning to write or update a Getting Started Guide for Kubernetes.
+It also gives some guidelines which reviewers should follow when reviewing a pull request for a
+guide.
+
+A Getting Started Guide gives instructions for creating a Kubernetes cluster on top of a particular
+type(s) of infrastructure. Infrastructure includes: the IaaS provider for VMs;
+the node OS; inter-node networking; and the node Configuration Management system.
+
+A guide refers to scripts, Configuration Management files, and/or binary assets such as RPMs. We call
+the combination of all these things needed to run on a particular type of infrastructure a
+**distro**.
+
+[The Matrix](../../docs/getting-started-guides/README.md) lists the distros. If there is already a guide
+which is similar to the one you have planned, consider improving that one.
+
+
+Distros fall into two categories:
+ - **versioned distros** are tested to work with a particular binary release of Kubernetes. These
+ come in a wide variety, reflecting a wide range of ideas and preferences in how to run a cluster.
+ - **development distros** are tested to work with the latest Kubernetes source code. But, there are
+ relatively few of these and the bar is much higher for creating one.
+
+There are different guidelines for each.
+
+## Versioned Distro Guidelines
+These guidelines say *what* to do. See the Rationale section for *why*.
+ - Send us a PR.
+ - Put the instructions in `docs/getting-started-guides/...`. Scripts go there too. This helps devs easily
+ search for uses of flags by guides.
+ - We may ask that you host binary assets or large amounts of code in our `contrib` directory or on your
+ own repo.
+ - Set up a cluster and run the [conformance test](../../docs/devel/conformance-test.md) against it, and report the
+ results in your PR.
+ - Add or update a row in [The Matrix](../../docs/getting-started-guides/README.md).
+ - State the binary version of Kubernetes that you tested clearly in your Guide doc and in The Matrix.
+ - Even if you are just updating the binary version used, please still do a conformance test.
+ - If it worked before and now fails, you can ask on IRC,
+ check the release notes since your last tested version, or look at git logs for files in other distros
+ that are updated to the new version.
+ - Versioned distros should typically not modify or add code in `cluster/`. That is just scripts for developer
+ distros.
+ - If a versioned distro has not been updated for many binary releases, it may be dropped from the Matrix.
+
+If you have a cluster partially working, but doing all the above steps seems like too much work,
+we still want to hear from you. We suggest you write a blog post or a Gist, and we will link to it on our wiki page.
+Just file an issue or chat with us on IRC and one of the committers will link to it from the wiki.
+
+## Development Distro Guidelines
+These guidelines say *what* to do. See the Rationale section for *why*.
+ - the main reason to add a new development distro is to support a new IaaS provider (VM and
+ network management). This means implementing a new `pkg/cloudprovider/$IAAS_NAME`.
+ - Development distros should use Saltstack for Configuration Management.
+ - development distros need to support automated cluster creation, deletion, upgrading, etc.
+ This means writing scripts in `cluster/$IAAS_NAME`.
+ - all commits to the tip of this repo need to not break any of the development distros
+ - the author of the change is responsible for making changes necessary on all the cloud-providers if the
+ change affects any of them, and reverting the change if it breaks any of the CIs.
+ - a development distro needs to have an organization which owns it. This organization needs to:
+ - Set up and maintain Continuous Integration that runs e2e frequently (multiple times per day) against the
+ Distro at head, and which notifies all devs of breakage.
+ - Be reasonably available for questions and to assist with
+ refactoring and feature additions that affect code for their IaaS.
+
+## Rationale
+ - We want people to create Kubernetes clusters with whatever IaaS, Node OS,
+ configuration management tools, and so on, which they are familiar with. The
+ guidelines for **versioned distros** are designed for flexibility.
+ - We want developers to be able to work without understanding all the permutations of
+ IaaS, NodeOS, and configuration management. The guidelines for **developer distros** are designed
+ for consistency.
+ - We want users to have a uniform experience with Kubernetes whenever they follow instructions anywhere
+ in our GitHub repository. So, we ask that versioned distros pass a **conformance test** to make sure
+ they really work.
+ - We ask versioned distros to **clearly state a version**. People pulling from GitHub may
+ expect any instructions there to work at Head, so stuff that has not been tested at Head needs
+ to be called out. We are still changing things really fast, and, while the REST API is versioned,
+ it is not practical at this point to version or limit changes that affect distros. We still change
+ flags at the Kubernetes/Infrastructure interface.
+ - We want to **limit the number of development distros** for several reasons. Developers should
+ only have to change a limited number of places to add a new feature. Also, since we will
+ gate commits on passing CI for all distros, and since end-to-end tests are typically somewhat
+ flaky, it would be highly likely for there to be false positives and CI backlogs with many CI pipelines.
+ - We do not require versioned distros to do **CI** for several reasons. It is a steep
+ learning curve to understand our automated testing scripts. And it is considerable effort
+ to fully automate setup and teardown of a cluster, which is needed for CI. And, not everyone
+ has the time and money to run CI. We do not want to
+ discourage people from writing and sharing guides because of this.
+ - Versioned distro authors are free to run their own CI and let us know if there is breakage, but we
+ will not include them as commit hooks -- there cannot be so many commit checks that it is impossible
+ to pass them all.
+ - We prefer a single Configuration Management tool for development distros. If there were more
+ than one, the core developers would have to learn multiple tools and update config in multiple
+ places. **Saltstack** happens to be the one we picked when we started the project. We
+ welcome versioned distros that use any tool; there are already examples of
+ CoreOS Fleet, Ansible, and others.
+ - You can still run code from head or your own branch
+ if you use another Configuration Management tool -- you just have to do some manual steps
+ during testing and deployment.
+
--
cgit v1.2.3


From 73ec8632c4acb601abe0fd66903ce1ceacecf578 Mon Sep 17 00:00:00 2001
From: Piotr Szczesniak
Date: Fri, 27 Mar 2015 11:15:47 +0100
Subject: Changed merge policy

---
 collab.md | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/collab.md b/collab.md
index f9f12e25..b8781519 100644
--- a/collab.md
+++ b/collab.md
@@ -6,13 +6,9 @@ Kubernetes is open source, but many of the people working on it do so as their d

First and foremost: as a potential contributor, your changes and ideas are welcome at any hour of the day or night, weekdays, weekends, and holidays. Please do not ever hesitate to ask a question or send a PR.

-## Timezones and calendars
-
-For the time being, most of the people working on this project are in the US and on Pacific time. Any times mentioned henceforth will refer to this timezone. Any references to "work days" will refer to the US calendar.
-
## Code reviews

-All changes must be code reviewed. For non-maintainers this is obvious, since you can't commit anyway. But even for maintainers, we want all changes to get at least one review, preferably from someone who knows the areas the change touches. For non-trivial changes we may want two reviewers. The primary reviewer will make this decision and nominate a second reviewer, if needed. Except for trivial changes, PRs should sit for at least 2 hours to allow for wider review.
+All changes must be code reviewed. For non-maintainers this is obvious, since you can't commit anyway. But even for maintainers, we want all changes to get at least one review, preferably (for non-trivial changes, obligatorily) from someone who knows the areas the change touches. For non-trivial changes we may want two reviewers. The primary reviewer will make this decision and nominate a second reviewer, if needed. Except for trivial changes, PRs should not be committed until relevant parties (e.g. owners of the subsystem affected by the PR) have had a reasonable chance to look at the PR in their local business hours.

Most PRs will find reviewers organically. If a maintainer intends to be the primary reviewer of a PR they should set themselves as the assignee on GitHub and say so in a reply to the PR. Only the primary reviewer of a change should actually do the merge, except in rare cases (e.g. they are unavailable in a reasonable timeframe).

@@ -28,7 +24,7 @@ Maintainers can assign reviews to other maintainers, when appropriate. The assig

## Merge hours

-Maintainers will do merges between the hours of 7:00 am Monday and 7:00 pm (19:00h) Friday. PRs that arrive over the weekend or on holidays will only be merged if there is a very good reason for it and if the code review requirements have been met.
+Maintainers will do merges of appropriately reviewed-and-approved changes during their local "business hours" (typically 7:00 am Monday to 5:00 pm (17:00h) Friday). PRs that arrive over the weekend or on holidays will only be merged if there is a very good reason for it and if the code review requirements have been met. Concretely this means that nobody should merge changes immediately before going to bed for the night.

There may be discussion and even approvals granted outside of the above hours, but merges will generally be deferred.
-- cgit v1.2.3 From fcd666c840cb67c79ecdd8b0ef5116272644fb48 Mon Sep 17 00:00:00 2001 From: goltermann Date: Wed, 1 Apr 2015 13:00:37 -0700 Subject: Update issues.md Updating priority definitions - open for discussion if there are other opinions. --- issues.md | 14 ++++++-------- 1 file changed, 6 insertions(+), 8 deletions(-) diff --git a/issues.md b/issues.md index 491dba49..f2db3277 100644 --- a/issues.md +++ b/issues.md @@ -8,14 +8,12 @@ Priorities We will use GitHub issue labels for prioritization. The absence of a priority label means the bug has not been reviewed and prioritized yet. -Priorities are "moment in time" labels, and what is low priority today, could be high priority tomorrow, and vice versa. As we move to v1.0, we may decide certain bugs aren't actually needed yet, or that others really do need to be pulled in. - -Here we define the priorities for up until v1.0. Once the Kubernetes project hits 1.0, we will revisit the scheme and update as appropriate. - Definitions ----------- * P0 - something broken for users, build broken, or critical security issue. Someone must drop everything and work on it. 
-* P1 - must fix for earliest possible OSS binary release (every two weeks) -* P2 - must fix for v1.0 release - will block the release -* P3 - post v1.0 -* untriaged - anything without a Priority/PX label will be considered untriaged \ No newline at end of file +* P1 - must fix for earliest possible binary release (every two weeks) +* P2 - should be fixed in next major relase version +* P3 - default priority for lower importance bugs that we still want to track and plan to fix at some point +* design - priority/design is for issues that are used to track design discussions +* support - priority/support is used for issues tracking user support requests +* untriaged - anything without a priority/X label will be considered untriaged -- cgit v1.2.3 From 5d31ce87c823910d4b40a4be65bc54fa267372b6 Mon Sep 17 00:00:00 2001 From: goltermann Date: Wed, 1 Apr 2015 16:37:42 -0700 Subject: Update issues.md --- issues.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/issues.md b/issues.md index f2db3277..51395cae 100644 --- a/issues.md +++ b/issues.md @@ -12,7 +12,7 @@ Definitions ----------- * P0 - something broken for users, build broken, or critical security issue. Someone must drop everything and work on it. * P1 - must fix for earliest possible binary release (every two weeks) -* P2 - should be fixed in next major relase version +* P2 - should be fixed in next major release version * P3 - default priority for lower importance bugs that we still want to track and plan to fix at some point * design - priority/design is for issues that are used to track design discussions * support - priority/support is used for issues tracking user support requests -- cgit v1.2.3 From 52f4cee414f94ee5fc58cf943b443094e6773094 Mon Sep 17 00:00:00 2001 From: Brendan Burns Date: Thu, 2 Apr 2015 12:05:49 -0700 Subject: Add some more clarity around "controversial" or "complex" PRs and merging. 
---
 collab.md | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/collab.md b/collab.md
index b8781519..dd7b8059 100644
--- a/collab.md
+++ b/collab.md
@@ -28,6 +28,13 @@ Maintainers will do merges of appropriately reviewed-and-approved changes during

There may be discussion and even approvals granted outside of the above hours, but merges will generally be deferred.

+If a PR is considered complex or controversial, the merge of that PR should be delayed to give all interested parties in all timezones the opportunity to provide feedback. Concretely, this means that such PRs should be held for 24
+hours before merging. Of course "complex" and "controversial" are left to the judgement of the people involved, but we trust that part of being a committer is the judgement required to evaluate such things honestly, and not be
+motivated by your desire (or your cube-mate's desire) to get your code merged. Also see "Holds" below; any reviewer can issue a "hold" to indicate that the PR is in fact complicated or complex and deserves further review.
+
+PRs that are incorrectly judged to be merge-able may be reverted and subject to re-review, if subsequent reviewers believe that they in fact are controversial or complex.
+
+
## Holds

Any maintainer or core contributor who wants to review a PR but does not have time immediately may put a hold on a PR simply by saying so on the PR discussion and offering an ETA measured in single-digit days at most. Any PR that has a hold shall not be merged until the person who requested the hold acks the review, withdraws their hold, or is overruled by a preponderance of maintainers.
--
cgit v1.2.3


From 655bbc697f92fe1229534c80a97a56862a4eb440 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Ant=C3=B3nio=20Meireles?=
Date: Mon, 6 Apr 2015 20:29:32 +0100
Subject: adding release notes guidelines to the (new) releases policy.
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

per the ongoing conversation at GoogleCloudPlatform/kubernetes#6213

Signed-off-by: António Meireles
---
 releasing.md | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/releasing.md b/releasing.md
index 4cdf8827..125355c3 100644
--- a/releasing.md
+++ b/releasing.md
@@ -150,3 +150,16 @@ not present in Docker `v1.2.0`:

(Non-empty output here means the commit is not present on v1.2.0.)
```

+## Release Notes
+
+No official release should be made final without properly matching release notes.
+
+Each release should ship with a short preamble that summarizes the major
+changes, both in terms of feature improvements/bug fixes and notes about any
+functional changes relative to the previous released version, so that the
+impact of updating to it is as obvious and trouble-free as possible.
+
+After this preamble, all the relevant PRs/issues that went into that version
+should be listed and linked, each with a short summary understandable by
+ordinary mortals (in a perfect world the PR/issue title would be enough, but
+it is often too cryptic, geeky, or domain-specific for that).
--
cgit v1.2.3


From 2375fb9e51bf693abe62554e0c1ca8c3f0719328 Mon Sep 17 00:00:00 2001
From: Robert Bailey
Date: Wed, 15 Apr 2015 20:50:00 -0700
Subject: Add documentation to help new contributors with write access from accidentally pushing upstream.
--- development.md | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/development.md b/development.md index 7972eef6..bbd94fef 100644 --- a/development.md +++ b/development.md @@ -256,6 +256,13 @@ git fetch upstream git rebase upstream/master ``` +If you have write access to the main repository, you should modify your git configuration so that +you can't accidentally push to upstream: + +``` +git remote set-url --push upstream no_push +``` + ## Regenerating the CLI documentation ``` -- cgit v1.2.3 From 457acee81e7566f1baa3ecc9056032eef7317b5e Mon Sep 17 00:00:00 2001 From: Tim Hockin Date: Thu, 16 Apr 2015 09:11:47 -0700 Subject: Stop using dockerfile/* images As per http://blog.docker.com/2015/03/updates-available-to-popular-repos-update-your-images/ docker has stopped answering dockerfile/redis and dockerfile/nginx. Fix all users in our tree. Sadly this means a lot of published examples are now broken. --- developer-guides/vagrant.md | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md index 8e439009..ab0ef274 100644 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -181,7 +181,7 @@ NAME IMAGE(S SELECTOR REPLICAS Start a container running nginx with a replication controller and three replicas ``` -$ cluster/kubectl.sh run-container my-nginx --image=dockerfile/nginx --replicas=3 --port=80 +$ cluster/kubectl.sh run-container my-nginx --image=nginx --replicas=3 --port=80 ``` When listing the pods, you will see that three containers have been started and are in Waiting state: @@ -189,9 +189,9 @@ When listing the pods, you will see that three containers have been started and ``` $ cluster/kubectl.sh get pods NAME IMAGE(S) HOST LABELS STATUS -781191ff-3ffe-11e4-9036-0800279696e1 dockerfile/nginx 10.245.2.4/10.245.2.4 name=myNginx Waiting -7813c8bd-3ffe-11e4-9036-0800279696e1 dockerfile/nginx 10.245.2.2/10.245.2.2 name=myNginx Waiting 
-78140853-3ffe-11e4-9036-0800279696e1 dockerfile/nginx 10.245.2.3/10.245.2.3 name=myNginx Waiting +781191ff-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.4/10.245.2.4 name=myNginx Waiting +7813c8bd-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.2/10.245.2.2 name=myNginx Waiting +78140853-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.3/10.245.2.3 name=myNginx Waiting ``` You need to wait for the provisioning to complete, you can monitor the minions by doing: @@ -210,7 +210,7 @@ Once the docker image for nginx has been downloaded, the container will start an $ sudo salt '*minion-1' cmd.run 'docker ps' kubernetes-minion-1: CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES - dbe79bf6e25b dockerfile/nginx:latest "nginx" 21 seconds ago Up 19 seconds k8s--mynginx.8c5b8a3a--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--fcfa837f + dbe79bf6e25b nginx:latest "nginx" 21 seconds ago Up 19 seconds k8s--mynginx.8c5b8a3a--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--fcfa837f fa0e29c94501 kubernetes/pause:latest "/pause" 8 minutes ago Up 8 minutes 0.0.0.0:8080->80/tcp k8s--net.a90e7ce4--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--baf5b21b ``` @@ -219,16 +219,16 @@ Going back to listing the pods, services and replicationControllers, you now hav ``` $ cluster/kubectl.sh get pods NAME IMAGE(S) HOST LABELS STATUS -781191ff-3ffe-11e4-9036-0800279696e1 dockerfile/nginx 10.245.2.4/10.245.2.4 name=myNginx Running -7813c8bd-3ffe-11e4-9036-0800279696e1 dockerfile/nginx 10.245.2.2/10.245.2.2 name=myNginx Running -78140853-3ffe-11e4-9036-0800279696e1 dockerfile/nginx 10.245.2.3/10.245.2.3 name=myNginx Running +781191ff-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.4/10.245.2.4 name=myNginx Running +7813c8bd-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.2/10.245.2.2 name=myNginx Running +78140853-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.3/10.245.2.3 
name=myNginx Running $ cluster/kubectl.sh get services NAME LABELS SELECTOR IP PORT $ cluster/kubectl.sh get replicationControllers NAME IMAGE(S SELECTOR REPLICAS -myNginx dockerfile/nginx name=my-nginx 3 +myNginx nginx name=my-nginx 3 ``` We did not start any services, hence there are none listed. But we see three replicas displayed properly. @@ -239,8 +239,8 @@ You can already play with resizing the replicas with: $ cluster/kubectl.sh resize rc my-nginx --replicas=2 $ cluster/kubectl.sh get pods NAME IMAGE(S) HOST LABELS STATUS -7813c8bd-3ffe-11e4-9036-0800279696e1 dockerfile/nginx 10.245.2.2/10.245.2.2 name=myNginx Running -78140853-3ffe-11e4-9036-0800279696e1 dockerfile/nginx 10.245.2.3/10.245.2.3 name=myNginx Running +7813c8bd-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.2/10.245.2.2 name=myNginx Running +78140853-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.3/10.245.2.3 name=myNginx Running ``` Congratulations! -- cgit v1.2.3 From 77e469b2870d2fda54fa2555d63edf5965cc26b8 Mon Sep 17 00:00:00 2001 From: Matt Bogosian Date: Wed, 15 Apr 2015 16:07:50 -0700 Subject: Fix #2741. Add support for alternate Vagrant providers: VMWare Fusion, VMWare Workstation, and Parallels. --- developer-guides/vagrant.md | 115 +++++++++++++++++++++++++------------------- 1 file changed, 66 insertions(+), 49 deletions(-) diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md index ab0ef274..baf40b97 100644 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -4,42 +4,54 @@ Running kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/deve ### Prerequisites 1. Install latest version >= 1.6.2 of vagrant from http://www.vagrantup.com/downloads.html -2. Install latest version of Virtual Box from https://www.virtualbox.org/wiki/Downloads +2. Install one of: + 1. The latest version of Virtual Box from https://www.virtualbox.org/wiki/Downloads + 2. 
[VMWare Fusion](https://www.vmware.com/products/fusion/) version 5 or greater as well as the appropriate [Vagrant VMWare Fusion provider](https://www.vagrantup.com/vmware) + 3. [VMWare Workstation](https://www.vmware.com/products/workstation/) version 9 or greater as well as the [Vagrant VMWare Workstation provider](https://www.vagrantup.com/vmware) + 4. [Parallels Desktop](https://www.parallels.com/products/desktop/) version 9 or greater as well as the [Vagrant Parallels provider](https://parallels.github.io/vagrant-parallels/) 3. Get or build a [binary release](../../getting-started-guides/binary_release.md) ### Setup By default, the Vagrant setup will create a single kubernetes-master and 1 kubernetes-minion. Each VM will take 1 GB, so make sure you have at least 2GB to 4GB of free memory (plus appropriate free disk space). To start your local cluster, open a shell and run: -``` +```sh cd kubernetes export KUBERNETES_PROVIDER=vagrant -cluster/kube-up.sh +./cluster/kube-up.sh ``` The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster management scripts which variant to use. If you forget to set this, the assumption is you are running on Google Compute Engine. +If you installed more than one Vagrant provider, Kubernetes will usually pick the appropriate one. However, you can override which one Kubernetes will use by setting the [`VAGRANT_DEFAULT_PROVIDER`](https://docs.vagrantup.com/v2/providers/default.html) environment variable: + +```sh +export VAGRANT_DEFAULT_PROVIDER=parallels +export KUBERNETES_PROVIDER=vagrant +./cluster/kube-up.sh +``` + Vagrant will provision each machine in the cluster with all the necessary components to run Kubernetes. The initial setup can take a few minutes to complete on each machine. By default, each VM in the cluster is running Fedora, and all of the Kubernetes services are installed into systemd. 
To access the master or any minion: -``` +```sh vagrant ssh master vagrant ssh minion-1 ``` If you are running more than one minion, you can access the others by: -``` +```sh vagrant ssh minion-2 vagrant ssh minion-3 ``` To view the service status and/or logs on the kubernetes-master: -``` +```sh vagrant ssh master [vagrant@kubernetes-master ~] $ sudo systemctl status kube-apiserver [vagrant@kubernetes-master ~] $ sudo journalctl -r -u kube-apiserver @@ -52,7 +64,7 @@ vagrant ssh master ``` To view the services on any of the kubernetes-minion(s): -``` +```sh vagrant ssh minion-1 [vagrant@kubernetes-minion-1] $ sudo systemctl status docker [vagrant@kubernetes-minion-1] $ sudo journalctl -r -u docker @@ -65,18 +77,18 @@ vagrant ssh minion-1 With your Kubernetes cluster up, you can manage the nodes in your cluster with the regular Vagrant commands. To push updates to new Kubernetes code after making source changes: -``` -cluster/kube-push.sh +```sh +./cluster/kube-push.sh ``` To stop and then restart the cluster: -``` +```sh vagrant halt -cluster/kube-up.sh +./cluster/kube-up.sh ``` To destroy the cluster: -``` +```sh vagrant destroy ``` @@ -84,14 +96,13 @@ Once your Vagrant machines are up and provisioned, the first thing to do is to c You may need to build the binaries first, you can do this with ```make``` -``` +```sh $ ./cluster/kubectl.sh get minions NAME LABELS 10.245.1.4 10.245.1.5 10.245.1.3 - ``` ### Interacting with your Kubernetes cluster with the `kube-*` scripts. 
@@ -100,39 +111,39 @@ Alternatively to using the vagrant commands, you can also use the `cluster/kube- All of these commands assume you have set `KUBERNETES_PROVIDER` appropriately: -``` +```sh export KUBERNETES_PROVIDER=vagrant ``` Bring up a vagrant cluster -``` -cluster/kube-up.sh +```sh +./cluster/kube-up.sh ``` Destroy the vagrant cluster -``` -cluster/kube-down.sh +```sh +./cluster/kube-down.sh ``` Update the vagrant cluster after you make changes (only works when building your own releases locally): -``` -cluster/kube-push.sh +```sh +./cluster/kube-push.sh ``` Interact with the cluster -``` -cluster/kubectl.sh +```sh +./cluster/kubectl.sh ``` ### Authenticating with your master When using the vagrant provider in Kubernetes, the `cluster/kubectl.sh` script will cache your credentials in a `~/.kubernetes_vagrant_auth` file so you will not be prompted for them in the future. -``` +```sh cat ~/.kubernetes_vagrant_auth { "User": "vagrant", "Password": "vagrant" @@ -144,22 +155,21 @@ cat ~/.kubernetes_vagrant_auth You should now be set to use the `cluster/kubectl.sh` script. For example try to list the minions that you have started with: -``` -cluster/kubectl.sh get minions +```sh +./cluster/kubectl.sh get minions ``` ### Running containers Your cluster is running, you can list the minions in your cluster: -``` -$ cluster/kubectl.sh get minions +```sh +$ ./cluster/kubectl.sh get minions NAME LABELS 10.245.2.4 10.245.2.3 10.245.2.2 - ``` Now start running some containers! 
@@ -196,7 +206,7 @@ NAME IMAGE(S) HOST You need to wait for the provisioning to complete, you can monitor the minions by doing: -``` +```sh $ sudo salt '*minion-1' cmd.run 'docker images' kubernetes-minion-1: REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE @@ -206,7 +216,7 @@ kubernetes-minion-1: Once the docker image for nginx has been downloaded, the container will start and you can list it: -``` +```sh $ sudo salt '*minion-1' cmd.run 'docker ps' kubernetes-minion-1: CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES @@ -235,9 +245,9 @@ We did not start any services, hence there are none listed. But we see three rep Check the [guestbook](../../examples/guestbook/README.md) application to learn how to create a service. You can already play with resizing the replicas with: -``` -$ cluster/kubectl.sh resize rc my-nginx --replicas=2 -$ cluster/kubectl.sh get pods +```sh +$ ./cluster/kubectl.sh resize rc my-nginx --replicas=2 +$ ./cluster/kubectl.sh get pods NAME IMAGE(S) HOST LABELS STATUS 7813c8bd-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.2/10.245.2.2 name=myNginx Running 78140853-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.3/10.245.2.3 name=myNginx Running @@ -247,9 +257,9 @@ Congratulations! ### Testing -The following will run all of the end-to-end testing scenarios assuming you set your environment in cluster/kube-env.sh +The following will run all of the end-to-end testing scenarios assuming you set your environment in `cluster/kube-env.sh`: -``` +```sh NUM_MINIONS=3 hack/e2e-test.sh ``` @@ -257,26 +267,26 @@ NUM_MINIONS=3 hack/e2e-test.sh #### I keep downloading the same (large) box all the time! -By default the Vagrantfile will download the box from S3. You can change this (and cache the box locally) by providing an alternate URL when calling `kube-up.sh` +By default the Vagrantfile will download the box from S3. 
You can change this (and cache the box locally) by providing a name and an alternate URL when calling `kube-up.sh` -```bash +```sh +export KUBERNETES_BOX_NAME=choose_your_own_name_for_your_kuber_box export KUBERNETES_BOX_URL=path_of_your_kuber_box export KUBERNETES_PROVIDER=vagrant -cluster/kube-up.sh +./cluster/kube-up.sh ``` - #### I just created the cluster, but I am getting authorization errors! You probably have an incorrect ~/.kubernetes_vagrant_auth file for the cluster you are attempting to contact. -``` +```sh rm ~/.kubernetes_vagrant_auth ``` After using kubectl.sh make sure that the correct credentials are set: -``` +```sh cat ~/.kubernetes_vagrant_auth { "User": "vagrant", @@ -284,35 +294,42 @@ cat ~/.kubernetes_vagrant_auth } ``` -#### I just created the cluster, but I do not see my container running ! +#### I just created the cluster, but I do not see my container running! If this is your first time creating the cluster, the kubelet on each minion schedules a number of docker pull requests to fetch prerequisite images. This can take some time and as a result may delay your initial pod getting provisioned. -#### I changed Kubernetes code, but it's not running ! +#### I changed Kubernetes code, but it's not running! Are you sure there was no build error? After running `$ vagrant provision`, scroll up and ensure that each Salt state was completed successfully on each box in the cluster. It's very likely you see a build error due to an error in your source files! -#### I have brought Vagrant up but the minions won't validate ! +#### I have brought Vagrant up but the minions won't validate! Are you sure you built a release first? Did you install `net-tools`? For more clues, login to one of the minions (`vagrant ssh minion-1`) and inspect the salt minion log (`sudo cat /var/log/salt/minion`). -#### I want to change the number of minions ! +#### I want to change the number of minions! 
You can control the number of minions that are instantiated via the environment variable `NUM_MINIONS` on your host machine. If you plan to work with replicas, we strongly encourage you to work with enough minions to satisfy your largest intended replica size. If you do not plan to work with replicas, you can save some system resources by running with a single minion. You do this, by setting `NUM_MINIONS` to 1 like so: -``` +```sh export NUM_MINIONS=1 ``` -#### I want my VMs to have more memory ! +#### I want my VMs to have more memory! You can control the memory allotted to virtual machines with the `KUBERNETES_MEMORY` environment variable. Just set it to the number of megabytes you would like the machines to have. For example: -``` +```sh export KUBERNETES_MEMORY=2048 ``` +If you need more granular control, you can set the amount of memory for the master and minions independently. For example: + +```sh +export KUBERNETES_MASTER_MEMORY=1536 +export KUBERNETES_MASTER_MINION=2048 +``` + #### I ran vagrant suspend and nothing works! ```vagrant suspend``` seems to mess up the network. It's not supported at this time. -- cgit v1.2.3 From eb7b52f95ad2af7378be8ca2ab3bbba8b1a00f34 Mon Sep 17 00:00:00 2001 From: George Kuan Date: Sun, 26 Apr 2015 19:37:14 -0700 Subject: Corrected some typos --- api_changes.md | 4 ++-- collab.md | 2 +- development.md | 2 +- faster_reviews.md | 6 +++--- logging.md | 2 +- profiling.md | 4 ++-- releasing.md | 2 +- writing-a-getting-started-guide.md | 8 ++++---- 8 files changed, 15 insertions(+), 15 deletions(-) diff --git a/api_changes.md b/api_changes.md index be02e16c..6ab86ce0 100644 --- a/api_changes.md +++ b/api_changes.md @@ -243,7 +243,7 @@ The fuzzer can be found in `pkg/api/testing/fuzzer.go`. ## Update the semantic comparisons VERY VERY rarely is this needed, but when it hits, it hurts. In some rare -cases we end up with objects (e.g. resource quantites) that have morally +cases we end up with objects (e.g. 
resource quantities) that have morally equivalent values with different bitwise representations (e.g. value 10 with a base-2 formatter is the same as value 0 with a base-10 formatter). The only way Go knows how to do deep-equality is through field-by-field bitwise comparisons. @@ -278,7 +278,7 @@ At last, your change is done, all unit tests pass, e2e passes, you're done, right? Actually, no. You just changed the API. If you are touching an existing facet of the API, you have to try *really* hard to make sure that *all* the examples and docs are updated. There's no easy way to do this, due -in part ot JSON and YAML silently dropping unknown fields. You're clever - +in part to JSON and YAML silently dropping unknown fields. You're clever - you'll figure it out. Put `grep` or `ack` to good use. If you added functionality, you should consider documenting it and/or writing diff --git a/collab.md b/collab.md index dd7b8059..000fb6ea 100644 --- a/collab.md +++ b/collab.md @@ -29,7 +29,7 @@ Maintainers will do merges of appropriately reviewed-and-approved changes during There may be discussion and even approvals granted outside of the above hours, but merges will generally be deferred. If a PR is considered complex or controversial, the merge of that PR should be delayed to give all interested parties in all timezones the opportunity to provide feedback. Concretely, this means that such PRs should be held for 24 -hours before merging. Of course "complex" and "controversial" are left to the judgement of the people involved, but we trust that part of being a committer is the judgement required to evaluate such things honestly, and not be +hours before merging. Of course "complex" and "controversial" are left to the judgment of the people involved, but we trust that part of being a committer is the judgment required to evaluate such things honestly, and not be motivated by your desire (or your cube-mate's desire) to get their code merged.
Also see "Holds" below, any reviewer can issue a "hold" to indicate that the PR is in fact complicated or complex and deserves further review. PRs that are incorrectly judged to be merge-able, may be reverted and subject to re-review, if subsequent reviewers believe that they in fact are controversial or complex. diff --git a/development.md b/development.md index bbd94fef..556f7c22 100644 --- a/development.md +++ b/development.md @@ -221,7 +221,7 @@ go run hack/e2e.go -build -pushup -test -down # seeing the output of failed commands. # -ctl can be used to quickly call kubectl against your e2e cluster. Useful for -# cleaning up after a failed test or viewing logs. Use -v to avoid supressing +# cleaning up after a failed test or viewing logs. Use -v to avoid suppressing # kubectl output. go run hack/e2e.go -v -ctl='get events' go run hack/e2e.go -v -ctl='delete pod foobar' diff --git a/faster_reviews.md b/faster_reviews.md index a2d00465..2562879b 100644 --- a/faster_reviews.md +++ b/faster_reviews.md @@ -135,7 +135,7 @@ might need later - but don't implement them now. We understand that it is hard to imagine, but sometimes we make mistakes. It's OK to push back on changes requested during a review. If you have a good reason -for doing something a certain way, you are absolutley allowed to debate the +for doing something a certain way, you are absolutely allowed to debate the merits of a requested change. You might be overruled, but you might also prevail. We're mostly pretty reasonable people. Mostly. @@ -151,7 +151,7 @@ things you can do that might help kick a stalled process along: * Ping the assignee (@username) on the PR comment stream asking for an estimate of when they can get to it. - * Ping the assigneed by email (many of us have email addresses that are well + * Ping the assignee by email (many of us have email addresses that are well published or are the same as our GitHub handle @google.com or @redhat.com). 
If you think you have fixed all the issues in a round of review, and you haven't @@ -171,7 +171,7 @@ explanation. ## Final: Use common sense Obviously, none of these points are hard rules. There is no document that can -take the place of common sense and good taste. Use your best judgement, but put +take the place of common sense and good taste. Use your best judgment, but put a bit of thought into how your work can be made easier to review. If you do these things your PRs will flow much more easily. diff --git a/logging.md b/logging.md index 82b6a0c8..23430474 100644 --- a/logging.md +++ b/logging.md @@ -1,7 +1,7 @@ Logging Conventions =================== -The following conventions for the glog levels to use. [glog](http://godoc.org/github.com/golang/glog) is globally prefered to [log](http://golang.org/pkg/log/) for better runtime control. +The following conventions for the glog levels to use. [glog](http://godoc.org/github.com/golang/glog) is globally preferred to [log](http://golang.org/pkg/log/) for better runtime control. * glog.Errorf() - Always an error * glog.Warningf() - Something unexpected, but probably not an error diff --git a/profiling.md b/profiling.md index 1e14b5c4..03b17766 100644 --- a/profiling.md +++ b/profiling.md @@ -4,7 +4,7 @@ This document explain how to plug in profiler and how to profile Kubernetes serv ## Profiling library -Go comes with inbuilt 'net/http/pprof' profiling library and profiling web service. The way service works is binding debug/pprof/ subtree on a running webserver to the profiler. Reading from subpages of debug/pprof returns pprof-formated profiles of the running binary. The output can be processed offline by the tool of choice, or used as an input to handy 'go tool pprof', which can graphically represent the result. +Go comes with inbuilt 'net/http/pprof' profiling library and profiling web service. The way service works is binding debug/pprof/ subtree on a running webserver to the profiler. 
Reading from subpages of debug/pprof returns pprof-formatted profiles of the running binary. The output can be processed offline by the tool of choice, or used as an input to handy 'go tool pprof', which can graphically represent the result. ## Adding profiling to services to APIserver. @@ -31,4 +31,4 @@ to get 30 sec. CPU profile. ## Contention profiling -To enable contetion profiling you need to add line ```rt.SetBlockProfileRate(1)``` in addition to ```m.mux.HandleFunc(...)``` added before (```rt``` stands for ```runtime``` in ```master.go```). This enables 'debug/pprof/block' subpage, which can be used as an input to ```go tool pprof```. +To enable contention profiling you need to add line ```rt.SetBlockProfileRate(1)``` in addition to ```m.mux.HandleFunc(...)``` added before (```rt``` stands for ```runtime``` in ```master.go```). This enables 'debug/pprof/block' subpage, which can be used as an input to ```go tool pprof```. diff --git a/releasing.md b/releasing.md index 125355c3..02bb0ca4 100644 --- a/releasing.md +++ b/releasing.md @@ -92,7 +92,7 @@ get` while in fact they do not match `v0.5` (the one that was tagged) exactly. To handle that case, creating a new release should involve creating two adjacent commits where the first of them will set the version to `v0.5` and the second will set it to `v0.5-dev`. In that case, even in the presence of merges, there -will be a single comit where the exact `v0.5` version will be used and all +will be a single commit where the exact `v0.5` version will be used and all others around it will either have `v0.4-dev` or `v0.5-dev`. The diagram below illustrates it. diff --git a/writing-a-getting-started-guide.md b/writing-a-getting-started-guide.md index 7c837351..c1066f06 100644 --- a/writing-a-getting-started-guide.md +++ b/writing-a-getting-started-guide.md @@ -6,7 +6,7 @@ guide. A Getting Started Guide is instructions on how to create a Kubernetes cluster on top of a particular type(s) of infrastructure. 
Infrastructure includes: the IaaS provider for VMs; the node OS; inter-node networking; and node Configuration Management system. -A guide refers to scripts, Configuration Manangement files, and/or binary assets such as RPMs. We call +A guide refers to scripts, Configuration Management files, and/or binary assets such as RPMs. We call the combination of all these things needed to run on a particular type of infrastructure a **distro**. @@ -39,7 +39,7 @@ These guidelines say *what* to do. See the Rationale section for *why*. that are updated to the new version. - Versioned distros should typically not modify or add code in `cluster/`. That is just scripts for developer distros. - - If a versioned distro has not been updated for many binary releases, it may be dropped frome the Matrix. + - If a versioned distro has not been updated for many binary releases, it may be dropped from the Matrix. If you have a cluster partially working, but doing all the above steps seems like too much work, we still want to hear from you. We suggest you write a blog post or a Gist, and we will link to it on our wiki page. @@ -58,13 +58,13 @@ These guidelines say *what* to do. See the Rationale section for *why*. - a development distro needs to have an organization which owns it. This organization needs to: - Setting up and maintaining Continuous Integration that runs e2e frequently (multiple times per day) against the Distro at head, and which notifies all devs of breakage. - - being reasonably available for questions and assiting with + - being reasonably available for questions and assisting with refactoring and feature additions that affect code for their IaaS. ## Rationale - We want want people to create Kubernetes clusters with whatever IaaS, Node OS, configuration management tools, and so on, which they are familiar with. The - guidelines for **versioned distros** are designed for flexiblity. + guidelines for **versioned distros** are designed for flexibility. 
- We want developers to be able to work without understanding all the permutations of IaaS, NodeOS, and configuration management. The guidelines for **developer distros** are designed for consistency. -- cgit v1.2.3 From ef8d5722be698e57886b2c47df2bdddb9d37da9e Mon Sep 17 00:00:00 2001 From: Paul Morie Date: Fri, 24 Apr 2015 18:02:52 -0400 Subject: Add hint re: fuzzer to api changes doc --- api_changes.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/api_changes.md b/api_changes.md index be02e16c..d68a776f 100644 --- a/api_changes.md +++ b/api_changes.md @@ -236,7 +236,9 @@ assumptions. If you have added any fields which need very careful formatting "this slice will always have at least 1 element", you may get an error or even a panic from the `serialization_test`. If so, look at the diff it produces (or the backtrace in case of a panic) and figure out what you forgot. Encode that -into the fuzzer's custom fuzz functions. +into the fuzzer's custom fuzz functions. Hint: if you added defaults for a field, +that field will need to have a custom fuzz function that ensures that the field is +fuzzed to a non-empty value. The fuzzer can be found in `pkg/api/testing/fuzzer.go`. -- cgit v1.2.3 From c7f8e8e7f8f037b0e7d94b0e361b75ea5c50676d Mon Sep 17 00:00:00 2001 From: Wojciech Tyczynski Date: Fri, 17 Apr 2015 14:16:33 +0200 Subject: Improvements to conversions generator. --- api_changes.md | 19 +++++++++++++++++-- 1 file changed, 17 insertions(+), 2 deletions(-) diff --git a/api_changes.md b/api_changes.md index 5e648544..6c495c4c 100644 --- a/api_changes.md +++ b/api_changes.md @@ -222,9 +222,24 @@ types, structural change in particular - you must add some logic to convert versioned APIs to and from the internal representation. If you see errors from the `serialization_test`, it may indicate the need for explicit conversions. +The performance of conversions heavily influences the performance of the apiserver.
+Thus, we are auto-generating conversion functions that are much more efficient +than the generic ones (which are based on reflections and thus are highly +inefficient). + The conversion code resides with each versioned API - -`pkg/api//conversion.go`. Unsurprisingly, this also requires you to -add tests to `pkg/api//conversion_test.go`. +`pkg/api//conversion.go`. To regenerate conversion functions: + - run +``` + $ go run cmd/kube-conversion/conversion.go -v -f -n +``` + - replace all conversion functions (convert\* functions) in the above file + with the contents of \ + - replace arguments of `newer.Scheme.AddGeneratedConversionFuncs` + with the contents of \ + +Unsurprisingly, this also requires you to add tests to +`pkg/api//conversion_test.go`. ## Update the fuzzer -- cgit v1.2.3 From f3f8354b3ab7ece1e62212d12f6cdd0b21e7b6cf Mon Sep 17 00:00:00 2001 From: Paul Morie Date: Mon, 4 May 2015 15:37:07 -0400 Subject: Add step to API changes doc for swagger regen --- api_changes.md | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/api_changes.md b/api_changes.md index 6c495c4c..f46d2d4e 100644 --- a/api_changes.md +++ b/api_changes.md @@ -301,6 +301,14 @@ you'll figure it out. Put `grep` or `ack` to good use. If you added functionality, you should consider documenting it and/or writing an example to illustrate your change. +Make sure you update the swagger API spec by running: + +```shell +$ hack/update-swagger-spec.sh +``` + +The API spec changes should be in a commit separate from your other changes. + ## Incompatible API changes If your change is going to be backward incompatible or might be a breaking change for API consumers, please send an announcement to `kubernetes-dev@googlegroups.com` before -- cgit v1.2.3 From 8b6e9102beb4fc7914000c10e0b0ef99b5245fbf Mon Sep 17 00:00:00 2001 From: Matt Bogosian Date: Thu, 7 May 2015 12:04:31 -0700 Subject: Fix environment variable error in Vagrant docs: `KUBERNETES_MASTER_MINION` -> `KUBERNETES_MINION_MEMORY`. 
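With the corrected variable name, a memory configuration for a Vagrant dev cluster might look like the following sketch (the values are examples, not recommendations):

```sh
# Sketch: sizing the Vagrant VMs with the corrected variable names.
# Values are examples only.
export KUBERNETES_PROVIDER=vagrant
export KUBERNETES_MASTER_MEMORY=1536   # MB for the master VM
export KUBERNETES_MINION_MEMORY=2048   # MB per minion VM
echo "master=${KUBERNETES_MASTER_MEMORY}MB minion=${KUBERNETES_MINION_MEMORY}MB"
# prints: master=1536MB minion=2048MB
```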
--- developer-guides/vagrant.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md index baf40b97..50c9769a 100644 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -328,7 +328,7 @@ If you need more granular control, you can set the amount of memory for the mast ```sh export KUBERNETES_MASTER_MEMORY=1536 -export KUBERNETES_MASTER_MINION=2048 +export KUBERNETES_MINION_MEMORY=2048 ``` #### I ran vagrant suspend and nothing works! -- cgit v1.2.3 From c92c7a5d8201e1bb2b74af77f0c3980d6b8c750b Mon Sep 17 00:00:00 2001 From: Wojciech Tyczynski Date: Wed, 13 May 2015 14:36:59 +0200 Subject: Instructions for generating conversions. --- api_changes.md | 28 +++++++++++++++++++++------- 1 file changed, 21 insertions(+), 7 deletions(-) diff --git a/api_changes.md b/api_changes.md index f46d2d4e..8b0a0e56 100644 --- a/api_changes.md +++ b/api_changes.md @@ -227,18 +227,32 @@ Thus, we are auto-generating conversion functions that are much more efficient than the generic ones (which are based on reflections and thus are highly inefficient). -The conversion code resides with each versioned API - -`pkg/api//conversion.go`. To regenerate conversion functions: +The conversion code resides with each versioned API. There are two files: + - `pkg/api//conversion.go` containing manually written conversion + functions + - `pkg/api//conversion_generated.go` containing auto-generated + conversion functions + +Since auto-generated conversion functions are using manually written ones, +those manually written should be named with a defined convention, i.e. a function +converting type X in pkg a to type Y in pkg b, should be named: +`convert_a_X_To_b_Y`. + +Also note that you can (and for efficiency reasons should) use auto-generated +conversion functions when writing your conversion functions. + +Once all the necessary manually written conversions are added, you need to +regenerate auto-generated ones. 
To regenerate them: - run ``` $ go run cmd/kube-conversion/conversion.go -v -f -n ``` - - replace all conversion functions (convert\* functions) in the above file - with the contents of \ - - replace arguments of `newer.Scheme.AddGeneratedConversionFuncs` - with the contents of \ + - replace all conversion functions (convert\* functions) in the + `pkg/api//conversion_generated.go` with the contents of \ + - replace arguments of `newer.Scheme.AddGeneratedConversionFuncs` in the + `pkg/api//conversion_generated.go` with the contents of \ -Unsurprisingly, this also requires you to add tests to +Unsurprisingly, adding manually written conversion also requires you to add tests to `pkg/api//conversion_test.go`. ## Update the fuzzer -- cgit v1.2.3 From b67f72a3168e3be7368c968d93172a365bd84eb1 Mon Sep 17 00:00:00 2001 From: Tim Hockin Date: Tue, 12 May 2015 21:59:44 -0700 Subject: Switch git hooks to use pre-commit --- development.md | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/development.md b/development.md index 556f7c22..6d6bdb86 100644 --- a/development.md +++ b/development.md @@ -105,8 +105,7 @@ directory. This will keep you from accidentally committing non-gofmt'd go code. ``` cd kubernetes/.git/hooks/ -ln -s ../../hooks/prepare-commit-msg . -ln -s ../../hooks/commit-msg . +ln -s ../../hooks/pre-commit . ``` ## Unit tests -- cgit v1.2.3 From e1d595ebbd61baebc62f6db3150ee5881e6e71a8 Mon Sep 17 00:00:00 2001 From: Jeff Lowdermilk Date: Thu, 14 May 2015 15:12:45 -0700 Subject: Add ga-beacon analytics to gendocs scripts hack/run-gendocs.sh puts ga-beacon analytics link into all md files, hack/verify-gendocs.sh verifies presence of link. 
--- README.md | 3 +++ api_changes.md | 3 +++ coding-conventions.md | 3 +++ collab.md | 3 +++ developer-guides/vagrant.md | 3 +++ development.md | 3 +++ faster_reviews.md | 3 +++ flaky-tests.md | 3 +++ issues.md | 3 +++ logging.md | 3 +++ profiling.md | 3 +++ pull-requests.md | 3 +++ releasing.md | 3 +++ writing-a-getting-started-guide.md | 3 +++ 14 files changed, 42 insertions(+) diff --git a/README.md b/README.md index bf398e9f..13ccc42d 100644 --- a/README.md +++ b/README.md @@ -19,3 +19,6 @@ Docs in this directory relate to developing Kubernetes. and how the version information gets embedded into the built binaries. * **Profiling Kubernetes** ([profiling.md](profiling.md)): How to plug in go pprof profiler to Kubernetes. + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/README.md?pixel)]() diff --git a/api_changes.md b/api_changes.md index 8b0a0e56..c2932215 100644 --- a/api_changes.md +++ b/api_changes.md @@ -332,3 +332,6 @@ the change gets in. If you are unsure, ask. Also make sure that the change gets ## Adding new REST objects TODO(smarterclayton): write this. 
+ + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/api_changes.md?pixel)]() diff --git a/coding-conventions.md b/coding-conventions.md index 3d493803..bdcbb708 100644 --- a/coding-conventions.md +++ b/coding-conventions.md @@ -5,3 +5,6 @@ Coding style advice for contributors - https://github.com/golang/go/wiki/CodeReviewComments - https://gist.github.com/lavalamp/4bd23295a9f32706a48f + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/coding-conventions.md?pixel)]() diff --git a/collab.md b/collab.md index 000fb6ea..293cd6f4 100644 --- a/collab.md +++ b/collab.md @@ -38,3 +38,6 @@ PRs that are incorrectly judged to be merge-able, may be reverted and subject to ## Holds Any maintainer or core contributor who wants to review a PR but does not have time immediately may put a hold on a PR simply by saying so on the PR discussion and offering an ETA measured in single-digit days at most. Any PR that has a hold shall not be merged until the person who requested the hold acks the review, withdraws their hold, or is overruled by a preponderance of maintainers. + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/collab.md?pixel)]() diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md index 50c9769a..f958b124 100644 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -333,3 +333,6 @@ export KUBERNETES_MINION_MEMORY=2048 #### I ran vagrant suspend and nothing works! ```vagrant suspend``` seems to mess up the network. It's not supported at this time. 
+ + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/developer-guides/vagrant.md?pixel)]() diff --git a/development.md b/development.md index 6d6bdb86..02b513cc 100644 --- a/development.md +++ b/development.md @@ -267,3 +267,6 @@ git remote set-url --push upstream no_push ``` hack/run-gendocs.sh ``` + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/development.md?pixel)]() diff --git a/faster_reviews.md b/faster_reviews.md index 2562879b..ed890a7f 100644 --- a/faster_reviews.md +++ b/faster_reviews.md @@ -175,3 +175,6 @@ take the place of common sense and good taste. Use your best judgment, but put a bit of thought into how your work can be made easier to review. If you do these things your PRs will flow much more easily. + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/faster_reviews.md?pixel)]() diff --git a/flaky-tests.md b/flaky-tests.md index e352e110..56bd2c59 100644 --- a/flaky-tests.md +++ b/flaky-tests.md @@ -64,3 +64,6 @@ Eventually you will have sufficient runs for your purposes. At that point you ca If you do a final check for flakes with ```docker ps -a```, ignore tasks that exited -1, since that's what happens when you stop the replication controller. Happy flake hunting! 
+ + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/flaky-tests.md?pixel)]() diff --git a/issues.md b/issues.md index 51395cae..99e1089a 100644 --- a/issues.md +++ b/issues.md @@ -17,3 +17,6 @@ Definitions * design - priority/design is for issues that are used to track design discussions * support - priority/support is used for issues tracking user support requests * untriaged - anything without a priority/X label will be considered untriaged + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/issues.md?pixel)]() diff --git a/logging.md b/logging.md index 23430474..331eda97 100644 --- a/logging.md +++ b/logging.md @@ -24,3 +24,6 @@ The following conventions for the glog levels to use. [glog](http://godoc.org/g * Logging in particularly thorny parts of code where you may want to come back later and check it As per the comments, the practical default level is V(2). Developers and QE environments may wish to run at V(3) or V(4). If you wish to change the log level, you can pass in `-v=X` where X is the desired maximum level to log. + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/logging.md?pixel)]() diff --git a/profiling.md b/profiling.md index 03b17766..1dd42095 100644 --- a/profiling.md +++ b/profiling.md @@ -32,3 +32,6 @@ to get 30 sec. CPU profile. ## Contention profiling To enable contention profiling you need to add line ```rt.SetBlockProfileRate(1)``` in addition to ```m.mux.HandleFunc(...)``` added before (```rt``` stands for ```runtime``` in ```master.go```). This enables 'debug/pprof/block' subpage, which can be used as an input to ```go tool pprof```. 
+ + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/profiling.md?pixel)]() diff --git a/pull-requests.md b/pull-requests.md index ed12b839..627bc64e 100644 --- a/pull-requests.md +++ b/pull-requests.md @@ -14,3 +14,6 @@ We want to limit the total number of PRs in flight to: * Maintain a clean project * Remove old PRs that would be difficult to rebase as the underlying code has changed over time * Encourage code velocity + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/pull-requests.md?pixel)]() diff --git a/releasing.md b/releasing.md index 02bb0ca4..803e321a 100644 --- a/releasing.md +++ b/releasing.md @@ -163,3 +163,6 @@ After this summary, preamble, all the relevant PRs/issues that got in that version should be listed and linked together with a small summary understandable by plain mortals (in a perfect world PR/issue's title would be enough but often it is just too cryptic/geeky/domain-specific that it isn't). + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/releasing.md?pixel)]() diff --git a/writing-a-getting-started-guide.md b/writing-a-getting-started-guide.md index c1066f06..873fafcc 100644 --- a/writing-a-getting-started-guide.md +++ b/writing-a-getting-started-guide.md @@ -97,3 +97,6 @@ These guidelines say *what* to do. See the Rationale section for *why*. if you use another Configuration Management tool -- you just have to do some manual steps during testing and deployment. 
+ + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/writing-a-getting-started-guide.md?pixel)]() -- cgit v1.2.3 From cf0bda9102a72d6fb78c716882dc280f2394abfe Mon Sep 17 00:00:00 2001 From: Chao Xu Date: Fri, 15 May 2015 15:28:28 -0700 Subject: update docs/devel flaky-tests to v1beta3 --- flaky-tests.md | 42 ++++++++++++++++++------------------------ 1 file changed, 18 insertions(+), 24 deletions(-) diff --git a/flaky-tests.md b/flaky-tests.md index e352e110..a7ea75f8 100644 --- a/flaky-tests.md +++ b/flaky-tests.md @@ -11,33 +11,27 @@ There is a testing image ```brendanburns/flake``` up on the docker hub. We will Create a replication controller with the following config: ```yaml -id: flakecontroller +apiVersion: v1beta3 kind: ReplicationController -apiVersion: v1beta1 -desiredState: +metadata: + name: flakecontroller +spec: replicas: 24 - replicaSelector: - name: flake - podTemplate: - desiredState: - manifest: - version: v1beta1 - id: "" - volumes: [] - containers: - - name: flake - image: brendanburns/flake - env: - - name: TEST_PACKAGE - value: pkg/tools - - name: REPO_SPEC - value: https://github.com/GoogleCloudPlatform/kubernetes - restartpolicy: {} - labels: - name: flake -labels: - name: flake + template: + metadata: + labels: + name: flake + spec: + containers: + - name: flake + image: brendanburns/flake + env: + - name: TEST_PACKAGE + value: pkg/tools + - name: REPO_SPEC + value: https://github.com/GoogleCloudPlatform/kubernetes ``` +Note that we omit the labels and the selector fields of the replication controller, because they will be populated from the labels field of the pod template by default. ```./cluster/kubectl.sh create -f controller.yaml``` -- cgit v1.2.3 From 31b44ff68f996c4004bec2171c2a8448a942005b Mon Sep 17 00:00:00 2001 From: Eric Tune Date: Tue, 28 Apr 2015 18:10:59 -0700 Subject: Add API change suggestions. 
--- api_changes.md | 12 +++++++++--- 1 file changed, 9 insertions(+), 3 deletions(-) diff --git a/api_changes.md b/api_changes.md index c2932215..6e29c3f6 100644 --- a/api_changes.md +++ b/api_changes.md @@ -12,7 +12,7 @@ not all API changes will need all of these steps. ## Operational overview -It's important to have a high level understanding of the API system used in +It is important to have a high level understanding of the API system used in Kubernetes in order to navigate the rest of this document. As mentioned above, the internal representation of an API object is decoupled @@ -24,13 +24,13 @@ a GET involves a great deal of machinery. The conversion process is logically a "star" with the internal form at the center. Every versioned API can be converted to the internal form (and vice-versa), but versioned APIs do not convert to other versioned APIs directly. -This sounds like a heavy process, but in reality we don't intend to keep more +This sounds like a heavy process, but in reality we do not intend to keep more than a small number of versions alive at once. While all of the Kubernetes code operates on the internal structures, they are always converted to a versioned form before being written to storage (disk or etcd) or being sent over a wire. Clients should consume and operate on the versioned APIs exclusively. -To demonstrate the general process, let's walk through a (hypothetical) example: +To demonstrate the general process, here is a (hypothetical) example: 1. A user POSTs a `Pod` object to `/api/v7beta1/...` 2. The JSON is unmarshalled into a `v7beta1.Pod` structure @@ -176,6 +176,12 @@ If your change includes new fields for which you will need default values, you need to add cases to `pkg/api//defaults.go`. Of course, since you have added code, you have to add a test: `pkg/api//defaults_test.go`. +Do use pointers to scalars when you need to distinguish between an unset value +and an automatic zero value. 
For example, +`PodSpec.TerminationGracePeriodSeconds` is defined as `*int64` in the go type +definition. A zero value means 0 seconds, and a nil value asks the system to +pick a default. + Don't forget to run the tests! ### Edit conversion.go -- cgit v1.2.3 From bb07a8b81e212671a8c398723b7941149e46e952 Mon Sep 17 00:00:00 2001 From: Tim Hockin Date: Thu, 14 May 2015 17:38:08 -0700 Subject: Don't rename api imports in conversions --- api_changes.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/api_changes.md b/api_changes.md index 6e29c3f6..6eff094b 100644 --- a/api_changes.md +++ b/api_changes.md @@ -255,7 +255,7 @@ regenerate auto-generated ones. To regenerate them: ``` - replace all conversion functions (convert\* functions) in the `pkg/api//conversion_generated.go` with the contents of \ - - replace arguments of `newer.Scheme.AddGeneratedConversionFuncs` in the + - replace arguments of `api.Scheme.AddGeneratedConversionFuncs` in the `pkg/api//conversion_generated.go` with the contents of \ Unsurprisingly, adding manually written conversion also requires you to add tests to -- cgit v1.2.3 From 3c173916ea41e7bb03bae1af85eefa4bc027c985 Mon Sep 17 00:00:00 2001 From: Wojciech Tyczynski Date: Tue, 19 May 2015 17:47:03 +0200 Subject: Automatically generate conversions --- api_changes.md | 6 +----- 1 file changed, 1 insertion(+), 5 deletions(-) diff --git a/api_changes.md b/api_changes.md index 6eff094b..4627c6df 100644 --- a/api_changes.md +++ b/api_changes.md @@ -251,12 +251,8 @@ Once all the necessary manually written conversions are added, you need to regenerate auto-generated ones. 
To regenerate them: - run ``` - $ go run cmd/kube-conversion/conversion.go -v -f -n + $ hack/update-generated-conversions.sh ``` - - replace all conversion functions (convert\* functions) in the - `pkg/api//conversion_generated.go` with the contents of \ - - replace arguments of `api.Scheme.AddGeneratedConversionFuncs` in the - `pkg/api//conversion_generated.go` with the contents of \ Unsurprisingly, adding manually written conversion also requires you to add tests to `pkg/api//conversion_test.go`. -- cgit v1.2.3 From c817b2f96f0640e57d9fb5209e152a9dfefae11d Mon Sep 17 00:00:00 2001 From: Chao Xu Date: Wed, 20 May 2015 17:17:01 -0700 Subject: in docs, update replicationController to replicationcontroller --- developer-guides/vagrant.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md index f958b124..d0c07f3f 100644 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -184,7 +184,7 @@ NAME IMAGE(S) HOST LABELS STATUS $ cluster/kubectl.sh get services NAME LABELS SELECTOR IP PORT -$ cluster/kubectl.sh get replicationControllers +$ cluster/kubectl.sh get replicationcontrollers NAME IMAGE(S SELECTOR REPLICAS ``` @@ -224,7 +224,7 @@ kubernetes-minion-1: fa0e29c94501 kubernetes/pause:latest "/pause" 8 minutes ago Up 8 minutes 0.0.0.0:8080->80/tcp k8s--net.a90e7ce4--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--baf5b21b ``` -Going back to listing the pods, services and replicationControllers, you now have: +Going back to listing the pods, services and replicationcontrollers, you now have: ``` $ cluster/kubectl.sh get pods @@ -236,7 +236,7 @@ NAME IMAGE(S) HOST $ cluster/kubectl.sh get services NAME LABELS SELECTOR IP PORT -$ cluster/kubectl.sh get replicationControllers +$ cluster/kubectl.sh get replicationcontrollers NAME IMAGE(S SELECTOR REPLICAS myNginx nginx name=my-nginx 3 ``` -- cgit v1.2.3 From 
bd70869deb13da3d7193141bc85d50591d349aa4 Mon Sep 17 00:00:00 2001 From: Anastasis Andronidis Date: Thu, 21 May 2015 22:53:10 +0200 Subject: rename run-container to run in kubectl --- developer-guides/vagrant.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md index d0c07f3f..31ad79f1 100644 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -191,7 +191,7 @@ NAME IMAGE(S SELECTOR REPLICAS Start a container running nginx with a replication controller and three replicas ``` -$ cluster/kubectl.sh run-container my-nginx --image=nginx --replicas=3 --port=80 +$ cluster/kubectl.sh run my-nginx --image=nginx --replicas=3 --port=80 ``` When listing the pods, you will see that three containers have been started and are in Waiting state: -- cgit v1.2.3 From 4636961f5a4077462e01f7f4514d852801081c74 Mon Sep 17 00:00:00 2001 From: Anastasis Andronidis Date: Thu, 21 May 2015 23:10:25 +0200 Subject: rename resize to scale --- developer-guides/vagrant.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md index 31ad79f1..e51b7187 100644 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -243,10 +243,10 @@ myNginx nginx name=my-nginx 3 We did not start any services, hence there are none listed. But we see three replicas displayed properly. Check the [guestbook](../../examples/guestbook/README.md) application to learn how to create a service. 
-You can already play with resizing the replicas with: +You can already play with scaling the replicas with: ```sh -$ ./cluster/kubectl.sh resize rc my-nginx --replicas=2 +$ ./cluster/kubectl.sh scale rc my-nginx --replicas=2 $ ./cluster/kubectl.sh get pods NAME IMAGE(S) HOST LABELS STATUS 7813c8bd-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.2/10.245.2.2 name=myNginx Running -- cgit v1.2.3 From a5e20a975cbae982e5f8fd3960fd9b0680b9e5d3 Mon Sep 17 00:00:00 2001 From: Wojciech Tyczynski Date: Thu, 28 May 2015 17:41:42 +0200 Subject: Update instructions on conversions. --- api_changes.md | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/api_changes.md b/api_changes.md index 4627c6df..17278c6e 100644 --- a/api_changes.md +++ b/api_changes.md @@ -254,6 +254,12 @@ regenerate auto-generated ones. To regenerate them: $ hack/update-generated-conversions.sh ``` +If running the above script is impossible due to compile errors, the easiest +workaround is to comment out the code causing errors and let the script +regenerate it. If the auto-generated conversion methods are not used by the +manually-written ones, it's fine to just remove the whole file and let the +generator create it from scratch. + Unsurprisingly, adding manually written conversion also requires you to add tests to `pkg/api//conversion_test.go`. -- cgit v1.2.3 From 1351801078e0cfac27bfdbfacc431a43de88b94f Mon Sep 17 00:00:00 2001 From: Alex Robinson Date: Thu, 4 Jun 2015 21:32:29 +0000 Subject: Fix broken links in the vagrant developer guide. --- developer-guides/vagrant.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md index e51b7187..332ac3d5 100644 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -9,7 +9,7 @@ Running kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/deve 2. 
[VMWare Fusion](https://www.vmware.com/products/fusion/) version 5 or greater as well as the appropriate [Vagrant VMWare Fusion provider](https://www.vagrantup.com/vmware) 3. [VMWare Workstation](https://www.vmware.com/products/workstation/) version 9 or greater as well as the [Vagrant VMWare Workstation provider](https://www.vagrantup.com/vmware) 4. [Parallels Desktop](https://www.parallels.com/products/desktop/) version 9 or greater as well as the [Vagrant Parallels provider](https://parallels.github.io/vagrant-parallels/) -3. Get or build a [binary release](../../getting-started-guides/binary_release.md) +3. Get or build a [binary release](/docs/getting-started-guides/binary_release.md) ### Setup @@ -242,7 +242,7 @@ myNginx nginx name=my-nginx 3 ``` We did not start any services, hence there are none listed. But we see three replicas displayed properly. -Check the [guestbook](../../examples/guestbook/README.md) application to learn how to create a service. +Check the [guestbook](/examples/guestbook/README.md) application to learn how to create a service. You can already play with scaling the replicas with: ```sh -- cgit v1.2.3 From 28951be8bb0fcaa9af59f0cad444c56ae7ecda21 Mon Sep 17 00:00:00 2001 From: Kris Rousey Date: Fri, 5 Jun 2015 12:47:15 -0700 Subject: Updating docs/ to v1 --- flaky-tests.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/flaky-tests.md b/flaky-tests.md index 7870517f..5eb09ec9 100644 --- a/flaky-tests.md +++ b/flaky-tests.md @@ -11,7 +11,7 @@ There is a testing image ```brendanburns/flake``` up on the docker hub. We will Create a replication controller with the following config: ```yaml -apiVersion: v1beta3 +apiVersion: v1 kind: ReplicationController metadata: name: flakecontroller -- cgit v1.2.3 From 2f18beac68176d99d4137a59faee0e653571ff63 Mon Sep 17 00:00:00 2001 From: Brendan Burns Date: Fri, 5 Jun 2015 14:50:11 -0700 Subject: Purge cluster/kubectl.sh from nearly all docs. Mark cluster/kubectl.sh as deprecated. 
--- flaky-tests.md | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/flaky-tests.md b/flaky-tests.md index 5eb09ec9..da5549c8 100644 --- a/flaky-tests.md +++ b/flaky-tests.md @@ -33,7 +33,9 @@ spec: ``` Note that we omit the labels and the selector fields of the replication controller, because they will be populated from the labels field of the pod template by default. -```./cluster/kubectl.sh create -f controller.yaml``` +``` +kubectl create -f controller.yaml +``` This will spin up 24 instances of the test. They will run to completion, then exit, and the kubelet will restart them, accumulating more and more runs of the test. You can examine the recent runs of the test by calling ```docker ps -a``` and looking for tasks that exited with non-zero exit codes. Unfortunately, docker ps -a only keeps around the exit status of the last 15-20 containers with the same image, so you have to check them frequently. @@ -52,7 +54,7 @@ grep "Exited ([^0])" output.txt Eventually you will have sufficient runs for your purposes. At that point you can stop and delete the replication controller by running: ```sh -./cluster/kubectl.sh stop replicationcontroller flakecontroller +kubectl stop replicationcontroller flakecontroller ``` If you do a final check for flakes with ```docker ps -a```, ignore tasks that exited -1, since that's what happens when you stop the replication controller. -- cgit v1.2.3 From 9e09c1101a2e808dba17e0a56741a812b4d4cbf4 Mon Sep 17 00:00:00 2001 From: Jeffrey Paine Date: Mon, 8 Jun 2015 16:32:28 -0400 Subject: Consolidate git setup documentation. 
Closes #9091 --- development.md | 79 +++++++++++++++++++++++++++++++++---------------------- git_workflow.png | Bin 0 -> 90004 bytes 2 files changed, 48 insertions(+), 31 deletions(-) create mode 100644 git_workflow.png diff --git a/development.md b/development.md index 02b513cc..2e540bcb 100644 --- a/development.md +++ b/development.md @@ -8,23 +8,62 @@ Official releases are built in Docker containers. Details are [here](../../buil Kubernetes is written in [Go](http://golang.org) programming language. If you haven't set up Go development environment, please follow [this instruction](http://golang.org/doc/code.html) to install go tool and set up GOPATH. Ensure your version of Go is at least 1.3. -## Clone kubernetes into GOPATH +## Git Setup -We highly recommend to put kubernetes' code into your GOPATH. For example, the following commands will download kubernetes' code under the current user's GOPATH (Assuming there's only one directory in GOPATH.): +Below, we outline one of the more common git workflows that core developers use. Other git workflows are also valid. + +### Visual overview +![Git workflow](git_workflow.png) + +### Fork the main repository + +1. Go to https://github.com/GoogleCloudPlatform/kubernetes +2. Click the "Fork" button (at the top right) + +### Clone your fork + +The commands below require that you have $GOPATH set ([$GOPATH docs](https://golang.org/doc/code.html#GOPATH)). We highly recommend you put kubernetes' code into your GOPATH. Note: the commands below will not work if there is more than one directory in your `$GOPATH`. 
``` -$ echo $GOPATH -/home/user/goproj $ mkdir -p $GOPATH/src/github.com/GoogleCloudPlatform/ $ cd $GOPATH/src/github.com/GoogleCloudPlatform/ -$ git clone https://github.com/GoogleCloudPlatform/kubernetes.git +# Replace "$YOUR_GITHUB_USERNAME" below with your github username +$ git clone https://github.com/$YOUR_GITHUB_USERNAME/kubernetes.git +$ cd kubernetes +$ git remote add upstream 'https://github.com/GoogleCloudPlatform/kubernetes.git' +``` + +### Create a branch and make changes + +``` +$ git checkout -b myfeature +# Make your code changes +``` + +### Keeping your development fork in sync + +``` +$ git fetch upstream +$ git rebase upstream/master +``` + +Note: If you have write access to the main repository at github.com/GoogleCloudPlatform/kubernetes, you should modify your git configuration so that you can't accidentally push to upstream: + +``` +git remote set-url --push upstream no_push ``` -The commands above will not work if there are more than one directory in ``$GOPATH``. +### Committing changes to your fork + +``` +$ git commit +$ git push -f origin myfeature +``` + +### Creating a pull request +1. Visit http://github.com/$YOUR_GITHUB_USERNAME/kubernetes +2. Click the "Compare and pull request" button next to your "myfeature" branch. -If you plan to do development, read about the -[Kubernetes Github Flow](https://docs.google.com/presentation/d/1HVxKSnvlc2WJJq8b9KCYtact5ZRrzDzkWgKEfm0QO_o/pub?start=false&loop=false&delayms=3000), -and then clone your own fork of Kubernetes as described there. ## godep and dependency management @@ -240,28 +279,6 @@ See [conformance-test.sh](../../hack/conformance-test.sh). 
## Testing out flaky tests [Instructions here](flaky-tests.md) -## Keeping your development fork in sync - -One time after cloning your forked repo: - -``` -git remote add upstream https://github.com/GoogleCloudPlatform/kubernetes.git -``` - -Then each time you want to sync to upstream: - -``` -git fetch upstream -git rebase upstream/master -``` - -If you have write access to the main repository, you should modify your git configuration so that -you can't accidentally push to upstream: - -``` -git remote set-url --push upstream no_push -``` - ## Regenerating the CLI documentation ``` diff --git a/git_workflow.png b/git_workflow.png new file mode 100644 index 00000000..e3bd70da Binary files /dev/null and b/git_workflow.png differ -- cgit v1.2.3 From 2c9669befd4fe4580bc77bc6f3c40236a07bc651 Mon Sep 17 00:00:00 2001 From: Ed Costello Date: Thu, 11 Jun 2015 01:11:44 -0400 Subject: Copy edits for spelling errors and typos Signed-off-by: Ed Costello --- collab.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/collab.md b/collab.md index 293cd6f4..b424f502 100644 --- a/collab.md +++ b/collab.md @@ -8,7 +8,7 @@ First and foremost: as a potential contributor, your changes and ideas are welco ## Code reviews -All changes must be code reviewed. For non-maintainers this is obvious, since you can't commit anyway. But even for maintainers, we want all changes to get at least one review, preferably (for non-trivial changes obligately) from someone who knows the areas the change touches. For non-trivial changes we may want two reviewers. The primary reviewer will make this decision and nominate a second reviewer, if needed. Except for trivial changes, PRs should not be committed until relevant parties (e.g. owners of the subsystem affected by the PR) have had a reasonable chance to look at PR in their local business hours. +All changes must be code reviewed. For non-maintainers this is obvious, since you can't commit anyway. 
But even for maintainers, we want all changes to get at least one review, preferably (for non-trivial changes obligatorily) from someone who knows the areas the change touches. For non-trivial changes we may want two reviewers. The primary reviewer will make this decision and nominate a second reviewer, if needed. Except for trivial changes, PRs should not be committed until relevant parties (e.g. owners of the subsystem affected by the PR) have had a reasonable chance to look at PR in their local business hours. Most PRs will find reviewers organically. If a maintainer intends to be the primary reviewer of a PR they should set themselves as the assignee on GitHub and say so in a reply to the PR. Only the primary reviewer of a change should actually do the merge, except in rare cases (e.g. they are unavailable in a reasonable timeframe). -- cgit v1.2.3 From 98ebb76f76a840d96487141106b13bfa071ed94f Mon Sep 17 00:00:00 2001 From: Alex Robinson Date: Thu, 18 Jun 2015 00:14:27 +0000 Subject: Add devel doc laying out the steps to add new metrics to the code base. --- instrumentation.md | 36 ++++++++++++++++++++++++++++++++++++ 1 file changed, 36 insertions(+) create mode 100644 instrumentation.md diff --git a/instrumentation.md b/instrumentation.md new file mode 100644 index 00000000..b52480d2 --- /dev/null +++ b/instrumentation.md @@ -0,0 +1,36 @@ +Instrumenting Kubernetes with a new metric +=================== + +The following is a step-by-step guide for adding a new metric to the Kubernetes code base. + +We use the Prometheus monitoring system's golang client library for instrumenting our code. Once you've picked out a file that you want to add a metric to, you should: + +1. Import "github.com/prometheus/client_golang/prometheus". + +2. Create a top-level var to define the metric. For this, you have to: + 1. Pick the type of metric. 
Use a Gauge for things you want to set to a particular value, a Counter for things you want to increment, or a Histogram or Summary for histograms/distributions of values (typically for latency). Histograms are better if you're going to aggregate the values across jobs, while summaries are better if you just want the job to give you a useful summary of the values. + 2. Give the metric a name and description. + 3. Pick whether you want to distinguish different categories of things using labels on the metric. If so, add "Vec" to the name of the type of metric you want and add a slice of the label names to the definition. + + https://github.com/GoogleCloudPlatform/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/apiserver/apiserver.go#L53 + https://github.com/GoogleCloudPlatform/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/kubelet/metrics/metrics.go#L31 + +3. Register the metric so that prometheus will know to export it. + + https://github.com/GoogleCloudPlatform/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/kubelet/metrics/metrics.go#L74 + https://github.com/GoogleCloudPlatform/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/apiserver/apiserver.go#L78 + +4. 
Use the metric by calling the appropriate method for your metric type (Set, Inc/Add, or Observe, respectively for Gauge, Counter, or Histogram/Summary), first calling WithLabelValues if your metric has any labels. + + https://github.com/GoogleCloudPlatform/kubernetes/blob/3ce7fe8310ff081dbbd3d95490193e1d5250d2c9/pkg/kubelet/kubelet.go#L1384 + https://github.com/GoogleCloudPlatform/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/apiserver/apiserver.go#L87 + + +These are the metric type definitions if you're curious to learn about them or need more information: +https://github.com/prometheus/client_golang/blob/master/prometheus/gauge.go +https://github.com/prometheus/client_golang/blob/master/prometheus/counter.go +https://github.com/prometheus/client_golang/blob/master/prometheus/histogram.go +https://github.com/prometheus/client_golang/blob/master/prometheus/summary.go + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/instrumentation.md?pixel)]() -- cgit v1.2.3 From dab63f2280e28641cc0b0890c919106059243cbf Mon Sep 17 00:00:00 2001 From: Marek Biskup Date: Fri, 19 Jun 2015 17:41:12 +0200 Subject: add links to unlinked documents; move making-release-notes.md to docs/devel --- README.md | 2 ++ making-release-notes.md | 33 +++++++++++++++++++++++++++++++++ 2 files changed, 35 insertions(+) create mode 100644 making-release-notes.md diff --git a/README.md b/README.md index 13ccc42d..3ee8a244 100644 --- a/README.md +++ b/README.md @@ -6,6 +6,8 @@ Docs in this directory relate to developing Kubernetes. * **Development Guide** ([development.md](development.md)): Setting up your environment tests. +* **Making release notes** ([making-release-notes.md](making-release-notes.md)): Generating release notes for a new release. + * **Hunting flaky tests** ([flaky-tests.md](flaky-tests.md)): We have a goal of 99.9% flake free tests. Here's how to run your tests many times. 
diff --git a/making-release-notes.md b/making-release-notes.md
new file mode 100644
index 00000000..823bff64
--- /dev/null
+++ b/making-release-notes.md
@@ -0,0 +1,33 @@
+## Making release notes
+This documents the process for making release notes for a release.
+
+### 1) Note the PR number of the previous release
+Find the PR that was merged with the previous release. Remember this number
+_TODO_: Figure out a way to record this somewhere to save the next release engineer time.
+
+### 2) Build the release-notes tool
+```bash
+${KUBERNETES_ROOT}/build/make-release-notes.sh
+```
+
+### 3) Trim the release notes
+This generates a list of the entire set of PRs merged since the last release. It is likely long
+and many PRs aren't worth mentioning.
+
+Open up ```candidate-notes.md``` in your favorite editor.
+
+Remove, regroup, and organize to your heart's content.
+
+
+### 4) Update CHANGELOG.md
+With the final markdown all set, cut and paste it to the top of ```CHANGELOG.md```
+
+### 5) Update the Release page
+ * Switch to the [releases](https://github.com/GoogleCloudPlatform/kubernetes/releases) page.
+ * Open up the release you are working on.
+ * Cut and paste the final markdown from above into the release notes.
+ * Press Save.
+ + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/making-release-notes.md?pixel)]() -- cgit v1.2.3 From 7cf9d2ca9006732cfb199e48d011c2d520461ec9 Mon Sep 17 00:00:00 2001 From: Mike Danese Date: Fri, 19 Jun 2015 09:59:27 -0700 Subject: fix master precommit hook --- making-release-notes.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/making-release-notes.md b/making-release-notes.md index 823bff64..ffccf6d3 100644 --- a/making-release-notes.md +++ b/making-release-notes.md @@ -30,4 +30,4 @@ With the final markdown all set, cut and paste it to the top of ```CHANGELOG.md` -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/making-release-notes.md?pixel)]() +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/making-release-notes.md?pixel)]() -- cgit v1.2.3 From add7066dad3a40e2b8f6891e5dd2cb1943e4bb6c Mon Sep 17 00:00:00 2001 From: goltermann Date: Tue, 23 Jun 2015 11:46:19 -0700 Subject: Add PR merge policy for RC. Link to ok-to-merge label --- pull-requests.md | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/pull-requests.md b/pull-requests.md index 627bc64e..1b5c30e6 100644 --- a/pull-requests.md +++ b/pull-requests.md @@ -15,5 +15,17 @@ We want to limit the total number of PRs in flight to: * Remove old PRs that would be difficult to rebase as the underlying code has changed over time * Encourage code velocity +RC to v1.0 Pull Requests +------------------------ + +Between the first RC build (~6/22) and v1.0, we will adopt a higher bar for PR merges. For v1.0 to be a stable release, we need to ensure that any fixes going in are very well tested and have a low risk of breaking anything. Refactors and complex changes will be rejected in favor of more strategic and smaller workarounds. + +These PRs require: +* A risk assessment by the code author in the PR. 
This should outline which parts of the code are being touched, the risk of regression, and the complexity of the code.
+* Two LGTMs from experienced reviewers.
+
+Once those requirements are met, they will be labeled [ok-to-merge](https://github.com/GoogleCloudPlatform/kubernetes/pulls?utf8=%E2%9C%93&q=is%3Aopen+is%3Apr+label%3Aok-to-merge) and can be merged.
+
+These restrictions will be relaxed after v1.0 is released.
 
 [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/pull-requests.md?pixel)]()
-- cgit v1.2.3


From 076fe1da6660e66243622d03d4d719ba3be35914 Mon Sep 17 00:00:00 2001
From: Marek Biskup
Date: Thu, 25 Jun 2015 08:36:44 +0200
Subject: add missing document links

---
 README.md | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/README.md b/README.md
index 3ee8a244..dc2909ff 100644
--- a/README.md
+++ b/README.md
@@ -22,5 +22,13 @@ Docs in this directory relate to developing Kubernetes.
 * **Profiling Kubernetes** ([profiling.md](profiling.md)): How to plug in go pprof profiler to Kubernetes.
 
+* **Instrumenting Kubernetes with a new metric**
+  ([instrumentation.md](instrumentation.md)): How to add a new metric to the
+  Kubernetes code base.
+
+* **Coding Conventions** ([coding-conventions.md](coding-conventions.md)):
+  Coding style advice for contributors.
+
+* **Faster PR reviews** ([faster_reviews.md](faster_reviews.md)): How to get faster PR reviews.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/README.md?pixel)]()
-- cgit v1.2.3


From 9ed56207140ce5ba468e9db71f7eaaf97789b871 Mon Sep 17 00:00:00 2001
From: Mike Danese
Date: Fri, 26 Jun 2015 14:42:48 -0700
Subject: add documentation and script on how to get recent and "nightly" builds

---
 README.md         |  2 ++
 getting-builds.md | 24 ++++++++++++++++++++++++
 2 files changed, 26 insertions(+)
 create mode 100644 getting-builds.md

diff --git a/README.md b/README.md
index dc2909ff..5957902f 100644
--- a/README.md
+++ b/README.md
@@ -31,4 +31,6 @@ Docs in this directory relate to developing Kubernetes.
 
 * **Faster PR reviews** ([faster_reviews.md](faster_reviews.md)): How to get faster PR reviews.
 
+* **Getting Recent Builds** ([getting-builds.md](getting-builds.md)): How to get recent builds, including the latest builds that pass CI.
+
 [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/README.md?pixel)]()

diff --git a/getting-builds.md b/getting-builds.md
new file mode 100644
index 00000000..dbad8f3a
--- /dev/null
+++ b/getting-builds.md
@@ -0,0 +1,24 @@
+# Getting Kubernetes Builds
+
+You can use [hack/get-build.sh](../../hack/get-build.sh), or use it as a reference for how to get the most recent builds with curl. With `get-build.sh` you can grab the most recent stable build, the most recent release candidate, or the most recent build to pass our CI and GCE e2e tests (essentially a nightly build).
+
+```
+usage:
+  ./hack/get-build.sh [stable|release|latest|latest-green]
+
+  stable: latest stable version
+  release: latest release candidate
+  latest: latest ci build
+  latest-green: latest ci build to pass gce e2e
+```
+
+You can also use the gsutil tool to explore the Google Cloud Storage release bucket.
Here are some examples:
+```
+gsutil cat gs://kubernetes-release/ci/latest.txt          # output the latest ci version number
+gsutil cat gs://kubernetes-release/ci/latest-green.txt    # output the latest ci version number that passed gce e2e
+gsutil ls gs://kubernetes-release/ci/v0.20.0-29-g29a55cc/ # list the contents of a ci release
+gsutil ls gs://kubernetes-release/release                 # list all official releases and rcs
+```
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/getting-builds.md?pixel)]()
-- cgit v1.2.3


From ca9ef4abe107ab2c3b7f763bdf49aeff8f2a3d0c Mon Sep 17 00:00:00 2001
From: David Oppenheimer
Date: Tue, 7 Jul 2015 13:06:19 -0700
Subject: Move scheduler overview from docs/design/ to docs/devel/

---
 scheduler.md | 50 ++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 50 insertions(+)
 create mode 100644 scheduler.md

diff --git a/scheduler.md b/scheduler.md
new file mode 100644
index 00000000..ac01e6db
--- /dev/null
+++ b/scheduler.md
@@ -0,0 +1,50 @@
+
+# The Kubernetes Scheduler
+
+The Kubernetes scheduler runs as a process alongside the other master
+components such as the API server. Its interface to the API server is to watch
+for Pods with an empty PodSpec.NodeName, and for each Pod, it posts a Binding
+indicating where the Pod should be scheduled.
+
+## The scheduling process
+
+The scheduler tries to find a node for each Pod, one at a time, as it notices
+these Pods via watch. There are three steps. First, it applies a set of "predicates" that filter out
+inappropriate nodes. For example, if the PodSpec specifies resource limits, then the scheduler
+will filter out nodes that don't have at least that many resources available (computed
+as the capacity of the node minus the sum of the resource limits of the containers that
+are already running on the node). Second, it applies a set of "priority functions"
+that rank the nodes that weren't filtered out by the predicate check.
For example, +it tries to spread Pods across nodes while at the same time favoring the least-loaded +nodes (where "load" here is sum of the resource limits of the containers running on the node, +divided by the node's capacity). +Finally, the node with the highest priority is chosen +(or, if there are multiple such nodes, then one of them is chosen at random). The code +for this main scheduling loop is in the function `Schedule()` in +[plugin/pkg/scheduler/generic_scheduler.go](../../plugin/pkg/scheduler/generic_scheduler.go) + +## Scheduler extensibility + +The scheduler is extensible: the cluster administrator can choose which of the pre-defined +scheduling policies to apply, and can add new ones. The built-in predicates and priorities are +defined in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](../../plugin/pkg/scheduler/algorithm/predicates/predicates.go) and +[plugin/pkg/scheduler/algorithm/priorities/priorities.go](../../plugin/pkg/scheduler/algorithm/priorities/priorities.go), respectively. +The policies that are applied when scheduling can be chosen in one of two ways. Normally, +the policies used are selected by the functions `defaultPredicates()` and `defaultPriorities()` in +[plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](../../plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). +However, the choice of policies +can be overridden by passing the command-line flag `--policy-config-file` to the scheduler, pointing to a JSON +file specifying which scheduling policies to use. See +[examples/scheduler-policy-config.json](../../examples/scheduler-policy-config.json) for an example +config file. (Note that the config file format is versioned; the API is defined in +[plugin/pkg/scheduler/api/](../../plugin/pkg/scheduler/api/)). +Thus to add a new scheduling policy, you should modify predicates.go or priorities.go, +and either register the policy in `defaultPredicates()` or `defaultPriorities()`, or use a policy config file. 
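The predicate-then-priority flow described above can be sketched as follows. This is a hedged illustration, not the scheduler's real API: Kubernetes implements this loop in Go in `plugin/pkg/scheduler/generic_scheduler.go`, and all the types, node data, and numbers here are made up for the example.

```python
import random

# Illustrative sketch of the scheduling loop described above: filter nodes
# with predicates, rank the survivors with priority functions, pick the best.
# Node/pod dicts and the two sample policies below are hypothetical.

def pod_fits_resources(pod, node):
    """Predicate: the node's free capacity (capacity minus already-committed
    limits) must cover the pod's limit."""
    free = node["capacity"] - sum(p["limit"] for p in node["pods"])
    return pod["limit"] <= free

def least_loaded(pod, node):
    """Priority: favor nodes whose committed-load fraction is lowest."""
    load = sum(p["limit"] for p in node["pods"]) / node["capacity"]
    return 1.0 - load  # higher is better

def schedule(pod, nodes, predicates, priorities):
    # Step 1: predicates filter out infeasible nodes.
    feasible = [n for n in nodes if all(p(pod, n) for p in predicates)]
    if not feasible:
        return None  # no feasible node; the pod stays pending
    # Step 2: priority functions rank the survivors.
    scored = [(sum(f(pod, n) for f in priorities), n) for n in feasible]
    # Step 3: pick the highest-scoring node, breaking ties at random.
    top = max(score for score, _ in scored)
    winners = [n for score, n in scored if score == top]
    return random.choice(winners)["name"]

nodes = [
    {"name": "node-a", "capacity": 4.0, "pods": [{"limit": 3.0}]},
    {"name": "node-b", "capacity": 4.0, "pods": [{"limit": 1.0}]},
]
pod = {"limit": 2.0}
print(schedule(pod, nodes, [pod_fits_resources], [least_loaded]))  # -> node-b
```

Here node-a is filtered out (only 1.0 unit free for a 2.0-unit pod), so node-b wins without a tiebreak; the random choice only matters when several nodes share the top score.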
+
+## Exploring the code
+
+If you want to get a global picture of how the scheduler works, you can start in
+[plugin/cmd/kube-scheduler/app/server.go](../../plugin/cmd/kube-scheduler/app/server.go)
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/scheduler.md?pixel)]()
-- cgit v1.2.3


From 2b8e318ccafb353dc06bb7066ccb8671591bbaba Mon Sep 17 00:00:00 2001
From: Alex Mohr
Date: Tue, 7 Jul 2015 16:29:18 -0700
Subject: Update release notes tool and documentation

---
 making-release-notes.md | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/making-release-notes.md b/making-release-notes.md
index ffccf6d3..5d08ac50 100644
--- a/making-release-notes.md
+++ b/making-release-notes.md
@@ -2,17 +2,21 @@ This documents the process for making release notes for a release.
 
 ### 1) Note the PR number of the previous release
-Find the PR that was merged with the previous release. Remember this number
+Find the most-recent PR that was merged with the previous .0 release. Remember this as $LASTPR.
 _TODO_: Figure out a way to record this somewhere to save the next release engineer time.
 
-### 2) Build the release-notes tool
+Find the most-recent PR that was merged with the current .0 release. Remember this as $CURRENTPR.
+
+### 2) Run the release-notes tool
 ```bash
-${KUBERNETES_ROOT}/build/make-release-notes.sh
+${KUBERNETES_ROOT}/build/make-release-notes.sh $LASTPR $CURRENTPR
 ```
 
 ### 3) Trim the release notes
-This generates a list of the entire set of PRs merged since the last release. It is likely long
-and many PRs aren't worth mentioning.
+This generates a list of the entire set of PRs merged since the last minor
+release. It is likely long and many PRs aren't worth mentioning. If any of the
+PRs were cherry-picked into patches on the last minor release, you should exclude
+them from the current release's notes.
 
 Open up ```candidate-notes.md``` in your favorite editor.
-- cgit v1.2.3


From 8c28498ca08ac4cd76ea1d23992836dec63581f6 Mon Sep 17 00:00:00 2001
From: Janet Kuo
Date: Tue, 7 Jul 2015 18:02:21 -0700
Subject: Update kubectl get command in docs/devel/

---
 developer-guides/vagrant.md | 70 +++++++++++++++++++++++----------------------
 1 file changed, 36 insertions(+), 34 deletions(-)

diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md
index 332ac3d5..d8d7a1ec 100644
--- a/developer-guides/vagrant.md
+++ b/developer-guides/vagrant.md
@@ -36,14 +36,14 @@ Vagrant will provision each machine in the cluster with all the necessary compon
 
 By default, each VM in the cluster is running Fedora, and all of the Kubernetes services are installed into systemd.
 
-To access the master or any minion:
+To access the master or any node:
 
 ```sh
 vagrant ssh master
 vagrant ssh minion-1
 ```
 
-If you are running more than one minion, you can access the others by:
+If you are running more than one node, you can access the others by:
 
 ```sh
 vagrant ssh minion-2
@@ -97,12 +97,12 @@ Once your Vagrant machines are up and provisioned, the first thing to do is to c
 
 You may need to build the binaries first, you can do this with ```make```
 
 ```sh
-$ ./cluster/kubectl.sh get minions
+$ ./cluster/kubectl.sh get nodes
 
-NAME         LABELS
-10.245.1.4
-10.245.1.5
-10.245.1.3
+NAME                     LABELS                                          STATUS
+kubernetes-minion-0whl   kubernetes.io/hostname=kubernetes-minion-0whl   Ready
+kubernetes-minion-4jdf   kubernetes.io/hostname=kubernetes-minion-4jdf   Ready
+kubernetes-minion-epbe   kubernetes.io/hostname=kubernetes-minion-epbe   Ready
 ```
 
 ### Interacting with your Kubernetes cluster with the `kube-*` scripts.
@@ -153,23 +153,23 @@ cat ~/.kubernetes_vagrant_auth
 }
 ```
 
-You should now be set to use the `cluster/kubectl.sh` script. For example try to list the minions that you have started with:
+You should now be set to use the `cluster/kubectl.sh` script.
For example try to list the nodes that you have started with: ```sh -./cluster/kubectl.sh get minions +./cluster/kubectl.sh get nodes ``` ### Running containers -Your cluster is running, you can list the minions in your cluster: +Your cluster is running, you can list the nodes in your cluster: ```sh -$ ./cluster/kubectl.sh get minions +$ ./cluster/kubectl.sh get nodes -NAME LABELS -10.245.2.4 -10.245.2.3 -10.245.2.2 +NAME LABELS STATUS +kubernetes-minion-0whl kubernetes.io/hostname=kubernetes-minion-0whl Ready +kubernetes-minion-4jdf kubernetes.io/hostname=kubernetes-minion-4jdf Ready +kubernetes-minion-epbe kubernetes.io/hostname=kubernetes-minion-epbe Ready ``` Now start running some containers! @@ -179,29 +179,31 @@ Before starting a container there will be no pods, services and replication cont ``` $ cluster/kubectl.sh get pods -NAME IMAGE(S) HOST LABELS STATUS +NAME READY STATUS RESTARTS AGE $ cluster/kubectl.sh get services -NAME LABELS SELECTOR IP PORT +NAME LABELS SELECTOR IP(S) PORT(S) -$ cluster/kubectl.sh get replicationcontrollers -NAME IMAGE(S SELECTOR REPLICAS +$ cluster/kubectl.sh get rc +CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS ``` Start a container running nginx with a replication controller and three replicas ``` $ cluster/kubectl.sh run my-nginx --image=nginx --replicas=3 --port=80 +CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS +my-nginx my-nginx nginx run=my-nginx 3 ``` When listing the pods, you will see that three containers have been started and are in Waiting state: ``` $ cluster/kubectl.sh get pods -NAME IMAGE(S) HOST LABELS STATUS -781191ff-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.4/10.245.2.4 name=myNginx Waiting -7813c8bd-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.2/10.245.2.2 name=myNginx Waiting -78140853-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.3/10.245.2.3 name=myNginx Waiting +NAME READY STATUS RESTARTS AGE +my-nginx-389da 1/1 Waiting 0 33s +my-nginx-kqdjk 1/1 Waiting 0 33s +my-nginx-nyj3x 1/1 Waiting 0 33s ``` 
You need to wait for the provisioning to complete, you can monitor the minions by doing: @@ -228,17 +230,17 @@ Going back to listing the pods, services and replicationcontrollers, you now hav ``` $ cluster/kubectl.sh get pods -NAME IMAGE(S) HOST LABELS STATUS -781191ff-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.4/10.245.2.4 name=myNginx Running -7813c8bd-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.2/10.245.2.2 name=myNginx Running -78140853-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.3/10.245.2.3 name=myNginx Running +NAME READY STATUS RESTARTS AGE +my-nginx-389da 1/1 Running 0 33s +my-nginx-kqdjk 1/1 Running 0 33s +my-nginx-nyj3x 1/1 Running 0 33s $ cluster/kubectl.sh get services -NAME LABELS SELECTOR IP PORT +NAME LABELS SELECTOR IP(S) PORT(S) -$ cluster/kubectl.sh get replicationcontrollers -NAME IMAGE(S SELECTOR REPLICAS -myNginx nginx name=my-nginx 3 +$ cluster/kubectl.sh get rc +NAME IMAGE(S) SELECTOR REPLICAS +my-nginx nginx run=my-nginx 3 ``` We did not start any services, hence there are none listed. But we see three replicas displayed properly. @@ -248,9 +250,9 @@ You can already play with scaling the replicas with: ```sh $ ./cluster/kubectl.sh scale rc my-nginx --replicas=2 $ ./cluster/kubectl.sh get pods -NAME IMAGE(S) HOST LABELS STATUS -7813c8bd-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.2/10.245.2.2 name=myNginx Running -78140853-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.3/10.245.2.3 name=myNginx Running +NAME READY STATUS RESTARTS AGE +my-nginx-kqdjk 1/1 Running 0 13m +my-nginx-nyj3x 1/1 Running 0 13m ``` Congratulations! -- cgit v1.2.3 From 2fe55a7351c1beb2e07ed9ab470500737d08527f Mon Sep 17 00:00:00 2001 From: Zach Loafman Date: Thu, 2 Jul 2015 08:04:24 -0700 Subject: Update releasing.md with Kubernetes release process This updates releasing.md with actual instructions on how to cut a release, leaving the theory section of that document alone. Along the way, I streamlined tiny bits of the existing process as I was describing them. 
The instructions are possibly pedantic, but should be executable by anyone at
this point, versus taking someone versed in the dark arts.

Relies on #10910. Fixes #1883.
---
 releasing.md | 138 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 135 insertions(+), 3 deletions(-)

diff --git a/releasing.md b/releasing.md
index 803e321a..b621c526 100644
--- a/releasing.md
+++ b/releasing.md
@@ -1,7 +1,138 @@
 # Releasing Kubernetes
 
-This document explains how to create a Kubernetes release (as in version) and
-how the version information gets embedded into the built binaries.
+This document explains how to cut a release, and the theory behind it. If you
+just want to cut a release and move on with your life, you can stop reading
+after the first section.
+
+## How to cut a Kubernetes release
+
+Regardless of whether you are cutting a major or minor version, cutting a
+release breaks down into four pieces:
+
+1. Selecting release components.
+1. Tagging and merging the release in Git.
+1. Building and pushing the binaries.
+1. Writing release notes.
+
+You should progress in this strict order.
+
+### Building a New Major/Minor Version (`vX.Y.0`)
+
+#### Selecting Release Components
+
+When cutting a major/minor release, your first job is to find the branch
+point. We cut `vX.Y.0` releases directly from `master`, which is also the
+branch that we have the most continuous validation on. Go first to [the main GCE
+Jenkins end-to-end job](http://go/k8s-test/job/kubernetes-e2e-gce) and next to [the
+Critical Builds page](http://go/k8s-test/view/Critical%20Builds) and hopefully find a
+recent Git hash that looks stable across at least `kubernetes-e2e-gce` and
+`kubernetes-e2e-gke-ci`. First glance through builds and look for nice solid
+rows of green builds, and then check temporally with the other Critical Builds
+to make sure they're solid around then as well.
Once you find some greens, you
+can find the Git hash for a build by looking at the "Console Log", then look for
+`githash=`. You should see a line like:
+
+```
++ githash=v0.20.2-322-g974377b
+```
+
+Because Jenkins builds frequently, if you're looking between jobs
+(e.g. `kubernetes-e2e-gke-ci` and `kubernetes-e2e-gce`), there may be no single
+`githash` that's been run on both jobs. In that case, take a green
+`kubernetes-e2e-gce` build (but please check that it corresponds to a temporally
+similar build that's green on `kubernetes-e2e-gke-ci`). Lastly, if you're having
+trouble understanding why the GKE continuous integration clusters are failing
+and you're trying to cut a release, don't hesitate to contact the GKE
+oncall.
+
+Before proceeding to the next step:
+```
+export BRANCHPOINT=v0.20.2-322-g974377b
+```
+Where `v0.20.2-322-g974377b` is the git hash you decided on. This will become
+our (retroactive) branch point.
+
+#### Branching, Tagging and Merging
+Do the following:
+
+1. `export VER=x.y` (e.g. `0.20` for v0.20)
+1. cd to the base of the repo
+1. `git fetch upstream && git checkout -b release-${VER} ${BRANCHPOINT}` (you did set `${BRANCHPOINT}`, right?)
+1. Make sure you don't have any files you care about littering your repo (they
+   better be checked in or outside the repo, or the next step will delete them).
+1. `make clean && git reset --hard HEAD && git clean -xdf`
+1. `make` (TBD: you really shouldn't have to do this, but the swagger output step requires it right now)
+1. `./build/mark-new-version.sh v${VER}.0` to mark the new release and get further
+   instructions. This creates a series of commits on the branch you're working
+   on (`release-${VER}`), including forking our documentation for the release,
+   the release version commit (which is then tagged), and the post-release
+   version commit.
+1. Follow the instructions given to you by that script. They are canon for the
+   remainder of the Git process.
If you don't understand something in that + process, please ask! + +**TODO**: how to fix tags, etc., if you have to shift the release branchpoint. + +#### Building and Pushing Binaries + +In your git repo (you still have `${VER}` set from above right?): + +1. `git checkout upstream/master && build/build-official-release.sh v${VER}.0` (the `build-official-release.sh` script is version agnostic, so it's best to run it off `master` directly). +1. Follow the instructions given to you by that script. +1. At this point, you've done all the Git bits, you've got all the binary bits pushed, and you've got the template for the release started on GitHub. + +#### Writing Release Notes + +[This helpful guide](making-release-notes.md) describes how to write release +notes for a major/minor release. In the release template on GitHub, leave the +last PR number that the tool finds for the `.0` release, so the next releaser +doesn't have to hunt. + +### Building a New Patch Release (`vX.Y.Z` for `Z > 0`) + +#### Selecting Release Components + +We cut `vX.Y.Z` releases from the `release-vX.Y` branch after all cherry picks +to the branch have been resolved. You should ensure all outstanding cherry picks +have been reviewed and merged and the branch validated on Jenkins (validation +TBD). See the [Cherry Picks](cherry-picks.md) for more information on how to +manage cherry picks prior to cutting the release. + +#### Tagging and Merging + +Do the following (you still have `${VER}` set and you're still working on the +`release-${VER}` branch, right?): + +1. `export PATCH=Z` where `Z` is the patch level of `vX.Y.Z` +1. `make` (TBD: you really shouldn't have to do this, but the swagger output step requires it right now) +1. `./build/mark-new-version.sh v${VER}.${PATCH}` to mark the new release and get further + instructions. 
This creates a series of commits on the branch you're working
+   on (`release-${VER}`), including forking our documentation for the release,
+   the release version commit (which is then tagged), and the post-release
+   version commit.
+1. Follow the instructions given to you by that script. They are canon for the
+   remainder of the Git process. If you don't understand something in that
+   process, please ask!
+
+**TODO**: how to fix tags, etc., if the release is changed.
+
+#### Building and Pushing Binaries
+
+In your git repo (you still have `${VER}` and `${PATCH}` set from above right?):
+
+1. `git checkout upstream/master && build/build-official-release.sh
+   v${VER}.${PATCH}` (the `build-official-release.sh` script is version
+   agnostic, so it's best to run it off `master` directly).
+1. Follow the instructions given to you by that script. At this point, you've
+   done all the Git bits, you've got all the binary bits pushed, and you've got
+   the template for the release started on GitHub.
+
+#### Writing Release Notes
+
+Release notes for a patch release are relatively fast: `git log release-${VER}`
+(If you followed the procedure in the first section, all the cherry-picks will
+have the pull request number in the commit log). Unless there's some reason not
+to, just include all the PRs back to the last release.
 
 ## Origin of the Sources
@@ -116,7 +247,8 @@ We then send PR 100 with both commits in it.
 
 Once the PR is accepted, we can use `git tag -a` to create an annotated tag
 *pointing to the one commit* that has `v0.5` in `pkg/version/base.go` and push
 it to GitHub. (Unfortunately GitHub tags/releases are not annotated tags, so
-this needs to be done from a git client and pushed to GitHub using SSH.)
+this needs to be done from a git client and pushed to GitHub using SSH or
+HTTPS.)
## Parallel Commits
-- cgit v1.2.3


From 4b1d27f1ee0de2e4fdbd83286aea851ad5b29a4c Mon Sep 17 00:00:00 2001
From: Zach Loafman
Date: Thu, 9 Jul 2015 14:24:02 -0700
Subject: Add a short doc on cherry picks

---
 cherry-picks.md | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)
 create mode 100644 cherry-picks.md

diff --git a/cherry-picks.md b/cherry-picks.md
new file mode 100644
index 00000000..b6669110
--- /dev/null
+++ b/cherry-picks.md
@@ -0,0 +1,32 @@
+# Overview
+
+This document explains how cherry picks are managed on release branches within the
+Kubernetes projects.
+
+## Propose a Cherry Pick
+
+Any contributor can propose a cherry pick of any pull request, like so:
+
+```
+hack/cherry_pick_pull.sh 98765 upstream/release-3.14
+```
+
+This will walk you through the steps to propose an automated cherry pick of pull
+ #98765 for remote branch `upstream/release-3.14`.
+
+## Cherry Pick Review
+
+Cherry pick pull requests are reviewed differently than normal pull requests. In
+particular, they may be self-merged by the release branch owner without fanfare,
+in the case that the release branch owner knows the cherry pick was already
+requested - this should not be the norm, but it may happen.
+
+[Contributor License Agreements](../../CONTRIBUTING.md) are considered implicit
+for all code within cherry-pick pull requests, ***unless there is a large
+conflict***.
+
+## Searching for Cherry Picks
+
+Now that we've structured cherry picks as PRs, searching for all cherry-picks
+against a release is a GitHub query. For example,
+[this query is all of the v0.21.x cherry-picks](https://github.com/GoogleCloudPlatform/kubernetes/pulls?utf8=%E2%9C%93&q=is%3Apr+%22automated+cherry+pick%22+base%3Arelease-0.21)
-- cgit v1.2.3


From 3e5d853c22dc580e4c8c75616f5654c3ca10fe6e Mon Sep 17 00:00:00 2001
From: jiangyaoguo
Date: Wed, 8 Jul 2015 01:37:40 +0800
Subject: change get minions cmd in docs

---
 development.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/development.md b/development.md
index 2e540bcb..07b61c47 100644
--- a/development.md
+++ b/development.md
@@ -205,7 +205,7 @@ hack/test-integration.sh
 
 ## End-to-End tests
 
-You can run an end-to-end test which will bring up a master and two minions, perform some tests, and then tear everything down. Make sure you have followed the getting started steps for your chosen cloud platform (which might involve changing the `KUBERNETES_PROVIDER` environment variable to something other than "gce".
+You can run an end-to-end test which will bring up a master and two nodes, perform some tests, and then tear everything down. Make sure you have followed the getting started steps for your chosen cloud platform (which might involve changing the `KUBERNETES_PROVIDER` environment variable to something other than "gce").
 ```
 cd kubernetes
 hack/e2e-test.sh
-- cgit v1.2.3


From 75be32d08ec39a3ac3a5c9d450bd946e96077934 Mon Sep 17 00:00:00 2001
From: dingh
Date: Wed, 8 Jul 2015 16:34:07 +0800
Subject: Create schedule_algorithm file

This document briefly explains the scheduling algorithm of Kubernetes and can be
complementary to scheduler.md.
--- scheduler_algorithm.md | 36 ++++++++++++++++++++++++++++++++++++ 1 file changed, 36 insertions(+) create mode 100644 scheduler_algorithm.md diff --git a/scheduler_algorithm.md b/scheduler_algorithm.md new file mode 100644 index 00000000..dbd0d7cd --- /dev/null +++ b/scheduler_algorithm.md @@ -0,0 +1,36 @@ +# Scheduler Algorithm in Kubernetes + +For each unscheduled Pod, the Kubernetes scheduler tries to find a node across the cluster according to a set of rules. A general introduction to the Kubernetes scheduler can be found at [docs/devel/scheduler.md](../../docs/devel/scheduler.md). In this document, the algorithm of how to select a node for the Pod is explained. There are two steps before a destination node of a Pod is chosen. The first step is filtering all the nodes and the second is ranking the remaining nodes to find a best fit for the Pod. + +## Filtering the nodes +The purpose of filtering the nodes is to filter out the nodes that do not meet certain requirements of the Pod. For example, if the free resource on a node (measured by the capacity minus the sum of the resource limits of all the Pods that already run on the node) is less than the Pod's required resource, the node should not be considered in the ranking phase so it is filtered out. Currently, there are several "predicates" implementing different filtering policies, including: + +- `NoDiskConflict`: Evaluate if a pod can fit due to the volumes it requests, and those that are already mounted. +- `PodFitsResources`: Check if the free resource (CPU and Memory) meets the requirement of the Pod. The free resource is measured by the capacity minus the sum of limits of all Pods on the node. +- `PodFitsPorts`: Check if any HostPort required by the Pod is already occupied on the node. +- `PodFitsHost`: Filter out all nodes except the one specified in the PodSpec's NodeName field. +- `PodSelectorMatches`: Check if the labels of the node match the labels specified in the Pod's `nodeSelector` field. 
+- `CheckNodeLabelPresence`: Check if all the specified labels exist on a node or not, regardless of the value.
+
+The details of the above predicates can be found in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](../../plugin/pkg/scheduler/algorithm/predicates/predicates.go). All predicates mentioned above can be used in combination to perform a sophisticated filtering policy. Kubernetes uses some, but not all, of these predicates by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](../../plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go).
+
+## Ranking the nodes
+
+The filtered nodes are considered suitable to host the Pod, and often more than one node remains. Kubernetes prioritizes the remaining nodes to find the "best" one for the Pod. The prioritization is performed by a set of priority functions. For each remaining node, a priority function gives a score which scales from 0-10, with 10 representing "most preferred" and 0 "least preferred". Each priority function is weighted by a positive number, and the final score of each node is calculated by adding up all the weighted scores. For example, suppose there are two priority functions, `priorityFunc1` and `priorityFunc2`, with weighting factors `weight1` and `weight2` respectively; the final score of some NodeA is:
+
+    finalScoreNodeA = (weight1 * priorityFunc1) + (weight2 * priorityFunc2)
+
+After the scores of all nodes are calculated, the node with the highest score is chosen as the host of the Pod. If more than one node has the highest score, a random one among them is chosen.
+
+Currently, the Kubernetes scheduler provides some practical priority functions, including:
+
+- `LeastRequestedPriority`: The node is prioritized based on the fraction of the node that would be free if the new Pod were scheduled onto the node.
(In other words, (capacity - sum of limits of all Pods already on the node - limit of Pod that is being scheduled) / capacity). CPU and memory are equally weighted. The node with the highest free fraction is the most preferred. Note that this priority function has the effect of spreading Pods across the nodes with respect to resource consumption. +- `CalculateNodeLabelPriority`: Prefer nodes that have the specified label. +- `BalancedResourceAllocation`: This priority function tries to put the Pod on a node such that the CPU and Memory utilization rate is balanced after the Pod is deployed. +- `CalculateSpreadPriority`: Spread Pods by minimizing the number of Pods belonging to the same service on the same node. +- `CalculateAntiAffinityPriority`: Spread Pods by minimizing the number of Pods belonging to the same service on nodes with the same value for a particular label. + +The details of the above priority functions can be found in [plugin/pkg/scheduler/algorithm/priorities](../../plugin/pkg/scheduler/algorithm/priorities). Kubernetes uses some, but not all, of these priority functions by default. You can see which ones are used by default in [/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](../../plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). Similar as predicates, you can combine the above priority functions and assign weight factors (positive number) to them as you want (check [docs/devel/scheduler.md](../../docs/devel/scheduler.md) for how to customize). 
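Putting the two phases together, the selection logic can be sketched in a few lines of shell. The sketch filters nodes with a single resource predicate and ranks the survivors with one weighted priority function; it uses integer resource units (e.g. millicores) so plain shell arithmetic suffices. The node data, predicate, and scoring below are simplified stand-ins for illustration, not the real Go implementations.

```sh
# Toy filter-then-rank pass over two hypothetical nodes.
# Each entry is name:capacity:used in integer resource units.
nodes="A:4000:3000 B:4000:1000"
pod_limit=500

best_node=""
best_score=-1
for n in $nodes; do
  name=${n%%:*}; rest=${n#*:}
  capacity=${rest%%:*}; used=${rest#*:}
  free=$((capacity - used))
  # Predicate (like PodFitsResources): skip nodes without enough free resource.
  [ "$free" -lt "$pod_limit" ] && continue
  # Priority (like LeastRequestedPriority): 0-10 score of the free fraction
  # remaining after placement; the weighting factor here is 1.
  score=$((10 * (free - pod_limit) / capacity))
  if [ "$score" -gt "$best_score" ]; then
    best_score=$score
    best_node=$name
  fi
done
echo "$best_node"   # B: largest free fraction after placement
```

With equal capacities, node B (3000 free) outscores node A (1000 free), so B is chosen; a real pass would AND several predicates and sum several weighted priority scores per node.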
+ + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/scheduler_algorithm.md?pixel)]() -- cgit v1.2.3 From 581e4f7b0f6d64d15046061ccdd5addba6dc96c3 Mon Sep 17 00:00:00 2001 From: Daniel Smith Date: Thu, 9 Jul 2015 18:02:10 -0700 Subject: Auto-fixed docs --- developer-guides/vagrant.md | 4 ++-- releasing.md | 2 +- scheduler.md | 2 +- 3 files changed, 4 insertions(+), 4 deletions(-) diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md index d8d7a1ec..a561b446 100644 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -9,7 +9,7 @@ Running kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/deve 2. [VMWare Fusion](https://www.vmware.com/products/fusion/) version 5 or greater as well as the appropriate [Vagrant VMWare Fusion provider](https://www.vagrantup.com/vmware) 3. [VMWare Workstation](https://www.vmware.com/products/workstation/) version 9 or greater as well as the [Vagrant VMWare Workstation provider](https://www.vagrantup.com/vmware) 4. [Parallels Desktop](https://www.parallels.com/products/desktop/) version 9 or greater as well as the [Vagrant Parallels provider](https://parallels.github.io/vagrant-parallels/) -3. Get or build a [binary release](/docs/getting-started-guides/binary_release.md) +3. Get or build a [binary release](../../../docs/getting-started-guides/binary_release.md) ### Setup @@ -244,7 +244,7 @@ my-nginx nginx run=my-nginx 3 ``` We did not start any services, hence there are none listed. But we see three replicas displayed properly. -Check the [guestbook](/examples/guestbook/README.md) application to learn how to create a service. +Check the [guestbook](../../../examples/guestbook/README.md) application to learn how to create a service. 
You can already play with scaling the replicas with: ```sh diff --git a/releasing.md b/releasing.md index 803e321a..fe765244 100644 --- a/releasing.md +++ b/releasing.md @@ -97,7 +97,7 @@ others around it will either have `v0.4-dev` or `v0.5-dev`. The diagram below illustrates it. -![Diagram of git commits involved in the release](./releasing.png) +![Diagram of git commits involved in the release](releasing.png) After working on `v0.4-dev` and merging PR 99 we decide it is time to release `v0.5`. So we start a new branch, create one commit to update diff --git a/scheduler.md b/scheduler.md index ac01e6db..de05b014 100644 --- a/scheduler.md +++ b/scheduler.md @@ -37,7 +37,7 @@ can be overridden by passing the command-line flag `--policy-config-file` to the file specifying which scheduling policies to use. See [examples/scheduler-policy-config.json](../../examples/scheduler-policy-config.json) for an example config file. (Note that the config file format is versioned; the API is defined in -[plugin/pkg/scheduler/api/](../../plugin/pkg/scheduler/api/)). +[plugin/pkg/scheduler/api](../../plugin/pkg/scheduler/api/)). Thus to add a new scheduling policy, you should modify predicates.go or priorities.go, and either register the policy in `defaultPredicates()` or `defaultPriorities()`, or use a policy config file. -- cgit v1.2.3 From 7b06b56cdbb460df3dfda0db38c6219af4df0207 Mon Sep 17 00:00:00 2001 From: Daniel Smith Date: Thu, 9 Jul 2015 18:31:29 -0700 Subject: manual fixes --- writing-a-getting-started-guide.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/writing-a-getting-started-guide.md b/writing-a-getting-started-guide.md index 873fafcc..d7452c09 100644 --- a/writing-a-getting-started-guide.md +++ b/writing-a-getting-started-guide.md @@ -29,7 +29,7 @@ These guidelines say *what* to do. See the Rationale section for *why*. search for uses of flags by guides. 
- We may ask that you host binary assets or large amounts of code in our `contrib` directory or on your own repo. - - Setup a cluster and run the [conformance test](../../docs/devel/conformance-test.md) against it, and report the + - Setup a cluster and run the [conformance test](../../docs/devel/development.md#conformance-testing) against it, and report the results in your PR. - Add or update a row in [The Matrix](../../docs/getting-started-guides/README.md). - State the binary version of kubernetes that you tested clearly in your Guide doc and in The Matrix. -- cgit v1.2.3 From 3a38ce4217962abf1ebd37e08ea51c5c857de70e Mon Sep 17 00:00:00 2001 From: Mike Danese Date: Fri, 10 Jul 2015 12:51:35 -0700 Subject: fix verify gendocs --- cherry-picks.md | 3 +++ scheduler_algorithm.md | 2 +- 2 files changed, 4 insertions(+), 1 deletion(-) diff --git a/cherry-picks.md b/cherry-picks.md index b6669110..5fbada99 100644 --- a/cherry-picks.md +++ b/cherry-picks.md @@ -30,3 +30,6 @@ conflict***. Now that we've structured cherry picks as PRs, searching for all cherry-picks against a release is a GitHub query: For example, [this query is all of the v0.21.x cherry-picks](https://github.com/GoogleCloudPlatform/kubernetes/pulls?utf8=%E2%9C%93&q=is%3Apr+%22automated+cherry+pick%22+base%3Arelease-0.21) + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/cherry-picks.md?pixel)]() diff --git a/scheduler_algorithm.md b/scheduler_algorithm.md index dbd0d7cd..f353a4ed 100644 --- a/scheduler_algorithm.md +++ b/scheduler_algorithm.md @@ -30,7 +30,7 @@ Currently, Kubernetes scheduler provides some practical priority functions, incl - `CalculateSpreadPriority`: Spread Pods by minimizing the number of Pods belonging to the same service on the same node. - `CalculateAntiAffinityPriority`: Spread Pods by minimizing the number of Pods belonging to the same service on nodes with the same value for a particular label. 
-The details of the above priority functions can be found in [plugin/pkg/scheduler/algorithm/priorities](../../plugin/pkg/scheduler/algorithm/priorities). Kubernetes uses some, but not all, of these priority functions by default. You can see which ones are used by default in [/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](../../plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). Similar as predicates, you can combine the above priority functions and assign weight factors (positive number) to them as you want (check [docs/devel/scheduler.md](../../docs/devel/scheduler.md) for how to customize). +The details of the above priority functions can be found in [plugin/pkg/scheduler/algorithm/priorities](../../plugin/pkg/scheduler/algorithm/priorities/). Kubernetes uses some, but not all, of these priority functions by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](../../plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). Similar as predicates, you can combine the above priority functions and assign weight factors (positive number) to them as you want (check [docs/devel/scheduler.md](../../docs/devel/scheduler.md) for how to customize). [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/scheduler_algorithm.md?pixel)]() -- cgit v1.2.3 From 92e08e130d859bb0a7dad654534906a2a92ed4a3 Mon Sep 17 00:00:00 2001 From: Zach Loafman Date: Fri, 10 Jul 2015 18:43:12 -0700 Subject: Fix patch release instructions Somewhere in the last round of editing, I compressed the patch release instructions after the release validation steps went in. They no longer made sense because they assume some variables are set from the previous step that you don't have set. Set them. These instructions are now begging to be refactored between the patch and normal releases, but I won't do that here. 
--- releasing.md | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/releasing.md b/releasing.md index 6533858c..9e9fcaf7 100644 --- a/releasing.md +++ b/releasing.md @@ -100,10 +100,13 @@ manage cherry picks prior to cutting the release. #### Tagging and Merging -Do the following (you still have `${VER}` set and you're still working on the -`release-${VER}` branch, right?): - +1. `export VER=x.y` (e.g. `0.20` for v0.20) 1. `export PATCH=Z` where `Z` is the patch level of `vX.Y.Z` +1. cd to the base of the repo +1. `git fetch upstream && git checkout -b upstream/release-${VER}` +1. Make sure you don't have any files you care about littering your repo (they + better be checked in or outside the repo, or the next step will delete them). +1. `make clean && git reset --hard HEAD && git clean -xdf` 1. `make` (TBD: you really shouldn't have to do this, but the swagger output step requires it right now) 1. `./build/mark-new-version.sh v${VER}.${PATCH}` to mark the new release and get further instructions. This creates a series of commits on the branch you're working -- cgit v1.2.3 From a284d4cf980e30a237d170cd77b8f50c8b251c3b Mon Sep 17 00:00:00 2001 From: Ed Costello Date: Sun, 12 Jul 2015 22:03:06 -0400 Subject: Copy edits for typos --- releasing.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/releasing.md b/releasing.md index 6533858c..9cec89e0 100644 --- a/releasing.md +++ b/releasing.md @@ -21,7 +21,7 @@ You should progress in this strict order. #### Selecting Release Components When cutting a major/minor release, your first job is to find the branch -point. We cut `vX.Y.0` releases directly from `master`, which is also the the +point. We cut `vX.Y.0` releases directly from `master`, which is also the branch that we have most continuous validation on. 
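The patch-release tagging steps above can be condensed into a dry-run sketch. `RUN=echo` makes each command print instead of execute, the version numbers are placeholders, and the explicit local branch name in the checkout is an assumed correction, since the instructions pass only `upstream/release-${VER}` to `git checkout -b`.

```sh
# Dry run of the patch-release tagging steps; clear RUN only when actually
# cutting a release. VER and PATCH below are placeholder values.
RUN=echo
VER=0.20   # the X.Y of the release branch
PATCH=3    # the Z of vX.Y.Z

$RUN git fetch upstream
$RUN git checkout -b "release-${VER}" "upstream/release-${VER}"
# Warning: the next commands discard uncommitted changes and delete
# untracked files, as the instructions caution.
$RUN make clean
$RUN git reset --hard HEAD
$RUN git clean -xdf
$RUN make
$RUN ./build/mark-new-version.sh "v${VER}.${PATCH}"
```

With `RUN` cleared this performs the destructive clean described in the instructions, so check in or move anything you care about before running it for real.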
Go first to [the main GCE Jenkins end-to-end job](http://go/k8s-test/job/kubernetes-e2e-gce) and next to [the Critical Builds page](http://go/k8s-test/view/Critical%20Builds) and hopefully find a @@ -42,7 +42,7 @@ Because Jenkins builds frequently, if you're looking between jobs `kubernetes-e2e-gce` build (but please check that it corresponds to a temporally similar build that's green on `kubernetes-e2e-gke-ci`). Lastly, if you're having trouble understanding why the GKE continuous integration clusters are failing -and you're trying to cut a release, don't hesistate to contact the GKE +and you're trying to cut a release, don't hesitate to contact the GKE oncall. Before proceeding to the next step: -- cgit v1.2.3 From eed049cf8d255fd787d10565e8700ba5cd296750 Mon Sep 17 00:00:00 2001 From: Zach Loafman Date: Sat, 11 Jul 2015 17:00:20 -0700 Subject: hack/cherry_pick_pull.sh: Allow multiple pulls Reorder the arguments to allow for multiple pulls at the end: hack/cherry_pick_pull.sh ... This solves some common A-then-immediate-A' cases that appear frequently on head. (There's a workaround, but it's a hack.) Updates the documentation. --- cherry-picks.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/cherry-picks.md b/cherry-picks.md index 5fbada99..2708db93 100644 --- a/cherry-picks.md +++ b/cherry-picks.md @@ -8,7 +8,7 @@ Kubernetes projects. 
Any contributor can propose a cherry pick of any pull request, like so: ``` -hack/cherry_pick_pull.sh 98765 upstream/release-3.14 +hack/cherry_pick_pull.sh upstream/release-3.14 98765 ``` This will walk you through the steps to propose an automated cherry pick of pull -- cgit v1.2.3 From 5b891f610132f670f1a2bc4cbdbf53ef05180c25 Mon Sep 17 00:00:00 2001 From: Ed Costello Date: Mon, 13 Jul 2015 10:11:07 -0400 Subject: Copy edits to remove doubled words --- api_changes.md | 2 +- scheduler_algorithm.md | 2 +- writing-a-getting-started-guide.md | 4 ++-- 3 files changed, 4 insertions(+), 4 deletions(-) diff --git a/api_changes.md b/api_changes.md index 17278c6e..de073677 100644 --- a/api_changes.md +++ b/api_changes.md @@ -177,7 +177,7 @@ need to add cases to `pkg/api//defaults.go`. Of course, since you have added code, you have to add a test: `pkg/api//defaults_test.go`. Do use pointers to scalars when you need to distinguish between an unset value -and an an automatic zero value. For example, +and an automatic zero value. For example, `PodSpec.TerminationGracePeriodSeconds` is defined as `*int64` the go type definition. A zero value means 0 seconds, and a nil value asks the system to pick a default. diff --git a/scheduler_algorithm.md b/scheduler_algorithm.md index f353a4ed..2d239f2b 100644 --- a/scheduler_algorithm.md +++ b/scheduler_algorithm.md @@ -16,7 +16,7 @@ The details of the above predicates can be found in [plugin/pkg/scheduler/algori ## Ranking the nodes -The filtered nodes are considered suitable to host the Pod, and it is often that there are more than one nodes remaining. Kubernetes prioritizes the remaining nodes to to find the "best" one for the Pod. The prioritization is performed by a set of priority functions. For each remaining node, a priority function gives a score which scales from 0-10 with 10 representing for "most preferred" and 0 for "least preferred". 
Each priority function is weighted by a positive number and the final score of each node is calculated by adding up all the weighted scores. For example, suppose there are two priority functions, `priorityFunc1` and `priorityFunc2` with weighting factors `weight1` and `weight2` respectively, the final score of some NodeA is: +The filtered nodes are considered suitable to host the Pod, and it is often that there are more than one nodes remaining. Kubernetes prioritizes the remaining nodes to find the "best" one for the Pod. The prioritization is performed by a set of priority functions. For each remaining node, a priority function gives a score which scales from 0-10 with 10 representing for "most preferred" and 0 for "least preferred". Each priority function is weighted by a positive number and the final score of each node is calculated by adding up all the weighted scores. For example, suppose there are two priority functions, `priorityFunc1` and `priorityFunc2` with weighting factors `weight1` and `weight2` respectively, the final score of some NodeA is: finalScoreNodeA = (weight1 * priorityFunc1) + (weight2 * priorityFunc2) diff --git a/writing-a-getting-started-guide.md b/writing-a-getting-started-guide.md index d7452c09..40852361 100644 --- a/writing-a-getting-started-guide.md +++ b/writing-a-getting-started-guide.md @@ -62,7 +62,7 @@ These guidelines say *what* to do. See the Rationale section for *why*. refactoring and feature additions that affect code for their IaaS. ## Rationale - - We want want people to create Kubernetes clusters with whatever IaaS, Node OS, + - We want people to create Kubernetes clusters with whatever IaaS, Node OS, configuration management tools, and so on, which they are familiar with. The guidelines for **versioned distros** are designed for flexibility. - We want developers to be able to work without understanding all the permutations of @@ -81,7 +81,7 @@ These guidelines say *what* to do. See the Rationale section for *why*. 
gate commits on passing CI for all distros, and since end-to-end tests are typically somewhat flaky, it would be highly likely for there to be false positives and CI backlogs with many CI pipelines. - We do not require versioned distros to do **CI** for several reasons. It is a steep - learning curve to understand our our automated testing scripts. And it is considerable effort + learning curve to understand our automated testing scripts. And it is considerable effort to fully automate setup and teardown of a cluster, which is needed for CI. And, not everyone has the time and money to run CI. We do not want to discourage people from writing and sharing guides because of this. -- cgit v1.2.3 From 01bb3613a48b76cfb0354376aedc1cfb2077bf1b Mon Sep 17 00:00:00 2001 From: Tim Hockin Date: Sat, 11 Jul 2015 21:04:52 -0700 Subject: Run gendocs and munges --- README.md | 14 ++++++++++++++ api_changes.md | 14 ++++++++++++++ cherry-picks.md | 14 ++++++++++++++ coding-conventions.md | 14 ++++++++++++++ collab.md | 14 ++++++++++++++ developer-guides/vagrant.md | 14 ++++++++++++++ development.md | 14 ++++++++++++++ faster_reviews.md | 14 ++++++++++++++ flaky-tests.md | 14 ++++++++++++++ getting-builds.md | 14 ++++++++++++++ instrumentation.md | 14 ++++++++++++++ issues.md | 14 ++++++++++++++ logging.md | 14 ++++++++++++++ making-release-notes.md | 14 ++++++++++++++ profiling.md | 14 ++++++++++++++ pull-requests.md | 14 ++++++++++++++ releasing.md | 14 ++++++++++++++ scheduler.md | 14 ++++++++++++++ scheduler_algorithm.md | 14 ++++++++++++++ writing-a-getting-started-guide.md | 14 ++++++++++++++ 20 files changed, 280 insertions(+) diff --git a/README.md b/README.md index 5957902f..26eb7ced 100644 --- a/README.md +++ b/README.md @@ -1,3 +1,17 @@ + + + + +

*** PLEASE NOTE: This document applies to the HEAD of the source +tree only. If you are using a released version of Kubernetes, you almost +certainly want the docs that go with that version.

+ +Documentation for specific releases can be found at +[releases.k8s.io](http://releases.k8s.io). + + + + # Developing Kubernetes Docs in this directory relate to developing Kubernetes. diff --git a/api_changes.md b/api_changes.md index de073677..3ad1847d 100644 --- a/api_changes.md +++ b/api_changes.md @@ -1,3 +1,17 @@ + + + + +

*** PLEASE NOTE: This document applies to the HEAD of the source +tree only. If you are using a released version of Kubernetes, you almost +certainly want the docs that go with that version.

+ +Documentation for specific releases can be found at +[releases.k8s.io](http://releases.k8s.io). + + + + # So you want to change the API? The Kubernetes API has two major components - the internal structures and diff --git a/cherry-picks.md b/cherry-picks.md index 2708db93..03f2ebb5 100644 --- a/cherry-picks.md +++ b/cherry-picks.md @@ -1,3 +1,17 @@ + + + + +

*** PLEASE NOTE: This document applies to the HEAD of the source +tree only. If you are using a released version of Kubernetes, you almost +certainly want the docs that go with that version.

+ +Documentation for specific releases can be found at +[releases.k8s.io](http://releases.k8s.io). + + + + # Overview This document explains cherry picks are managed on release branches within the diff --git a/coding-conventions.md b/coding-conventions.md index bdcbb708..e61398ee 100644 --- a/coding-conventions.md +++ b/coding-conventions.md @@ -1,3 +1,17 @@ + + + + +

*** PLEASE NOTE: This document applies to the HEAD of the source +tree only. If you are using a released version of Kubernetes, you almost +certainly want the docs that go with that version.

+ +Documentation for specific releases can be found at +[releases.k8s.io](http://releases.k8s.io). + + + + Coding style advice for contributors - Bash - https://google-styleguide.googlecode.com/svn/trunk/shell.xml diff --git a/collab.md b/collab.md index b424f502..dc12537d 100644 --- a/collab.md +++ b/collab.md @@ -1,3 +1,17 @@ + + + + +

*** PLEASE NOTE: This document applies to the HEAD of the source +tree only. If you are using a released version of Kubernetes, you almost +certainly want the docs that go with that version.

+ +Documentation for specific releases can be found at +[releases.k8s.io](http://releases.k8s.io). + + + + # On Collaborative Development Kubernetes is open source, but many of the people working on it do so as their day job. In order to avoid forcing people to be "at work" effectively 24/7, we want to establish some semi-formal protocols around development. Hopefully these rules make things go more smoothly. If you find that this is not the case, please complain loudly. diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md index a561b446..1edf07a6 100644 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -1,3 +1,17 @@ + + + + +

*** PLEASE NOTE: This document applies to the HEAD of the source +tree only. If you are using a released version of Kubernetes, you almost +certainly want the docs that go with that version.

+ +Documentation for specific releases can be found at +[releases.k8s.io](http://releases.k8s.io). + + + + ## Getting started with Vagrant Running kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/develop on your local machine (Linux, Mac OS X). diff --git a/development.md b/development.md index 2e540bcb..37a4478a 100644 --- a/development.md +++ b/development.md @@ -1,3 +1,17 @@ + + + + +

*** PLEASE NOTE: This document applies to the HEAD of the source +tree only. If you are using a released version of Kubernetes, you almost +certainly want the docs that go with that version.

+ +Documentation for specific releases can be found at +[releases.k8s.io](http://releases.k8s.io). + + + + # Development Guide # Releases and Official Builds diff --git a/faster_reviews.md b/faster_reviews.md index ed890a7f..99e60fb1 100644 --- a/faster_reviews.md +++ b/faster_reviews.md @@ -1,3 +1,17 @@ + + + + +

*** PLEASE NOTE: This document applies to the HEAD of the source +tree only. If you are using a released version of Kubernetes, you almost +certainly want the docs that go with that version.

+ +Documentation for specific releases can be found at +[releases.k8s.io](http://releases.k8s.io). + + + + # How to get faster PR reviews Most of what is written here is not at all specific to Kubernetes, but it bears diff --git a/flaky-tests.md b/flaky-tests.md index da5549c8..ee93bf19 100644 --- a/flaky-tests.md +++ b/flaky-tests.md @@ -1,3 +1,17 @@ + + + + +

*** PLEASE NOTE: This document applies to the HEAD of the source +tree only. If you are using a released version of Kubernetes, you almost +certainly want the docs that go with that version.

+ +Documentation for specific releases can be found at +[releases.k8s.io](http://releases.k8s.io). + + + + # Hunting flaky tests in Kubernetes Sometimes unit tests are flaky. This means that due to (usually) race conditions, they will occasionally fail, even though most of the time they pass. diff --git a/getting-builds.md b/getting-builds.md index dbad8f3a..5a1a4dde 100644 --- a/getting-builds.md +++ b/getting-builds.md @@ -1,3 +1,17 @@ + + + + +

*** PLEASE NOTE: This document applies to the HEAD of the source +tree only. If you are using a released version of Kubernetes, you almost +certainly want the docs that go with that version.

+ +Documentation for specific releases can be found at +[releases.k8s.io](http://releases.k8s.io). + + + + # Getting Kubernetes Builds You can use [hack/get-build.sh](../../hack/get-build.sh) to or use as a reference on how to get the most recent builds with curl. With `get-build.sh` you can grab the most recent stable build, the most recent release candidate, or the most recent build to pass our ci and gce e2e tests (essentially a nightly build). diff --git a/instrumentation.md b/instrumentation.md index b52480d2..762d1980 100644 --- a/instrumentation.md +++ b/instrumentation.md @@ -1,3 +1,17 @@ + + + + +

*** PLEASE NOTE: This document applies to the HEAD of the source +tree only. If you are using a released version of Kubernetes, you almost +certainly want the docs that go with that version.

+ +Documentation for specific releases can be found at +[releases.k8s.io](http://releases.k8s.io). + + + + Instrumenting Kubernetes with a new metric =================== diff --git a/issues.md b/issues.md index 99e1089a..62444185 100644 --- a/issues.md +++ b/issues.md @@ -1,3 +1,17 @@ + + + + +

*** PLEASE NOTE: This document applies to the HEAD of the source +tree only. If you are using a released version of Kubernetes, you almost +certainly want the docs that go with that version.

+ +Documentation for specific releases can be found at +[releases.k8s.io](http://releases.k8s.io). + + + + GitHub Issues for the Kubernetes Project ======================================== diff --git a/logging.md b/logging.md index 331eda97..1ca18718 100644 --- a/logging.md +++ b/logging.md @@ -1,3 +1,17 @@ + + + + +

*** PLEASE NOTE: This document applies to the HEAD of the source +tree only. If you are using a released version of Kubernetes, you almost +certainly want the docs that go with that version.

+ +Documentation for specific releases can be found at +[releases.k8s.io](http://releases.k8s.io). + + + + Logging Conventions =================== diff --git a/making-release-notes.md b/making-release-notes.md index 5d08ac50..0dfbeebe 100644 --- a/making-release-notes.md +++ b/making-release-notes.md @@ -1,3 +1,17 @@ + + + + +

*** PLEASE NOTE: This document applies to the HEAD of the source +tree only. If you are using a released version of Kubernetes, you almost +certainly want the docs that go with that version.

+ +Documentation for specific releases can be found at +[releases.k8s.io](http://releases.k8s.io). + + + + ## Making release notes This documents the process for making release notes for a release. diff --git a/profiling.md b/profiling.md index 1dd42095..51635424 100644 --- a/profiling.md +++ b/profiling.md @@ -1,3 +1,17 @@ + + + + +

*** PLEASE NOTE: This document applies to the HEAD of the source +tree only. If you are using a released version of Kubernetes, you almost +certainly want the docs that go with that version.

+ +Documentation for specific releases can be found at +[releases.k8s.io](http://releases.k8s.io). + + + + # Profiling Kubernetes This document explain how to plug in profiler and how to profile Kubernetes services. diff --git a/pull-requests.md b/pull-requests.md index 1b5c30e6..e82d2d00 100644 --- a/pull-requests.md +++ b/pull-requests.md @@ -1,3 +1,17 @@ + + + + +

*** PLEASE NOTE: This document applies to the HEAD of the source +tree only. If you are using a released version of Kubernetes, you almost +certainly want the docs that go with that version.

+ +Documentation for specific releases can be found at +[releases.k8s.io](http://releases.k8s.io). + + + + Pull Request Process ==================== diff --git a/releasing.md b/releasing.md index 9cec89e0..a83f6677 100644 --- a/releasing.md +++ b/releasing.md @@ -1,3 +1,17 @@ + + + + +

*** PLEASE NOTE: This document applies to the HEAD of the source +tree only. If you are using a released version of Kubernetes, you almost +certainly want the docs that go with that version.

+ +Documentation for specific releases can be found at +[releases.k8s.io](http://releases.k8s.io). + + + + # Releasing Kubernetes This document explains how to cut a release, and the theory behind it. If you diff --git a/scheduler.md b/scheduler.md index de05b014..3e1ae0e1 100644 --- a/scheduler.md +++ b/scheduler.md @@ -1,3 +1,17 @@ + + + + +

*** PLEASE NOTE: This document applies to the HEAD of the source +tree only. If you are using a released version of Kubernetes, you almost +certainly want the docs that go with that version.

+ +Documentation for specific releases can be found at +[releases.k8s.io](http://releases.k8s.io). + + + + # The Kubernetes Scheduler diff --git a/scheduler_algorithm.md b/scheduler_algorithm.md index 2d239f2b..96789422 100644 --- a/scheduler_algorithm.md +++ b/scheduler_algorithm.md @@ -1,3 +1,17 @@ + + + + +

*** PLEASE NOTE: This document applies to the HEAD of the source +tree only. If you are using a released version of Kubernetes, you almost +certainly want the docs that go with that version.

+ +Documentation for specific releases can be found at +[releases.k8s.io](http://releases.k8s.io). + + + + # Scheduler Algorithm in Kubernetes For each unscheduled Pod, the Kubernetes scheduler tries to find a node across the cluster according to a set of rules. A general introduction to the Kubernetes scheduler can be found at [docs/devel/scheduler.md](../../docs/devel/scheduler.md). In this document, the algorithm of how to select a node for the Pod is explained. There are two steps before a destination node of a Pod is chosen. The first step is filtering all the nodes and the second is ranking the remaining nodes to find a best fit for the Pod. diff --git a/writing-a-getting-started-guide.md b/writing-a-getting-started-guide.md index 40852361..7b94d9a3 100644 --- a/writing-a-getting-started-guide.md +++ b/writing-a-getting-started-guide.md @@ -1,3 +1,17 @@ + + + + +

*** PLEASE NOTE: This document applies to the HEAD of the source +tree only. If you are using a released version of Kubernetes, you almost +certainly want the docs that go with that version.

+ +Documentation for specific releases can be found at +[releases.k8s.io](http://releases.k8s.io). + + + + # Writing a Getting Started Guide This page gives some advice for anyone planning to write or update a Getting Started Guide for Kubernetes. It also gives some guidelines which reviewers should follow when reviewing a pull request for a -- cgit v1.2.3 From 37813afc4bc36b2f617cdac0233e1d02b45352eb Mon Sep 17 00:00:00 2001 From: Tim Hockin Date: Sun, 12 Jul 2015 21:15:58 -0700 Subject: Change 'minion' to 'node' in docs --- developer-guides/vagrant.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md index 1edf07a6..1316e26b 100644 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -27,7 +27,7 @@ Running kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/deve ### Setup -By default, the Vagrant setup will create a single kubernetes-master and 1 kubernetes-minion. Each VM will take 1 GB, so make sure you have at least 2GB to 4GB of free memory (plus appropriate free disk space). To start your local cluster, open a shell and run: +By default, the Vagrant setup will create a single master VM (called kubernetes-master) and one node (called kubernetes-minion-1). Each VM will take 1 GB, so make sure you have at least 2GB to 4GB of free memory (plus appropriate free disk space). To start your local cluster, open a shell and run: ```sh cd kubernetes @@ -77,7 +77,7 @@ vagrant ssh master [vagrant@kubernetes-master ~] $ sudo systemctl status nginx ``` -To view the services on any of the kubernetes-minion(s): +To view the services on any of the nodes: ```sh vagrant ssh minion-1 [vagrant@kubernetes-minion-1] $ sudo systemctl status docker @@ -312,20 +312,20 @@ cat ~/.kubernetes_vagrant_auth #### I just created the cluster, but I do not see my container running! 
-If this is your first time creating the cluster, the kubelet on each minion schedules a number of docker pull requests to fetch prerequisite images. This can take some time and as a result may delay your initial pod getting provisioned. +If this is your first time creating the cluster, the kubelet on each node schedules a number of docker pull requests to fetch prerequisite images. This can take some time and as a result may delay your initial pod getting provisioned. #### I changed Kubernetes code, but it's not running! Are you sure there was no build error? After running `$ vagrant provision`, scroll up and ensure that each Salt state was completed successfully on each box in the cluster. It's very likely you see a build error due to an error in your source files! -#### I have brought Vagrant up but the minions won't validate! +#### I have brought Vagrant up but the nodes won't validate! -Are you sure you built a release first? Did you install `net-tools`? For more clues, login to one of the minions (`vagrant ssh minion-1`) and inspect the salt minion log (`sudo cat /var/log/salt/minion`). +Are you sure you built a release first? Did you install `net-tools`? For more clues, login to one of the nodes (`vagrant ssh minion-1`) and inspect the salt minion log (`sudo cat /var/log/salt/minion`). -#### I want to change the number of minions! +#### I want to change the number of nodes! -You can control the number of minions that are instantiated via the environment variable `NUM_MINIONS` on your host machine. If you plan to work with replicas, we strongly encourage you to work with enough minions to satisfy your largest intended replica size. If you do not plan to work with replicas, you can save some system resources by running with a single minion. You do this, by setting `NUM_MINIONS` to 1 like so: +You can control the number of nodes that are instantiated via the environment variable `NUM_MINIONS` on your host machine. 
If you plan to work with replicas, we strongly encourage you to work with enough nodes to satisfy your largest intended replica size. If you do not plan to work with replicas, you can save some system resources by running with a single node. You do this by setting `NUM_MINIONS` to 1, like so:
+ + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/README.md?pixel)]() + diff --git a/api_changes.md b/api_changes.md index 3ad1847d..3a0c1991 100644 --- a/api_changes.md +++ b/api_changes.md @@ -356,4 +356,6 @@ the change gets in. If you are unsure, ask. Also make sure that the change gets TODO(smarterclayton): write this. + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/api_changes.md?pixel)]() + diff --git a/cherry-picks.md b/cherry-picks.md index 03f2ebb5..04811f0b 100644 --- a/cherry-picks.md +++ b/cherry-picks.md @@ -46,4 +46,6 @@ against a release is a GitHub query: For example, [this query is all of the v0.21.x cherry-picks](https://github.com/GoogleCloudPlatform/kubernetes/pulls?utf8=%E2%9C%93&q=is%3Apr+%22automated+cherry+pick%22+base%3Arelease-0.21) + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/cherry-picks.md?pixel)]() + diff --git a/coding-conventions.md b/coding-conventions.md index e61398ee..54d9aaa6 100644 --- a/coding-conventions.md +++ b/coding-conventions.md @@ -20,5 +20,6 @@ Coding style advice for contributors - https://gist.github.com/lavalamp/4bd23295a9f32706a48f - + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/coding-conventions.md?pixel)]() + diff --git a/collab.md b/collab.md index dc12537d..d212012f 100644 --- a/collab.md +++ b/collab.md @@ -54,4 +54,6 @@ PRs that are incorrectly judged to be merge-able, may be reverted and subject to Any maintainer or core contributor who wants to review a PR but does not have time immediately may put a hold on a PR simply by saying so on the PR discussion and offering an ETA measured in single-digit days at most. Any PR that has a hold shall not be merged until the person who requested the hold acks the review, withdraws their hold, or is overruled by a preponderance of maintainers. 
+ [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/collab.md?pixel)]() + diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md index 1316e26b..1b716648 100644 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -351,4 +351,6 @@ export KUBERNETES_MINION_MEMORY=2048 ```vagrant suspend``` seems to mess up the network. It's not supported at this time. + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/developer-guides/vagrant.md?pixel)]() + diff --git a/development.md b/development.md index 157f49d6..ba9b9897 100644 --- a/development.md +++ b/development.md @@ -281,8 +281,8 @@ go run hack/e2e.go -v -ctl='delete pod foobar' ## Conformance testing End-to-end testing, as described above, is for [development -distributions](../../docs/devel/writing-a-getting-started-guide.md). A conformance test is used on -a [versioned distro](../../docs/devel/writing-a-getting-started-guide.md). +distributions](writing-a-getting-started-guide.md). A conformance test is used on +a [versioned distro](writing-a-getting-started-guide.md). The conformance test runs a subset of the e2e-tests against a manually-created cluster. It does not require support for up/push/down and other operations. To run a conformance test, you need to know the @@ -300,4 +300,6 @@ hack/run-gendocs.sh ``` + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/development.md?pixel)]() + diff --git a/faster_reviews.md b/faster_reviews.md index 99e60fb1..eb3b25e9 100644 --- a/faster_reviews.md +++ b/faster_reviews.md @@ -190,5 +190,6 @@ a bit of thought into how your work can be made easier to review. If you do these things your PRs will flow much more easily. 
- + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/faster_reviews.md?pixel)]() + diff --git a/flaky-tests.md b/flaky-tests.md index ee93bf19..d26fc406 100644 --- a/flaky-tests.md +++ b/flaky-tests.md @@ -76,4 +76,6 @@ If you do a final check for flakes with ```docker ps -a```, ignore tasks that ex Happy flake hunting! + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/flaky-tests.md?pixel)]() + diff --git a/getting-builds.md b/getting-builds.md index 5a1a4dde..770d486c 100644 --- a/getting-builds.md +++ b/getting-builds.md @@ -35,4 +35,6 @@ gsutil ls gs://kubernetes-release/release # list all official re ``` + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/getting-builds.md?pixel)]() + diff --git a/instrumentation.md b/instrumentation.md index 762d1980..22cd38e1 100644 --- a/instrumentation.md +++ b/instrumentation.md @@ -47,4 +47,6 @@ https://github.com/prometheus/client_golang/blob/master/prometheus/histogram.go https://github.com/prometheus/client_golang/blob/master/prometheus/summary.go + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/instrumentation.md?pixel)]() + diff --git a/issues.md b/issues.md index 62444185..d4d1d132 100644 --- a/issues.md +++ b/issues.md @@ -33,4 +33,6 @@ Definitions * untriaged - anything without a priority/X label will be considered untriaged + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/issues.md?pixel)]() + diff --git a/logging.md b/logging.md index 1ca18718..bf2bd5c8 100644 --- a/logging.md +++ b/logging.md @@ -40,4 +40,6 @@ The following conventions for the glog levels to use. [glog](http://godoc.org/g As per the comments, the practical default level is V(2). Developers and QE environments may wish to run at V(3) or V(4). If you wish to change the log level, you can pass in `-v=X` where X is the desired maximum level to log. 
+ [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/logging.md?pixel)]() + diff --git a/making-release-notes.md b/making-release-notes.md index 0dfbeebe..877c1364 100644 --- a/making-release-notes.md +++ b/making-release-notes.md @@ -47,5 +47,6 @@ With the final markdown all set, cut and paste it to the top of ```CHANGELOG.md` * Press Save. - + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/making-release-notes.md?pixel)]() + diff --git a/profiling.md b/profiling.md index 51635424..41737414 100644 --- a/profiling.md +++ b/profiling.md @@ -48,4 +48,6 @@ to get 30 sec. CPU profile. To enable contention profiling you need to add line ```rt.SetBlockProfileRate(1)``` in addition to ```m.mux.HandleFunc(...)``` added before (```rt``` stands for ```runtime``` in ```master.go```). This enables 'debug/pprof/block' subpage, which can be used as an input to ```go tool pprof```. + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/profiling.md?pixel)]() + diff --git a/pull-requests.md b/pull-requests.md index e82d2d00..1c6bbe5f 100644 --- a/pull-requests.md +++ b/pull-requests.md @@ -42,4 +42,7 @@ Once those requirements are met, they will be labeled [ok-to-merge](https://gith These restrictions will be relaxed after v1.0 is released. + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/pull-requests.md?pixel)]() + diff --git a/releasing.md b/releasing.md index 29e685cf..5cdbde2f 100644 --- a/releasing.md +++ b/releasing.md @@ -314,4 +314,6 @@ by plain mortals (in a perfect world PR/issue's title would be enough but often it is just too cryptic/geeky/domain-specific that it isn't). 
+ [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/releasing.md?pixel)]() + diff --git a/scheduler.md b/scheduler.md index 3e1ae0e1..d9fccefc 100644 --- a/scheduler.md +++ b/scheduler.md @@ -61,4 +61,6 @@ If you want to get a global picture of how the scheduler works, you can start in [plugin/cmd/kube-scheduler/app/server.go](../../plugin/cmd/kube-scheduler/app/server.go) + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/scheduler.md?pixel)]() + diff --git a/scheduler_algorithm.md b/scheduler_algorithm.md index 96789422..119b0c86 100644 --- a/scheduler_algorithm.md +++ b/scheduler_algorithm.md @@ -14,7 +14,7 @@ certainly want the docs that go with that version. # Scheduler Algorithm in Kubernetes -For each unscheduled Pod, the Kubernetes scheduler tries to find a node across the cluster according to a set of rules. A general introduction to the Kubernetes scheduler can be found at [docs/devel/scheduler.md](../../docs/devel/scheduler.md). In this document, the algorithm of how to select a node for the Pod is explained. There are two steps before a destination node of a Pod is chosen. The first step is filtering all the nodes and the second is ranking the remaining nodes to find a best fit for the Pod. +For each unscheduled Pod, the Kubernetes scheduler tries to find a node across the cluster according to a set of rules. A general introduction to the Kubernetes scheduler can be found at [scheduler.md](scheduler.md). In this document, the algorithm of how to select a node for the Pod is explained. There are two steps before a destination node of a Pod is chosen. The first step is filtering all the nodes and the second is ranking the remaining nodes to find a best fit for the Pod. ## Filtering the nodes The purpose of filtering the nodes is to filter out the nodes that do not meet certain requirements of the Pod. 
For example, if the free resource on a node (measured by the capacity minus the sum of the resource limits of all the Pods that already run on the node) is less than the Pod's required resource, the node should not be considered in the ranking phase, so it is filtered out. Currently, there are several "predicates" implementing different filtering policies, including:
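The two scheduling steps described here — filtering out nodes whose free resources cannot satisfy the Pod, then ranking the survivors by a weighted sum of priority scores — can be sketched as follows. This is an illustrative Python sketch only, not the scheduler's actual Go implementation; the function names, the dict-based resource representation, and the score layout are hypothetical.

```python
def pod_fits_resources(pod_request, node_capacity, running_pod_limits):
    """Resource-fit predicate sketch: a node is filtered out when its free
    resource (capacity minus the sum of limits of Pods already on it) is
    less than the Pod's requested amount for any resource."""
    free = {
        resource: node_capacity[resource] - sum(limits.get(resource, 0)
                                                for limits in running_pod_limits)
        for resource in node_capacity
    }
    return all(free.get(resource, 0) >= amount
               for resource, amount in pod_request.items())

def rank_nodes(node_scores, weights):
    """Ranking-step sketch: each surviving node's final score is the weighted
    sum of its per-priority-function scores; the highest-scoring node wins."""
    def final_score(scores):
        return sum(w * s for w, s in zip(weights, scores))
    return max(node_scores, key=lambda node: final_score(node_scores[node]))
```

For example, a node with 2000 millicores of capacity and 1000 already claimed by running Pods passes the predicate for a Pod requesting 500 but fails for one requesting 1500; ranking then picks the node whose weighted score total is largest.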
+ [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/scheduler_algorithm.md?pixel)]() + diff --git a/writing-a-getting-started-guide.md b/writing-a-getting-started-guide.md index 7b94d9a3..dec4d9c9 100644 --- a/writing-a-getting-started-guide.md +++ b/writing-a-getting-started-guide.md @@ -43,7 +43,7 @@ These guidelines say *what* to do. See the Rationale section for *why*. search for uses of flags by guides. - We may ask that you host binary assets or large amounts of code in our `contrib` directory or on your own repo. - - Setup a cluster and run the [conformance test](../../docs/devel/development.md#conformance-testing) against it, and report the + - Setup a cluster and run the [conformance test](development.md#conformance-testing) against it, and report the results in your PR. - Add or update a row in [The Matrix](../../docs/getting-started-guides/README.md). - State the binary version of kubernetes that you tested clearly in your Guide doc and in The Matrix. @@ -113,4 +113,6 @@ These guidelines say *what* to do. See the Rationale section for *why*. 
+ [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/writing-a-getting-started-guide.md?pixel)]() + -- cgit v1.2.3 From c8cc5f5d4a33e2c77e99580849f93c54a2fd1d11 Mon Sep 17 00:00:00 2001 From: Tim Hockin Date: Mon, 13 Jul 2015 15:15:35 -0700 Subject: Run gendocs --- README.md | 10 +++++++++- api_changes.md | 10 +++++++++- cherry-picks.md | 10 +++++++++- coding-conventions.md | 10 +++++++++- collab.md | 10 +++++++++- developer-guides/vagrant.md | 10 +++++++++- development.md | 10 +++++++++- faster_reviews.md | 10 +++++++++- flaky-tests.md | 10 +++++++++- getting-builds.md | 10 +++++++++- instrumentation.md | 10 +++++++++- issues.md | 10 +++++++++- logging.md | 10 +++++++++- making-release-notes.md | 10 +++++++++- profiling.md | 10 +++++++++- pull-requests.md | 10 +++++++++- releasing.md | 10 +++++++++- scheduler.md | 10 +++++++++- scheduler_algorithm.md | 10 +++++++++- writing-a-getting-started-guide.md | 10 +++++++++- 20 files changed, 180 insertions(+), 20 deletions(-) diff --git a/README.md b/README.md index 6ce86769..505e7f34 100644 --- a/README.md +++ b/README.md @@ -2,13 +2,21 @@ -

*** PLEASE NOTE: This document applies to the HEAD of the source +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) + +

PLEASE NOTE: This document applies to the HEAD of the source tree only. If you are using a released version of Kubernetes, you almost certainly want the docs that go with that version.

Documentation for specific releases can be found at [releases.k8s.io](http://releases.k8s.io). +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) + diff --git a/api_changes.md b/api_changes.md index 3a0c1991..d132adf3 100644 --- a/api_changes.md +++ b/api_changes.md @@ -2,13 +2,21 @@ -

*** PLEASE NOTE: This document applies to the HEAD of the source +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) + +

PLEASE NOTE: This document applies to the HEAD of the source tree only. If you are using a released version of Kubernetes, you almost certainly want the docs that go with that version.

Documentation for specific releases can be found at [releases.k8s.io](http://releases.k8s.io). +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) + diff --git a/cherry-picks.md b/cherry-picks.md index 04811f0b..0453102f 100644 --- a/cherry-picks.md +++ b/cherry-picks.md @@ -2,13 +2,21 @@ -

*** PLEASE NOTE: This document applies to the HEAD of the source +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) + +

PLEASE NOTE: This document applies to the HEAD of the source tree only. If you are using a released version of Kubernetes, you almost certainly want the docs that go with that version.

Documentation for specific releases can be found at [releases.k8s.io](http://releases.k8s.io). +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) + diff --git a/coding-conventions.md b/coding-conventions.md index 54d9aaa6..030b3448 100644 --- a/coding-conventions.md +++ b/coding-conventions.md @@ -2,13 +2,21 @@ -

*** PLEASE NOTE: This document applies to the HEAD of the source +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) + +

PLEASE NOTE: This document applies to the HEAD of the source tree only. If you are using a released version of Kubernetes, you almost certainly want the docs that go with that version.

Documentation for specific releases can be found at [releases.k8s.io](http://releases.k8s.io). +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) + diff --git a/collab.md b/collab.md index d212012f..e5fbf24d 100644 --- a/collab.md +++ b/collab.md @@ -2,13 +2,21 @@ -

*** PLEASE NOTE: This document applies to the HEAD of the source +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) + +

PLEASE NOTE: This document applies to the HEAD of the source tree only. If you are using a released version of Kubernetes, you almost certainly want the docs that go with that version.

Documentation for specific releases can be found at [releases.k8s.io](http://releases.k8s.io). +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) + diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md index 1b716648..5234e88a 100644 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -2,13 +2,21 @@ -

*** PLEASE NOTE: This document applies to the HEAD of the source +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) + +

PLEASE NOTE: This document applies to the HEAD of the source tree only. If you are using a released version of Kubernetes, you almost certainly want the docs that go with that version.

Documentation for specific releases can be found at [releases.k8s.io](http://releases.k8s.io). +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) + diff --git a/development.md b/development.md index ba9b9897..435aac3a 100644 --- a/development.md +++ b/development.md @@ -2,13 +2,21 @@ -

*** PLEASE NOTE: This document applies to the HEAD of the source +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) + +

PLEASE NOTE: This document applies to the HEAD of the source tree only. If you are using a released version of Kubernetes, you almost certainly want the docs that go with that version.

Documentation for specific releases can be found at [releases.k8s.io](http://releases.k8s.io). +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) + diff --git a/faster_reviews.md b/faster_reviews.md index eb3b25e9..8879075e 100644 --- a/faster_reviews.md +++ b/faster_reviews.md @@ -2,13 +2,21 @@ -

*** PLEASE NOTE: This document applies to the HEAD of the source +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) + +

PLEASE NOTE: This document applies to the HEAD of the source tree only. If you are using a released version of Kubernetes, you almost certainly want the docs that go with that version.

Documentation for specific releases can be found at [releases.k8s.io](http://releases.k8s.io). +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) + diff --git a/flaky-tests.md b/flaky-tests.md index d26fc406..fe5af939 100644 --- a/flaky-tests.md +++ b/flaky-tests.md @@ -2,13 +2,21 @@ -

*** PLEASE NOTE: This document applies to the HEAD of the source +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) + +

PLEASE NOTE: This document applies to the HEAD of the source tree only. If you are using a released version of Kubernetes, you almost certainly want the docs that go with that version.

Documentation for specific releases can be found at [releases.k8s.io](http://releases.k8s.io). +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) + diff --git a/getting-builds.md b/getting-builds.md index 770d486c..53193e84 100644 --- a/getting-builds.md +++ b/getting-builds.md @@ -2,13 +2,21 @@ -

*** PLEASE NOTE: This document applies to the HEAD of the source +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) + +

PLEASE NOTE: This document applies to the HEAD of the source tree only. If you are using a released version of Kubernetes, you almost certainly want the docs that go with that version.

Documentation for specific releases can be found at [releases.k8s.io](http://releases.k8s.io). +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) + diff --git a/instrumentation.md b/instrumentation.md index 22cd38e1..39a9d922 100644 --- a/instrumentation.md +++ b/instrumentation.md @@ -2,13 +2,21 @@ -

*** PLEASE NOTE: This document applies to the HEAD of the source +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) + +

PLEASE NOTE: This document applies to the HEAD of the source tree only. If you are using a released version of Kubernetes, you almost certainly want the docs that go with that version.

Documentation for specific releases can be found at [releases.k8s.io](http://releases.k8s.io). +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) + diff --git a/issues.md b/issues.md index d4d1d132..e73dcb1d 100644 --- a/issues.md +++ b/issues.md @@ -2,13 +2,21 @@ -

*** PLEASE NOTE: This document applies to the HEAD of the source +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) + +

PLEASE NOTE: This document applies to the HEAD of the source tree only. If you are using a released version of Kubernetes, you almost certainly want the docs that go with that version.

Documentation for specific releases can be found at [releases.k8s.io](http://releases.k8s.io). +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) + diff --git a/logging.md b/logging.md index bf2bd5c8..68fd98f9 100644 --- a/logging.md +++ b/logging.md @@ -2,13 +2,21 @@ -

*** PLEASE NOTE: This document applies to the HEAD of the source +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) + +

PLEASE NOTE: This document applies to the HEAD of the source tree only. If you are using a released version of Kubernetes, you almost certainly want the docs that go with that version.

Documentation for specific releases can be found at [releases.k8s.io](http://releases.k8s.io). +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) + diff --git a/making-release-notes.md b/making-release-notes.md index 877c1364..482c05a1 100644 --- a/making-release-notes.md +++ b/making-release-notes.md @@ -2,13 +2,21 @@ -

*** PLEASE NOTE: This document applies to the HEAD of the source +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) + +

PLEASE NOTE: This document applies to the HEAD of the source tree only. If you are using a released version of Kubernetes, you almost certainly want the docs that go with that version.

Documentation for specific releases can be found at [releases.k8s.io](http://releases.k8s.io). +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) + diff --git a/profiling.md b/profiling.md index 41737414..7eadfbbe 100644 --- a/profiling.md +++ b/profiling.md @@ -2,13 +2,21 @@ -

*** PLEASE NOTE: This document applies to the HEAD of the source +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) + +

PLEASE NOTE: This document applies to the HEAD of the source tree only. If you are using a released version of Kubernetes, you almost certainly want the docs that go with that version.

Documentation for specific releases can be found at [releases.k8s.io](http://releases.k8s.io). +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) + diff --git a/pull-requests.md b/pull-requests.md index 1c6bbe5f..cf325823 100644 --- a/pull-requests.md +++ b/pull-requests.md @@ -2,13 +2,21 @@ -

*** PLEASE NOTE: This document applies to the HEAD of the source +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) + +

PLEASE NOTE: This document applies to the HEAD of the source tree only. If you are using a released version of Kubernetes, you almost certainly want the docs that go with that version.

Documentation for specific releases can be found at [releases.k8s.io](http://releases.k8s.io). +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) + diff --git a/releasing.md b/releasing.md index 5cdbde2f..3de00293 100644 --- a/releasing.md +++ b/releasing.md @@ -2,13 +2,21 @@ -

*** PLEASE NOTE: This document applies to the HEAD of the source +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) + +

PLEASE NOTE: This document applies to the HEAD of the source tree only. If you are using a released version of Kubernetes, you almost certainly want the docs that go with that version.

Documentation for specific releases can be found at [releases.k8s.io](http://releases.k8s.io). +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) + diff --git a/scheduler.md b/scheduler.md index d9fccefc..3617a1dd 100644 --- a/scheduler.md +++ b/scheduler.md @@ -2,13 +2,21 @@ -

*** PLEASE NOTE: This document applies to the HEAD of the source +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) + +

PLEASE NOTE: This document applies to the HEAD of the source tree only. If you are using a released version of Kubernetes, you almost certainly want the docs that go with that version.

Documentation for specific releases can be found at [releases.k8s.io](http://releases.k8s.io). +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) + diff --git a/scheduler_algorithm.md b/scheduler_algorithm.md index 119b0c86..d5ab280a 100644 --- a/scheduler_algorithm.md +++ b/scheduler_algorithm.md @@ -2,13 +2,21 @@ -

*** PLEASE NOTE: This document applies to the HEAD of the source +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) + +

PLEASE NOTE: This document applies to the HEAD of the source tree only. If you are using a released version of Kubernetes, you almost certainly want the docs that go with that version.

Documentation for specific releases can be found at [releases.k8s.io](http://releases.k8s.io). +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) +![WARNING](http://releases.k8s.io/HEAD/docs/warning.png) + diff --git a/writing-a-getting-started-guide.md b/writing-a-getting-started-guide.md index dec4d9c9..bb017814 100644 --- a/writing-a-getting-started-guide.md +++ b/writing-a-getting-started-guide.md @@ -2,13 +2,21 @@ -

PLEASE NOTE: This document applies to the HEAD of the source tree only. If you are using a released version of Kubernetes, you almost certainly want the docs that go with that version.

Documentation for specific releases can be found at [releases.k8s.io](http://releases.k8s.io).

![WARNING](http://releases.k8s.io/HEAD/docs/warning.png)
![WARNING](http://releases.k8s.io/HEAD/docs/warning.png)
![WARNING](http://releases.k8s.io/HEAD/docs/warning.png)

-- cgit v1.2.3

From 70aa961049adb9d481b720e42a4e984f93eaf842 Mon Sep 17 00:00:00 2001
From: Tim Hockin
Date: Tue, 14 Jul 2015 17:28:47 -0700
Subject: Run gendocs

---
 README.md                          | 12 ++++++------
 api_changes.md                     | 12 ++++++------
 cherry-picks.md                    | 12 ++++++------
 coding-conventions.md              | 12 ++++++------
 collab.md                          | 12 ++++++------
 developer-guides/vagrant.md        | 12 ++++++------
 development.md                     | 12 ++++++------
 faster_reviews.md                  | 12 ++++++------
 flaky-tests.md                     | 12 ++++++------
 getting-builds.md                  | 12 ++++++------
 instrumentation.md                 | 12 ++++++------
 issues.md                          | 12 ++++++------
 logging.md                         | 12 ++++++------
 making-release-notes.md            | 12 ++++++------
 profiling.md                       | 12 ++++++------
 pull-requests.md                   | 12 ++++++------
 releasing.md                       | 12 ++++++------
 scheduler.md                       | 12 ++++++------
 scheduler_algorithm.md             | 12 ++++++------
 writing-a-getting-started-guide.md | 12 ++++++------
 20 files changed, 120 insertions(+), 120 deletions(-)

diff --git a/README.md b/README.md
index 505e7f34..f97c49b4 100644
--- a/README.md
+++ b/README.md
@@ -2,9 +2,9 @@
 
-![WARNING](http://releases.k8s.io/HEAD/docs/warning.png)
-![WARNING](http://releases.k8s.io/HEAD/docs/warning.png)
-![WARNING](http://releases.k8s.io/HEAD/docs/warning.png)
+![WARNING](http://kubernetes.io/img/warning.png)
+![WARNING](http://kubernetes.io/img/warning.png)
+![WARNING](http://kubernetes.io/img/warning.png)
 
 PLEASE NOTE: This document applies to the HEAD of the source
 tree only. If you are using a released version of Kubernetes, you almost
@@ -13,9 +13,9 @@ certainly want the docs that go with that version.
 
 Documentation for specific releases can be found at
 [releases.k8s.io](http://releases.k8s.io).
 
-![WARNING](http://releases.k8s.io/HEAD/docs/warning.png)
-![WARNING](http://releases.k8s.io/HEAD/docs/warning.png)
-![WARNING](http://releases.k8s.io/HEAD/docs/warning.png)
+![WARNING](http://kubernetes.io/img/warning.png)
+![WARNING](http://kubernetes.io/img/warning.png)
+![WARNING](http://kubernetes.io/img/warning.png)

[The same two hunks apply verbatim to each of the other 19 files in the diffstat above.]

-- cgit v1.2.3

From cb5465e2c6af85fd4f5b0577b8e4b16d930001d1 Mon Sep 17 00:00:00 2001
From: David Oppenheimer
Date: Tue, 14 Jul 2015 22:07:44 -0700
Subject: Move some docs from docs/ top-level into docs/{admin/,devel/,user-guide/}.

---
 api-conventions.md  | 637 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 cli-roadmap.md      | 105 +++++++++
 client-libraries.md |  43 ++++
 developer-guide.md  |  62 +++++
 4 files changed, 847 insertions(+)
 create mode 100644 api-conventions.md
 create mode 100644 cli-roadmap.md
 create mode 100644 client-libraries.md
 create mode 100644 developer-guide.md

diff --git a/api-conventions.md b/api-conventions.md
new file mode 100644
index 00000000..4a0cfccb
--- /dev/null
+++ b/api-conventions.md

![WARNING](http://kubernetes.io/img/warning.png)
![WARNING](http://kubernetes.io/img/warning.png)
![WARNING](http://kubernetes.io/img/warning.png)

PLEASE NOTE: This document applies to the HEAD of the source tree only. If you are using a released version of Kubernetes, you almost certainly want the docs that go with that version.

Documentation for specific releases can be found at [releases.k8s.io](http://releases.k8s.io).

![WARNING](http://kubernetes.io/img/warning.png)
![WARNING](http://kubernetes.io/img/warning.png)
![WARNING](http://kubernetes.io/img/warning.png)


API Conventions
===============

Updated: 4/16/2015

*This document is oriented at users who want a deeper understanding of the Kubernetes API structure, and developers wanting to extend the Kubernetes API. An introduction to using resources with kubectl can be found in [working_with_resources.md](working_with_resources.md).*

**Table of Contents**

  - [Types (Kinds)](#types-kinds)
  - [Resources](#resources)
  - [Objects](#objects)
  - [Metadata](#metadata)
  - [Spec and Status](#spec-and-status)
  - [Typical status properties](#typical-status-properties)
  - [References to related objects](#references-to-related-objects)
  - [Lists of named subobjects preferred over maps](#lists-of-named-subobjects-preferred-over-maps)
  - [Constants](#constants)
  - [Lists and Simple kinds](#lists-and-simple-kinds)
  - [Differing Representations](#differing-representations)
  - [Verbs on Resources](#verbs-on-resources)
  - [PATCH operations](#patch-operations)
  - [Strategic Merge Patch](#strategic-merge-patch)
  - [List Operations](#list-operations)
  - [Map Operations](#map-operations)
  - [Idempotency](#idempotency)
  - [Defaulting](#defaulting)
  - [Late Initialization](#late-initialization)
  - [Concurrency Control and Consistency](#concurrency-control-and-consistency)
  - [Serialization Format](#serialization-format)
  - [Units](#units)
  - [Selecting Fields](#selecting-fields)
  - [HTTP Status codes](#http-status-codes)
  - [Success codes](#success-codes)
  - [Error codes](#error-codes)
  - [Response Status Kind](#response-status-kind)

The conventions of the [Kubernetes API](../api.md) (and related APIs in the ecosystem) are intended to ease client development and ensure that configuration mechanisms can be implemented that
work across a diverse set of use cases consistently.

The general style of the Kubernetes API is RESTful - clients create, update, delete, or retrieve a description of an object via the standard HTTP verbs (POST, PUT, DELETE, and GET) - and those APIs preferentially accept and return JSON. Kubernetes also exposes additional endpoints for non-standard verbs and allows alternative content types. All of the JSON accepted and returned by the server has a schema, identified by the "kind" and "apiVersion" fields. Where relevant HTTP header fields exist, they should mirror the content of JSON fields, but the information should not be represented only in the HTTP header.

The following terms are defined:

* **Kind** the name of a particular object schema (e.g. the "Cat" and "Dog" kinds would have different attributes and properties)
* **Resource** a representation of a system entity, sent or retrieved as JSON via HTTP to the server. Resources are exposed via:
  * Collections - a list of resources of the same type, which may be queryable
  * Elements - an individual resource, addressable via a URL

Each resource typically accepts and returns data of a single kind. A kind may be accepted or returned by multiple resources that reflect specific use cases. For instance, the kind "pod" is exposed as a "pods" resource that allows end users to create, update, and delete pods, while a separate "pod status" resource (that acts on "pod" kind) allows automated processes to update a subset of the fields in that resource. A "restart" resource might be exposed for a number of different resources to allow the same action to have different results for each object.

Resource collections should be all lowercase and plural, whereas kinds are CamelCase and singular.

## Types (Kinds)

Kinds are grouped into three categories:

1. **Objects** represent a persistent entity in the system.

   Creating an API object is a record of intent - once created, the system will work to ensure that resource exists. All API objects have common metadata.

   An object may have multiple resources that clients can use to perform specific actions that create, update, delete, or get.

   Examples: `Pods`, `ReplicationControllers`, `Services`, `Namespaces`, `Nodes`

2. **Lists** are collections of **resources** of one (usually) or more (occasionally) kinds.

   Lists have a limited set of common metadata. All lists use the "items" field to contain the array of objects they return.

   Most objects defined in the system should have an endpoint that returns the full set of resources, as well as zero or more endpoints that return subsets of the full list. Some objects may be singletons (the current user, the system defaults) and may not have lists.

   In addition, all lists that return objects with labels should support label filtering (see [docs/user-guide/labels.md](../user-guide/labels.md)), and most lists should support filtering by fields.

   Examples: PodLists, ServiceLists, NodeLists

   TODO: Describe field filtering below or in a separate doc.

3. **Simple** kinds are used for specific actions on objects and for non-persistent entities.

   Given their limited scope, they have the same set of limited common metadata as lists.

   The "size" action may accept a simple resource that has only a single field as input (the number of things). The "status" kind is returned when errors occur and is not persisted in the system.

   Examples: Binding, Status

The standard REST verbs (defined below) MUST return singular JSON objects. Some API endpoints may deviate from the strict REST pattern and return resources that are not singular JSON objects, such as streams of JSON objects or unstructured text log data.

The term "kind" is reserved for these "top-level" API types.
The term "type" should be used for distinguishing sub-categories within objects or subobjects.

### Resources

All JSON objects returned by an API MUST have the following fields:

* kind: a string that identifies the schema this object should have
* apiVersion: a string that identifies the version of the schema the object should have

These fields are required for proper decoding of the object. They may be populated by the server by default from the specified URL path, but the client likely needs to know the values in order to construct the URL path.

### Objects

#### Metadata

Every object kind MUST have the following metadata in a nested object field called "metadata":

* namespace: a namespace is a DNS compatible subdomain that objects are subdivided into. The default namespace is 'default'. See [docs/admin/namespaces.md](../admin/namespaces.md) for more.
* name: a string that uniquely identifies this object within the current namespace (see [docs/user-guide/identifiers.md](../user-guide/identifiers.md)). This value is used in the path when retrieving an individual object.
* uid: a unique in time and space value (typically an RFC 4122 generated identifier, see [docs/user-guide/identifiers.md](../user-guide/identifiers.md)) used to distinguish between objects with the same name that have been deleted and recreated

Every object SHOULD have the following metadata in a nested object field called "metadata":

* resourceVersion: a string that identifies the internal version of this object that can be used by clients to determine when objects have changed. This value MUST be treated as opaque by clients and passed unmodified back to the server. Clients should not assume that the resource version has meaning across namespaces, different kinds of resources, or different servers.
  (see [concurrency control](#concurrency-control-and-consistency), below, for more details)
* creationTimestamp: a string representing an RFC 3339 date of the date and time an object was created
* deletionTimestamp: a string representing an RFC 3339 date of the date and time after which this resource will be deleted. This field is set by the server when a graceful deletion is requested by the user, and is not directly settable by a client. The resource will be deleted (no longer visible from resource lists, and not reachable by name) after the time in this field. Once set, this value may not be unset or be set further into the future, although it may be shortened or the resource may be deleted prior to this time.
* labels: a map of string keys and values that can be used to organize and categorize objects (see [docs/user-guide/labels.md](../user-guide/labels.md))
* annotations: a map of string keys and values that can be used by external tooling to store and retrieve arbitrary metadata about this object (see [docs/user-guide/annotations.md](../user-guide/annotations.md))

Labels are intended for organizational purposes by end users (select the pods that match this label query). Annotations enable third-party automation and tooling to decorate objects with additional metadata for their own use.

#### Spec and Status

By convention, the Kubernetes API makes a distinction between the specification of the desired state of an object (a nested object field called "spec") and the status of the object at the current time (a nested object field called "status"). The specification is a complete description of the desired state, including configuration settings provided by the user, [default values](#defaulting) expanded by the system, and properties initialized or otherwise changed after creation by other ecosystem components (e.g., schedulers, auto-scalers), and is persisted in stable storage with the API object.
If the specification is deleted, the object will be purged from the system. The status summarizes the current state of the object in the system, and is usually persisted with the object by an automated process but may be generated on the fly. At some cost and perhaps some temporary degradation in behavior, the status could be reconstructed by observation if it were lost. + +When a new version of an object is POSTed or PUT, the "spec" is updated and available immediately. Over time the system will work to bring the "status" into line with the "spec". The system will drive toward the most recent "spec" regardless of previous versions of that stanza. For example, if a value is changed from 2 to 5 in one PUT and then back down to 3 in another PUT the system is not required to 'touch base' at 5 before changing the "status" to 3. In other words, the system's behavior is *level-based* rather than *edge-based*. This enables robust behavior in the presence of missed intermediate state changes. + +The Kubernetes API also serves as the foundation for the declarative configuration schema for the system. In order to facilitate level-based operation and expression of declarative configuration, fields in the specification should have declarative rather than imperative names and semantics -- they represent the desired state, not actions intended to yield the desired state. + +The PUT and POST verbs on objects will ignore the "status" values. A `/status` subresource is provided to enable system components to update statuses of resources they manage. + +Otherwise, PUT expects the whole object to be specified. Therefore, if a field is omitted it is assumed that the client wants to clear that field's value. The PUT verb does not accept partial updates. Modification of just part of an object may be achieved by GETting the resource, modifying part of the spec, labels, or annotations, and then PUTting it back.
See [concurrency control](#concurrency-control-and-consistency), below, regarding read-modify-write consistency when using this pattern. Some objects may expose alternative resource representations that allow mutation of the status or custom actions on the object. + +All objects that represent a physical resource whose state may vary from the user's desired intent SHOULD have a "spec" and a "status". Objects whose state cannot vary from the user's desired intent MAY have only "spec", and MAY rename "spec" to a more appropriate name. + +Objects that contain both spec and status should not contain additional top-level fields other than the standard metadata fields. + +##### Typical status properties + +* **phase**: The phase is a simple, high-level summary of where the object is in its lifecycle. The phase should progress monotonically. Typical phase values are `Pending` (not yet fully physically realized), `Running` or `Active` (fully realized and active, but not necessarily operating correctly), and `Terminated` (no longer active), but may vary slightly for different types of objects. New phase values should not be added to existing objects in the future. Like other status fields, it must be possible to ascertain the lifecycle phase by observation. Additional details regarding the current phase may be contained in other fields. +* **conditions**: Conditions represent orthogonal observations of an object's current state. Objects may report multiple conditions, and new types of conditions may be added in the future. Condition status values may be `True`, `False`, or `Unknown`. Unlike the phase, conditions are not expected to be monotonic -- their values may change back and forth. A typical condition type is `Ready`, which indicates the object was believed to be fully operational at the time it was last probed. Conditions may carry additional information, such as the last probe time or last transition time. + +TODO(@vishh): Reason and Message.
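As an illustrative sketch (the field values here are hypothetical, not taken from any particular API object), a status stanza following these conventions might look like:

```yaml
status:
  phase: Running
  conditions:
    - type: Ready
      status: "True"
      lastProbeTime: "2015-05-20T18:10:42Z"
```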
+ +Phases and conditions are observations and not, themselves, state machines, nor do we define comprehensive state machines for objects with behaviors associated with state transitions. The system is level-based and should assume an Open World. Additionally, new observations and details about these observations may be added over time. + +In order to preserve extensibility, in the future, we intend to explicitly convey properties that users and components care about rather than requiring those properties to be inferred from observations. + +Note that historical status information (e.g., last transition time, failure counts) is provided on a best-effort basis only, and may be lost. + +Status information that may be large (especially unbounded in size, such as lists of references to other objects -- see below) and/or rapidly changing, such as [resource usage](../design/resources.md#usage-data), should be put into separate objects, with possibly a reference from the original object. This helps to ensure that GET and watch operations remain reasonably efficient for the majority of clients, which may not need that data. + +#### References to related objects + +References to loosely coupled sets of objects, such as [pods](../user-guide/pods.md) overseen by a [replication controller](../user-guide/replication-controller.md), are usually best referred to using a [label selector](../user-guide/labels.md). In order to ensure that GETs of individual objects remain bounded in time and space, these sets may be queried via separate API queries, but will not be expanded in the referring object's status. + +References to specific objects, especially specific resource versions and/or specific fields of those objects, are specified using the `ObjectReference` type. Unlike partial URLs, the ObjectReference type facilitates flexible defaulting of fields from the referring object or other contextual information.
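For illustration (the values are hypothetical), an `ObjectReference` to a specific pod might carry only the fields needed to identify it, with remaining fields defaulted from context:

```yaml
kind: Pod
namespace: default
name: grafana
```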
+ +References in the status of the referee to the referrer may be permitted, when the references are one-to-one and do not need to be frequently updated, particularly in an edge-based manner. + +#### Lists of named subobjects preferred over maps + +Discussed in [#2004](https://github.com/GoogleCloudPlatform/kubernetes/issues/2004) and elsewhere. There are no maps of subobjects in any API objects. Instead, the convention is to use a list of subobjects containing name fields. + +For example: +```yaml +ports: + - name: www + containerPort: 80 +``` +vs. +```yaml +ports: + www: + containerPort: 80 +``` + +This rule maintains the invariant that all JSON/YAML keys are fields in API objects. The only exceptions are pure maps in the API (currently, labels, selectors, and annotations), as opposed to sets of subobjects. + +#### Constants + +Some fields will have a list of allowed values (enumerations). These values will be strings, and they will be in CamelCase, with an initial uppercase letter. Examples: "ClusterFirst", "Pending", "ClientIP". + +### Lists and Simple kinds + +Every list or simple kind SHOULD have the following metadata in a nested object field called "metadata": + +* resourceVersion: a string that identifies the common version of the objects returned in a list. This value MUST be treated as opaque by clients and passed unmodified back to the server. A resource version is only valid within a single namespace on a single kind of resource. + +Every simple kind returned by the server, and any simple kind sent to the server that must support idempotency or optimistic concurrency should return this value. Since simple resources are often used as input to alternate actions that modify objects, the resource version of the simple resource should correspond to the resource version of the object.
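Sketching the list convention (the values here are illustrative only), a list response carries its common resourceVersion in the nested metadata field:

```yaml
kind: PodList
apiVersion: v1
metadata:
  resourceVersion: "10245"
items:
  - metadata:
      name: grafana
      namespace: default
```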
+ + +## Differing Representations + +An API may represent a single entity in different ways for different clients, or transform an object after certain transitions in the system occur. In these cases, one request object may have two representations available as different resources, or different kinds. + +An example is a Service, which represents the intent of the user to group a set of pods with common behavior on common ports. When Kubernetes detects a pod matches the service selector, the IP address and port of the pod are added to an Endpoints resource for that Service. The Endpoints resource exists only if the Service exists, but exposes only the IPs and ports of the selected pods. The full service is represented by two distinct resources - under the original Service resource the user created, as well as in the Endpoints resource. + +As another example, a "pod status" resource may accept a PUT with the "pod" kind, with different rules about what fields may be changed. + +Future versions of Kubernetes may allow alternative encodings of objects beyond JSON. + + +## Verbs on Resources + +API resources should use the traditional REST pattern: + +* GET /<resourceNamePlural> - Retrieve a list of type <resourceName>, e.g. GET /pods returns a list of Pods. +* POST /<resourceNamePlural> - Create a new resource from the JSON object provided by the client. +* GET /<resourceNamePlural>/<name> - Retrieves a single resource with the given name, e.g. GET /pods/first returns a Pod named 'first'. Should be constant time, and the resource should be bounded in size. +* DELETE /<resourceNamePlural>/<name> - Delete the single resource with the given name. DeleteOptions may specify gracePeriodSeconds, the optional duration in seconds before the object should be deleted. Individual kinds may declare fields which provide a default grace period, and different kinds may have differing kind-wide default grace periods. 
A user-provided grace period overrides a default grace period, including the zero grace period ("now"). +* PUT /<resourceNamePlural>/<name> - Update or create the resource with the given name with the JSON object provided by the client. +* PATCH /<resourceNamePlural>/<name> - Selectively modify the specified fields of the resource. See more information [below](#patch). + +Kubernetes by convention exposes additional verbs as new root endpoints with singular names. Examples: + +* GET /watch/<resourceNamePlural> - Receive a stream of JSON objects corresponding to changes made to any resource of the given kind over time. +* GET /watch/<resourceNamePlural>/<name> - Receive a stream of JSON objects corresponding to changes made to the named resource of the given kind over time. + +These are verbs which change the fundamental type of data returned (watch returns a stream of JSON instead of a single JSON object). Support of additional verbs is not required for all object types. + +Two additional verbs `redirect` and `proxy` provide access to cluster resources as described in [docs/user-guide/accessing-the-cluster.md](../user-guide/accessing-the-cluster.md). + +When resources wish to expose alternative actions that are closely coupled to a single resource, they should do so using new sub-resources. An example is allowing automated processes to update the "status" field of a Pod. The `/pods` endpoint only allows updates to "metadata" and "spec", since those reflect end-user intent. An automated process should be able to modify status for users to see by sending an updated Pod kind to the "/pods/<name>/status" endpoint - the alternate endpoint allows different rules to be applied to the update, and access to be appropriately restricted. Likewise, some actions like "stop" or "scale" are best represented as REST sub-resources that are POSTed to.
The POST action may require a simple kind to be provided if the action requires parameters, or function without a request body. + +TODO: more documentation of Watch + +### PATCH operations + +The API supports three different PATCH operations, determined by their corresponding Content-Type header: + +* JSON Patch, `Content-Type: application/json-patch+json` + * As defined in [RFC6902](https://tools.ietf.org/html/rfc6902), a JSON Patch is a sequence of operations that are executed on the resource, e.g. `{"op": "add", "path": "/a/b/c", "value": [ "foo", "bar" ]}`. For more details on how to use JSON Patch, see the RFC. +* Merge Patch, `Content-Type: application/merge-json-patch+json` + * As defined in [RFC7386](https://tools.ietf.org/html/rfc7386), a Merge Patch is essentially a partial representation of the resource. The submitted JSON is "merged" with the current resource to create a new one, then the new one is saved. For more details on how to use Merge Patch, see the RFC. +* Strategic Merge Patch, `Content-Type: application/strategic-merge-patch+json` + * Strategic Merge Patch is a custom implementation of Merge Patch. For a detailed explanation of how it works and why it needed to be introduced, see below. + +#### Strategic Merge Patch + +In the standard JSON merge patch, JSON objects are always merged but lists are always replaced. Often that isn't what we want. Let's say we start with the following Pod: + +```yaml +spec: + containers: + - name: nginx + image: nginx-1.0 +``` + +...and we POST that to the server (as JSON). Then let's say we want to *add* a container to this Pod. + +```yaml +PATCH /api/v1/namespaces/default/pods/pod-name +spec: + containers: + - name: log-tailer + image: log-tailer-1.0 +``` + +If we were to use standard Merge Patch, the entire container list would be replaced with the single log-tailer container. However, our intent is for the container lists to merge together based on the `name` field. 
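To make the contrast concrete, the sketch below (illustrative Python with hypothetical helper names, not the apiserver implementation) shows plain RFC 7386 Merge Patch replacing the container list wholesale, versus merging the lists on the `name` key as Strategic Merge Patch intends:

```python
# Sketch only -- not the Kubernetes implementation. Contrasts RFC 7386
# Merge Patch, where lists are replaced wholesale, with a merge keyed
# on "name", the behavior Strategic Merge Patch aims for.

def merge_patch(target, patch):
    """RFC 7386 semantics: objects merge recursively, null deletes a
    key, and everything else (including lists) is replaced outright."""
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)
        else:
            result[key] = merge_patch(result.get(key), value)
    return result

def merge_list_by_key(current, patch, key="name"):
    """Merge two lists of objects on a merge key, keeping unmatched items."""
    by_key = {item[key]: item for item in current}
    for item in patch:
        by_key[item[key]] = merge_patch(by_key.get(item[key], {}), item)
    return list(by_key.values())

pod = {"spec": {"containers": [{"name": "nginx", "image": "nginx-1.0"}]}}
patch = {"spec": {"containers": [{"name": "log-tailer",
                                  "image": "log-tailer-1.0"}]}}

# Plain Merge Patch: the whole container list is replaced.
replaced = merge_patch(pod, patch)["spec"]["containers"]
# Key-based merge: both containers survive.
merged = merge_list_by_key(pod["spec"]["containers"],
                           patch["spec"]["containers"])
print([c["name"] for c in replaced])  # ['log-tailer']
print([c["name"] for c in merged])    # ['nginx', 'log-tailer']
```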
+ +To solve this problem, Strategic Merge Patch uses metadata attached to the API objects to determine what lists should be merged and which ones should not. Currently the metadata is available as struct tags on the API objects themselves, but will become available to clients as Swagger annotations in the future. In the above example, the `patchStrategy` metadata for the `containers` field would be `merge` and the `patchMergeKey` would be `name`. + +Note: If the patch results in merging two lists of scalars, the scalars are first deduplicated and then merged. + +Strategic Merge Patch also supports special operations as listed below. + +##### List Operations + +To force the container list to be strictly replaced, regardless of the default: + +```yaml +containers: + - name: nginx + image: nginx-1.0 + - $patch: replace # any further $patch operations nested in this list will be ignored +``` + +To delete an element of a list that should be merged: + +```yaml +containers: + - name: nginx + image: nginx-1.0 + - $patch: delete + name: log-tailer # merge key and value go here +``` + +##### Map Operations + +To indicate that a map should not be merged and instead should be taken literally: + +```yaml +$patch: replace # recursive and applies to all fields of the map it's in +containers: +- name: nginx + image: nginx-1.0 +``` + +To delete a field of a map: + +```yaml +name: nginx +image: nginx-1.0 +labels: + live: null # set the value of the map key to null +``` + + +## Idempotency + +All compatible Kubernetes APIs MUST support "name idempotency" and respond with an HTTP status code 409 when a request is made to POST an object that has the same name as an existing object in the system. See [docs/user-guide/identifiers.md](../user-guide/identifiers.md) for details. + +Names generated by the system may be requested using `metadata.generateName`. GenerateName indicates that the name should be made unique by the server prior to persisting it.
A non-empty value for the field indicates the name will be made unique (and the name returned to the client will be different than the name passed). The value of this field will be combined with a unique suffix on the server if the Name field has not been provided. The provided value must be valid within the rules for Name, and may be truncated by the length of the suffix required to make the value unique on the server. If this field is specified, and Name is not present, the server will NOT return a 409 if the generated name exists - instead, it will either return 201 Created or 504 with Reason `ServerTimeout` indicating a unique name could not be found in the time allotted, and the client should retry (optionally after the time indicated in the Retry-After header). + +## Defaulting + +Default resource values are API version-specific, and they are applied during +the conversion from API-versioned declarative configuration to internal objects +representing the desired state (`Spec`) of the resource. Subsequent GETs of the +resource will include the default values explicitly. + +Incorporating the default values into the `Spec` ensures that `Spec` depicts the +full desired state so that it is easier for the system to determine how to +achieve the state, and for the user to know what to anticipate. + +API version-specific default values are set by the API server. + +## Late Initialization + +Late initialization is when resource fields are set by a system controller +after an object is created/updated. + +For example, the scheduler sets the `pod.spec.nodeName` field after the pod is created. + +Late-initializers should only make the following types of modifications: + - Setting previously unset fields + - Adding keys to maps + - Adding values to arrays which have mergeable semantics (`patchStrategy:"merge"` attribute in + the type definition). + +These conventions: + 1. 
allow a user (with sufficient privilege) to override any system-default behaviors by setting + the fields that would otherwise have been defaulted. + 1. enable updates from users to be merged with changes made during late initialization, using + strategic merge patch, as opposed to clobbering the change. + 1. allow the component which does the late-initialization to use strategic merge patch, which + facilitates composition and concurrency of such components. + +Although the apiserver Admission Control stage acts prior to object creation, +Admission Control plugins should follow the Late Initialization conventions +too, to allow their implementation to be later moved to a 'controller', or to client libraries. + +## Concurrency Control and Consistency + +Kubernetes leverages the concept of *resource versions* to achieve optimistic concurrency. All Kubernetes resources have a "resourceVersion" field as part of their metadata. This resourceVersion is a string that identifies the internal version of an object that can be used by clients to determine when objects have changed. When a record is about to be updated, its version is checked against a pre-saved value, and if it doesn't match, the update fails with a StatusConflict (HTTP status code 409). + +The resourceVersion is changed by the server every time an object is modified. If resourceVersion is included with the PUT operation the system will verify that there have not been other successful mutations to the resource during a read/modify/write cycle, by verifying that the current value of resourceVersion matches the specified value. + +The resourceVersion is currently backed by [etcd's modifiedIndex](https://coreos.com/docs/distributed-configuration/etcd-api/). However, it's important to note that the application should *not* rely on the implementation details of the versioning system maintained by Kubernetes.
We may change the implementation of resourceVersion in the future, such as to change it to a timestamp or per-object counter. + +The only way for a client to know the expected value of resourceVersion is to have received it from the server in response to a prior operation, typically a GET. This value MUST be treated as opaque by clients and passed unmodified back to the server. Clients should not assume that the resource version has meaning across namespaces, different kinds of resources, or different servers. Currently, the value of resourceVersion is set to match etcd's sequencer. You could think of it as a logical clock the API server can use to order requests. However, we expect the implementation of resourceVersion to change in the future, such as if we shard the state by kind and/or namespace, or port to another storage system. + +In the case of a conflict, the correct client action is to GET the resource again, apply the changes afresh, and try submitting again. This mechanism can be used to prevent races like the following: + +``` +Client #1 Client #2 +GET Foo GET Foo +Set Foo.Bar = "one" Set Foo.Baz = "two" +PUT Foo PUT Foo +``` + +When these sequences occur in parallel, either the change to Foo.Bar or the change to Foo.Baz can be lost. + +On the other hand, when specifying the resourceVersion, one of the PUTs will fail, since whichever write succeeds changes the resourceVersion for Foo. + +resourceVersion may be used as a precondition for other operations (e.g., GET, DELETE) in the future, such as for read-after-write consistency in the presence of caching. + +"Watch" operations specify resourceVersion using a query parameter. It is used to specify the point at which to begin watching the specified resources. This may be used to ensure that no mutations are missed between a GET of a resource (or list of resources) and a subsequent Watch, even if the current version of the resource is more recent.
This is currently the main reason that list operations (GET on a collection) return resourceVersion. + + +## Serialization Format + +APIs may return alternative representations of any resource in response to an Accept header or under alternative endpoints, but the default serialization for input and output of API responses MUST be JSON. + +All dates should be serialized as RFC3339 strings. + + +## Units + +Units must either be explicit in the field name (e.g., `timeoutSeconds`), or must be specified as part of the value (e.g., `resource.Quantity`). Which approach is preferred is TBD. + + +## Selecting Fields + +Some APIs may need to identify which field in a JSON object is invalid, or to reference a value to extract from a separate resource. The current recommendation is to use standard JavaScript syntax for accessing that field, assuming the JSON object was transformed into a JavaScript object. + +Examples: + +* Find the field "current" in the object "state" in the second item in the array "fields": `fields[1].state.current` + +TODO: Plugins, extensions, nested kinds, headers + + +## HTTP Status codes + +The server will respond with HTTP status codes that match the HTTP spec. See the section below for a breakdown of the types of status codes the server will send. + +The following HTTP status codes may be returned by the API. + +#### Success codes + +* `200 StatusOK` + * Indicates that the request completed successfully. +* `201 StatusCreated` + * Indicates that the request to create a kind completed successfully. +* `204 StatusNoContent` + * Indicates that the request completed successfully, and the response contains no body. + * Returned in response to HTTP OPTIONS requests. + +#### Error codes +* `307 StatusTemporaryRedirect` + * Indicates that the address for the requested resource has changed. + * Suggested client recovery behavior + * Follow the redirect. +* `400 StatusBadRequest` + * Indicates the request is invalid.
+ * Suggested client recovery behavior: + * Do not retry. Fix the request. +* `401 StatusUnauthorized` + * Indicates that the server can be reached and understood the request, but refuses to take any further action, because the client must provide authorization. If the client has provided authorization, the server is indicating the provided authorization is unsuitable or invalid. + * Suggested client recovery behavior + * If the user has not supplied authorization information, prompt them for the appropriate credentials + * If the user has supplied authorization information, inform them their credentials were rejected and optionally prompt them again. +* `403 StatusForbidden` + * Indicates that the server can be reached and understood the request, but refuses to take any further action, because it is configured to deny access for some reason to the requested resource by the client. + * Suggested client recovery behavior + * Do not retry. Fix the request. +* `404 StatusNotFound` + * Indicates that the requested resource does not exist. + * Suggested client recovery behavior + * Do not retry. Fix the request. +* `405 StatusMethodNotAllowed` + * Indicates that the action the client attempted to perform on the resource was not supported by the code. + * Suggested client recovery behavior + * Do not retry. Fix the request. +* `409 StatusConflict` + * Indicates that either the resource the client attempted to create already exists or the requested update operation cannot be completed due to a conflict. + * Suggested client recovery behavior + * If creating a new resource: + * Either change the identifier and try again, or GET and compare the fields in the pre-existing object and issue a PUT/update to modify the existing object. + * If updating an existing resource: + * See `Conflict` from the `status` response section below on how to retrieve more information about the nature of the conflict.
+ * GET and compare the fields in the pre-existing object, merge changes (if still valid according to preconditions), and retry with the updated request (including `ResourceVersion`). +* `422 StatusUnprocessableEntity` + * Indicates that the requested create or update operation cannot be completed due to invalid data provided as part of the request. + * Suggested client recovery behavior + * Do not retry. Fix the request. +* `429 StatusTooManyRequests` + * Indicates that either the client rate limit has been exceeded or the server has received more requests than it can process. + * Suggested client recovery behavior: + * Read the ```Retry-After``` HTTP header from the response, and wait at least that long before retrying. +* `500 StatusInternalServerError` + * Indicates that the server can be reached and understood the request, but either an unexpected internal error occurred and the outcome of the call is unknown, or the server cannot complete the action in a reasonable time (this may be due to temporary server load or a transient communication issue with another server). + * Suggested client recovery behavior: + * Retry with exponential backoff. +* `503 StatusServiceUnavailable` + * Indicates that a required service is unavailable. + * Suggested client recovery behavior: + * Retry with exponential backoff. +* `504 StatusServerTimeout` + * Indicates that the request could not be completed within the given time. Clients can get this response ONLY when they specified a timeout param in the request. + * Suggested client recovery behavior: + * Increase the value of the timeout param and retry with exponential backoff + +## Response Status Kind + +Kubernetes will always return the ```Status``` kind from any API endpoint when an error occurs. +Clients SHOULD handle these types of objects when appropriate. + +A ```Status``` kind will be returned by the API in two cases: + * When an operation is not successful (i.e.
when the server would return a non-2xx HTTP status code). + * When an HTTP ```DELETE``` call is successful. + +The status object is encoded as JSON and provided as the body of the response. The status object contains fields for human and machine consumers of the API to get more detailed information about the cause of the failure. The information in the status object supplements, but does not override, the HTTP status code's meaning. When fields in the status object have the same meaning as generally defined HTTP headers and that header is returned with the response, the header should be considered as having higher priority. + +**Example:** +``` +$ curl -v -k -H "Authorization: Bearer WhCDvq4VPpYhrcfmF6ei7V9qlbqTubUc" https://10.240.122.184:443/api/v1/namespaces/default/pods/grafana + +> GET /api/v1/namespaces/default/pods/grafana HTTP/1.1 +> User-Agent: curl/7.26.0 +> Host: 10.240.122.184 +> Accept: */* +> Authorization: Bearer WhCDvq4VPpYhrcfmF6ei7V9qlbqTubUc +> + +< HTTP/1.1 404 Not Found +< Content-Type: application/json +< Date: Wed, 20 May 2015 18:10:42 GMT +< Content-Length: 232 +< +{ + "kind": "Status", + "apiVersion": "v1", + "metadata": {}, + "status": "Failure", + "message": "pods \"grafana\" not found", + "reason": "NotFound", + "details": { + "name": "grafana", + "kind": "pods" + }, + "code": 404 +} +``` + +The ```status``` field contains one of two possible values: +* `Success` +* `Failure` + +`message` may contain a human-readable description of the error. + +```reason``` may contain a machine-readable description of why this operation is in the `Failure` status. If this value is empty there is no information available. The `reason` clarifies an HTTP status code but does not override it. + +```details``` may contain extended data associated with the reason. Each reason may define its own extended details. This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type.
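As a sketch of how a client might consume these fields, the hypothetical helper below (not part of any Kubernetes client library) parses the 404 example response shown above:

```python
# Hypothetical client-side helper showing how the Status fields
# described above might be consumed. The JSON mirrors the 404 example.
import json

body = '''{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "pods \\"grafana\\" not found",
  "reason": "NotFound",
  "details": {"name": "grafana", "kind": "pods"},
  "code": 404
}'''

def describe_failure(response_body):
    """Return (reason, message) if the body is a Failure Status, else None."""
    obj = json.loads(response_body)
    if obj.get("kind") != "Status" or obj.get("status") != "Failure":
        return None
    # reason may be empty; it refines but never overrides the HTTP code.
    return obj.get("reason", ""), obj.get("message", "")

print(describe_failure(body))
```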
+ +Possible values for the ```reason``` and ```details``` fields: +* `BadRequest` + * Indicates that the request itself was invalid, because the request doesn't make any sense, for example deleting a read-only object. + * This is different than `status reason` `Invalid` below, which indicates that the API call could possibly succeed, but the data was invalid. + * API calls that return BadRequest can never succeed. + * Http status code: `400 StatusBadRequest` +* `Unauthorized` + * Indicates that the server can be reached and understood the request, but refuses to take any further action without the client providing appropriate authorization. If the client has provided authorization, this error indicates the provided credentials are insufficient or invalid. + * Details (optional): + * `kind string` + * The kind attribute of the unauthorized resource (on some operations may differ from the requested resource). + * `name string` + * The identifier of the unauthorized resource. + * HTTP status code: `401 StatusUnauthorized` +* `Forbidden` + * Indicates that the server can be reached and understood the request, but refuses to take any further action, because it is configured to deny access for some reason to the requested resource by the client. + * Details (optional): + * `kind string` + * The kind attribute of the forbidden resource (on some operations may differ from the requested resource). + * `name string` + * The identifier of the forbidden resource. + * HTTP status code: `403 StatusForbidden` +* `NotFound` + * Indicates that one or more resources required for this operation could not be found. + * Details (optional): + * `kind string` + * The kind attribute of the missing resource (on some operations may differ from the requested resource). + * `name string` + * The identifier of the missing resource. + * HTTP status code: `404 StatusNotFound` +* `AlreadyExists` + * Indicates that the resource you are creating already exists.
+ * Details (optional): + * `kind string` + * The kind attribute of the conflicting resource. + * `name string` + * The identifier of the conflicting resource. + * HTTP status code: `409 StatusConflict` +* `Conflict` + * Indicates that the requested update operation cannot be completed due to a conflict. The client may need to alter the request. Each resource may define custom details that indicate the nature of the conflict. + * HTTP status code: `409 StatusConflict` +* `Invalid` + * Indicates that the requested create or update operation cannot be completed due to invalid data provided as part of the request. + * Details (optional): + * `kind string` + * the kind attribute of the invalid resource + * `name string` + * the identifier of the invalid resource + * `causes` + * One or more `StatusCause` entries indicating the data in the provided resource that was invalid. The `reason`, `message`, and `field` attributes will be set. + * HTTP status code: `422 StatusUnprocessableEntity` +* `Timeout` + * Indicates that the request could not be completed within the given time. Clients may receive this response if the server has decided to rate limit the client, or if the server is overloaded and cannot process the request at this time. + * Http status code: `429 TooManyRequests` + * The server should set the `Retry-After` HTTP header and return `retryAfterSeconds` in the details field of the object. A value of `0` is the default. +* `ServerTimeout` + * Indicates that the server can be reached and understood the request, but cannot complete the action in a reasonable time. This may be due to temporary server load or a transient communication issue with another server. + * Details (optional): + * `kind string` + * The kind attribute of the resource being acted on. + * `name string` + * The operation that is being attempted. + * The server should set the `Retry-After` HTTP header and return `retryAfterSeconds` in the details field of the object.
A value of `0` is the default. + * Http status code: `504 StatusServerTimeout` +* `MethodNotAllowed` + * Indicates that the action the client attempted to perform on the resource was not supported by the code. + * For instance, attempting to delete a resource that can only be created. + * API calls that return MethodNotAllowed can never succeed. + * Http status code: `405 StatusMethodNotAllowed` +* `InternalError` + * Indicates that an internal error occurred, it is unexpected and the outcome of the call is unknown. + * Details (optional): + * `causes` + * The original error. + * Http status code: `500 StatusInternalServerError` + +`code` may contain the suggested HTTP return code for this status. + + +## Events + +TODO: Document events (refer to another doc for details) + + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/api-conventions.md?pixel)]() + diff --git a/cli-roadmap.md b/cli-roadmap.md new file mode 100644 index 00000000..fe8d5b0f --- /dev/null +++ b/cli-roadmap.md @@ -0,0 +1,105 @@ + + + + +![WARNING](http://kubernetes.io/img/warning.png) +![WARNING](http://kubernetes.io/img/warning.png) +![WARNING](http://kubernetes.io/img/warning.png) + +

PLEASE NOTE: This document applies to the HEAD of the source +tree only. If you are using a released version of Kubernetes, you almost +certainly want the docs that go with that version.

+
+Documentation for specific releases can be found at
+[releases.k8s.io](http://releases.k8s.io).
+
+![WARNING](http://kubernetes.io/img/warning.png)
+![WARNING](http://kubernetes.io/img/warning.png)
+![WARNING](http://kubernetes.io/img/warning.png)
+
+
+
+
+# Kubernetes CLI/Configuration Roadmap
+
+See also issues with the following labels:
+* [area/config-deployment](https://github.com/GoogleCloudPlatform/kubernetes/labels/area%2Fconfig-deployment)
+* [component/CLI](https://github.com/GoogleCloudPlatform/kubernetes/labels/component%2FCLI)
+* [component/client](https://github.com/GoogleCloudPlatform/kubernetes/labels/component%2Fclient)
+
+1. Create services before other objects, or at least before objects that depend upon them. Namespace-relative DNS mitigates this some, but most users are still using service environment variables. [#1768](https://github.com/GoogleCloudPlatform/kubernetes/issues/1768)
+1. Finish rolling update [#1353](https://github.com/GoogleCloudPlatform/kubernetes/issues/1353)
+   1. Friendly to auto-scaling [#2863](https://github.com/GoogleCloudPlatform/kubernetes/pull/2863#issuecomment-69701562)
+   1. Rollback (make rolling-update reversible, and complete an in-progress rolling update by taking 2 replication controller names rather than always taking a file)
+   1. Rollover (replace multiple replication controllers with one, such as to clean up an aborted partial rollout)
+   1. Write a ReplicationController generator to derive the new ReplicationController from an old one (e.g., `--image-version=newversion`, which would apply a name suffix, update a label value, and apply an image tag)
+   1. Use readiness [#620](https://github.com/GoogleCloudPlatform/kubernetes/issues/620)
+   1. Perhaps factor this in a way that it can be shared with [Openshift’s deployment controller](https://github.com/GoogleCloudPlatform/kubernetes/issues/1743)
+   1. Rolling update service as a plugin
+1. Kind-based filtering on object streams -- only operate on the kinds of objects specified. This would make directory-based kubectl operations much more useful. Users should be able to instantiate the example applications using `kubectl create -f ...`
+1. Improved pretty printing of endpoints, such as in the case that there are more than a few endpoints
+1. Service address/port lookup command(s)
+1. List supported resources
+1. Swagger lookups [#3060](https://github.com/GoogleCloudPlatform/kubernetes/issues/3060)
+1. --name, --name-suffix applied during creation and updates
+1. --labels and opinionated label injection: --app=foo, --tier={fe,cache,be,db}, --uservice=redis, --env={dev,test,prod}, --stage={canary,final}, --track={hourly,daily,weekly}, --release=0.4.3c2. Exact ones TBD. We could allow arbitrary values -- the keys are important. The actual label keys would be (optionally?) namespaced with kubectl.kubernetes.io/, or perhaps the user’s namespace.
+1. --annotations and opinionated annotation injection: --description, --revision
+1. Imperative updates. We'll want to optionally make these safe(r) by supporting preconditions based on the current value and resourceVersion.
+   1. annotation updates similar to label updates
+   1. other custom commands for common imperative updates
+   1. more user-friendly (but still generic) on-command-line json for patch
+1. We also want to support the following flavors of more general updates:
+   1. whichever we don’t support:
+      1. safe update: update the full resource, guarded by resourceVersion precondition (and perhaps selected value-based preconditions)
+      1. forced update: update the full resource, blowing away the previous Spec without preconditions; delete and re-create if necessary
+   1. diff/dryrun: Compare new config with current Spec [#6284](https://github.com/GoogleCloudPlatform/kubernetes/issues/6284)
+   1. submit/apply/reconcile/ensure/merge: Merge user-provided fields with current Spec. Keep track of user-provided fields using an annotation -- see [#1702](https://github.com/GoogleCloudPlatform/kubernetes/issues/1702). Delete all objects with deployment-specific labels.
+1. --dry-run for all commands
+1. Support full label selection syntax, including support for namespaces.
+1. Wait on conditions [#1899](https://github.com/GoogleCloudPlatform/kubernetes/issues/1899)
+1. Make kubectl scriptable: make output and exit code behavior consistent and useful for wrapping in workflows and piping back into kubectl and/or xargs (e.g., dump full URLs?, distinguish permanent and retry-able failure, identify objects that should be retried)
+   1. Here's [an example](http://techoverflow.net/blog/2013/10/22/docker-remove-all-images-and-containers/) where multiple objects on the command line and an option to dump object names only (`-q`) would be useful in combination. [#5906](https://github.com/GoogleCloudPlatform/kubernetes/issues/5906)
+1. Easy generation of clean configuration files from existing objects (including containers -- podex) -- remove readonly fields, status
+   1. Export from one namespace, import into another is an important use case
+1. Derive objects from other objects
+   1. pod clone
+   1. rc from pod
+   1. --labels-from (services from pods or rcs)
+1. Kind discovery (i.e., operate on objects of all kinds) [#5278](https://github.com/GoogleCloudPlatform/kubernetes/issues/5278)
+1. A fairly general-purpose way to specify fields on the command line during creation and update, not just from a config file
+1. Extensible API-based generator framework (i.e. invoke generators via an API/URL rather than building them into kubectl), so that complex client libraries don’t need to be rewritten in multiple languages, and so that the abstractions are available through all interfaces: API, CLI, UI, logs, ... [#5280](https://github.com/GoogleCloudPlatform/kubernetes/issues/5280)
+   1. Need schema registry, and some way to invoke generator (e.g., using a container)
+   1. Convert run command to API-based generator
+1. Transformation framework
+   1. More intelligent defaulting of fields (e.g., [#2643](https://github.com/GoogleCloudPlatform/kubernetes/issues/2643))
+1. Update preconditions based on the values of arbitrary object fields.
+1. Deployment manager compatibility on GCP: [#3685](https://github.com/GoogleCloudPlatform/kubernetes/issues/3685)
+1. Describe multiple objects, multiple kinds of objects [#5905](https://github.com/GoogleCloudPlatform/kubernetes/issues/5905)
+1. Support yaml document separator [#5840](https://github.com/GoogleCloudPlatform/kubernetes/issues/5840)
+
+TODO:
+* watch
+* attach [#1521](https://github.com/GoogleCloudPlatform/kubernetes/issues/1521)
+* image/registry commands
+* do any other server paths make sense? validate? generic curl functionality?
+* template parameterization
+* dynamic/runtime configuration
+
+Server-side support:
+
+1. Default selectors from labels [#1698](https://github.com/GoogleCloudPlatform/kubernetes/issues/1698#issuecomment-71048278)
+1. Stop [#1535](https://github.com/GoogleCloudPlatform/kubernetes/issues/1535)
+1. Deleted objects [#2789](https://github.com/GoogleCloudPlatform/kubernetes/issues/2789)
+1. Clone [#170](https://github.com/GoogleCloudPlatform/kubernetes/issues/170)
+1. Resize [#1629](https://github.com/GoogleCloudPlatform/kubernetes/issues/1629)
+1. Useful /operations API: wait for finalization/reification
+1. List supported resources [#2057](https://github.com/GoogleCloudPlatform/kubernetes/issues/2057)
+1. Reverse label lookup [#1348](https://github.com/GoogleCloudPlatform/kubernetes/issues/1348)
+1. Field selection [#1362](https://github.com/GoogleCloudPlatform/kubernetes/issues/1362)
+1. Field filtering [#1459](https://github.com/GoogleCloudPlatform/kubernetes/issues/1459)
+1. Operate on uids
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/cli-roadmap.md?pixel)]()
+
diff --git a/client-libraries.md b/client-libraries.md
new file mode 100644
index 00000000..b7529a01
--- /dev/null
+++ b/client-libraries.md
@@ -0,0 +1,43 @@
+
+
+
+
+![WARNING](http://kubernetes.io/img/warning.png)
+![WARNING](http://kubernetes.io/img/warning.png)
+![WARNING](http://kubernetes.io/img/warning.png)
+
+PLEASE NOTE: This document applies to the HEAD of the source
+tree only. If you are using a released version of Kubernetes, you almost
+certainly want the docs that go with that version.
+
+Documentation for specific releases can be found at
+[releases.k8s.io](http://releases.k8s.io).
+
+![WARNING](http://kubernetes.io/img/warning.png)
+![WARNING](http://kubernetes.io/img/warning.png)
+![WARNING](http://kubernetes.io/img/warning.png)
+
+
+
+
+## kubernetes API client libraries
+
+### Supported
+ * [Go](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/pkg/client)
+
+### User Contributed
+*Note: Libraries provided by outside parties are supported by their authors, not the core Kubernetes team*
+
+ * [Java (OSGI)](https://bitbucket.org/amdatulabs/amdatu-kubernetes)
+ * [Java (Fabric8)](https://github.com/fabric8io/fabric8/tree/master/components/kubernetes-api)
+ * [Ruby](https://github.com/Ch00k/kuber)
+ * [Ruby](https://github.com/abonas/kubeclient)
+ * [PHP](https://github.com/devstub/kubernetes-api-php-client)
+ * [PHP](https://github.com/maclof/kubernetes-client)
+ * [Node.js](https://github.com/tenxcloud/node-kubernetes-client)
+ * [Perl](https://metacpan.org/pod/Net::Kubernetes)
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/client-libraries.md?pixel)]()
+
diff --git a/developer-guide.md b/developer-guide.md
new file mode 100644
index 00000000..8801cb3d
--- /dev/null
+++ b/developer-guide.md
@@ -0,0 +1,62 @@
+
+
+
+
+![WARNING](http://kubernetes.io/img/warning.png)
+![WARNING](http://kubernetes.io/img/warning.png)
+![WARNING](http://kubernetes.io/img/warning.png)
+
+PLEASE NOTE: This document applies to the HEAD of the source
+tree only. If you are using a released version of Kubernetes, you almost
+certainly want the docs that go with that version.
+
+Documentation for specific releases can be found at
+[releases.k8s.io](http://releases.k8s.io).
+
+![WARNING](http://kubernetes.io/img/warning.png)
+![WARNING](http://kubernetes.io/img/warning.png)
+![WARNING](http://kubernetes.io/img/warning.png)
+
+
+
+
+# Kubernetes Developer Guide
+
+The developer guide is for anyone wanting to either write code which directly accesses the
+kubernetes API, or to contribute directly to the kubernetes project.
+It assumes some familiarity with concepts in the [User Guide](../user-guide/README.md) and the [Cluster Admin
+Guide](../admin/README.md).
+
+
+## Developing against the Kubernetes API
+
+* API objects are explained at [http://kubernetes.io/third_party/swagger-ui/](http://kubernetes.io/third_party/swagger-ui/).
+
+* **Annotations** ([docs/user-guide/annotations.md](../user-guide/annotations.md)): are for attaching arbitrary non-identifying metadata to objects.
+  Programs that automate Kubernetes objects may use annotations to store small amounts of their state.
+
+* **API Conventions** ([api-conventions.md](api-conventions.md)):
+  Defining the verbs and resources used in the Kubernetes API.
+
+* **API Client Libraries** ([client-libraries.md](client-libraries.md)):
+  A list of existing client libraries, both supported and user-contributed.
+
+## Writing Plugins
+
+* **Authentication Plugins** ([docs/admin/authentication.md](../admin/authentication.md)):
+  The current and planned states of authentication tokens.
+
+* **Authorization Plugins** ([docs/admin/authorization.md](../admin/authorization.md)):
+  Authorization applies to all HTTP requests on the main apiserver port.
+  This doc explains the available authorization implementations.
+
+* **Admission Control Plugins** ([admission_control](../design/admission_control.md))
+
+## Contributing to the Kubernetes Project
+
+See this [README](README.md).
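The `Status` `reason` values catalogued in api-conventions.md earlier in this series translate naturally into a small client-side retry policy. The sketch below is illustrative only: it is not part of any Kubernetes client library, and the grouping of reasons into transient and fatal sets is this example's own assumption (the conventions state only that `BadRequest` and `MethodNotAllowed` calls can never succeed, and that `Timeout` and `ServerTimeout` responses should carry `retryAfterSeconds`, defaulting to `0`).

```python
# Illustrative sketch, not part of any Kubernetes client library.
# Classifies Status "reason" values from api-conventions.md into a simple
# retry decision; the exact grouping below is an assumption for this example.

# api-conventions.md says calls with these reasons can never succeed as-is,
# or require the client to change the request before trying again.
FATAL_REASONS = {"BadRequest", "MethodNotAllowed", "Invalid", "NotFound"}

# Transient server-side conditions; the server may also suggest a delay via
# retryAfterSeconds in the details field (0 is the default).
TRANSIENT_REASONS = {"Timeout", "ServerTimeout"}


def retry_decision(status):
    """Given a decoded Status object (a dict), return (should_retry, delay).

    `status` is the JSON Status body a failed API call returns; `delay` is
    taken from details.retryAfterSeconds when the server provides it.
    """
    reason = status.get("reason", "")
    details = status.get("details") or {}
    if reason in TRANSIENT_REASONS:
        return True, details.get("retryAfterSeconds", 0)
    return False, 0
```

For example, a `Timeout` Status carrying `retryAfterSeconds: 5` yields `(True, 5)`, while a `BadRequest` Status yields `(False, 0)`.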
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/developer-guide.md?pixel)]()
+
-- 
cgit v1.2.3


From 3a1db27f1f46e9276c6b1aa28b82d0793f1e0db2 Mon Sep 17 00:00:00 2001
From: David Oppenheimer
Date: Tue, 14 Jul 2015 23:56:51 -0700
Subject: Move diagrams out of top-level docs/ directory and merge docs/devel/developer-guide.md into docs/devel/README.md

---
 README.md          | 66 ++++++++++++++++++++++++++++++++++++++++++------------
 developer-guide.md | 62 --------------------------------------------
 2 files changed, 52 insertions(+), 76 deletions(-)
 delete mode 100644 developer-guide.md

diff --git a/README.md b/README.md
index f97c49b4..aed7276d 100644
--- a/README.md
+++ b/README.md
@@ -20,27 +20,35 @@ certainly want the docs that go with that version.
 
 
 
-# Developing Kubernetes
+# Kubernetes Developer Guide
 
-Docs in this directory relate to developing Kubernetes.
+The developer guide is for anyone wanting to either write code which directly accesses the
+kubernetes API, or to contribute directly to the kubernetes project.
+It assumes some familiarity with concepts in the [User Guide](../user-guide/README.md) and the [Cluster Admin
+Guide](../admin/README.md).
 
-* **On Collaborative Development** ([collab.md](collab.md)): info on pull requests and code reviews.
 
-* **Development Guide** ([development.md](development.md)): Setting up your environment tests.
+## The process of developing and contributing code to the Kubernetes project
 
-* **Making release notes** ([making-release-notes.md](making-release-notes.md)): Generating release nodes for a new release.
-
-* **Hunting flaky tests** ([flaky-tests.md](flaky-tests.md)): We have a goal of 99.9% flake free tests.
-  Here's how to run your tests many times.
+* **On Collaborative Development** ([collab.md](collab.md)): Info on pull requests and code reviews.
 
 * **GitHub Issues** ([issues.md](issues.md)): How incoming issues are reviewed and prioritized.
 
-* **Logging Conventions** ([logging.md](logging.md)]: Glog levels.
-
 * **Pull Request Process** ([pull-requests.md](pull-requests.md)): When and why pull requests are closed.
 
-* **Releasing Kubernetes** ([releasing.md](releasing.md)): How to create a Kubernetes release (as in version)
-  and how the version information gets embedded into the built binaries.
+* **Faster PR reviews** ([faster_reviews.md](faster_reviews.md)): How to get faster PR reviews.
+
+* **Getting Recent Builds** ([getting-builds.md](getting-builds.md)): How to get recent builds including the latest builds that pass CI.
+
+
+## Setting up your dev environment, coding, and debugging
+
+* **Development Guide** ([development.md](development.md)): Setting up your development environment.
+
+* **Hunting flaky tests** ([flaky-tests.md](flaky-tests.md)): We have a goal of 99.9% flake free tests.
+  Here's how to run your tests many times.
+
+* **Logging Conventions** ([logging.md](logging.md)): Glog levels.
 
 * **Profiling Kubernetes** ([profiling.md](profiling.md)): How to plug in go pprof profiler to Kubernetes.
 
@@ -51,9 +59,39 @@ Docs in this directory relate to developing Kubernetes.
 * **Coding Conventions** ([coding-conventions.md](coding-conventions.md)): Coding style advice for contributors.
 
-* **Faster PR reviews** ([faster_reviews.md](faster_reviews.md)): How to get faster PR reviews.
-
-* **Getting Recent Builds** ([getting-builds.md](getting-builds.md)): How to get recent builds including the latest builds to pass CI.
+
+## Developing against the Kubernetes API
+
+* API objects are explained at [http://kubernetes.io/third_party/swagger-ui/](http://kubernetes.io/third_party/swagger-ui/).
+
+* **Annotations** ([docs/user-guide/annotations.md](../user-guide/annotations.md)): are for attaching arbitrary non-identifying metadata to objects.
+  Programs that automate Kubernetes objects may use annotations to store small amounts of their state.
+
+* **API Conventions** ([api-conventions.md](api-conventions.md)):
+  Defining the verbs and resources used in the Kubernetes API.
+
+* **API Client Libraries** ([client-libraries.md](client-libraries.md)):
+  A list of existing client libraries, both supported and user-contributed.
+
+
+## Writing plugins
+
+* **Authentication Plugins** ([docs/admin/authentication.md](../admin/authentication.md)):
+  The current and planned states of authentication tokens.
+
+* **Authorization Plugins** ([docs/admin/authorization.md](../admin/authorization.md)):
+  Authorization applies to all HTTP requests on the main apiserver port.
+  This doc explains the available authorization implementations.
+
+* **Admission Control Plugins** ([admission_control](../design/admission_control.md))
+
+
+## Building releases
+
+* **Making release notes** ([making-release-notes.md](making-release-notes.md)): Generating release notes for a new release.
+
+* **Releasing Kubernetes** ([releasing.md](releasing.md)): How to create a Kubernetes release (as in version)
+  and how the version information gets embedded into the built binaries.
diff --git a/developer-guide.md b/developer-guide.md
deleted file mode 100644
index 8801cb3d..00000000
--- a/developer-guide.md
+++ /dev/null
@@ -1,62 +0,0 @@
-
-
-
-
-![WARNING](http://kubernetes.io/img/warning.png)
-![WARNING](http://kubernetes.io/img/warning.png)
-![WARNING](http://kubernetes.io/img/warning.png)
-
-PLEASE NOTE: This document applies to the HEAD of the source
-tree only. If you are using a released version of Kubernetes, you almost
-certainly want the docs that go with that version.
-
-Documentation for specific releases can be found at
-[releases.k8s.io](http://releases.k8s.io).
-
-![WARNING](http://kubernetes.io/img/warning.png)
-![WARNING](http://kubernetes.io/img/warning.png)
-![WARNING](http://kubernetes.io/img/warning.png)
-
-
-
-
-# Kubernetes Developer Guide
-
-The developer guide is for anyone wanting to either write code which directly accesses the
-kubernetes API, or to contribute directly to the kubernetes project.
-It assumes some familiarity with concepts in the [User Guide](../user-guide/README.md) and the [Cluster Admin
-Guide](../admin/README.md).
-
-
-## Developing against the Kubernetes API
-
-* API objects are explained at [http://kubernetes.io/third_party/swagger-ui/](http://kubernetes.io/third_party/swagger-ui/).
-
-* **Annotations** ([docs/user-guide/annotations.md](../user-guide/annotations.md)): are for attaching arbitrary non-identifying metadata to objects.
-  Programs that automate Kubernetes objects may use annotations to store small amounts of their state.
-
-* **API Conventions** ([api-conventions.md](api-conventions.md)):
-  Defining the verbs and resources used in the Kubernetes API.
-
-* **API Client Libraries** ([client-libraries.md](client-libraries.md)):
-  A list of existing client libraries, both supported and user-contributed.
-
-## Writing Plugins
-
-* **Authentication Plugins** ([docs/admin/authentication.md](../admin/authentication.md)):
-  The current and planned states of authentication tokens.
-
-* **Authorization Plugins** ([docs/admin/authorization.md](../admin/authorization.md)):
-  Authorization applies to all HTTP requests on the main apiserver port.
-  This doc explains the available authorization implementations.
-
-* **Admission Control Plugins** ([admission_control](../design/admission_control.md))
-
-## Contributing to the Kubernetes Project
-
-See this [README](README.md).
-
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/developer-guide.md?pixel)]()
-
-- 
cgit v1.2.3


From b6ca2b5bd605d4e65096d3cc2999f4d59d1f1495 Mon Sep 17 00:00:00 2001
From: Zach Loafman
Date: Wed, 15 Jul 2015 09:31:28 -0700
Subject: Add hack/cherry_pick_list.sh to list all automated cherry picks

* Adds hack/cherry_pick_list.sh to list all automated cherry picks
  since the last tag.
* Adds a short python script to extract title/author and print it in
  markdown style like our current release notes.
* Revises patch release instructions to use said script.
---
 releasing.md | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/releasing.md b/releasing.md
index 2f5035cc..484620f0 100644
--- a/releasing.md
+++ b/releasing.md
@@ -137,7 +137,9 @@ manage cherry picks prior to cutting the release.
    version commit.
 1. Follow the instructions given to you by that script. They are canon for
    the remainder of the Git process. If you don't understand something in that
-   process, please ask!
+   process, please ask! When proposing PRs, you can pre-fill the body with
+   `hack/cherry_pick_list.sh upstream/release-${VER}` to inform people of what
+   is already on the branch.
 
 **TODO**: how to fix tags, etc., if the release is changed.
 
@@ -154,10 +156,10 @@ In your git repo (you still have `${VER}` and `${PATCH}` set from above right?):
 
 #### Writing Release Notes
 
-Release notes for a patch release are relatives fast: `git log release-${VER}`
-(If you followed the procedure in the first section, all the cherry-picks will
-have the pull request number in the commit log). Unless there's some reason not
-to, just include all the PRs back to the last release.
+Run `hack/cherry_pick_list.sh ${VER}.${PATCH}~1` to get the release notes for
+the patch release you just created. Feel free to prune anything internal, like
+you would for a major release, but typically for patch releases we tend to
+include everything in the release notes.
 
 ## Origin of the Sources
-- 
cgit v1.2.3


From dcb2c5ffc12e3ffbf2cef7ebe35911446294b2f5 Mon Sep 17 00:00:00 2001
From: Eric Tune
Date: Wed, 15 Jul 2015 11:34:04 -0700
Subject: Remove requirement of specifying kubernetes ver.

Now that things are more stable, and we have a conformance test,
the binary version is no longer needed. Remove that requirement.
---
 writing-a-getting-started-guide.md | 24 +++++++++---------------
 1 file changed, 9 insertions(+), 15 deletions(-)

diff --git a/writing-a-getting-started-guide.md b/writing-a-getting-started-guide.md
index 348faf9b..d7463a4c 100644
--- a/writing-a-getting-started-guide.md
+++ b/writing-a-getting-started-guide.md
@@ -20,10 +20,11 @@ certainly want the docs that go with that version.
 
 
+
 # Writing a Getting Started Guide
 
 This page gives some advice for anyone planning to write or update a Getting Started Guide for Kubernetes.
 It also gives some guidelines which reviewers should follow when reviewing a pull request for a
-guide. 
+guide.
 
 A Getting Started Guide is instructions on how to create a Kubernetes cluster on top of a particular
 type(s) of infrastructure. Infrastructure includes: the IaaS provider for VMs;
@@ -36,11 +37,12 @@ the combination of all these things needed to run on a particular type of infras
 which is similar to the one you have planned, consider improving that one.
 
 
-Distros fall into two categories: 
+Distros fall into two categories:
   - **versioned distros** are tested to work with a particular binary release of Kubernetes. These
     come in a wide variety, reflecting a wide range of ideas and preferences in how to run a cluster.
   - **development distros** are tested work with the latest Kubernetes source code. But, there are
-    relatively few of these and the bar is much higher for creating one.
+    relatively few of these and the bar is much higher for creating one. They must support
+    fully automated cluster creation, deletion, and upgrade.
 
 There are different guidelines for each.
@@ -51,17 +53,14 @@ These guidelines say *what* to do. See the Rationale section for *why*.
    search for uses of flags by guides.
   - We may ask that you host binary assets or large amounts of code in our `contrib` directory or on
    your own repo.
+  - Add or update a row in [The Matrix](../../docs/getting-started-guides/README.md).
+  - State the binary version of kubernetes that you tested clearly in your Guide doc.
   - Setup a cluster and run the [conformance test](development.md#conformance-testing) against it, and
    report the results in your PR.
-  - Add or update a row in [The Matrix](../../docs/getting-started-guides/README.md).
-    - State the binary version of kubernetes that you tested clearly in your Guide doc and in The Matrix.
-    - Even if you are just updating the binary version used, please still do a conformance test.
-    - If it worked before and now fails, you can ask on IRC,
-      check the release notes since your last tested version, or look at git
-      logs for files in other distros that are updated to the new version.
  - Versioned distros should typically not modify or add code in `cluster/`. That is just scripts for developer
    distros.
-  - If a versioned distro has not been updated for many binary releases, it may be dropped from the Matrix.
+  - When a new major or minor release of Kubernetes comes out, we may also release a new
+    conformance test, and require a new conformance test run to earn a conformance checkmark.
 
 If you have a cluster partially working, but doing all the above steps seems like too much work, we still
 want to hear from you. We suggest you write a blog post or a Gist, and we will link to it on our wiki page.
@@ -93,11 +92,6 @@ These guidelines say *what* to do. See the Rationale section for *why*.
   - We want users to have a uniform experience with Kubernetes whenever they follow instructions anywhere
     in our Github repository. So, we ask that versioned distros pass a **conformance test** to make sure really work.
 
-  - We ask versioned distros to **clearly state a version**. People pulling from Github may
-    expect any instructions there to work at Head, so stuff that has not been tested at Head needs
-    to be called out. We are still changing things really fast, and, while the REST API is versioned,
-    it is not practical at this point to version or limit changes that affect distros. We still change
-    flags at the Kubernetes/Infrastructure interface.
   - We want to **limit the number of development distros** for several reasons. Developers should only
    have to change a limited number of places to add a new feature. Also, since we will gate commits on
    passing CI for all distros, and since end-to-end tests are typically somewhat
-- 
cgit v1.2.3


From e854d97ff44c7a463a5350c546ce32eb3e7bc994 Mon Sep 17 00:00:00 2001
From: Tim Hockin
Date: Wed, 15 Jul 2015 17:20:39 -0700
Subject: Add munger to verify kubectl -f targets, fix docs

---
 flaky-tests.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/flaky-tests.md b/flaky-tests.md
index fb000ea6..86c898d9 100644
--- a/flaky-tests.md
+++ b/flaky-tests.md
@@ -56,7 +56,7 @@ spec:
 Note that we omit the labels and the selector fields of the replication controller, because they will be populated from the labels field of the pod template by default.
 
 ```
-kubectl create -f controller.yaml
+kubectl create -f ./controller.yaml
 ```
 This will spin up 24 instances of the test. They will run to completion, then exit, and the kubelet will restart them, accumulating more and more runs of the test.
-- 
cgit v1.2.3


From d43894cdce090482de0d25f9510603c9d806870c Mon Sep 17 00:00:00 2001
From: Daniel Smith
Date: Thu, 16 Jul 2015 14:54:28 -0700
Subject: (mostly) auto fixed links

---
 cli-roadmap.md      | 6 +++---
 client-libraries.md | 2 +-
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/cli-roadmap.md b/cli-roadmap.md
index fe8d5b0f..f2b9f8c1 100644
--- a/cli-roadmap.md
+++ b/cli-roadmap.md
@@ -23,9 +23,9 @@ certainly want the docs that go with that version.
 # Kubernetes CLI/Configuration Roadmap
 
 See also issues with the following labels:
-* [area/config-deployment](https://github.com/GoogleCloudPlatform/kubernetes/labels/area%2Fconfig-deployment)
-* [component/CLI](https://github.com/GoogleCloudPlatform/kubernetes/labels/component%2FCLI)
-* [component/client](https://github.com/GoogleCloudPlatform/kubernetes/labels/component%2Fclient)
+* [area/app-config-deployment](https://github.com/GoogleCloudPlatform/kubernetes/labels/area/app-config-deployment)
+* [component/CLI](https://github.com/GoogleCloudPlatform/kubernetes/labels/component/CLI)
+* [component/client](https://github.com/GoogleCloudPlatform/kubernetes/labels/component/client)
 
 1. Create services before other objects, or at least before objects that depend upon them. Namespace-relative DNS mitigates this some, but most users are still using service environment variables. [#1768](https://github.com/GoogleCloudPlatform/kubernetes/issues/1768)
 1. Finish rolling update [#1353](https://github.com/GoogleCloudPlatform/kubernetes/issues/1353)
diff --git a/client-libraries.md b/client-libraries.md
index b7529a01..ef9a1f69 100644
--- a/client-libraries.md
+++ b/client-libraries.md
@@ -23,7 +23,7 @@
 ## kubernetes API client libraries
 
 ### Supported
- * [Go](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/pkg/client)
+ * [Go](../../pkg/client/)
 
 ### User Contributed
 *Note: Libraries provided by outside parties are supported by their authors, not the core Kubernetes team*
-- 
cgit v1.2.3


From a7425fa6891a042de69331dba36282276084edeb Mon Sep 17 00:00:00 2001
From: Janet Kuo
Date: Wed, 15 Jul 2015 17:28:59 -0700
Subject: Ensure all docs and examples in user guide are reachable

---
 scheduler_algorithm.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/scheduler_algorithm.md b/scheduler_algorithm.md
index fc402516..146c0190 100644
--- a/scheduler_algorithm.md
+++ b/scheduler_algorithm.md
@@ -31,7 +31,7 @@ The purpose of filtering the nodes is to filter out the nodes that do not meet c
 - `PodFitsResources`: Check if the free resource (CPU and Memory) meets the requirement of the Pod. The free resource is measured by the capacity minus the sum of limits of all Pods on the node.
 - `PodFitsPorts`: Check if any HostPort required by the Pod is already occupied on the node.
 - `PodFitsHost`: Filter out all nodes except the one specified in the PodSpec's NodeName field.
-- `PodSelectorMatches`: Check if the labels of the node match the labels specified in the Pod's `nodeSelector` field.
+- `PodSelectorMatches`: Check if the labels of the node match the labels specified in the Pod's `nodeSelector` field ([Here](../user-guide/node-selection/) is an example of how to use `nodeSelector` field).
 - `CheckNodeLabelPresence`: Check if all the specified labels exist on a node or not, regardless of the value.
 
 The details of the above predicates can be found in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](../../plugin/pkg/scheduler/algorithm/predicates/predicates.go). All predicates mentioned above can be used in combination to perform a sophisticated filtering policy. Kubernetes uses some, but not all, of these predicates by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](../../plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go).
-- 
cgit v1.2.3


From 7c7cbb2a44d2b0acf06ad8256ab404cdae509894 Mon Sep 17 00:00:00 2001
From: Janet Kuo
Date: Thu, 16 Jul 2015 17:56:56 -0700
Subject: MUNGE generated table of contents should strip comma

---
 api-conventions.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/api-conventions.md b/api-conventions.md
index 4a0cfccb..3014d0cb 100644
--- a/api-conventions.md
+++ b/api-conventions.md
@@ -31,7 +31,7 @@ using resources with kubectl can be found in (working_with_resources.md).*
 
 **Table of Contents**
 
-  - [Types (Kinds)](#types-(kinds))
+  - [Types (Kinds)](#types-kinds)
   - [Resources](#resources)
   - [Objects](#objects)
   - [Metadata](#metadata)
-- 
cgit v1.2.3


From 6a198dfa61ce281f5d092a9c2576c46bb01a1482 Mon Sep 17 00:00:00 2001
From: Tim Hockin
Date: Thu, 16 Jul 2015 10:02:26 -0700
Subject: Better scary message

---
 README.md                          | 38 ++++++++++++++++++++++++--------------
 api-conventions.md                 | 38 ++++++++++++++++++++++++--------------
 api_changes.md                     | 38 ++++++++++++++++++++++++--------------
 cherry-picks.md                    | 38 ++++++++++++++++++++++++--------------
 cli-roadmap.md                     | 32 +++++++++++++++++++++-----------
 client-libraries.md                | 38 ++++++++++++++++++++++++--------------
 coding-conventions.md              | 38 ++++++++++++++++++++++++--------------
 collab.md                          | 38 ++++++++++++++++++++++++--------------
 developer-guides/vagrant.md        | 38 ++++++++++++++++++++++++--------------
 development.md                     | 38 ++++++++++++++++++++++++--------------
 faster_reviews.md                  | 38 ++++++++++++++++++++++++--------------
 flaky-tests.md                     | 38 ++++++++++++++++++++++++--------------
 getting-builds.md                  | 38 ++++++++++++++++++++++++--------------
 instrumentation.md                 | 38 ++++++++++++++++++++++++--------------
 issues.md                          | 38 ++++++++++++++++++++++++--------------
 logging.md                         | 38 ++++++++++++++++++++++++--------------
making-release-notes.md | 38 ++++++++++++++++++++++++-------------- profiling.md | 38 ++++++++++++++++++++++++-------------- pull-requests.md | 38 ++++++++++++++++++++++++-------------- releasing.md | 38 ++++++++++++++++++++++++-------------- scheduler.md | 38 ++++++++++++++++++++++++-------------- scheduler_algorithm.md | 32 +++++++++++++++++++++----------- writing-a-getting-started-guide.md | 38 ++++++++++++++++++++++++-------------- 23 files changed, 546 insertions(+), 316 deletions(-) diff --git a/README.md b/README.md index aed7276d..a06efc8d 100644 --- a/README.md +++ b/README.md @@ -2,20 +2,30 @@ -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) - -
PLEASE NOTE: This document applies to the HEAD of the source
-tree only. If you are using a released version of Kubernetes, you almost
-certainly want the docs that go with that version.
- -Documentation for specific releases can be found at -[releases.k8s.io](http://releases.k8s.io). - -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + +The latest 1.0.x release of this document can be found +[here](http://releases.k8s.io/release-1.0/docs/devel/README.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- diff --git a/api-conventions.md b/api-conventions.md index 3014d0cb..50e1e1d2 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -2,20 +2,30 @@ -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) - -
PLEASE NOTE: This document applies to the HEAD of the source
-tree only. If you are using a released version of Kubernetes, you almost
-certainly want the docs that go with that version.
- -Documentation for specific releases can be found at -[releases.k8s.io](http://releases.k8s.io). - -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + +The latest 1.0.x release of this document can be found +[here](http://releases.k8s.io/release-1.0/docs/devel/api-conventions.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- diff --git a/api_changes.md b/api_changes.md index 2d571eb5..c7458d53 100644 --- a/api_changes.md +++ b/api_changes.md @@ -2,20 +2,30 @@ -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) - -
PLEASE NOTE: This document applies to the HEAD of the source
-tree only. If you are using a released version of Kubernetes, you almost
-certainly want the docs that go with that version.
- -Documentation for specific releases can be found at -[releases.k8s.io](http://releases.k8s.io). - -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + +The latest 1.0.x release of this document can be found +[here](http://releases.k8s.io/release-1.0/docs/devel/api_changes.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- diff --git a/cherry-picks.md b/cherry-picks.md index b971f2fc..1d59eaef 100644 --- a/cherry-picks.md +++ b/cherry-picks.md @@ -2,20 +2,30 @@ -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) - -
PLEASE NOTE: This document applies to the HEAD of the source
-tree only. If you are using a released version of Kubernetes, you almost
-certainly want the docs that go with that version.
- -Documentation for specific releases can be found at -[releases.k8s.io](http://releases.k8s.io). - -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + +The latest 1.0.x release of this document can be found +[here](http://releases.k8s.io/release-1.0/docs/devel/cherry-picks.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- diff --git a/cli-roadmap.md b/cli-roadmap.md index f2b9f8c1..45c26827 100644 --- a/cli-roadmap.md +++ b/cli-roadmap.md @@ -2,20 +2,30 @@ -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) +WARNING +WARNING +WARNING +WARNING +WARNING -
PLEASE NOTE: This document applies to the HEAD of the source
-tree only. If you are using a released version of Kubernetes, you almost
-certainly want the docs that go with that version.
+
PLEASE NOTE: This document applies to the HEAD of the source tree
-Documentation for specific releases can be found at -[releases.k8s.io](http://releases.k8s.io). +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) + +The latest 1.0.x release of this document can be found +[here](http://releases.k8s.io/release-1.0/docs/devel/cli-roadmap.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- diff --git a/client-libraries.md b/client-libraries.md index ef9a1f69..ae7cb623 100644 --- a/client-libraries.md +++ b/client-libraries.md @@ -2,20 +2,30 @@ -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) - -
PLEASE NOTE: This document applies to the HEAD of the source
-tree only. If you are using a released version of Kubernetes, you almost
-certainly want the docs that go with that version.
- -Documentation for specific releases can be found at -[releases.k8s.io](http://releases.k8s.io). - -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + +The latest 1.0.x release of this document can be found +[here](http://releases.k8s.io/release-1.0/docs/devel/client-libraries.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- diff --git a/coding-conventions.md b/coding-conventions.md index 76ba29e8..ac3d353f 100644 --- a/coding-conventions.md +++ b/coding-conventions.md @@ -2,20 +2,30 @@ -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) - -
PLEASE NOTE: This document applies to the HEAD of the source
-tree only. If you are using a released version of Kubernetes, you almost
-certainly want the docs that go with that version.
- -Documentation for specific releases can be found at -[releases.k8s.io](http://releases.k8s.io). - -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + +The latest 1.0.x release of this document can be found +[here](http://releases.k8s.io/release-1.0/docs/devel/coding-conventions.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- diff --git a/collab.md b/collab.md index caadc8de..38b6d586 100644 --- a/collab.md +++ b/collab.md @@ -2,20 +2,30 @@ -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) - -
PLEASE NOTE: This document applies to the HEAD of the source
-tree only. If you are using a released version of Kubernetes, you almost
-certainly want the docs that go with that version.
- -Documentation for specific releases can be found at -[releases.k8s.io](http://releases.k8s.io). - -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + +The latest 1.0.x release of this document can be found +[here](http://releases.k8s.io/release-1.0/docs/devel/collab.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md index 0ef31c68..5b4013e3 100644 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -2,20 +2,30 @@ -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) - -
PLEASE NOTE: This document applies to the HEAD of the source
-tree only. If you are using a released version of Kubernetes, you almost
-certainly want the docs that go with that version.
- -Documentation for specific releases can be found at -[releases.k8s.io](http://releases.k8s.io). - -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + +The latest 1.0.x release of this document can be found +[here](http://releases.k8s.io/release-1.0/docs/devel/developer-guides/vagrant.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- diff --git a/development.md b/development.md index e2ec2068..1255b7a8 100644 --- a/development.md +++ b/development.md @@ -2,20 +2,30 @@ -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) - -
PLEASE NOTE: This document applies to the HEAD of the source
-tree only. If you are using a released version of Kubernetes, you almost
-certainly want the docs that go with that version.
- -Documentation for specific releases can be found at -[releases.k8s.io](http://releases.k8s.io). - -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + +The latest 1.0.x release of this document can be found +[here](http://releases.k8s.io/release-1.0/docs/devel/development.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- diff --git a/faster_reviews.md b/faster_reviews.md index 335d2a3e..20e3e990 100644 --- a/faster_reviews.md +++ b/faster_reviews.md @@ -2,20 +2,30 @@ -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) - -
PLEASE NOTE: This document applies to the HEAD of the source
-tree only. If you are using a released version of Kubernetes, you almost
-certainly want the docs that go with that version.
- -Documentation for specific releases can be found at -[releases.k8s.io](http://releases.k8s.io). - -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + +The latest 1.0.x release of this document can be found +[here](http://releases.k8s.io/release-1.0/docs/devel/faster_reviews.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- diff --git a/flaky-tests.md b/flaky-tests.md index 86c898d9..52ba45a2 100644 --- a/flaky-tests.md +++ b/flaky-tests.md @@ -2,20 +2,30 @@ -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) - -
PLEASE NOTE: This document applies to the HEAD of the source
-tree only. If you are using a released version of Kubernetes, you almost
-certainly want the docs that go with that version.
- -Documentation for specific releases can be found at -[releases.k8s.io](http://releases.k8s.io). - -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + +The latest 1.0.x release of this document can be found +[here](http://releases.k8s.io/release-1.0/docs/devel/flaky-tests.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- diff --git a/getting-builds.md b/getting-builds.md index 372d080d..e41c4fbf 100644 --- a/getting-builds.md +++ b/getting-builds.md @@ -2,20 +2,30 @@ -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) - -
PLEASE NOTE: This document applies to the HEAD of the source
-tree only. If you are using a released version of Kubernetes, you almost
-certainly want the docs that go with that version.
- -Documentation for specific releases can be found at -[releases.k8s.io](http://releases.k8s.io). - -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + +The latest 1.0.x release of this document can be found +[here](http://releases.k8s.io/release-1.0/docs/devel/getting-builds.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- diff --git a/instrumentation.md b/instrumentation.md index 95786c52..8cc9e2b2 100644 --- a/instrumentation.md +++ b/instrumentation.md @@ -2,20 +2,30 @@ -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) - -
PLEASE NOTE: This document applies to the HEAD of the source
-tree only. If you are using a released version of Kubernetes, you almost
-certainly want the docs that go with that version.
- -Documentation for specific releases can be found at -[releases.k8s.io](http://releases.k8s.io). - -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + +The latest 1.0.x release of this document can be found +[here](http://releases.k8s.io/release-1.0/docs/devel/instrumentation.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- diff --git a/issues.md b/issues.md index 689a18ff..46beb9ce 100644 --- a/issues.md +++ b/issues.md @@ -2,20 +2,30 @@ -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) - -
PLEASE NOTE: This document applies to the HEAD of the source
-tree only. If you are using a released version of Kubernetes, you almost
-certainly want the docs that go with that version.
- -Documentation for specific releases can be found at -[releases.k8s.io](http://releases.k8s.io). - -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + +The latest 1.0.x release of this document can be found +[here](http://releases.k8s.io/release-1.0/docs/devel/issues.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- diff --git a/logging.md b/logging.md index 1a536d07..3870c4c3 100644 --- a/logging.md +++ b/logging.md @@ -2,20 +2,30 @@ -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) - -
PLEASE NOTE: This document applies to the HEAD of the source
-tree only. If you are using a released version of Kubernetes, you almost
-certainly want the docs that go with that version.
- -Documentation for specific releases can be found at -[releases.k8s.io](http://releases.k8s.io). - -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + +The latest 1.0.x release of this document can be found +[here](http://releases.k8s.io/release-1.0/docs/devel/logging.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- diff --git a/making-release-notes.md b/making-release-notes.md index 5703965a..b362d857 100644 --- a/making-release-notes.md +++ b/making-release-notes.md @@ -2,20 +2,30 @@ -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) - -
PLEASE NOTE: This document applies to the HEAD of the source
-tree only. If you are using a released version of Kubernetes, you almost
-certainly want the docs that go with that version.
- -Documentation for specific releases can be found at -[releases.k8s.io](http://releases.k8s.io). - -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + +The latest 1.0.x release of this document can be found +[here](http://releases.k8s.io/release-1.0/docs/devel/making-release-notes.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- diff --git a/profiling.md b/profiling.md index 863dc4c1..215f0c41 100644 --- a/profiling.md +++ b/profiling.md @@ -2,20 +2,30 @@ -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) - -
PLEASE NOTE: This document applies to the HEAD of the source
-tree only. If you are using a released version of Kubernetes, you almost
-certainly want the docs that go with that version.
- -Documentation for specific releases can be found at -[releases.k8s.io](http://releases.k8s.io). - -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + +The latest 1.0.x release of this document can be found +[here](http://releases.k8s.io/release-1.0/docs/devel/profiling.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- diff --git a/pull-requests.md b/pull-requests.md index bdb7a172..e42faa51 100644 --- a/pull-requests.md +++ b/pull-requests.md @@ -2,20 +2,30 @@ -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) - -
PLEASE NOTE: This document applies to the HEAD of the source
-tree only. If you are using a released version of Kubernetes, you almost
-certainly want the docs that go with that version.
- -Documentation for specific releases can be found at -[releases.k8s.io](http://releases.k8s.io). - -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + +The latest 1.0.x release of this document can be found +[here](http://releases.k8s.io/release-1.0/docs/devel/pull-requests.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- diff --git a/releasing.md b/releasing.md index 484620f0..8469fc40 100644 --- a/releasing.md +++ b/releasing.md @@ -2,20 +2,30 @@ -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) - -
PLEASE NOTE: This document applies to the HEAD of the source
-tree only. If you are using a released version of Kubernetes, you almost
-certainly want the docs that go with that version.
- -Documentation for specific releases can be found at -[releases.k8s.io](http://releases.k8s.io). - -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + +The latest 1.0.x release of this document can be found +[here](http://releases.k8s.io/release-1.0/docs/devel/releasing.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- diff --git a/scheduler.md b/scheduler.md index 912d1128..1fccc7ad 100644 --- a/scheduler.md +++ b/scheduler.md @@ -2,20 +2,30 @@ -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) - -
PLEASE NOTE: This document applies to the HEAD of the source
-tree only. If you are using a released version of Kubernetes, you almost
-certainly want the docs that go with that version.
- -Documentation for specific releases can be found at -[releases.k8s.io](http://releases.k8s.io). - -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + +The latest 1.0.x release of this document can be found +[here](http://releases.k8s.io/release-1.0/docs/devel/scheduler.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- diff --git a/scheduler_algorithm.md b/scheduler_algorithm.md index 146c0190..791de7c4 100644 --- a/scheduler_algorithm.md +++ b/scheduler_algorithm.md @@ -2,20 +2,30 @@ -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) +WARNING +WARNING +WARNING +WARNING +WARNING -
PLEASE NOTE: This document applies to the HEAD of the source
-tree only. If you are using a released version of Kubernetes, you almost
-certainly want the docs that go with that version.
+
PLEASE NOTE: This document applies to the HEAD of the source tree
-Documentation for specific releases can be found at -[releases.k8s.io](http://releases.k8s.io). +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) + +The latest 1.0.x release of this document can be found +[here](http://releases.k8s.io/release-1.0/docs/devel/scheduler_algorithm.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- diff --git a/writing-a-getting-started-guide.md b/writing-a-getting-started-guide.md index d7463a4c..3e67b632 100644 --- a/writing-a-getting-started-guide.md +++ b/writing-a-getting-started-guide.md @@ -2,20 +2,30 @@ -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) - -
PLEASE NOTE: This document applies to the HEAD of the source
-tree only. If you are using a released version of Kubernetes, you almost
-certainly want the docs that go with that version.
- -Documentation for specific releases can be found at -[releases.k8s.io](http://releases.k8s.io). - -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) -![WARNING](http://kubernetes.io/img/warning.png) +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + +The latest 1.0.x release of this document can be found +[here](http://releases.k8s.io/release-1.0/docs/devel/writing-a-getting-started-guide.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- -- cgit v1.2.3 From c4d505d98dd2d215e0b52d8d0332bce5d9be11e0 Mon Sep 17 00:00:00 2001 From: David Oppenheimer Date: Fri, 17 Jul 2015 10:12:08 -0700 Subject: Various minor edits/clarifications to docs/admin/ docs. Deleted docs/admin/namespaces.md as it was content-free and the topic is already covered well in docs/user-guide/namespaces.md --- api-conventions.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/api-conventions.md b/api-conventions.md index 50e1e1d2..9f362097 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -138,7 +138,7 @@ These fields are required for proper decoding of the object. They may be populat Every object kind MUST have the following metadata in a nested object field called "metadata": -* namespace: a namespace is a DNS compatible subdomain that objects are subdivided into. The default namespace is 'default'. See [docs/admin/namespaces.md](../admin/namespaces.md) for more. +* namespace: a namespace is a DNS compatible subdomain that objects are subdivided into. The default namespace is 'default'. See [docs/user-guide/namespaces.md](../user-guide/namespaces.md) for more. * name: a string that uniquely identifies this object within the current namespace (see [docs/user-guide/identifiers.md](../user-guide/identifiers.md)). This value is used in the path when retrieving an individual object. 
* uid: a unique in time and space value (typically an RFC 4122 generated identifier, see [docs/user-guide/identifiers.md](../user-guide/identifiers.md)) used to distinguish between objects with the same name that have been deleted and recreated -- cgit v1.2.3 From 35f2829ae014c08b847b59ce06a205cc3fbb8770 Mon Sep 17 00:00:00 2001 From: Daniel Smith Date: Thu, 16 Jul 2015 19:01:02 -0700 Subject: apply changes --- api-conventions.md | 4 ++++ api_changes.md | 1 + developer-guides/vagrant.md | 5 +++++ development.md | 11 +++++++++++ flaky-tests.md | 2 ++ getting-builds.md | 1 + making-release-notes.md | 1 + profiling.md | 6 ++++++ releasing.md | 2 ++ 9 files changed, 33 insertions(+) diff --git a/api-conventions.md b/api-conventions.md index 50e1e1d2..f3c9ed6b 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -196,12 +196,15 @@ References in the status of the referee to the referrer may be permitted, when t Discussed in [#2004](https://github.com/GoogleCloudPlatform/kubernetes/issues/2004) and elsewhere. There are no maps of subobjects in any API objects. Instead, the convention is to use a list of subobjects containing name fields. For example: + ```yaml ports: - name: www containerPort: 80 ``` + vs. + ```yaml ports: www: @@ -518,6 +521,7 @@ A ```Status``` kind will be returned by the API in two cases: The status object is encoded as JSON and provided as the body of the response. The status object contains fields for humans and machine consumers of the API to get more detailed information for the cause of the failure. The information in the status object supplements, but does not override, the HTTP status code's meaning. When fields in the status object have the same meaning as generally defined HTTP headers and that header is returned with the response, the header should be considered as having higher priority. 
**Example:** + ``` $ curl -v -k -H "Authorization: Bearer WhCDvq4VPpYhrcfmF6ei7V9qlbqTubUc" https://10.240.122.184:443/api/v1/namespaces/default/pods/grafana diff --git a/api_changes.md b/api_changes.md index c7458d53..edf227cc 100644 --- a/api_changes.md +++ b/api_changes.md @@ -282,6 +282,7 @@ conversion functions when writing your conversion functions. Once all the necessary manually written conversions are added, you need to regenerate auto-generated ones. To regenerate them: - run + ``` $ hack/update-generated-conversions.sh ``` diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md index 5b4013e3..2b6fcc42 100644 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -83,6 +83,7 @@ vagrant ssh minion-3 ``` To view the service status and/or logs on the kubernetes-master: + ```sh vagrant ssh master [vagrant@kubernetes-master ~] $ sudo systemctl status kube-apiserver @@ -96,6 +97,7 @@ vagrant ssh master ``` To view the services on any of the nodes: + ```sh vagrant ssh minion-1 [vagrant@kubernetes-minion-1] $ sudo systemctl status docker @@ -109,17 +111,20 @@ vagrant ssh minion-1 With your Kubernetes cluster up, you can manage the nodes in your cluster with the regular Vagrant commands. To push updates to new Kubernetes code after making source changes: + ```sh ./cluster/kube-push.sh ``` To stop and then restart the cluster: + ```sh vagrant halt ./cluster/kube-up.sh ``` To destroy the cluster: + ```sh vagrant destroy ``` diff --git a/development.md b/development.md index 1255b7a8..e258f841 100644 --- a/development.md +++ b/development.md @@ -109,6 +109,7 @@ source control system). Use ```apt-get install mercurial``` or ```yum install m directly from mercurial. 2) Create a new GOPATH for your tools and install godep: + ``` export GOPATH=$HOME/go-tools mkdir -p $GOPATH @@ -116,6 +117,7 @@ go get github.com/tools/godep ``` 3) Add $GOPATH/bin to your path. 
Typically you'd add this to your ~/.profile: + ``` export GOPATH=$HOME/go-tools export PATH=$PATH:$GOPATH/bin @@ -125,6 +127,7 @@ export PATH=$PATH:$GOPATH/bin Here's a quick walkthrough of one way to use godep to add or update a Kubernetes dependency into Godeps/_workspace. For more details, please see the instructions in [godep's documentation](https://github.com/tools/godep). 1) Devote a directory to this endeavor: + ``` export KPATH=$HOME/code/kubernetes mkdir -p $KPATH/src/github.com/GoogleCloudPlatform/kubernetes @@ -134,6 +137,7 @@ git clone https://path/to/your/fork . ``` 2) Set up your GOPATH. + ``` # Option A: this will let your builds see packages that exist elsewhere on your system. export GOPATH=$KPATH:$GOPATH @@ -143,12 +147,14 @@ export GOPATH=$KPATH ``` 3) Populate your new GOPATH. + ``` cd $KPATH/src/github.com/GoogleCloudPlatform/kubernetes godep restore ``` 4) Next, you can either add a new dependency or update an existing one. + ``` # To add a new dependency, do: cd $KPATH/src/github.com/GoogleCloudPlatform/kubernetes @@ -218,6 +224,7 @@ KUBE_COVER=y hack/test-go.sh At the end of the run, the HTML report will be generated with its path printed to stdout. To run tests and collect coverage in only one package, pass its relative path under the `kubernetes` directory as an argument, for example: + ``` cd kubernetes KUBE_COVER=y hack/test-go.sh pkg/kubectl @@ -230,6 +237,7 @@ Coverage results for the project can also be viewed on [Coveralls](https://cover ## Integration tests You need [etcd](https://github.com/coreos/etcd/releases/tag/v2.0.0) installed and in your ``$PATH``. + ``` cd kubernetes hack/test-integration.sh @@ -238,12 +246,14 @@ hack/test-integration.sh ## End-to-End tests You can run an end-to-end test, which will bring up a master and two nodes, perform some tests, and then tear everything down.
Make sure you have followed the getting started steps for your chosen cloud platform (which might involve changing the `KUBERNETES_PROVIDER` environment variable to something other than "gce"). + ``` cd kubernetes hack/e2e-test.sh ``` Pressing control-C should result in an orderly shutdown, but if something goes wrong and you still have some VMs running, you can force a cleanup with this command: + ``` go run hack/e2e.go --down ``` @@ -281,6 +291,7 @@ hack/ginkgo-e2e.sh --ginkgo.focus=Pods.*env ``` ### Combining flags + ```sh # Flags can be combined, and their actions will take place in this order: # -build, -push|-up|-pushup, -test|-tests=..., -down diff --git a/flaky-tests.md b/flaky-tests.md index 52ba45a2..0fbf643c 100644 --- a/flaky-tests.md +++ b/flaky-tests.md @@ -42,6 +42,7 @@ _Note: these instructions are mildly hacky for now, as we get run once semantics There is a testing image ```brendanburns/flake``` up on the docker hub. We will use this image to test our fix. Create a replication controller with the following config: + ```yaml apiVersion: v1 kind: ReplicationController @@ -63,6 +64,7 @@ spec: - name: REPO_SPEC value: https://github.com/GoogleCloudPlatform/kubernetes ``` + Note that we omit the labels and the selector fields of the replication controller, because they will be populated from the labels field of the pod template by default. ``` diff --git a/getting-builds.md b/getting-builds.md index e41c4fbf..f59a753b 100644 --- a/getting-builds.md +++ b/getting-builds.md @@ -45,6 +45,7 @@ usage: ``` You can also use the gsutil tool to explore the Google Cloud Storage release bucket.
Here are some examples: + ``` gsutil cat gs://kubernetes-release/ci/latest.txt # output the latest ci version number gsutil cat gs://kubernetes-release/ci/latest-green.txt # output the latest ci version number that passed gce e2e diff --git a/making-release-notes.md b/making-release-notes.md index b362d857..343b9203 100644 --- a/making-release-notes.md +++ b/making-release-notes.md @@ -40,6 +40,7 @@ _TODO_: Figure out a way to record this somewhere to save the next release engin Find the most-recent PR that was merged with the current .0 release. Remember this as $CURRENTPR. ### 2) Run the release-notes tool + ```bash ${KUBERNETES_ROOT}/build/make-release-notes.sh $LASTPR $CURRENTPR ``` diff --git a/profiling.md b/profiling.md index 215f0c41..fbb54c9f 100644 --- a/profiling.md +++ b/profiling.md @@ -41,24 +41,30 @@ Go comes with inbuilt 'net/http/pprof' profiling library and profiling web servi ## Adding profiling to the APIserver. TL;DR: Add lines: + ``` m.mux.HandleFunc("/debug/pprof/", pprof.Index) m.mux.HandleFunc("/debug/pprof/profile", pprof.Profile) m.mux.HandleFunc("/debug/pprof/symbol", pprof.Symbol) ``` + to the init(c *Config) method in 'pkg/master/master.go' and import the 'net/http/pprof' package. In most use cases it's enough to do 'import _ net/http/pprof', which automatically registers a handler in the default http.Server. A slight inconvenience is that the APIserver uses the default server for intra-cluster communication, so plugging the profiler into it is not really useful. In 'pkg/master/server/server.go', more servers are created and started as separate goroutines. The one that usually serves external traffic is secureServer. The handler for this traffic is defined in 'pkg/master/master.go' and stored in the Handler variable. It is created from an HTTP multiplexer, so the only thing that needs to be done is adding the profiler handler functions to this multiplexer. This is exactly what the lines after the TL;DR do.
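As an aside, the registration pattern from the profiling TL;DR above can be sketched as a standalone program. The mux here is a plain `http.ServeMux` standing in for the apiserver's multiplexer, and `newDebugMux`/`debugStatus` are names invented for this illustration, not real Kubernetes code:

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"net/http/pprof"
)

// newDebugMux registers the pprof handlers on an explicit mux, mirroring
// the TL;DR above. Relying on a blank import of net/http/pprof would only
// register them on http.DefaultServeMux, which the apiserver does not use
// for external traffic.
func newDebugMux() *http.ServeMux {
	mux := http.NewServeMux()
	mux.HandleFunc("/debug/pprof/", pprof.Index)
	mux.HandleFunc("/debug/pprof/profile", pprof.Profile)
	mux.HandleFunc("/debug/pprof/symbol", pprof.Symbol)
	return mux
}

// debugStatus smoke-tests the wiring with an in-process server and
// returns the HTTP status of the pprof index page.
func debugStatus() int {
	srv := httptest.NewServer(newDebugMux())
	defer srv.Close()
	resp, err := http.Get(srv.URL + "/debug/pprof/")
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	return resp.StatusCode
}

func main() {
	fmt.Println("pprof index status:", debugStatus()) // prints 200
}
```

Running it should report status 200 for the pprof index, confirming the handlers ended up on the explicit mux rather than the default one.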
## Connecting to the profiler Even with the profiler running, it is not entirely straightforward to use 'go tool pprof' with it. The problem is that, at least for dev purposes, the certificates generated for the APIserver are not signed by anyone trusted, and because secureServer serves only secure traffic, it isn't straightforward to connect to the service. The best workaround I found is to create an ssh tunnel from the open unsecured port on the kubernetes_master to some external server, and use that server as a proxy. To save everyone a hunt for the correct ssh flags, it is done by running: + ``` ssh kubernetes_master -L:localhost:8080 ``` + or an analogous command for your cloud provider. Afterwards you can e.g. run + ``` go tool pprof http://localhost:/debug/pprof/profile ``` + to get a 30-second CPU profile. ## Contention profiling diff --git a/releasing.md b/releasing.md index 8469fc40..8b1a661c 100644 --- a/releasing.md +++ b/releasing.md @@ -78,9 +78,11 @@ and you're trying to cut a release, don't hesitate to contact the GKE oncall. Before proceeding to the next step: + ``` export BRANCHPOINT=v0.20.2-322-g974377b ``` + Where `v0.20.2-322-g974377b` is the git hash you decided on. This will become our (retroactive) branch point.
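To make the branch-point idea above concrete, here is a hedged, self-contained sketch using a throwaway repository. The branch name `release-0.20` is illustrative only; the authoritative commands are in releasing.md's Branching, Tagging and Merging section:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Toy repository standing in for a real kubernetes checkout.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=dev -c user.email=dev@example.com \
  commit -q --allow-empty -m "work"

# Pin the chosen commit as the branch point, then cut a release
# branch at exactly that commit.
BRANCHPOINT=$(git rev-parse HEAD)
git branch release-0.20 "$BRANCHPOINT"

# Later work on the main branch no longer moves the release branch.
git -c user.name=dev -c user.email=dev@example.com \
  commit -q --allow-empty -m "later work"
echo "release-0.20 is at $(git rev-parse --short release-0.20)"
```

The point of exporting `BRANCHPOINT` first is exactly this: the release branch is anchored to a known hash, even though it is created retroactively while development continues.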
-- cgit v1.2.3 From e1a268be8375b68f4b4a1d546d2538fcdaa33da1 Mon Sep 17 00:00:00 2001 From: Tim Hockin Date: Fri, 17 Jul 2015 09:20:19 -0700 Subject: Make TOC munge include blank line before TOC --- api-conventions.md | 1 + 1 file changed, 1 insertion(+) diff --git a/api-conventions.md b/api-conventions.md index 323cde41..271efed4 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -41,6 +41,7 @@ using resources with kubectl can be found in (working_with_resources.md).* **Table of Contents** + - [Types (Kinds)](#types-kinds) - [Resources](#resources) - [Objects](#objects) -- cgit v1.2.3 From da3e5f056b57f17ae5234085a00e792adaa02d57 Mon Sep 17 00:00:00 2001 From: Tim Hockin Date: Fri, 17 Jul 2015 15:35:41 -0700 Subject: Run gendocs --- README.md | 1 + api-conventions.md | 1 + api_changes.md | 2 ++ cherry-picks.md | 1 + cli-roadmap.md | 1 + client-libraries.md | 3 +++ collab.md | 1 + developer-guides/vagrant.md | 3 +++ development.md | 8 ++++++++ faster_reviews.md | 1 + flaky-tests.md | 2 ++ getting-builds.md | 1 + making-release-notes.md | 6 ++++++ profiling.md | 2 ++ releasing.md | 2 ++ scheduler_algorithm.md | 2 ++ writing-a-getting-started-guide.md | 4 ++++ 17 files changed, 41 insertions(+) diff --git a/README.md b/README.md index a06efc8d..9a73d949 100644 --- a/README.md +++ b/README.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Kubernetes Developer Guide The developer guide is for anyone wanting to either write code which directly accesses the diff --git a/api-conventions.md b/api-conventions.md index 323cde41..7f46d5be 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -455,6 +455,7 @@ The following HTTP status codes may be returned by the API. * Returned in response to HTTP OPTIONS requests. #### Error codes + * `307 StatusTemporaryRedirect` * Indicates that the address for the requested resource has changed. 
* Suggested client recovery behavior diff --git a/api_changes.md b/api_changes.md index edf227cc..7a0418e8 100644 --- a/api_changes.md +++ b/api_changes.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # So you want to change the API? The Kubernetes API has two major components - the internal structures and @@ -365,6 +366,7 @@ $ hack/update-swagger-spec.sh The API spec changes should be in a commit separate from your other changes. ## Incompatible API changes + If your change is going to be backward incompatible or might be a breaking change for API consumers, please send an announcement to `kubernetes-dev@googlegroups.com` before the change gets in. If you are unsure, ask. Also make sure that the change gets documented in diff --git a/cherry-picks.md b/cherry-picks.md index 1d59eaef..7ed63d08 100644 --- a/cherry-picks.md +++ b/cherry-picks.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Overview This document explains how cherry picks are managed on release branches within the diff --git a/cli-roadmap.md b/cli-roadmap.md index 45c26827..00b454fa 100644 --- a/cli-roadmap.md +++ b/cli-roadmap.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Kubernetes CLI/Configuration Roadmap See also issues with the following labels: diff --git a/client-libraries.md b/client-libraries.md index ae7cb623..69cba1e6 100644 --- a/client-libraries.md +++ b/client-libraries.md @@ -30,12 +30,15 @@ Documentation for other releases can be found at + ## kubernetes API client libraries ### Supported + * [Go](../../pkg/client/) ### User Contributed + *Note: Libraries provided by outside parties are supported by their authors, not the core Kubernetes team* * [Java (OSGI)](https://bitbucket.org/amdatulabs/amdatu-kubernetes) diff --git a/collab.md b/collab.md index 38b6d586..96db64c8 100644 --- a/collab.md +++ b/collab.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # On Collaborative Development
Kubernetes is open source, but many of the people working on it do so as their day job. In order to avoid forcing people to be "at work" effectively 24/7, we want to establish some semi-formal protocols around development. Hopefully these rules make things go more smoothly. If you find that this is not the case, please complain loudly. diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md index 2b6fcc42..e704bf3b 100644 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -30,11 +30,13 @@ Documentation for other releases can be found at + ## Getting started with Vagrant Running kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/develop on your local machine (Linux, Mac OS X). ### Prerequisites + 1. Install latest version >= 1.6.2 of vagrant from http://www.vagrantup.com/downloads.html 2. Install one of: 1. The latest version of Virtual Box from https://www.virtualbox.org/wiki/Downloads @@ -371,6 +373,7 @@ export KUBERNETES_MINION_MEMORY=2048 ``` #### I ran vagrant suspend and nothing works! + ```vagrant suspend``` seems to mess up the network. It's not supported at this time. diff --git a/development.md b/development.md index e258f841..6822ab5e 100644 --- a/development.md +++ b/development.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Development Guide # Releases and Official Builds @@ -45,6 +46,7 @@ Kubernetes is written in [Go](http://golang.org) programming language. If you ha Below, we outline one of the more common git workflows that core developers use. Other git workflows are also valid. ### Visual overview + ![Git workflow](git_workflow.png) ### Fork the main repository @@ -93,6 +95,7 @@ $ git push -f origin myfeature ``` ### Creating a pull request + 1. Visit http://github.com/$YOUR_GITHUB_USERNAME/kubernetes 2. Click the "Compare and pull request" button next to your "myfeature" branch. 
@@ -102,6 +105,7 @@ $ git push -f origin myfeature Kubernetes uses [godep](https://github.com/tools/godep) to manage dependencies. It is not strictly required for building Kubernetes but it is required when managing dependencies under the Godeps/ tree, and is required by a number of the build and test scripts. Please make sure that ``godep`` is installed and in your ``$PATH``. ### Installing godep + There are many ways to build and host go binaries. Here is an easy way to get utilities like ```godep``` installed: 1) Ensure that [mercurial](http://mercurial.selenic.com/wiki/Download) is installed on your system. (some of godep's dependencies use the mercurial @@ -124,6 +128,7 @@ export PATH=$PATH:$GOPATH/bin ``` ### Using godep + Here's a quick walkthrough of one way to use godeps to add or update a Kubernetes dependency into Godeps/_workspace. For more details, please see the instructions in [godep's documentation](https://github.com/tools/godep). 1) Devote a directory to this endeavor: @@ -259,6 +264,7 @@ go run hack/e2e.go --down ``` ### Flag options + See the flag definitions in `hack/e2e.go` for more options, such as reusing an existing cluster, here is an overview: ```sh @@ -309,6 +315,7 @@ go run hack/e2e.go -v -ctl='delete pod foobar' ``` ## Conformance testing + End-to-end testing, as described above, is for [development distributions](writing-a-getting-started-guide.md). A conformance test is used on a [versioned distro](writing-a-getting-started-guide.md). @@ -320,6 +327,7 @@ intended to run against a cluster at a specific binary release of Kubernetes. See [conformance-test.sh](../../hack/conformance-test.sh). 
## Testing out flaky tests + [Instructions here](flaky-tests.md) ## Regenerating the CLI documentation diff --git a/faster_reviews.md b/faster_reviews.md index 20e3e990..d28e9b55 100644 --- a/faster_reviews.md +++ b/faster_reviews.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # How to get faster PR reviews Most of what is written here is not at all specific to Kubernetes, but it bears diff --git a/flaky-tests.md b/flaky-tests.md index 0fbf643c..1e7f5fcb 100644 --- a/flaky-tests.md +++ b/flaky-tests.md @@ -30,7 +30,9 @@ Documentation for other releases can be found at + # Hunting flaky tests in Kubernetes + Sometimes unit tests are flaky. This means that due to (usually) race conditions, they will occasionally fail, even though most of the time they pass. We have a goal of 99.9% flake-free tests. This means that there is only one flake in one thousand runs of a test. diff --git a/getting-builds.md b/getting-builds.md index f59a753b..4c92a446 100644 --- a/getting-builds.md +++ b/getting-builds.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Getting Kubernetes Builds You can use [hack/get-build.sh](../../hack/get-build.sh), or use it as a reference for how to get the most recent builds with curl. With `get-build.sh` you can grab the most recent stable build, the most recent release candidate, or the most recent build to pass our ci and gce e2e tests (essentially a nightly build). diff --git a/making-release-notes.md b/making-release-notes.md index 343b9203..d76f7415 100644 --- a/making-release-notes.md +++ b/making-release-notes.md @@ -30,10 +30,13 @@ Documentation for other releases can be found at + ## Making release notes + This documents the process for making release notes for a release. ### 1) Note the PR number of the previous release + Find the most-recent PR that was merged with the previous .0 release. Remember this as $LASTPR.
_TODO_: Figure out a way to record this somewhere to save the next release engineer time. @@ -46,6 +49,7 @@ ${KUBERNETES_ROOT}/build/make-release-notes.sh $LASTPR $CURRENTPR ``` ### 3) Trim the release notes + This generates a list of the entire set of PRs merged since the last minor release. It is likely long and many PRs aren't worth mentioning. If any of the PRs were cherrypicked into patches on the last minor release, you should exclude @@ -57,9 +61,11 @@ Remove, regroup, organize to your heart's content. ### 4) Update CHANGELOG.md + With the final markdown all set, cut and paste it to the top of ```CHANGELOG.md``` ### 5) Update the Release page + * Switch to the [releases](https://github.com/GoogleCloudPlatform/kubernetes/releases) page. * Open up the release you are working on. * Cut and paste the final markdown from above into the release notes diff --git a/profiling.md b/profiling.md index fbb54c9f..d36885dd 100644 --- a/profiling.md +++ b/profiling.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Profiling Kubernetes This document explains how to plug in the profiler and how to profile Kubernetes services. @@ -53,6 +54,7 @@ to the init(c *Config) method in 'pkg/master/master.go' and import 'net/http/ppr In most use cases it's enough to do 'import _ net/http/pprof', which automatically registers a handler in the default http.Server. A slight inconvenience is that the APIserver uses the default server for intra-cluster communication, so plugging the profiler into it is not really useful. In 'pkg/master/server/server.go', more servers are created and started as separate goroutines. The one that usually serves external traffic is secureServer. The handler for this traffic is defined in 'pkg/master/master.go' and stored in the Handler variable. It is created from an HTTP multiplexer, so the only thing that needs to be done is adding the profiler handler functions to this multiplexer. This is exactly what the lines after the TL;DR do.
## Connecting to the profiler + Even with the profiler running, it is not entirely straightforward to use 'go tool pprof' with it. The problem is that, at least for dev purposes, the certificates generated for the APIserver are not signed by anyone trusted, and because secureServer serves only secure traffic, it isn't straightforward to connect to the service. The best workaround I found is to create an ssh tunnel from the open unsecured port on the kubernetes_master to some external server, and use that server as a proxy. To save everyone a hunt for the correct ssh flags, it is done by running: ``` diff --git a/releasing.md b/releasing.md index 8b1a661c..65db081d 100644 --- a/releasing.md +++ b/releasing.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Releasing Kubernetes This document explains how to cut a release, and the theory behind it. If you @@ -87,6 +88,7 @@ Where `v0.20.2-322-g974377b` is the git hash you decided on. This will become our (retroactive) branch point. #### Branching, Tagging and Merging + Do the following: 1. `export VER=x.y` (e.g. `0.20` for v0.20) diff --git a/scheduler_algorithm.md b/scheduler_algorithm.md index 791de7c4..e73e4f27 100644 --- a/scheduler_algorithm.md +++ b/scheduler_algorithm.md @@ -30,11 +30,13 @@ Documentation for other releases can be found at + # Scheduler Algorithm in Kubernetes For each unscheduled Pod, the Kubernetes scheduler tries to find a node across the cluster according to a set of rules. A general introduction to the Kubernetes scheduler can be found at [scheduler.md](scheduler.md). In this document, the algorithm for selecting a node for the Pod is explained. There are two steps before a destination node of a Pod is chosen. The first step is filtering all the nodes and the second is ranking the remaining nodes to find the best fit for the Pod. ## Filtering the nodes + The purpose of filtering the nodes is to filter out the nodes that do not meet certain requirements of the Pod.
For example, if the free resource on a node (measured by the capacity minus the sum of the resource limits of all the Pods that already run on the node) is less than the Pod's required resource, the node should not be considered in the ranking phase so it is filtered out. Currently, there are several "predicates" implementing different filtering policies, including: - `NoDiskConflict`: Evaluate if a pod can fit due to the volumes it requests, and those that are already mounted. diff --git a/writing-a-getting-started-guide.md b/writing-a-getting-started-guide.md index 3e67b632..c22d9204 100644 --- a/writing-a-getting-started-guide.md +++ b/writing-a-getting-started-guide.md @@ -32,6 +32,7 @@ Documentation for other releases can be found at # Writing a Getting Started Guide + This page gives some advice for anyone planning to write or update a Getting Started Guide for Kubernetes. It also gives some guidelines which reviewers should follow when reviewing a pull request for a guide. @@ -57,6 +58,7 @@ Distros fall into two categories: There are different guidelines for each. ## Versioned Distro Guidelines + These guidelines say *what* to do. See the Rationale section for *why*. - Send us a PR. - Put the instructions in `docs/getting-started-guides/...`. Scripts go there too. This helps devs easily @@ -77,6 +79,7 @@ we still want to hear from you. We suggest you write a blog post or a Gist, and Just file an issue or chat us on IRC and one of the committers will link to it from the wiki. ## Development Distro Guidelines + These guidelines say *what* to do. See the Rationale section for *why*. - the main reason to add a new development distro is to support a new IaaS provider (VM and network management). This means implementing a new `pkg/cloudprovider/$IAAS_NAME`. @@ -93,6 +96,7 @@ These guidelines say *what* to do. See the Rationale section for *why*. refactoring and feature additions that affect code for their IaaS. 
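Stepping back to the scheduler section above: the filter-then-rank flow is easy to sketch. The types below are simplified stand-ins invented for this illustration (the real predicates live in the scheduler's own packages and operate on full API objects):

```go
package main

import "fmt"

// node and pod are toy stand-ins for the real API objects.
type node struct {
	name     string
	capacity int // free resource: capacity minus limits of pods already placed
}

type pod struct {
	required int
}

// predicate mirrors the filtering policies described above: it reports
// whether a pod can run on a node at all.
type predicate func(pod, node) bool

// fitsResources is a toy version of a resource-fit check.
func fitsResources(p pod, n node) bool { return p.required <= n.capacity }

// filterNodes keeps only the nodes that pass every predicate; the
// survivors move on to the ranking phase.
func filterNodes(p pod, nodes []node, preds []predicate) []node {
	var out []node
	for _, n := range nodes {
		ok := true
		for _, pred := range preds {
			if !pred(p, n) {
				ok = false
				break
			}
		}
		if ok {
			out = append(out, n)
		}
	}
	return out
}

func main() {
	nodes := []node{{"a", 4}, {"b", 1}, {"c", 8}}
	fit := filterNodes(pod{required: 2}, nodes, []predicate{fitsResources})
	fmt.Println(len(fit)) // nodes "a" and "c" survive, so this prints 2
}
```

The design point is that predicates are hard yes/no gates, while ranking (not shown) is a separate scoring pass over whatever survives.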
## Rationale + - We want people to create Kubernetes clusters with whatever IaaS, Node OS, configuration management tools, and so on, which they are familiar with. The guidelines for **versioned distros** are designed for flexibility. -- cgit v1.2.3 From 9d1ae2e76424babe7f7975ddb86433a6b93e1812 Mon Sep 17 00:00:00 2001 From: Brian Grant Date: Sat, 18 Jul 2015 00:05:57 +0000 Subject: Gut stale roadmaps. Move useful content elsewhere. --- cli-roadmap.md | 74 +--------------------------------------------------------- 1 file changed, 1 insertion(+), 73 deletions(-) diff --git a/cli-roadmap.md b/cli-roadmap.md index 00b454fa..69084555 100644 --- a/cli-roadmap.md +++ b/cli-roadmap.md @@ -33,83 +33,11 @@ Documentation for other releases can be found at # Kubernetes CLI/Configuration Roadmap -See also issues with the following labels: +See github issues with the following labels: * [area/app-config-deployment](https://github.com/GoogleCloudPlatform/kubernetes/labels/area/app-config-deployment) * [component/CLI](https://github.com/GoogleCloudPlatform/kubernetes/labels/component/CLI) * [component/client](https://github.com/GoogleCloudPlatform/kubernetes/labels/component/client) -1. Create services before other objects, or at least before objects that depend upon them. Namespace-relative DNS mitigates this some, but most users are still using service environment variables. [#1768](https://github.com/GoogleCloudPlatform/kubernetes/issues/1768) -1. Finish rolling update [#1353](https://github.com/GoogleCloudPlatform/kubernetes/issues/1353) - 1. Friendly to auto-scaling [#2863](https://github.com/GoogleCloudPlatform/kubernetes/pull/2863#issuecomment-69701562) - 1. Rollback (make rolling-update reversible, and complete an in-progress rolling update by taking 2 replication controller names rather than always taking a file) - 1. Rollover (replace multiple replication controllers with one, such as to clean up an aborted partial rollout) - 1. 
Write a ReplicationController generator to derive the new ReplicationController from an old one (e.g., `--image-version=newversion`, which would apply a name suffix, update a label value, and apply an image tag) - 1. Use readiness [#620](https://github.com/GoogleCloudPlatform/kubernetes/issues/620) - 1. Perhaps factor this in a way that it can be shared with [Openshift’s deployment controller](https://github.com/GoogleCloudPlatform/kubernetes/issues/1743) - 1. Rolling update service as a plugin -1. Kind-based filtering on object streams -- only operate on the kinds of objects specified. This would make directory-based kubectl operations much more useful. Users should be able to instantiate the example applications using `kubectl create -f ...` -1. Improved pretty printing of endpoints, such as in the case that there are more than a few endpoints -1. Service address/port lookup command(s) -1. List supported resources -1. Swagger lookups [#3060](https://github.com/GoogleCloudPlatform/kubernetes/issues/3060) -1. --name, --name-suffix applied during creation and updates -1. --labels and opinionated label injection: --app=foo, --tier={fe,cache,be,db}, --uservice=redis, --env={dev,test,prod}, --stage={canary,final}, --track={hourly,daily,weekly}, --release=0.4.3c2. Exact ones TBD. We could allow arbitrary values -- the keys are important. The actual label keys would be (optionally?) namespaced with kubectl.kubernetes.io/, or perhaps the user’s namespace. -1. --annotations and opinionated annotation injection: --description, --revision -1. Imperative updates. We'll want to optionally make these safe(r) by supporting preconditions based on the current value and resourceVersion. - 1. annotation updates similar to label updates - 1. other custom commands for common imperative updates - 1. more user-friendly (but still generic) on-command-line json for patch -1. We also want to support the following flavors of more general updates: - 1. whichever we don’t support: - 1. 
safe update: update the full resource, guarded by resourceVersion precondition (and perhaps selected value-based preconditions) - 1. forced update: update the full resource, blowing away the previous Spec without preconditions; delete and re-create if necessary - 1. diff/dryrun: Compare new config with current Spec [#6284](https://github.com/GoogleCloudPlatform/kubernetes/issues/6284) - 1. submit/apply/reconcile/ensure/merge: Merge user-provided fields with current Spec. Keep track of user-provided fields using an annotation -- see [#1702](https://github.com/GoogleCloudPlatform/kubernetes/issues/1702). Delete all objects with deployment-specific labels. -1. --dry-run for all commands -1. Support full label selection syntax, including support for namespaces. -1. Wait on conditions [#1899](https://github.com/GoogleCloudPlatform/kubernetes/issues/1899) -1. Make kubectl scriptable: make output and exit code behavior consistent and useful for wrapping in workflows and piping back into kubectl and/or xargs (e.g., dump full URLs?, distinguish permanent and retry-able failure, identify objects that should be retried) - 1. Here's [an example](http://techoverflow.net/blog/2013/10/22/docker-remove-all-images-and-containers/) where multiple objects on the command line and an option to dump object names only (`-q`) would be useful in combination. [#5906](https://github.com/GoogleCloudPlatform/kubernetes/issues/5906) -1. Easy generation of clean configuration files from existing objects (including containers -- podex) -- remove readonly fields, status - 1. Export from one namespace, import into another is an important use case -1. Derive objects from other objects - 1. pod clone - 1. rc from pod - 1. --labels-from (services from pods or rcs) -1. Kind discovery (i.e., operate on objects of all kinds) [#5278](https://github.com/GoogleCloudPlatform/kubernetes/issues/5278) -1. 
A fairly general-purpose way to specify fields on the command line during creation and update, not just from a config file -1. Extensible API-based generator framework (i.e. invoke generators via an API/URL rather than building them into kubectl), so that complex client libraries don’t need to be rewritten in multiple languages, and so that the abstractions are available through all interfaces: API, CLI, UI, logs, ... [#5280](https://github.com/GoogleCloudPlatform/kubernetes/issues/5280) - 1. Need schema registry, and some way to invoke generator (e.g., using a container) - 1. Convert run command to API-based generator -1. Transformation framework - 1. More intelligent defaulting of fields (e.g., [#2643](https://github.com/GoogleCloudPlatform/kubernetes/issues/2643)) -1. Update preconditions based on the values of arbitrary object fields. -1. Deployment manager compatibility on GCP: [#3685](https://github.com/GoogleCloudPlatform/kubernetes/issues/3685) -1. Describe multiple objects, multiple kinds of objects [#5905](https://github.com/GoogleCloudPlatform/kubernetes/issues/5905) -1. Support yaml document separator [#5840](https://github.com/GoogleCloudPlatform/kubernetes/issues/5840) - -TODO: -* watch -* attach [#1521](https://github.com/GoogleCloudPlatform/kubernetes/issues/1521) -* image/registry commands -* do any other server paths make sense? validate? generic curl functionality? -* template parameterization -* dynamic/runtime configuration - -Server-side support: - -1. Default selectors from labels [#1698](https://github.com/GoogleCloudPlatform/kubernetes/issues/1698#issuecomment-71048278) -1. Stop [#1535](https://github.com/GoogleCloudPlatform/kubernetes/issues/1535) -1. Deleted objects [#2789](https://github.com/GoogleCloudPlatform/kubernetes/issues/2789) -1. Clone [#170](https://github.com/GoogleCloudPlatform/kubernetes/issues/170) -1. Resize [#1629](https://github.com/GoogleCloudPlatform/kubernetes/issues/1629) -1. 
Useful /operations API: wait for finalization/reification -1. List supported resources [#2057](https://github.com/GoogleCloudPlatform/kubernetes/issues/2057) -1. Reverse label lookup [#1348](https://github.com/GoogleCloudPlatform/kubernetes/issues/1348) -1. Field selection [#1362](https://github.com/GoogleCloudPlatform/kubernetes/issues/1362) -1. Field filtering [#1459](https://github.com/GoogleCloudPlatform/kubernetes/issues/1459) -1. Operate on uids - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/cli-roadmap.md?pixel)]() -- cgit v1.2.3 From 883791a848441058457a4ab4ac50388b42396af8 Mon Sep 17 00:00:00 2001 From: Alex Robinson Date: Sun, 19 Jul 2015 08:54:49 +0000 Subject: Improve devel docs syntax highlighting. --- api-conventions.md | 2 +- api_changes.md | 8 +++--- cherry-picks.md | 2 +- developer-guides/vagrant.md | 34 ++++++++++++------------- development.md | 62 ++++++++++++++++++++++----------------------- flaky-tests.md | 2 +- getting-builds.md | 4 +-- profiling.md | 14 +++++----- releasing.md | 26 +++++++++---------- 9 files changed, 77 insertions(+), 77 deletions(-) diff --git a/api-conventions.md b/api-conventions.md index c2d71078..64509dae 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -524,7 +524,7 @@ The status object is encoded as JSON and provided as the body of the response. **Example:** -``` +```console $ curl -v -k -H "Authorization: Bearer WhCDvq4VPpYhrcfmF6ei7V9qlbqTubUc" https://10.240.122.184:443/api/v1/namespaces/default/pods/grafana > GET /api/v1/namespaces/default/pods/grafana HTTP/1.1 diff --git a/api_changes.md b/api_changes.md index 7a0418e8..d8e20014 100644 --- a/api_changes.md +++ b/api_changes.md @@ -284,8 +284,8 @@ Once all the necessary manually written conversions are added, you need to regenerate auto-generated ones. 
To regenerate them: - run -``` - $ hack/update-generated-conversions.sh +```sh +hack/update-generated-conversions.sh ``` If running the above script is impossible due to compile errors, the easiest @@ -359,8 +359,8 @@ an example to illustrate your change. Make sure you update the swagger API spec by running: -```shell -$ hack/update-swagger-spec.sh +```sh +hack/update-swagger-spec.sh ``` The API spec changes should be in a commit separate from your other changes. diff --git a/cherry-picks.md b/cherry-picks.md index 7ed63d08..c36741c4 100644 --- a/cherry-picks.md +++ b/cherry-picks.md @@ -40,7 +40,7 @@ Kubernetes projects. Any contributor can propose a cherry pick of any pull request, like so: -``` +```sh hack/cherry_pick_pull.sh upstream/release-3.14 98765 ``` diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md index e704bf3b..c1e02ff4 100644 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -86,8 +86,8 @@ vagrant ssh minion-3 To view the service status and/or logs on the kubernetes-master: -```sh -vagrant ssh master +```console +$ vagrant ssh master [vagrant@kubernetes-master ~] $ sudo systemctl status kube-apiserver [vagrant@kubernetes-master ~] $ sudo journalctl -r -u kube-apiserver @@ -100,8 +100,8 @@ vagrant ssh master To view the services on any of the nodes: -```sh -vagrant ssh minion-1 +```console +$ vagrant ssh minion-1 [vagrant@kubernetes-minion-1] $ sudo systemctl status docker [vagrant@kubernetes-minion-1] $ sudo journalctl -r -u docker [vagrant@kubernetes-minion-1] $ sudo systemctl status kubelet @@ -135,7 +135,7 @@ Once your Vagrant machines are up and provisioned, the first thing to do is to c You may need to build the binaries first, you can do this with ```make``` -```sh +```console $ ./cluster/kubectl.sh get nodes NAME LABELS STATUS @@ -182,8 +182,8 @@ Interact with the cluster When using the vagrant provider in Kubernetes, the `cluster/kubectl.sh` script will cache your credentials in a 
`~/.kubernetes_vagrant_auth` file so you will not be prompted for them in the future. -```sh -cat ~/.kubernetes_vagrant_auth +```console +$ cat ~/.kubernetes_vagrant_auth { "User": "vagrant", "Password": "vagrant" "CAFile": "/home/k8s_user/.kubernetes.vagrant.ca.crt", @@ -202,7 +202,7 @@ You should now be set to use the `cluster/kubectl.sh` script. For example try to Your cluster is running, you can list the nodes in your cluster: -```sh +```console $ ./cluster/kubectl.sh get nodes NAME LABELS STATUS @@ -216,7 +216,7 @@ Now start running some containers! You can now use any of the cluster/kube-*.sh commands to interact with your VM machines. Before starting a container there will be no pods, services and replication controllers. -``` +```console $ cluster/kubectl.sh get pods NAME READY STATUS RESTARTS AGE @@ -229,7 +229,7 @@ CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS Start a container running nginx with a replication controller and three replicas -``` +```console $ cluster/kubectl.sh run my-nginx --image=nginx --replicas=3 --port=80 CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS my-nginx my-nginx nginx run=my-nginx 3 @@ -237,7 +237,7 @@ my-nginx my-nginx nginx run=my-nginx 3 When listing the pods, you will see that three containers have been started and are in Waiting state: -``` +```console $ cluster/kubectl.sh get pods NAME READY STATUS RESTARTS AGE my-nginx-389da 1/1 Waiting 0 33s @@ -247,7 +247,7 @@ my-nginx-nyj3x 1/1 Waiting 0 33s You need to wait for the provisioning to complete, you can monitor the minions by doing: -```sh +```console $ sudo salt '*minion-1' cmd.run 'docker images' kubernetes-minion-1: REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE @@ -257,7 +257,7 @@ kubernetes-minion-1: Once the docker image for nginx has been downloaded, the container will start and you can list it: -```sh +```console $ sudo salt '*minion-1' cmd.run 'docker ps' kubernetes-minion-1: CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES @@ -267,7 +267,7 @@ 
kubernetes-minion-1: Going back to listing the pods, services and replicationcontrollers, you now have: -``` +```console $ cluster/kubectl.sh get pods NAME READY STATUS RESTARTS AGE my-nginx-389da 1/1 Running 0 33s @@ -286,7 +286,7 @@ We did not start any services, hence there are none listed. But we see three rep Check the [guestbook](../../../examples/guestbook/README.md) application to learn how to create a service. You can already play with scaling the replicas with: -```sh +```console $ ./cluster/kubectl.sh scale rc my-nginx --replicas=2 $ ./cluster/kubectl.sh get pods NAME READY STATUS RESTARTS AGE @@ -327,8 +327,8 @@ rm ~/.kubernetes_vagrant_auth After using kubectl.sh make sure that the correct credentials are set: -```sh -cat ~/.kubernetes_vagrant_auth +```console +$ cat ~/.kubernetes_vagrant_auth { "User": "vagrant", "Password": "vagrant" diff --git a/development.md b/development.md index 6822ab5e..bb233051 100644 --- a/development.md +++ b/development.md @@ -58,40 +58,40 @@ Below, we outline one of the more common git workflows that core developers use. The commands below require that you have $GOPATH set ([$GOPATH docs](https://golang.org/doc/code.html#GOPATH)). We highly recommend you put kubernetes' code into your GOPATH. Note: the commands below will not work if there is more than one directory in your `$GOPATH`. 
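A quick way to check the single-GOPATH assumption in the note above before running the clone commands (a sketch; the value assigned to `GOPATH` here is illustrative — substitute your own):

```sh
# The workflow below assumes GOPATH names exactly one directory.
GOPATH="$HOME/go-tools"   # illustrative value
case "$GOPATH" in
  *:*) echo "GOPATH has multiple entries; the commands below will misbehave" ;;
  "")  echo "GOPATH is unset; see the GOPATH docs linked above" ;;
  *)   echo "single-entry GOPATH: ok" ;;
esac
```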
-``` -$ mkdir -p $GOPATH/src/github.com/GoogleCloudPlatform/ -$ cd $GOPATH/src/github.com/GoogleCloudPlatform/ +```sh +mkdir -p $GOPATH/src/github.com/GoogleCloudPlatform/ +cd $GOPATH/src/github.com/GoogleCloudPlatform/ # Replace "$YOUR_GITHUB_USERNAME" below with your github username -$ git clone https://github.com/$YOUR_GITHUB_USERNAME/kubernetes.git -$ cd kubernetes -$ git remote add upstream 'https://github.com/GoogleCloudPlatform/kubernetes.git' +git clone https://github.com/$YOUR_GITHUB_USERNAME/kubernetes.git +cd kubernetes +git remote add upstream 'https://github.com/GoogleCloudPlatform/kubernetes.git' ``` ### Create a branch and make changes -``` -$ git checkout -b myfeature +```sh +git checkout -b myfeature # Make your code changes ``` ### Keeping your development fork in sync -``` -$ git fetch upstream -$ git rebase upstream/master +```sh +git fetch upstream +git rebase upstream/master ``` Note: If you have write access to the main repository at github.com/GoogleCloudPlatform/kubernetes, you should modify your git configuration so that you can't accidentally push to upstream: -``` +```sh git remote set-url --push upstream no_push ``` ### Commiting changes to your fork -``` -$ git commit -$ git push -f origin myfeature +```sh +git commit +git push -f origin myfeature ``` ### Creating a pull request @@ -114,7 +114,7 @@ directly from mercurial. 2) Create a new GOPATH for your tools and install godep: -``` +```sh export GOPATH=$HOME/go-tools mkdir -p $GOPATH go get github.com/tools/godep @@ -122,7 +122,7 @@ go get github.com/tools/godep 3) Add $GOPATH/bin to your path. 
Typically you'd add this to your ~/.profile: -``` +```sh export GOPATH=$HOME/go-tools export PATH=$PATH:$GOPATH/bin ``` @@ -133,7 +133,7 @@ Here's a quick walkthrough of one way to use godeps to add or update a Kubernete 1) Devote a directory to this endeavor: -``` +```sh export KPATH=$HOME/code/kubernetes mkdir -p $KPATH/src/github.com/GoogleCloudPlatform/kubernetes cd $KPATH/src/github.com/GoogleCloudPlatform/kubernetes @@ -143,7 +143,7 @@ git clone https://path/to/your/fork . 2) Set up your GOPATH. -``` +```sh # Option A: this will let your builds see packages that exist elsewhere on your system. export GOPATH=$KPATH:$GOPATH # Option B: This will *not* let your local builds see packages that exist elsewhere on your system. @@ -153,14 +153,14 @@ export GOPATH=$KPATH 3) Populate your new GOPATH. -``` +```sh cd $KPATH/src/github.com/GoogleCloudPlatform/kubernetes godep restore ``` 4) Next, you can either add a new dependency or update an existing one. -``` +```sh # To add a new dependency, do: cd $KPATH/src/github.com/GoogleCloudPlatform/kubernetes go get path/to/dependency @@ -185,28 +185,28 @@ Please send dependency updates in separate commits within your PR, for easier re Before committing any changes, please link/copy these hooks into your .git directory. This will keep you from accidentally committing non-gofmt'd go code. -``` +```sh cd kubernetes/.git/hooks/ ln -s ../../hooks/pre-commit . ``` ## Unit tests -``` +```sh cd kubernetes hack/test-go.sh ``` Alternatively, you could also run: -``` +```sh cd kubernetes godep go test ./... ``` If you only want to run unit tests in one package, you could run ``godep go test`` under the package directory. For example, the following commands will run all unit tests in package kubelet: -``` +```console $ cd kubernetes # step into kubernetes' directory. $ cd pkg/kubelet $ godep go test @@ -221,7 +221,7 @@ Currently, collecting coverage is only supported for the Go unit tests. 
To run all unit tests and generate an HTML coverage report, run the following: -``` +```sh cd kubernetes KUBE_COVER=y hack/test-go.sh ``` @@ -230,7 +230,7 @@ At the end of the run, an the HTML report will be generated with the path printe To run tests and collect coverage in only one package, pass its relative path under the `kubernetes` directory as an argument, for example: -``` +```sh cd kubernetes KUBE_COVER=y hack/test-go.sh pkg/kubectl ``` @@ -243,7 +243,7 @@ Coverage results for the project can also be viewed on [Coveralls](https://cover You need an [etcd](https://github.com/coreos/etcd/releases/tag/v2.0.0) in your path, please make sure it is installed and in your ``$PATH``. -``` +```sh cd kubernetes hack/test-integration.sh ``` @@ -252,14 +252,14 @@ hack/test-integration.sh You can run an end-to-end test which will bring up a master and two nodes, perform some tests, and then tear everything down. Make sure you have followed the getting started steps for your chosen cloud platform (which might involve changing the `KUBERNETES_PROVIDER` environment variable to something other than "gce". -``` +```sh cd kubernetes hack/e2e-test.sh ``` Pressing control-C should result in an orderly shutdown but if something goes wrong and you still have some VMs running you can force a cleanup with this command: -``` +```sh go run hack/e2e.go --down ``` @@ -332,7 +332,7 @@ See [conformance-test.sh](../../hack/conformance-test.sh). ## Regenerating the CLI documentation -``` +```sh hack/run-gendocs.sh ``` diff --git a/flaky-tests.md b/flaky-tests.md index 1e7f5fcb..1568baed 100644 --- a/flaky-tests.md +++ b/flaky-tests.md @@ -69,7 +69,7 @@ spec: Note that we omit the labels and the selector fields of the replication controller, because they will be populated from the labels field of the pod template by default. 
-``` +```sh kubectl create -f ./controller.yaml ``` diff --git a/getting-builds.md b/getting-builds.md index 4c92a446..4265b77a 100644 --- a/getting-builds.md +++ b/getting-builds.md @@ -35,7 +35,7 @@ Documentation for other releases can be found at You can use [hack/get-build.sh](../../hack/get-build.sh) to or use as a reference on how to get the most recent builds with curl. With `get-build.sh` you can grab the most recent stable build, the most recent release candidate, or the most recent build to pass our ci and gce e2e tests (essentially a nightly build). -``` +```console usage: ./hack/get-build.sh [stable|release|latest|latest-green] @@ -47,7 +47,7 @@ usage: You can also use the gsutil tool to explore the Google Cloud Storage release bucket. Here are some examples: -``` +```sh gsutil cat gs://kubernetes-release/ci/latest.txt # output the latest ci version number gsutil cat gs://kubernetes-release/ci/latest-green.txt # output the latest ci version number that passed gce e2e gsutil ls gs://kubernetes-release/ci/v0.20.0-29-g29a55cc/ # list the contents of a ci release diff --git a/profiling.md b/profiling.md index d36885dd..36bbfbae 100644 --- a/profiling.md +++ b/profiling.md @@ -43,10 +43,10 @@ Go comes with inbuilt 'net/http/pprof' profiling library and profiling web servi TL;DR: Add lines: -``` - m.mux.HandleFunc("/debug/pprof/", pprof.Index) - m.mux.HandleFunc("/debug/pprof/profile", pprof.Profile) - m.mux.HandleFunc("/debug/pprof/symbol", pprof.Symbol) +```go +m.mux.HandleFunc("/debug/pprof/", pprof.Index) +m.mux.HandleFunc("/debug/pprof/profile", pprof.Profile) +m.mux.HandleFunc("/debug/pprof/symbol", pprof.Symbol) ``` to the init(c *Config) method in 'pkg/master/master.go' and import 'net/http/pprof' package. @@ -57,13 +57,13 @@ In most use cases to use profiler service it's enough to do 'import _ net/http/p Even when running profiler I found not really straightforward to use 'go tool pprof' with it. 
The problem is that at least for dev purposes certificates generated for APIserver are not signed by anyone trusted and because secureServer serves only secure traffic it isn't straightforward to connect to the service. The best workaround I found is by creating an ssh tunnel from the kubernetes_master open unsecured port to some external server, and use this server as a proxy. To save everyone looking for correct ssh flags, it is done by running: -``` - ssh kubernetes_master -L:localhost:8080 +```sh +ssh kubernetes_master -L:localhost:8080 ``` or analogous one for you Cloud provider. Afterwards you can e.g. run -``` +```sh go tool pprof http://localhost:/debug/pprof/profile ``` diff --git a/releasing.md b/releasing.md index 65db081d..9950e6e4 100644 --- a/releasing.md +++ b/releasing.md @@ -65,7 +65,7 @@ to make sure they're solid around then as well. Once you find some greens, you can find the Git hash for a build by looking at the "Console Log", then look for `githash=`. You should see a line line: -``` +```console + githash=v0.20.2-322-g974377b ``` @@ -80,7 +80,7 @@ oncall. Before proceeding to the next step: -``` +```sh export BRANCHPOINT=v0.20.2-322-g974377b ``` @@ -230,11 +230,11 @@ present. We are using `pkg/version/base.go` as the source of versioning in absence of information from git. Here is a sample of that file's contents: -``` - var ( - gitVersion string = "v0.4-dev" // version from git, output of $(git describe) - gitCommit string = "" // sha1 from git, output of $(git rev-parse HEAD) - ) +```go +var ( + gitVersion string = "v0.4-dev" // version from git, output of $(git describe) + gitCommit string = "" // sha1 from git, output of $(git rev-parse HEAD) +) ``` This means a build with `go install` or `go get` or a build from a tarball will @@ -313,14 +313,14 @@ projects seem to live with that and it does not really become a large problem. 
As an example, Docker commit a327d9b91edf has a `v1.1.1-N-gXXX` label but it is not present in Docker `v1.2.0`: -``` - $ git describe a327d9b91edf - v1.1.1-822-ga327d9b91edf +```console +$ git describe a327d9b91edf +v1.1.1-822-ga327d9b91edf - $ git log --oneline v1.2.0..a327d9b91edf - a327d9b91edf Fix data space reporting from Kb/Mb to KB/MB +$ git log --oneline v1.2.0..a327d9b91edf +a327d9b91edf Fix data space reporting from Kb/Mb to KB/MB - (Non-empty output here means the commit is not present on v1.2.0.) +(Non-empty output here means the commit is not present on v1.2.0.) ``` ## Release Notes -- cgit v1.2.3 From dc711364b082ae691bcf0592653b025db0fa2ef5 Mon Sep 17 00:00:00 2001 From: Alex Robinson Date: Sun, 19 Jul 2015 09:04:42 +0000 Subject: Fix gendocs --- api-conventions.md | 1 + 1 file changed, 1 insertion(+) diff --git a/api-conventions.md b/api-conventions.md index c2d71078..0c12e5a6 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -69,6 +69,7 @@ using resources with kubectl can be found in (working_with_resources.md).* - [Success codes](#success-codes) - [Error codes](#error-codes) - [Response Status Kind](#response-status-kind) + - [Events](#events) -- cgit v1.2.3 From 753fab889e0f6de95ba44a06b3b0c60a8fd34f5b Mon Sep 17 00:00:00 2001 From: Alex Robinson Date: Sun, 19 Jul 2015 05:58:13 +0000 Subject: Replace ``` with ` when emphasizing something inline in docs/ --- api-conventions.md | 16 ++++++++-------- developer-guides/vagrant.md | 4 ++-- development.md | 6 +++--- flaky-tests.md | 6 +++--- making-release-notes.md | 4 ++-- profiling.md | 2 +- 6 files changed, 19 insertions(+), 19 deletions(-) diff --git a/api-conventions.md b/api-conventions.md index 0c12e5a6..1438bc8c 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -498,7 +498,7 @@ The following HTTP status codes may be returned by the API. 
* `429 StatusTooManyRequests` * Indicates that the either the client rate limit has been exceeded or the server has received more requests then it can process. * Suggested client recovery behavior: - * Read the ```Retry-After``` HTTP header from the response, and wait at least that long before retrying. + * Read the `Retry-After` HTTP header from the response, and wait at least that long before retrying. * `500 StatusInternalServerError` * Indicates that the server can be reached and understood the request, but either an unexpected internal error occurred and the outcome of the call is unknown, or the server cannot complete the action in a reasonable time (this maybe due to temporary server load or a transient communication issue with another server). * Suggested client recovery behavior: @@ -514,12 +514,12 @@ The following HTTP status codes may be returned by the API. ## Response Status Kind -Kubernetes will always return the ```Status``` kind from any API endpoint when an error occurs. +Kubernetes will always return the `Status` kind from any API endpoint when an error occurs. Clients SHOULD handle these types of objects when appropriate. -A ```Status``` kind will be returned by the API in two cases: +A `Status` kind will be returned by the API in two cases: * When an operation is not successful (i.e. when the server would return a non 2xx HTTP status code). - * When a HTTP ```DELETE``` call is successful. + * When a HTTP `DELETE` call is successful. The status object is encoded as JSON and provided as the body of the response. The status object contains fields for humans and machine consumers of the API to get more detailed information for the cause of the failure. The information in the status object supplements, but does not override, the HTTP status code's meaning. When fields in the status object have the same meaning as generally defined HTTP headers and that header is returned with the response, the header should be considered as having higher priority. 
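For the `429` case above, a minimal sketch of pulling `Retry-After` out of captured response headers before retrying (the header block below is a stand-in for a real response, not output from an actual server):

```sh
# Stand-in for headers captured from a 429 response.
headers='HTTP/1.1 429 Too Many Requests
Retry-After: 3
Content-Length: 0'

# Extract the Retry-After value and wait at least that long.
retry_after=$(printf '%s\n' "$headers" | sed -n 's/^Retry-After: *//p')
echo "waiting ${retry_after}s before retrying"   # waiting 3s before retrying
# sleep "$retry_after"   # uncomment in real use
```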
@@ -555,17 +555,17 @@ $ curl -v -k -H "Authorization: Bearer WhCDvq4VPpYhrcfmF6ei7V9qlbqTubUc" https:/ } ``` -```status``` field contains one of two possible values: +`status` field contains one of two possible values: * `Success` * `Failure` `message` may contain human-readable description of the error -```reason``` may contain a machine-readable description of why this operation is in the `Failure` status. If this value is empty there is no information available. The `reason` clarifies an HTTP status code but does not override it. +`reason` may contain a machine-readable description of why this operation is in the `Failure` status. If this value is empty there is no information available. The `reason` clarifies an HTTP status code but does not override it. -```details``` may contain extended data associated with the reason. Each reason may define its own extended details. This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type. +`details` may contain extended data associated with the reason. Each reason may define its own extended details. This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type. -Possible values for the ```reason``` and ```details``` fields: +Possible values for the `reason` and `details` fields: * `BadRequest` * Indicates that the request itself was invalid, because the request doesn't make any sense, for example deleting a read-only object. * This is different than `status reason` `Invalid` above which indicates that the API call could possibly succeed, but the data was invalid. 
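A minimal sketch of extracting the `status` and `reason` fields from a `Status` body in shell (the JSON here is a trimmed stand-in for a real response; an actual client should use a proper JSON parser rather than `sed`):

```sh
# Trimmed stand-in for a Status response body.
body='{"kind":"Status","status":"Failure","reason":"NotFound","code":404}'

# Crude field extraction; fine for a one-off, not for real clients.
status=$(printf '%s' "$body" | sed -n 's/.*"status":"\([^"]*\)".*/\1/p')
reason=$(printf '%s' "$body" | sed -n 's/.*"reason":"\([^"]*\)".*/\1/p')
echo "$status/$reason"   # Failure/NotFound
```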
diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md index e704bf3b..bf4ca862 100644 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -133,7 +133,7 @@ vagrant destroy Once your Vagrant machines are up and provisioned, the first thing to do is to check that you can use the `kubectl.sh` script. -You may need to build the binaries first, you can do this with ```make``` +You may need to build the binaries first, you can do this with `make` ```sh $ ./cluster/kubectl.sh get nodes @@ -374,7 +374,7 @@ export KUBERNETES_MINION_MEMORY=2048 #### I ran vagrant suspend and nothing works! -```vagrant suspend``` seems to mess up the network. It's not supported at this time. +`vagrant suspend` seems to mess up the network. It's not supported at this time. diff --git a/development.md b/development.md index 6822ab5e..cbcac1de 100644 --- a/development.md +++ b/development.md @@ -106,10 +106,10 @@ Kubernetes uses [godep](https://github.com/tools/godep) to manage dependencies. ### Installing godep -There are many ways to build and host go binaries. Here is an easy way to get utilities like ```godep``` installed: +There are many ways to build and host go binaries. Here is an easy way to get utilities like `godep` installed: 1) Ensure that [mercurial](http://mercurial.selenic.com/wiki/Download) is installed on your system. (some of godep's dependencies use the mercurial -source control system). Use ```apt-get install mercurial``` or ```yum install mercurial``` on Linux, or [brew.sh](http://brew.sh) on OS X, or download +source control system). Use `apt-get install mercurial` or `yum install mercurial` on Linux, or [brew.sh](http://brew.sh) on OS X, or download directly from mercurial. 
2) Create a new GOPATH for your tools and install godep: @@ -174,7 +174,7 @@ go get -u path/to/dependency godep update path/to/dependency ``` -5) Before sending your PR, it's a good idea to sanity check that your Godeps.json file is ok by re-restoring: ```godep restore``` +5) Before sending your PR, it's a good idea to sanity check that your Godeps.json file is ok by re-restoring: `godep restore` It is sometimes expedient to manually fix the /Godeps/godeps.json file to minimize the changes. diff --git a/flaky-tests.md b/flaky-tests.md index 1e7f5fcb..522c684f 100644 --- a/flaky-tests.md +++ b/flaky-tests.md @@ -41,7 +41,7 @@ Running a test 1000 times on your own machine can be tedious and time consuming. _Note: these instructions are mildly hacky for now, as we get run once semantics and logging they will get better_ -There is a testing image ```brendanburns/flake``` up on the docker hub. We will use this image to test our fix. +There is a testing image `brendanburns/flake` up on the docker hub. We will use this image to test our fix. Create a replication controller with the following config: @@ -74,7 +74,7 @@ kubectl create -f ./controller.yaml ``` This will spin up 24 instances of the test. They will run to completion, then exit, and the kubelet will restart them, accumulating more and more runs of the test. -You can examine the recent runs of the test by calling ```docker ps -a``` and looking for tasks that exited with non-zero exit codes. Unfortunately, docker ps -a only keeps around the exit status of the last 15-20 containers with the same image, so you have to check them frequently. +You can examine the recent runs of the test by calling `docker ps -a` and looking for tasks that exited with non-zero exit codes. Unfortunately, docker ps -a only keeps around the exit status of the last 15-20 containers with the same image, so you have to check them frequently. 
You can use this script to automate checking for failures, assuming your cluster is running on GCE and has four nodes: ```sh @@ -93,7 +93,7 @@ Eventually you will have sufficient runs for your purposes. At that point you ca kubectl stop replicationcontroller flakecontroller ``` -If you do a final check for flakes with ```docker ps -a```, ignore tasks that exited -1, since that's what happens when you stop the replication controller. +If you do a final check for flakes with `docker ps -a`, ignore tasks that exited -1, since that's what happens when you stop the replication controller. Happy flake hunting! diff --git a/making-release-notes.md b/making-release-notes.md index d76f7415..d4ec6ccf 100644 --- a/making-release-notes.md +++ b/making-release-notes.md @@ -55,14 +55,14 @@ release. It is likely long and many PRs aren't worth mentioning. If any of the PRs were cherrypicked into patches on the last minor release, you should exclude them from the current release's notes. -Open up ```candidate-notes.md``` in your favorite editor. +Open up `candidate-notes.md` in your favorite editor. Remove, regroup, organize to your hearts content. ### 4) Update CHANGELOG.md -With the final markdown all set, cut and paste it to the top of ```CHANGELOG.md``` +With the final markdown all set, cut and paste it to the top of `CHANGELOG.md` ### 5) Update the Release page diff --git a/profiling.md b/profiling.md index d36885dd..816e600c 100644 --- a/profiling.md +++ b/profiling.md @@ -71,7 +71,7 @@ to get 30 sec. CPU profile. ## Contention profiling -To enable contention profiling you need to add line ```rt.SetBlockProfileRate(1)``` in addition to ```m.mux.HandleFunc(...)``` added before (```rt``` stands for ```runtime``` in ```master.go```). This enables 'debug/pprof/block' subpage, which can be used as an input to ```go tool pprof```. 
+To enable contention profiling you need to add line `rt.SetBlockProfileRate(1)` in addition to `m.mux.HandleFunc(...)` added before (`rt` stands for `runtime` in `master.go`). This enables 'debug/pprof/block' subpage, which can be used as an input to `go tool pprof`. -- cgit v1.2.3 From 4ebeb731ad8c73ebd05b63c160c033ced6904505 Mon Sep 17 00:00:00 2001 From: David Oppenheimer Date: Mon, 20 Jul 2015 00:25:07 -0700 Subject: Absolutize links that leave the docs/ tree to go anywhere other than to examples/ or back to docs/ --- cherry-picks.md | 2 +- client-libraries.md | 2 +- development.md | 4 +-- getting-builds.md | 2 +- scheduler.md | 12 ++++----- scheduler_algorithm.md | 68 +++++++++++++++++++++++++------------------------- 6 files changed, 45 insertions(+), 45 deletions(-) diff --git a/cherry-picks.md b/cherry-picks.md index c36741c4..519c73c3 100644 --- a/cherry-picks.md +++ b/cherry-picks.md @@ -54,7 +54,7 @@ particular, they may be self-merged by the release branch owner without fanfare, in the case the release branch owner knows the cherry pick was already requested - this should not be the norm, but it may happen. -[Contributor License Agreements](../../CONTRIBUTING.md) is considered implicit +[Contributor License Agreements](http://releases.k8s.io/HEAD/CONTRIBUTING.md) is considered implicit for all code within cherry-pick pull requests, ***unless there is a large conflict***. diff --git a/client-libraries.md b/client-libraries.md index 69cba1e6..e41c6514 100644 --- a/client-libraries.md +++ b/client-libraries.md @@ -35,7 +35,7 @@ Documentation for other releases can be found at ### Supported - * [Go](../../pkg/client/) + * [Go](http://releases.k8s.io/HEAD/pkg/client/) ### User Contributed diff --git a/development.md b/development.md index 3ff03fdd..f5233a0e 100644 --- a/development.md +++ b/development.md @@ -35,7 +35,7 @@ Documentation for other releases can be found at # Releases and Official Builds -Official releases are built in Docker containers. 
Details are [here](../../build/README.md). You can do simple builds and development with just a local Docker installation. If want to build go locally outside of docker, please continue below. +Official releases are built in Docker containers. Details are [here](http://releases.k8s.io/HEAD/build/README.md). You can do simple builds and development with just a local Docker installation. If want to build go locally outside of docker, please continue below. ## Go development environment @@ -324,7 +324,7 @@ The conformance test runs a subset of the e2e-tests against a manually-created c require support for up/push/down and other operations. To run a conformance test, you need to know the IP of the master for your cluster and the authorization arguments to use. The conformance test is intended to run against a cluster at a specific binary release of Kubernetes. -See [conformance-test.sh](../../hack/conformance-test.sh). +See [conformance-test.sh](http://releases.k8s.io/HEAD/hack/conformance-test.sh). ## Testing out flaky tests diff --git a/getting-builds.md b/getting-builds.md index 4265b77a..bcb981c4 100644 --- a/getting-builds.md +++ b/getting-builds.md @@ -33,7 +33,7 @@ Documentation for other releases can be found at # Getting Kubernetes Builds -You can use [hack/get-build.sh](../../hack/get-build.sh) to or use as a reference on how to get the most recent builds with curl. With `get-build.sh` you can grab the most recent stable build, the most recent release candidate, or the most recent build to pass our ci and gce e2e tests (essentially a nightly build). +You can use [hack/get-build.sh](http://releases.k8s.io/HEAD/hack/get-build.sh) to or use as a reference on how to get the most recent builds with curl. With `get-build.sh` you can grab the most recent stable build, the most recent release candidate, or the most recent build to pass our ci and gce e2e tests (essentially a nightly build). 
```console usage: diff --git a/scheduler.md b/scheduler.md index 1fccc7ad..b2a137d5 100644 --- a/scheduler.md +++ b/scheduler.md @@ -53,30 +53,30 @@ divided by the node's capacity). Finally, the node with the highest priority is chosen (or, if there are multiple such nodes, then one of them is chosen at random). The code for this main scheduling loop is in the function `Schedule()` in -[plugin/pkg/scheduler/generic_scheduler.go](../../plugin/pkg/scheduler/generic_scheduler.go) +[plugin/pkg/scheduler/generic_scheduler.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/generic_scheduler.go) ## Scheduler extensibility The scheduler is extensible: the cluster administrator can choose which of the pre-defined scheduling policies to apply, and can add new ones. The built-in predicates and priorities are -defined in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](../../plugin/pkg/scheduler/algorithm/predicates/predicates.go) and -[plugin/pkg/scheduler/algorithm/priorities/priorities.go](../../plugin/pkg/scheduler/algorithm/priorities/priorities.go), respectively. +defined in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/predicates/predicates.go) and +[plugin/pkg/scheduler/algorithm/priorities/priorities.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/priorities/priorities.go), respectively. The policies that are applied when scheduling can be chosen in one of two ways. Normally, the policies used are selected by the functions `defaultPredicates()` and `defaultPriorities()` in -[plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](../../plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). +[plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). 
However, the choice of policies can be overridden by passing the command-line flag `--policy-config-file` to the scheduler, pointing to a JSON file specifying which scheduling policies to use. See [examples/scheduler-policy-config.json](../../examples/scheduler-policy-config.json) for an example config file. (Note that the config file format is versioned; the API is defined in -[plugin/pkg/scheduler/api](../../plugin/pkg/scheduler/api/)). +[plugin/pkg/scheduler/api](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/api/)). Thus to add a new scheduling policy, you should modify predicates.go or priorities.go, and either register the policy in `defaultPredicates()` or `defaultPriorities()`, or use a policy config file. ## Exploring the code If you want to get a global picture of how the scheduler works, you can start in -[plugin/cmd/kube-scheduler/app/server.go](../../plugin/cmd/kube-scheduler/app/server.go) +[plugin/cmd/kube-scheduler/app/server.go](http://releases.k8s.io/HEAD/plugin/cmd/kube-scheduler/app/server.go) diff --git a/scheduler_algorithm.md b/scheduler_algorithm.md index e73e4f27..c67bcdbf 100644 --- a/scheduler_algorithm.md +++ b/scheduler_algorithm.md @@ -31,40 +31,40 @@ Documentation for other releases can be found at -# Scheduler Algorithm in Kubernetes - -For each unscheduled Pod, the Kubernetes scheduler tries to find a node across the cluster according to a set of rules. A general introduction to the Kubernetes scheduler can be found at [scheduler.md](scheduler.md). In this document, the algorithm of how to select a node for the Pod is explained. There are two steps before a destination node of a Pod is chosen. The first step is filtering all the nodes and the second is ranking the remaining nodes to find a best fit for the Pod. - -## Filtering the nodes - -The purpose of filtering the nodes is to filter out the nodes that do not meet certain requirements of the Pod. 
For example, if the free resource on a node (measured by the capacity minus the sum of the resource limits of all the Pods that already run on the node) is less than the Pod's required resource, the node should not be considered in the ranking phase so it is filtered out. Currently, there are several "predicates" implementing different filtering policies, including: - -- `NoDiskConflict`: Evaluate if a pod can fit due to the volumes it requests, and those that are already mounted. -- `PodFitsResources`: Check if the free resource (CPU and Memory) meets the requirement of the Pod. The free resource is measured by the capacity minus the sum of limits of all Pods on the node. -- `PodFitsPorts`: Check if any HostPort required by the Pod is already occupied on the node. -- `PodFitsHost`: Filter out all nodes except the one specified in the PodSpec's NodeName field. -- `PodSelectorMatches`: Check if the labels of the node match the labels specified in the Pod's `nodeSelector` field ([Here](../user-guide/node-selection/) is an example of how to use `nodeSelector` field). -- `CheckNodeLabelPresence`: Check if all the specified labels exist on a node or not, regardless of the value. - -The details of the above predicates can be found in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](../../plugin/pkg/scheduler/algorithm/predicates/predicates.go). All predicates mentioned above can be used in combination to perform a sophisticated filtering policy. Kubernetes uses some, but not all, of these predicates by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](../../plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). - -## Ranking the nodes - -The filtered nodes are considered suitable to host the Pod, and it is often that there are more than one nodes remaining. Kubernetes prioritizes the remaining nodes to find the "best" one for the Pod. 
The prioritization is performed by a set of priority functions. For each remaining node, a priority function gives a score which scales from 0-10 with 10 representing for "most preferred" and 0 for "least preferred". Each priority function is weighted by a positive number and the final score of each node is calculated by adding up all the weighted scores. For example, suppose there are two priority functions, `priorityFunc1` and `priorityFunc2` with weighting factors `weight1` and `weight2` respectively, the final score of some NodeA is: - - finalScoreNodeA = (weight1 * priorityFunc1) + (weight2 * priorityFunc2) - -After the scores of all nodes are calculated, the node with highest score is chosen as the host of the Pod. If there are more than one nodes with equal highest scores, a random one among them is chosen. - -Currently, Kubernetes scheduler provides some practical priority functions, including: - -- `LeastRequestedPriority`: The node is prioritized based on the fraction of the node that would be free if the new Pod were scheduled onto the node. (In other words, (capacity - sum of limits of all Pods already on the node - limit of Pod that is being scheduled) / capacity). CPU and memory are equally weighted. The node with the highest free fraction is the most preferred. Note that this priority function has the effect of spreading Pods across the nodes with respect to resource consumption. -- `CalculateNodeLabelPriority`: Prefer nodes that have the specified label. -- `BalancedResourceAllocation`: This priority function tries to put the Pod on a node such that the CPU and Memory utilization rate is balanced after the Pod is deployed. -- `CalculateSpreadPriority`: Spread Pods by minimizing the number of Pods belonging to the same service on the same node. -- `CalculateAntiAffinityPriority`: Spread Pods by minimizing the number of Pods belonging to the same service on nodes with the same value for a particular label. 
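The weighted-sum formula above can be checked with a quick numeric sketch. The weights and per-function scores below are invented for illustration; real weights come from the scheduler's configuration:

```shell
# Assume priorityFunc1 (weight 1) scores NodeA at 8 and
# priorityFunc2 (weight 2) scores NodeA at 5, on the 0-10 scale.
weight1=1; weight2=2
score1=8;  score2=5
finalScoreNodeA=$(( weight1 * score1 + weight2 * score2 ))
echo "finalScoreNodeA=$finalScoreNodeA"   # → finalScoreNodeA=18
```

The node with the largest such sum is chosen; ties are broken at random, as described in the text.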
- -The details of the above priority functions can be found in [plugin/pkg/scheduler/algorithm/priorities](../../plugin/pkg/scheduler/algorithm/priorities/). Kubernetes uses some, but not all, of these priority functions by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](../../plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). Similar as predicates, you can combine the above priority functions and assign weight factors (positive number) to them as you want (check [scheduler.md](scheduler.md) for how to customize). +# Scheduler Algorithm in Kubernetes + +For each unscheduled Pod, the Kubernetes scheduler tries to find a node across the cluster according to a set of rules. A general introduction to the Kubernetes scheduler can be found at [scheduler.md](scheduler.md). In this document, the algorithm of how to select a node for the Pod is explained. There are two steps before a destination node of a Pod is chosen. The first step is filtering all the nodes and the second is ranking the remaining nodes to find a best fit for the Pod. + +## Filtering the nodes + +The purpose of filtering the nodes is to filter out the nodes that do not meet certain requirements of the Pod. For example, if the free resource on a node (measured by the capacity minus the sum of the resource limits of all the Pods that already run on the node) is less than the Pod's required resource, the node should not be considered in the ranking phase so it is filtered out. Currently, there are several "predicates" implementing different filtering policies, including: + +- `NoDiskConflict`: Evaluate if a pod can fit due to the volumes it requests, and those that are already mounted. +- `PodFitsResources`: Check if the free resource (CPU and Memory) meets the requirement of the Pod. The free resource is measured by the capacity minus the sum of limits of all Pods on the node. 
+- `PodFitsPorts`: Check if any HostPort required by the Pod is already occupied on the node.
+- `PodFitsHost`: Filter out all nodes except the one specified in the PodSpec's NodeName field.
+- `PodSelectorMatches`: Check if the labels of the node match the labels specified in the Pod's `nodeSelector` field ([Here](../user-guide/node-selection/) is an example of how to use the `nodeSelector` field).
+- `CheckNodeLabelPresence`: Check if all the specified labels exist on a node or not, regardless of the value.
+
+The details of the above predicates can be found in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/predicates/predicates.go). All predicates mentioned above can be used in combination to perform a sophisticated filtering policy. Kubernetes uses some, but not all, of these predicates by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go).
+
+## Ranking the nodes
+
+The filtered nodes are considered suitable to host the Pod, and often more than one node remains. Kubernetes prioritizes the remaining nodes to find the "best" one for the Pod. The prioritization is performed by a set of priority functions. For each remaining node, a priority function gives a score which scales from 0-10, with 10 representing "most preferred" and 0 "least preferred". Each priority function is weighted by a positive number and the final score of each node is calculated by adding up all the weighted scores.
For example, suppose there are two priority functions, `priorityFunc1` and `priorityFunc2`, with weighting factors `weight1` and `weight2` respectively; the final score of some NodeA is:
+
+    finalScoreNodeA = (weight1 * priorityFunc1) + (weight2 * priorityFunc2)
+
+After the scores of all nodes are calculated, the node with the highest score is chosen as the host of the Pod. If more than one node ties for the highest score, a random one among them is chosen.
+
+Currently, the Kubernetes scheduler provides some practical priority functions, including:
+
+- `LeastRequestedPriority`: The node is prioritized based on the fraction of the node that would be free if the new Pod were scheduled onto the node. (In other words, (capacity - sum of limits of all Pods already on the node - limit of Pod that is being scheduled) / capacity). CPU and memory are equally weighted. The node with the highest free fraction is the most preferred. Note that this priority function has the effect of spreading Pods across the nodes with respect to resource consumption.
+- `CalculateNodeLabelPriority`: Prefer nodes that have the specified label.
+- `BalancedResourceAllocation`: This priority function tries to put the Pod on a node such that the CPU and Memory utilization rate is balanced after the Pod is deployed.
+- `CalculateSpreadPriority`: Spread Pods by minimizing the number of Pods belonging to the same service on the same node.
+- `CalculateAntiAffinityPriority`: Spread Pods by minimizing the number of Pods belonging to the same service on nodes with the same value for a particular label.
+
+The details of the above priority functions can be found in [plugin/pkg/scheduler/algorithm/priorities](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/priorities/). Kubernetes uses some, but not all, of these priority functions by default.
You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). As with predicates, you can combine the above priority functions and assign weight factors (positive numbers) to them as you want (check [scheduler.md](scheduler.md) for how to customize).

-- cgit v1.2.3


From e0554bbf167b4c0d315fda4a3ddd9511460064c1 Mon Sep 17 00:00:00 2001
From: Alex Robinson
Date: Mon, 20 Jul 2015 13:45:36 -0700
Subject: Fix capitalization of Kubernetes in the documentation.

---
 README.md                          | 2 +-
 api-conventions.md                 | 4 ++--
 client-libraries.md                | 2 +-
 development.md                     | 4 ++--
 writing-a-getting-started-guide.md | 2 +-
 5 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/README.md b/README.md
index 9a73d949..267bca23 100644
--- a/README.md
+++ b/README.md
@@ -34,7 +34,7 @@ Documentation for other releases can be found at

 # Kubernetes Developer Guide

 The developer guide is for anyone wanting to either write code which directly accesses the
-kubernetes API, or to contribute directly to the kubernetes project.
+Kubernetes API, or to contribute directly to the Kubernetes project.
 It assumes some familiarity with concepts in the [User Guide](../user-guide/README.md) and
 the [Cluster Admin Guide](../admin/README.md).

diff --git a/api-conventions.md b/api-conventions.md
index 8b2216cd..8889b721 100644
--- a/api-conventions.md
+++ b/api-conventions.md
@@ -35,8 +35,8 @@ API Conventions

 Updated: 4/16/2015

-*This document is oriented at users who want a deeper understanding of the kubernetes
-API structure, and developers wanting to extend the kubernetes API. An introduction to
+*This document is oriented at users who want a deeper understanding of the Kubernetes
+API structure, and developers wanting to extend the Kubernetes API.
An introduction to using resources with kubectl can be found in (working_with_resources.md).* **Table of Contents** diff --git a/client-libraries.md b/client-libraries.md index e41c6514..9e41688c 100644 --- a/client-libraries.md +++ b/client-libraries.md @@ -31,7 +31,7 @@ Documentation for other releases can be found at -## kubernetes API client libraries +## Kubernetes API client libraries ### Supported diff --git a/development.md b/development.md index f5233a0e..27cb034d 100644 --- a/development.md +++ b/development.md @@ -56,7 +56,7 @@ Below, we outline one of the more common git workflows that core developers use. ### Clone your fork -The commands below require that you have $GOPATH set ([$GOPATH docs](https://golang.org/doc/code.html#GOPATH)). We highly recommend you put kubernetes' code into your GOPATH. Note: the commands below will not work if there is more than one directory in your `$GOPATH`. +The commands below require that you have $GOPATH set ([$GOPATH docs](https://golang.org/doc/code.html#GOPATH)). We highly recommend you put Kubernetes' code into your GOPATH. Note: the commands below will not work if there is more than one directory in your `$GOPATH`. ```sh mkdir -p $GOPATH/src/github.com/GoogleCloudPlatform/ @@ -207,7 +207,7 @@ godep go test ./... If you only want to run unit tests in one package, you could run ``godep go test`` under the package directory. For example, the following commands will run all unit tests in package kubelet: ```console -$ cd kubernetes # step into kubernetes' directory. +$ cd kubernetes # step into the kubernetes directory. $ cd pkg/kubelet $ godep go test # some output from unit tests diff --git a/writing-a-getting-started-guide.md b/writing-a-getting-started-guide.md index c22d9204..40f513be 100644 --- a/writing-a-getting-started-guide.md +++ b/writing-a-getting-started-guide.md @@ -66,7 +66,7 @@ These guidelines say *what* to do. See the Rationale section for *why*. 
- We may ask that you host binary assets or large amounts of code in our `contrib` directory or on your own repo.
 - Add or update a row in [The Matrix](../../docs/getting-started-guides/README.md).
- - State the binary version of kubernetes that you tested clearly in your Guide doc.
+ - State the binary version of Kubernetes that you tested clearly in your Guide doc.
 - Set up a cluster and run the [conformance test](development.md#conformance-testing) against it, and report the results in your PR.
 - Versioned distros should typically not modify or add code in `cluster/`. That is just scripts for developer

-- cgit v1.2.3


From e605969e9a2636a2a1c4c1f86c7ea9596bbf3174 Mon Sep 17 00:00:00 2001
From: Tim Hockin
Date: Thu, 30 Jul 2015 15:11:38 -0700
Subject: Add a note on when to use commits

---
 development.md | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/development.md b/development.md
index 27cb034d..87b4b5d0 100644
--- a/development.md
+++ b/development.md
@@ -99,6 +99,17 @@ git push -f origin myfeature

 1. Visit http://github.com/$YOUR_GITHUB_USERNAME/kubernetes
 2. Click the "Compare and pull request" button next to your "myfeature" branch.

+### When to retain commits and when to squash
+
+Upon merge, all git commits should represent meaningful milestones or units of
+work. Use commits to add clarity to the development and review process.
+
+Before merging a PR, squash any "fix review feedback", "typo", and "rebased"
+sorts of commits. It is not imperative that every commit in a PR compile and
+pass tests independently, but it is worth striving for. For mass automated
+fixups (e.g. automated doc formatting), use one or more commits for the
+changes to tooling and a final commit to apply the fixup en masse. This makes
+reviews much easier.
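One non-interactive way to do the squash described above, sketched with placeholder names (the commit count, message, and branch are examples; an interactive `git rebase -i` achieves the same result):

```shell
# Fold the last two "fix review feedback"/"typo" commits into one
# meaningful commit; the working tree and index contents are preserved.
git reset --soft HEAD~2
git commit -m "Add myfeature"
# Afterwards, update the PR branch as shown earlier:
# git push -f origin myfeature
```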
## godep and dependency management -- cgit v1.2.3 From 9a5c3748cc3469907bef1c8b053df544ed1d7f54 Mon Sep 17 00:00:00 2001 From: Eric Paris Date: Fri, 24 Jul 2015 17:52:18 -0400 Subject: Fix trailing whitespace in all docs --- api-conventions.md | 6 +++--- api_changes.md | 2 +- collab.md | 2 +- scheduler_algorithm.md | 4 ++-- writing-a-getting-started-guide.md | 14 +++++++------- 5 files changed, 14 insertions(+), 14 deletions(-) diff --git a/api-conventions.md b/api-conventions.md index 8889b721..5a1bfe81 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -173,11 +173,11 @@ Objects that contain both spec and status should not contain additional top-leve ##### Typical status properties * **phase**: The phase is a simple, high-level summary of the phase of the lifecycle of an object. The phase should progress monotonically. Typical phase values are `Pending` (not yet fully physically realized), `Running` or `Active` (fully realized and active, but not necessarily operating correctly), and `Terminated` (no longer active), but may vary slightly for different types of objects. New phase values should not be added to existing objects in the future. Like other status fields, it must be possible to ascertain the lifecycle phase by observation. Additional details regarding the current phase may be contained in other fields. -* **conditions**: Conditions represent orthogonal observations of an object's current state. Objects may report multiple conditions, and new types of conditions may be added in the future. Condition status values may be `True`, `False`, or `Unknown`. Unlike the phase, conditions are not expected to be monotonic -- their values may change back and forth. A typical condition type is `Ready`, which indicates the object was believed to be fully operational at the time it was last probed. Conditions may carry additional information, such as the last probe time or last transition time. 
+* **conditions**: Conditions represent orthogonal observations of an object's current state. Objects may report multiple conditions, and new types of conditions may be added in the future. Condition status values may be `True`, `False`, or `Unknown`. Unlike the phase, conditions are not expected to be monotonic -- their values may change back and forth. A typical condition type is `Ready`, which indicates the object was believed to be fully operational at the time it was last probed. Conditions may carry additional information, such as the last probe time or last transition time. TODO(@vishh): Reason and Message. -Phases and conditions are observations and not, themselves, state machines, nor do we define comprehensive state machines for objects with behaviors associated with state transitions. The system is level-based and should assume an Open World. Additionally, new observations and details about these observations may be added over time. +Phases and conditions are observations and not, themselves, state machines, nor do we define comprehensive state machines for objects with behaviors associated with state transitions. The system is level-based and should assume an Open World. Additionally, new observations and details about these observations may be added over time. In order to preserve extensibility, in the future, we intend to explicitly convey properties that users and components care about rather than requiring those properties to be inferred from observations. @@ -376,7 +376,7 @@ Late-initializers should only make the following types of modifications: - Adding keys to maps - Adding values to arrays which have mergeable semantics (`patchStrategy:"merge"` attribute in the type definition). - + These conventions: 1. allow a user (with sufficient privilege) to override any system-default behaviors by setting the fields that would otherwise have been defaulted. 
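As an illustrative sketch only (the object and timestamps are invented; the field names follow the conventions above), a status carrying both a phase and a `Ready` condition might serialize as:

```json
{
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": "2015-08-05T14:34:52Z",
        "lastTransitionTime": "2015-08-05T14:30:00Z"
      }
    ]
  }
}
```

Here `phase` progresses monotonically, while the condition's `status` (`True`, `False`, or `Unknown`) may move back and forth over time.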
diff --git a/api_changes.md b/api_changes.md
index d8e20014..687af00a 100644
--- a/api_changes.md
+++ b/api_changes.md
@@ -309,7 +309,7 @@ a panic from the `serialization_test`. If so, look at the diff it produces (or
 the backtrace in case of a panic) and figure out what you forgot. Encode that
 into the fuzzer's custom fuzz functions. Hint: if you added defaults for a
 field, that field will need to have a custom fuzz function that ensures that
 the field is
-fuzzed to a non-empty value.
+fuzzed to a non-empty value.

 The fuzzer can be found in `pkg/api/testing/fuzzer.go`.

diff --git a/collab.md b/collab.md
index 96db64c8..624b3bcb 100644
--- a/collab.md
+++ b/collab.md
@@ -61,7 +61,7 @@ Maintainers will do merges of appropriately reviewed-and-approved changes during

 There may be discussion and even approvals granted outside of the above hours, but merges will generally be deferred.

-If a PR is considered complex or controversial, the merge of that PR should be delayed to give all interested parties in all timezones the opportunity to provide feedback. Concretely, this means that such PRs should be held for 24
+If a PR is considered complex or controversial, the merge of that PR should be delayed to give all interested parties in all timezones the opportunity to provide feedback. Concretely, this means that such PRs should be held for 24
 hours before merging. Of course "complex" and "controversial" are left to the judgment of the people involved, but we trust that part of being a committer is the judgment required to evaluate such things honestly, and not be motivated by your desire (or your cube-mate's desire) to get their code merged.

 Also see "Holds" below; any reviewer can issue a "hold" to indicate that the PR is in fact complicated or complex and deserves further review.
diff --git a/scheduler_algorithm.md b/scheduler_algorithm.md
index c67bcdbf..ab8e69ef 100644
--- a/scheduler_algorithm.md
+++ b/scheduler_algorithm.md
@@ -44,7 +44,7 @@ The purpose of filtering the nodes is to filter out the nodes that do not meet c
 - `PodFitsPorts`: Check if any HostPort required by the Pod is already occupied on the node.
 - `PodFitsHost`: Filter out all nodes except the one specified in the PodSpec's NodeName field.
 - `PodSelectorMatches`: Check if the labels of the node match the labels specified in the Pod's `nodeSelector` field ([Here](../user-guide/node-selection/) is an example of how to use the `nodeSelector` field).
-- `CheckNodeLabelPresence`: Check if all the specified labels exist on a node or not, regardless of the value.
+- `CheckNodeLabelPresence`: Check if all the specified labels exist on a node or not, regardless of the value.

 The details of the above predicates can be found in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/predicates/predicates.go). All predicates mentioned above can be used in combination to perform a sophisticated filtering policy. Kubernetes uses some, but not all, of these predicates by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go).

@@ -53,7 +53,7 @@ The details of the above predicates can be found in [plugin/pkg/scheduler/algori

 The filtered nodes are considered suitable to host the Pod, and often more than one node remains. Kubernetes prioritizes the remaining nodes to find the "best" one for the Pod. The prioritization is performed by a set of priority functions. For each remaining node, a priority function gives a score which scales from 0-10, with 10 representing "most preferred" and 0 "least preferred".
Each priority function is weighted by a positive number and the final score of each node is calculated by adding up all the weighted scores. For example, suppose there are two priority functions, `priorityFunc1` and `priorityFunc2`, with weighting factors `weight1` and `weight2` respectively; the final score of some NodeA is:

     finalScoreNodeA = (weight1 * priorityFunc1) + (weight2 * priorityFunc2)

-
+
 After the scores of all nodes are calculated, the node with the highest score is chosen as the host of the Pod. If more than one node ties for the highest score, a random one among them is chosen.

 Currently, the Kubernetes scheduler provides some practical priority functions, including:

diff --git a/writing-a-getting-started-guide.md b/writing-a-getting-started-guide.md
index 40f513be..04d0d67f 100644
--- a/writing-a-getting-started-guide.md
+++ b/writing-a-getting-started-guide.md
@@ -70,7 +70,7 @@ These guidelines say *what* to do. See the Rationale section for *why*.

 - Set up a cluster and run the [conformance test](development.md#conformance-testing) against it, and report the results in your PR.
 - Versioned distros should typically not modify or add code in `cluster/`. That is just scripts for developer
-  distros.
+  distros.
 - When a new major or minor release of Kubernetes comes out, we may also release a new conformance test, and require a new conformance test run to earn a conformance checkmark.
- development distros need to support automated cluster creation, deletion, upgrading, etc. This means writing scripts in `cluster/$IAAS_NAME`.
 - all commits to the tip of this repo must not break any of the development distros
 - the author of the change is responsible for making changes necessary on all the cloud-providers if the change affects any of them, and reverting the change if it breaks any of the CIs.
- - a development distro needs to have an organization which owns it. This organization needs to:
+ - a development distro needs to have an organization which owns it. This organization needs to:
   - Set up and maintain Continuous Integration that runs e2e frequently (multiple times per day) against the Distro at head, and which notifies all devs of breakage.
   - Be reasonably available for questions and assist with refactoring and feature additions that affect code for their IaaS.

-## Rationale
+## Rationale

 - We want people to create Kubernetes clusters with whatever IaaS, Node OS, configuration management tools, and so on, which they are familiar with. The
@@ -114,19 +114,19 @@ These guidelines say *what* to do. See the Rationale section for *why*.
   learning curve to understand our automated testing scripts. And it is considerable effort to fully automate setup and teardown of a cluster, which is needed for CI. And, not everyone has the time and money to run CI. We do not want to
-  discourage people from writing and sharing guides because of this.
+  discourage people from writing and sharing guides because of this.
 - Versioned distro authors are free to run their own CI and let us know if there is breakage, but we will not include them as commit hooks -- there cannot be so many commit checks that it is impossible to pass them all.
 - We prefer a single Configuration Management tool for development distros. If there were more than one, the core developers would have to learn multiple tools and update config in multiple places.
**Saltstack** happens to be the one we picked when we started the project. We - welcome versioned distros that use any tool; there are already examples of + welcome versioned distros that use any tool; there are already examples of CoreOS Fleet, Ansible, and others. - You can still run code from head or your own branch if you use another Configuration Management tool -- you just have to do some manual steps during testing and deployment. - + -- cgit v1.2.3 From 1a50eb50808fbbac7e657ef47bf25cc6181f1e2c Mon Sep 17 00:00:00 2001 From: goltermann Date: Wed, 5 Aug 2015 14:34:52 -0700 Subject: Add post v1.0 PR merge details. --- development.md | 1 + pull-requests.md | 17 +++++++++-------- 2 files changed, 10 insertions(+), 8 deletions(-) diff --git a/development.md b/development.md index 87b4b5d0..7fcd6a89 100644 --- a/development.md +++ b/development.md @@ -98,6 +98,7 @@ git push -f origin myfeature 1. Visit http://github.com/$YOUR_GITHUB_USERNAME/kubernetes 2. Click the "Compare and pull request" button next to your "myfeature" branch. +3. Check out the pull request [process](pull-requests.md) for more details ### When to retain commits and when to squash diff --git a/pull-requests.md b/pull-requests.md index e42faa51..6d2eb597 100644 --- a/pull-requests.md +++ b/pull-requests.md @@ -47,18 +47,19 @@ We want to limit the total number of PRs in flight to: * Remove old PRs that would be difficult to rebase as the underlying code has changed over time * Encourage code velocity -RC to v1.0 Pull Requests ------------------------- +Life of a Pull Request +---------------------- -Between the first RC build (~6/22) and v1.0, we will adopt a higher bar for PR merges. For v1.0 to be a stable release, we need to ensure that any fixes going in are very well tested and have a low risk of breaking anything. Refactors and complex changes will be rejected in favor of more strategic and smaller workarounds. 
+Except during the last few weeks of a milestone, when we need to reduce churn and stabilize, we aim to always be accepting pull requests.

-These PRs require:
-* A risk assessment by the code author in the PR. This should outline which parts of the code are being touched, the risk of regression, and complexity of the code.
-* Two LGTMs from experienced reviewers.

+PRs are merged either manually by the [on call](https://github.com/GoogleCloudPlatform/kubernetes/wiki/Kubernetes-on-call-rotation) or automatically by the [submit queue](../../contrib/submit-queue/).

-Once those requirements are met, they will be labeled [ok-to-merge](https://github.com/GoogleCloudPlatform/kubernetes/pulls?utf8=%E2%9C%93&q=is%3Aopen+is%3Apr+label%3Aok-to-merge) and can be merged.

+There are several requirements for the submit queue to work:
+* The author must have signed the CLA (the "cla: yes" label is added to the PR)
+* No changes can have been made since the last "lgtm" label was applied
+* k8s-bot must have reported that the GCE E2E build and test steps passed (the Travis, Shippable, and Jenkins builds)

-These restrictions will be relaxed after v1.0 is released.

+Additionally, for infrequent or new contributors, we require the on call to apply the "ok-to-merge" label manually. This is gated by the [whitelist](../../contrib/submit-queue/whitelist.txt).

-- cgit v1.2.3


From 1e074e74ea7b0ddef2f6b1726babe7397510cf84 Mon Sep 17 00:00:00 2001
From: Mike Danese
Date: Wed, 5 Aug 2015 15:16:36 -0700
Subject: fixup development doc for new vanity path

---
 development.md | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/development.md b/development.md
index 7fcd6a89..45463293 100644
--- a/development.md
+++ b/development.md
@@ -59,8 +59,8 @@ Below, we outline one of the more common git workflows that core developers use.

 The commands below require that you have $GOPATH set ([$GOPATH docs](https://golang.org/doc/code.html#GOPATH)). We highly recommend you put Kubernetes' code into your GOPATH.
Note: the commands below will not work if there is more than one directory in your `$GOPATH`. ```sh -mkdir -p $GOPATH/src/github.com/GoogleCloudPlatform/ -cd $GOPATH/src/github.com/GoogleCloudPlatform/ +mkdir -p $GOPATH/src/k8s.io +cd $GOPATH/src/k8s.io # Replace "$YOUR_GITHUB_USERNAME" below with your github username git clone https://github.com/$YOUR_GITHUB_USERNAME/kubernetes.git cd kubernetes @@ -147,8 +147,8 @@ Here's a quick walkthrough of one way to use godeps to add or update a Kubernete ```sh export KPATH=$HOME/code/kubernetes -mkdir -p $KPATH/src/github.com/GoogleCloudPlatform/kubernetes -cd $KPATH/src/github.com/GoogleCloudPlatform/kubernetes +mkdir -p $KPATH/src/k8s.io/kubernetes +cd $KPATH/src/k8s.io/kubernetes git clone https://path/to/your/fork . # Or copy your existing local repo here. IMPORTANT: making a symlink doesn't work. ``` @@ -174,13 +174,13 @@ godep restore ```sh # To add a new dependency, do: -cd $KPATH/src/github.com/GoogleCloudPlatform/kubernetes +cd $KPATH/src/k8s.io/kubernetes go get path/to/dependency # Change code in Kubernetes to use the dependency. godep save ./... # To update an existing dependency, do: -cd $KPATH/src/github.com/GoogleCloudPlatform/kubernetes +cd $KPATH/src/k8s.io/kubernetes go get -u path/to/dependency # Change code in Kubernetes accordingly if necessary. 
godep update path/to/dependency @@ -224,7 +224,7 @@ $ cd pkg/kubelet $ godep go test # some output from unit tests PASS -ok github.com/GoogleCloudPlatform/kubernetes/pkg/kubelet 0.317s +ok k8s.io/kubernetes/pkg/kubelet 0.317s ``` ## Coverage -- cgit v1.2.3 From 4c0410dd60e9dbbfc4d71f5a2f2f7e00c0f1a0b2 Mon Sep 17 00:00:00 2001 From: Mike Danese Date: Wed, 5 Aug 2015 18:08:26 -0700 Subject: rewrite all links to issues to k8s links --- api-conventions.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/api-conventions.md b/api-conventions.md index 5a1bfe81..bdd38830 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -195,7 +195,7 @@ References in the status of the referee to the referrer may be permitted, when t #### Lists of named subobjects preferred over maps -Discussed in [#2004](https://github.com/GoogleCloudPlatform/kubernetes/issues/2004) and elsewhere. There are no maps of subobjects in any API objects. Instead, the convention is to use a list of subobjects containing name fields. +Discussed in [#2004](http://issue.k8s.io/2004) and elsewhere. There are no maps of subobjects in any API objects. Instead, the convention is to use a list of subobjects containing name fields. 
For example: -- cgit v1.2.3 From 57ff799db24ba9e16a95fcc8ad0558b8623058c9 Mon Sep 17 00:00:00 2001 From: Veres Lajos Date: Sat, 8 Aug 2015 22:29:57 +0100 Subject: typofix - https://github.com/vlajos/misspell_fixer --- development.md | 2 +- making-release-notes.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/development.md b/development.md index 45463293..2929f281 100644 --- a/development.md +++ b/development.md @@ -87,7 +87,7 @@ Note: If you have write access to the main repository at github.com/GoogleCloudP git remote set-url --push upstream no_push ``` -### Commiting changes to your fork +### Committing changes to your fork ```sh git commit diff --git a/making-release-notes.md b/making-release-notes.md index d4ec6ccf..1efab1ac 100644 --- a/making-release-notes.md +++ b/making-release-notes.md @@ -40,7 +40,7 @@ This documents the process for making release notes for a release. Find the most-recent PR that was merged with the previous .0 release. Remember this as $LASTPR. _TODO_: Figure out a way to record this somewhere to save the next release engineer time. -Find the most-recent PR that was merged with the current .0 release. Remeber this as $CURRENTPR. +Find the most-recent PR that was merged with the current .0 release. Remember this as $CURRENTPR. 
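A hedged sketch of one way to recover such a PR number (the subject and number are made up, and the actual release-notes tooling may work differently): GitHub merge commits carry subjects of the form `Merge pull request #NNN from ...`, so the number can be extracted from `git log --merges --pretty=%s` output.

```shell
# Pull the PR number out of a GitHub-style merge-commit subject.
subject='Merge pull request #9123 from someuser/somebranch'
LASTPR=$(printf '%s\n' "$subject" | grep -o '#[0-9]*' | tr -d '#')
echo "$LASTPR"   # → 9123
```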
### 2) Run the release-notes tool -- cgit v1.2.3 From 54575568d1fb2e91b038e65917531c61209d3047 Mon Sep 17 00:00:00 2001 From: Ed Costello Date: Sun, 9 Aug 2015 14:18:06 -0400 Subject: Copy edits for typos --- development.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/development.md b/development.md index 45463293..2929f281 100644 --- a/development.md +++ b/development.md @@ -87,7 +87,7 @@ Note: If you have write access to the main repository at github.com/GoogleCloudP git remote set-url --push upstream no_push ``` -### Commiting changes to your fork +### Committing changes to your fork ```sh git commit -- cgit v1.2.3 From de3a07b932c811b768c513b72d3917718c01c3ab Mon Sep 17 00:00:00 2001 From: Eric Paris Date: Mon, 20 Jul 2015 08:24:20 -0500 Subject: Split hack/{verify,update}-* files so we don't always go build Right now some of the hack/* tools use `go run` and build almost every time. There are some which expect you to have already run `go install`. And in all cases the pre-commit hook, which runs a full build wouldn't want to do either, since it just built! This creates a new hack/after-build/ directory and has the scripts which REQUIRE that the binary already be built. It doesn't test and complain. It just fails miserably. Users should not be in this directory. Users should just use hack/verify-* which will just do the build and then call the "after-build" version. The pre-commit hook or anything which KNOWS the binaries have been built can use the fast version. --- development.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/development.md b/development.md index 2929f281..294f825a 100644 --- a/development.md +++ b/development.md @@ -345,7 +345,7 @@ See [conformance-test.sh](http://releases.k8s.io/HEAD/hack/conformance-test.sh). 
## Regenerating the CLI documentation ```sh -hack/run-gendocs.sh +hack/update-generated-docs.sh ``` -- cgit v1.2.3 From a577fe59554100f7339e4784089095734875afd5 Mon Sep 17 00:00:00 2001 From: Bryan Stenson Date: Tue, 11 Aug 2015 22:36:51 -0700 Subject: create cloudprovider "providers" package move all providers into new package update all references to old package path --- writing-a-getting-started-guide.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/writing-a-getting-started-guide.md b/writing-a-getting-started-guide.md index 04d0d67f..7441474a 100644 --- a/writing-a-getting-started-guide.md +++ b/writing-a-getting-started-guide.md @@ -82,7 +82,7 @@ Just file an issue or chat us on IRC and one of the committers will link to it f These guidelines say *what* to do. See the Rationale section for *why*. - the main reason to add a new development distro is to support a new IaaS provider (VM and - network management). This means implementing a new `pkg/cloudprovider/$IAAS_NAME`. + network management). This means implementing a new `pkg/cloudprovider/providers/$IAAS_NAME`. - Development distros should use Saltstack for Configuration Management. - development distros need to support automated cluster creation, deletion, upgrading, etc. This mean writing scripts in `cluster/$IAAS_NAME`. -- cgit v1.2.3 From 36a2019fe707de032c352ffe1375cf476564ef53 Mon Sep 17 00:00:00 2001 From: Robert Bailey Date: Wed, 12 Aug 2015 13:12:32 -0700 Subject: Update repository links in development.md. --- development.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/development.md b/development.md index 294f825a..db74adaf 100644 --- a/development.md +++ b/development.md @@ -51,7 +51,7 @@ Below, we outline one of the more common git workflows that core developers use. ### Fork the main repository -1. Go to https://github.com/GoogleCloudPlatform/kubernetes +1. Go to https://github.com/kubernetes/kubernetes 2. 
Click the "Fork" button (at the top right) ### Clone your fork @@ -64,7 +64,7 @@ cd $GOPATH/src/k8s.io # Replace "$YOUR_GITHUB_USERNAME" below with your github username git clone https://github.com/$YOUR_GITHUB_USERNAME/kubernetes.git cd kubernetes -git remote add upstream 'https://github.com/GoogleCloudPlatform/kubernetes.git' +git remote add upstream 'https://github.com/kubernetes/kubernetes.git' ``` ### Create a branch and make changes @@ -81,7 +81,7 @@ git fetch upstream git rebase upstream/master ``` -Note: If you have write access to the main repository at github.com/GoogleCloudPlatform/kubernetes, you should modify your git configuration so that you can't accidentally push to upstream: +Note: If you have write access to the main repository at github.com/kubernetes/kubernetes, you should modify your git configuration so that you can't accidentally push to upstream: ```sh git remote set-url --push upstream no_push @@ -166,7 +166,7 @@ export GOPATH=$KPATH 3) Populate your new GOPATH. ```sh -cd $KPATH/src/github.com/GoogleCloudPlatform/kubernetes +cd $KPATH/src/github.com/kubernetes/kubernetes godep restore ``` -- cgit v1.2.3 From 00ce437841e772d61fc732f99860df17bb01d3ea Mon Sep 17 00:00:00 2001 From: goltermann Date: Thu, 13 Aug 2015 11:29:59 -0700 Subject: Adding teams lists to faster_reviews. --- faster_reviews.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/faster_reviews.md b/faster_reviews.md index d28e9b55..3ea030d3 100644 --- a/faster_reviews.md +++ b/faster_reviews.md @@ -187,6 +187,9 @@ things you can do that might help kick a stalled process along: * Ping the assignee by email (many of us have email addresses that are well published or are the same as our GitHub handle @google.com or @redhat.com). + * Ping the [team](https://github.com/orgs/kubernetes/teams) (via @team-name) + that works in the area you're submitting code. 
+ If you think you have fixed all the issues in a round of review, and you haven't heard back, you should ping the reviewer (assignee) on the comment stream with a "please take another look" (PTAL) or similar comment indicating you are done and -- cgit v1.2.3 From bc21d6f1f433a0fdcc8696b1c65a4908cf0b27d5 Mon Sep 17 00:00:00 2001 From: Brian Grant Date: Tue, 11 Aug 2015 06:30:48 +0000 Subject: Update API conventions. Add kubectl conventions. Ref #12322. Fixes #6797. --- api-conventions.md | 122 ++++++++++++++++++++++++++++++++++++------------- kubectl-conventions.md | 115 ++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 205 insertions(+), 32 deletions(-) create mode 100644 kubectl-conventions.md diff --git a/api-conventions.md b/api-conventions.md index bdd38830..75612820 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -33,11 +33,11 @@ Documentation for other releases can be found at API Conventions =============== -Updated: 4/16/2015 +Updated: 8/12/2015 *This document is oriented at users who want a deeper understanding of the Kubernetes API structure, and developers wanting to extend the Kubernetes API. 
An introduction to -using resources with kubectl can be found in (working_with_resources.md).* +using resources with kubectl can be found in [Working with resources](../user-guide/working-with-resources.md).* **Table of Contents** @@ -65,11 +65,14 @@ using resources with kubectl can be found in (working_with_resources.md).* - [Serialization Format](#serialization-format) - [Units](#units) - [Selecting Fields](#selecting-fields) + - [Object references](#object-references) - [HTTP Status codes](#http-status-codes) - [Success codes](#success-codes) - [Error codes](#error-codes) - [Response Status Kind](#response-status-kind) - [Events](#events) + - [Naming conventions](#naming-conventions) + - [Label, selector, and annotation conventions](#label-selector-and-annotation-conventions) @@ -84,7 +87,7 @@ The following terms are defined: * Collections - a list of resources of the same type, which may be queryable * Elements - an individual resource, addressable via a URL -Each resource typically accepts and returns data of a single kind. A kind may be accepted or returned by multiple resources that reflect specific use cases. For instance, the kind "pod" is exposed as a "pods" resource that allows end users to create, update, and delete pods, while a separate "pod status" resource (that acts on "pod" kind) allows automated processes to update a subset of the fields in that resource. A "restart" resource might be exposed for a number of different resources to allow the same action to have different results for each object. +Each resource typically accepts and returns data of a single kind. A kind may be accepted or returned by multiple resources that reflect specific use cases. For instance, the kind "Pod" is exposed as a "pods" resource that allows end users to create, update, and delete pods, while a separate "pod status" resource (that acts on "Pod" kind) allows automated processes to update a subset of the fields in that resource. 
Resource collections should be all lowercase and plural, whereas kinds are CamelCase and singular. @@ -99,7 +102,7 @@ Kinds are grouped into three categories: An object may have multiple resources that clients can use to perform specific actions that create, update, delete, or get. - Examples: `Pods`, `ReplicationControllers`, `Services`, `Namespaces`, `Nodes` + Examples: `Pod`, `ReplicationController`, `Service`, `Namespace`, `Node`. 2. **Lists** are collections of **resources** of one (usually) or more (occasionally) kinds. @@ -117,9 +120,15 @@ Kinds are grouped into three categories: Given their limited scope, they have the same set of limited common metadata as lists. - The "size" action may accept a simple resource that has only a single field as input (the number of things). The "status" kind is returned when errors occur and is not persisted in the system. + For instance, the "Status" kind is returned when errors occur and is not persisted in the system. - Examples: Binding, Status + Many simple resources are "subresources", which are rooted at API paths of specific resources. When resources wish to expose alternative actions or views that are closely coupled to a single resource, they should do so using new sub-resources. Common subresources include: + + * `/binding`: Used to bind a resource representing a user request (e.g., Pod, PersistentVolumeClaim) to a cluster infrastructure resource (e.g., Node, PersistentVolume). + * `/status`: Used to write just the status portion of a resource. For example, the `/pods` endpoint only allows updates to `metadata` and `spec`, since those reflect end-user intent. An automated process should be able to modify status for users to see by sending an updated Pod kind to the server to the "/pods/<name>/status" endpoint - the alternate endpoint allows different rules to be applied to the update, and access to be appropriately restricted. 
+ * `/scale`: Used to read and write the count of a resource in a manner that is independent of the specific resource schema. + + Two additional subresources, `proxy` and `portforward`, provide access to cluster resources as described in [docs/user-guide/accessing-the-cluster.md](../user-guide/accessing-the-cluster.md). The standard REST verbs (defined below) MUST return singular JSON objects. Some API endpoints may deviate from the strict REST pattern and return resources that are not singular JSON objects, such as streams of JSON objects or unstructured text log data. @@ -147,6 +156,7 @@ Every object kind MUST have the following metadata in a nested object field call Every object SHOULD have the following metadata in a nested object field called "metadata": * resourceVersion: a string that identifies the internal version of this object that can be used by clients to determine when objects have changed. This value MUST be treated as opaque by clients and passed unmodified back to the server. Clients should not assume that the resource version has meaning across namespaces, different kinds of resources, or different servers. (see [concurrency control](#concurrency-control-and-consistency), below, for more details) +* generation: a sequence number representing a specific generation of the desired state. Set by the system and monotonically increasing, per-resource. May be compared, such as for RAW and WAW consistency. * creationTimestamp: a string representing an RFC 3339 date of the date and time an object was created * deletionTimestamp: a string representing an RFC 3339 date of the date and time after which this resource will be deleted. This field is set by the server when a graceful deletion is requested by the user, and is not directly settable by a client. The resource will be deleted (no longer visible from resource lists, and not reachable by name) after the time in this field. 
Once set, this value may not be unset or be set further into the future, although it may be shortened or the resource may be deleted prior to this time. * labels: a map of string keys and values that can be used to organize and categorize objects (see [docs/user-guide/labels.md](../user-guide/labels.md)) @@ -172,18 +182,38 @@ Objects that contain both spec and status should not contain additional top-leve ##### Typical status properties -* **phase**: The phase is a simple, high-level summary of the phase of the lifecycle of an object. The phase should progress monotonically. Typical phase values are `Pending` (not yet fully physically realized), `Running` or `Active` (fully realized and active, but not necessarily operating correctly), and `Terminated` (no longer active), but may vary slightly for different types of objects. New phase values should not be added to existing objects in the future. Like other status fields, it must be possible to ascertain the lifecycle phase by observation. Additional details regarding the current phase may be contained in other fields. -* **conditions**: Conditions represent orthogonal observations of an object's current state. Objects may report multiple conditions, and new types of conditions may be added in the future. Condition status values may be `True`, `False`, or `Unknown`. Unlike the phase, conditions are not expected to be monotonic -- their values may change back and forth. A typical condition type is `Ready`, which indicates the object was believed to be fully operational at the time it was last probed. Conditions may carry additional information, such as the last probe time or last transition time. +**Conditions** represent the latest available observations of an object's current state. Objects may report multiple conditions, and new types of conditions may be added in the future. Therefore, conditions are represented using a list/slice, where all have similar structure. 
+
+The `FooCondition` type for some resource type `Foo` may include a subset of the following fields, but must contain at least `type` and `status` fields:
+
+```golang
+  Type               FooConditionType `json:"type" description:"type of Foo condition"`
+  Status             ConditionStatus  `json:"status" description:"status of the condition, one of True, False, Unknown"`
+  LastHeartbeatTime  util.Time        `json:"lastHeartbeatTime,omitempty" description:"last time we got an update on a given condition"`
+  LastTransitionTime util.Time        `json:"lastTransitionTime,omitempty" description:"last time the condition transitioned from one status to another"`
+  Reason             string           `json:"reason,omitempty" description:"one-word CamelCase reason for the condition's last transition"`
+  Message            string           `json:"message,omitempty" description:"human-readable message indicating details about last transition"`
+```
+
+Additional fields may be added in the future.
+
+Conditions should be added to explicitly convey properties that users and components care about rather than requiring those properties to be inferred from other observations.
+
+Condition status values may be `True`, `False`, or `Unknown`. The absence of a condition should be interpreted the same as `Unknown`.
+
+In general, condition values may change back and forth, but some condition transitions may be monotonic, depending on the resource and condition type. However, conditions are observations and not, themselves, state machines, nor do we define comprehensive state machines for objects, nor behaviors associated with state transitions. The system is level-based rather than edge-triggered, and should assume an Open World.

-TODO(@vishh): Reason and Message.
+A typical oscillating condition type is `Ready`, which indicates the object was believed to be fully operational at the time it was last probed. A possible monotonic condition could be `Succeeded`. A `False` status for `Succeeded` would imply failure.
An object that was still active would not have a `Succeeded` condition, or its status would be `Unknown`. -Phases and conditions are observations and not, themselves, state machines, nor do we define comprehensive state machines for objects with behaviors associated with state transitions. The system is level-based and should assume an Open World. Additionally, new observations and details about these observations may be added over time. +Some resources in the v1 API contain fields called **`phase`**, and associated `message`, `reason`, and other status fields. The pattern of using `phase` is deprecated. Newer API types should use conditions instead. Phase was essentially a state-machine enumeration field, that contradicted [system-design principles](../design/principles.md#control-logic) and hampered evolution, since [adding new enum values breaks backward compatibility](api_changes.md). Rather than encouraging clients to infer implicit properties from phases, we intend to explicitly expose the conditions that clients need to monitor. Conditions also have the benefit that it is possible to create some conditions with uniform meaning across all resource types, while still exposing others that are unique to specific resource types. See [#7856](http://issues.k8s.io/7856) for more details and discussion. -In order to preserve extensibility, in the future, we intend to explicitly convey properties that users and components care about rather than requiring those properties to be inferred from observations. +In condition types, and everywhere else they appear in the API, **`Reason`** is intended to be a one-word, CamelCase representation of the category of cause of the current status, and **`Message`** is intended to be a human-readable phrase or sentence, which may contain specific details of the individual occurrence. 
`Reason` is intended to be used in concise output, such as one-line `kubectl get` output, and in summarizing occurrences of causes, whereas `Message` is intended to be presented to users in detailed status explanations, such as `kubectl describe` output. -Note that historical information status (e.g., last transition time, failure counts) is only provided at best effort, and is not guaranteed to not be lost. +Historical information status (e.g., last transition time, failure counts) is only provided with reasonable effort, and is not guaranteed to not be lost. -Status information that may be large (especially unbounded in size, such as lists of references to other objects -- see below) and/or rapidly changing, such as [resource usage](../design/resources.md#usage-data), should be put into separate objects, with possibly a reference from the original object. This helps to ensure that GETs and watch remain reasonably efficient for the majority of clients, which may not need that data. +Status information that may be large (especially proportional in size to collections of other resources, such as lists of references to other objects -- see below) and/or rapidly changing, such as [resource usage](../design/resources.md#usage-data), should be put into separate objects, with possibly a reference from the original object. This helps to ensure that GETs and watch remain reasonably efficient for the majority of clients, which may not need that data. + +Some resources report the `observedGeneration`, which is the `generation` most recently observed by the component responsible for acting upon changes to the desired state of the resource. This can be used, for instance, to ensure that the reported status reflects the most recent desired status. #### References to related objects @@ -213,7 +243,7 @@ ports: containerPort: 80 ``` -This rule maintains the invariant that all JSON/YAML keys are fields in API objects. 
The only exceptions are pure maps in the API (currently, labels, selectors, and annotations), as opposed to sets of subobjects. +This rule maintains the invariant that all JSON/YAML keys are fields in API objects. The only exceptions are pure maps in the API (currently, labels, selectors, annotations, data), as opposed to sets of subobjects. #### Constants @@ -249,19 +279,7 @@ API resources should use the traditional REST pattern: * DELETE /<resourceNamePlural>/<name> - Delete the single resource with the given name. DeleteOptions may specify gracePeriodSeconds, the optional duration in seconds before the object should be deleted. Individual kinds may declare fields which provide a default grace period, and different kinds may have differing kind-wide default grace periods. A user provided grace period overrides a default grace period, including the zero grace period ("now"). * PUT /<resourceNamePlural>/<name> - Update or create the resource with the given name with the JSON object provided by the client. * PATCH /<resourceNamePlural>/<name> - Selectively modify the specified fields of the resource. See more information [below](#patch). - -Kubernetes by convention exposes additional verbs as new root endpoints with singular names. Examples: - -* GET /watch/<resourceNamePlural> - Receive a stream of JSON objects corresponding to changes made to any resource of the given kind over time. -* GET /watch/<resourceNamePlural>/<name> - Receive a stream of JSON objects corresponding to changes made to the named resource of the given kind over time. - -These are verbs which change the fundamental type of data returned (watch returns a stream of JSON instead of a single JSON object). Support of additional verbs is not required for all object types. - -Two additional verbs `redirect` and `proxy` provide access to cluster resources as described in [docs/user-guide/accessing-the-cluster.md](../user-guide/accessing-the-cluster.md). 
-
-When resources wish to expose alternative actions that are closely coupled to a single resource, they should do so using new sub-resources. An example is allowing automated processes to update the "status" field of a Pod. The `/pods` endpoint only allows updates to "metadata" and "spec", since those reflect end-user intent. An automated process should be able to modify status for users to see by sending an updated Pod kind to the server to the "/pods/<name>/status" endpoint - the alternate endpoint allows different rules to be applied to the update, and access to be appropriately restricted. Likewise, some actions like "stop" or "scale" are best represented as REST sub-resources that are POSTed to. The POST action may require a simple kind to be provided if the action requires parameters, or function without a request body.
-
-TODO: more documentation of Watch
+* GET /<resourceNamePlural>?watch=true - Receive a stream of JSON objects corresponding to changes made to any resource of the given kind over time.

### PATCH operations

@@ -423,7 +441,6 @@ APIs may return alternative representations of any resource in response to an Ac

All dates should be serialized as RFC3339 strings.

-
## Units

Units must either be explicit in the field name (e.g., `timeoutSeconds`), or must be specified as part of the value (e.g., `resource.Quantity`). Which approach is preferred is TBD.

@@ -431,11 +448,16 @@ Units must either be explicit in the field name (e.g., `timeoutSeconds`), or mus

## Selecting Fields

-Some APIs may need to identify which field in a JSON object is invalid, or to reference a value to extract from a separate resource. The current recommendation is to use standard JavaScript syntax for accessing that field, assuming the JSON object was transformed into a JavaScript object.
+Some APIs may need to identify which field in a JSON object is invalid, or to reference a value to extract from a separate resource.
The current recommendation is to use standard JavaScript syntax for accessing that field, assuming the JSON object was transformed into a JavaScript object, without the leading dot, such as `metadata.name`. Examples: -* Find the field "current" in the object "state" in the second item in the array "fields": `fields[0].state.current` +* Find the field "current" in the object "state" in the second item in the array "fields": `fields[1].state.current` + +## Object references + +Object references should either be called `fooName` if referring to an object of kind `Foo` by just the name (within the current namespace, if a namespaced resource), or should be called `fooRef`, and should contain a subset of the fields of the `ObjectReference` type. + TODO: Plugins, extensions, nested kinds, headers @@ -561,7 +583,7 @@ $ curl -v -k -H "Authorization: Bearer WhCDvq4VPpYhrcfmF6ei7V9qlbqTubUc" https:/ `message` may contain human-readable description of the error -`reason` may contain a machine-readable description of why this operation is in the `Failure` status. If this value is empty there is no information available. The `reason` clarifies an HTTP status code but does not override it. +`reason` may contain a machine-readable, one-word, CamelCase description of why this operation is in the `Failure` status. If this value is empty there is no information available. The `reason` clarifies an HTTP status code but does not override it. `details` may contain extended data associated with the reason. Each reason may define its own extended details. This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type. 
@@ -646,7 +668,43 @@ Possible values for the `reason` and `details` fields: ## Events -TODO: Document events (refer to another doc for details) +Events are complementary to status information, since they can provide some historical information about status and occurrences in addition to current or previous status. Generate events for situations users or administrators should be alerted about. + +Choose a unique, specific, short, CamelCase reason for each event category. For example, `FreeDiskSpaceInvalid` is a good event reason because it is likely to refer to just one situation, but `Started` is not a good reason because it doesn't sufficiently indicate what started, even when combined with other event fields. + +`Error creating foo` or `Error creating foo %s` would be appropriate for an event message, with the latter being preferable, since it is more informational. + +Accumulate repeated events in the client, especially for frequent events, to reduce data volume, load on the system, and noise exposed to users. + +## Naming conventions + +* `Minion` has been deprecated in favor of `Node`. Use `Node` where referring to the node resource in the context of the cluster. Use `Host` where referring to properties of the individual physical/virtual system, such as `hostname`, `hostPath`, `hostNetwork`, etc. +* `FooController` is a deprecated kind naming convention. Name the kind after the thing being controlled instead (e.g., `Job` rather than `JobController`). +* The name of a field that specifies the time at which `something` occurs should be called `somethingTime`. Do not use `stamp` (e.g., `creationTimestamp`). +* Do not use abbreviations in the API, except where they are extremely commonly used, such as "id", "args", or "stdin". +* Acronyms should similarly only be used when extremely commonly known. All letters in the acronym should have the same case, using the appropriate case for the situation. 
For example, at the beginning of a field name, the acronym should be all lowercase, such as "httpGet". Where used as a constant, all letters should be uppercase, such as "TCP" or "UDP".
+
+## Label, selector, and annotation conventions
+
+Labels are the domain of users. They are intended to facilitate organization and management of API resources using attributes that are meaningful to users, as opposed to meaningful to the system. Think of them as user-created mp3 or email inbox labels, as opposed to the directory structure used by a program to store its data. The former enables the user to apply an arbitrary ontology, whereas the latter is implementation-centric and inflexible. Users will use labels to select resources to operate on, display label values in CLI/UI columns, etc. Users should always retain full power and flexibility over the label schemas they apply in their namespaces.
+
+However, we should support conveniences for common cases by default. For example, what we now do in ReplicationController is automatically set the RC's selector and labels to the labels in the pod template by default, if they are not already set. That ensures that the selector will match the template, and that the RC can be managed using the same labels as the pods it creates. Note that once we generalize selectors, it won't necessarily be possible to unambiguously generate labels that match an arbitrary selector.
+
+If the user wants to apply additional labels to the pods that they don't select upon, such as to facilitate adoption of pods or in the expectation that some label values will change, they can set the selector to a subset of the pod labels. Similarly, the RC's labels could be initialized to a subset of the pod template's labels, or could include additional/different labels.
+
+For disciplined users managing resources within their own namespaces, it's not that hard to consistently apply schemas that ensure uniqueness.
One just needs to ensure that at least one value of some label key in common differs compared to all other comparable resources. We could/should provide a verification tool to check that. However, development of conventions similar to the examples in [Labels](../user-guide/labels.md) makes uniqueness straightforward. Furthermore, relatively narrowly used namespaces (e.g., per environment, per application) can be used to reduce the set of resources that could potentially cause overlap.
+
+In cases where users could be running miscellaneous examples with inconsistent schemas, or where tooling or components need to programmatically generate new objects to be selected, there needs to be a straightforward way to generate unique label sets. A simple way to ensure uniqueness of the set is to ensure uniqueness of a single label value, such as by using a resource name, uid, resource hash, or generation number.
+
+Problems with uids and hashes, however, include that they have no semantic meaning to the user, are not memorable nor readily recognizable, and are not predictable. Lack of predictability obstructs use cases such as creation of a replication controller from a pod, as people want to do when exploring the system, bootstrapping a self-hosted cluster, or deleting and re-creating an RC that adopts the pods of the previous one, such as to rename it. Generation numbers are more predictable and much clearer, assuming there is a logical sequence. Fortunately, for deployments that's the case. For jobs, use of creation timestamps is common internally. Users should always be able to turn off auto-generation, in order to permit some of the scenarios described above. Note that auto-generated labels will also become one more field that needs to be stripped out when cloning a resource, within a namespace, in a new namespace, in a new cluster, etc., and will need to be ignored when updating a resource via patch or read-modify-write sequence.
+ +Inclusion of a system prefix in a label key is fairly hostile to UX. A prefix is only necessary in the case that the user cannot choose the label key, in order to avoid collisions with user-defined labels. However, I firmly believe that the user should always be allowed to select the label keys to use on their resources, so it should always be possible to override default label keys. + +Therefore, resources supporting auto-generation of unique labels should have a `uniqueLabelKey` field, so that the user could specify the key if they wanted to, but if unspecified, it could be set by default, such as to the resource type, like job, deployment, or replicationController. The value would need to be at least spatially unique, and perhaps temporally unique in the case of job. + +Annotations have very different intended usage from labels. We expect them to be primarily generated and consumed by tooling and system extensions. I'm inclined to generalize annotations to permit them to directly store arbitrary json. Rigid names and name prefixes make sense, since they are analogous to API fields. + +In fact, experimental API fields, including to represent fields of newer alpha/beta API versions in the older, stable storage version, may be represented as annotations with the prefix `experimental.kubernetes.io/`. diff --git a/kubectl-conventions.md b/kubectl-conventions.md new file mode 100644 index 00000000..e5d1df75 --- /dev/null +++ b/kubectl-conventions.md @@ -0,0 +1,115 @@ + + + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

PLEASE NOTE: This document applies to the HEAD of the source tree

+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + +The latest 1.0.x release of this document can be found +[here](http://releases.k8s.io/release-1.0/docs/devel/kubectl-conventions.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + + + + +Kubectl Conventions +=================== + +Updated: 8/12/2015 + +**Table of Contents** + + + - [Principles](#principles) + - [Command conventions](#command-conventions) + - [Flag conventions](#flag-conventions) + - [Output conventions](#output-conventions) + - [Documentation conventions](#documentation-conventions) + + + +## Principles + +* Strive for consistency across commands +* Explicit should always override implicit + * Environment variables should override default values + * Command-line flags should override default values and environment variables + * --namespace should also override the value specified in a specified resource + +## Command conventions + +* Command names are all lowercase, and hyphenated if multiple words. +* kubectl VERB NOUNs for commands that apply to multiple resource types +* NOUNs may be specified as TYPE name1 name2 ... or TYPE/name1 TYPE/name2; TYPE is omitted when only a single type is expected +* Resource types are all lowercase, with no hyphens; both singular and plural forms are accepted +* NOUNs may also be specified by one or more file arguments: -f file1 -f file2 ... +* Resource types may have 2- or 3-letter aliases. +* Business logic should be decoupled from the command framework, so that it can be reused independently of kubectl, cobra, etc. 
+ * Ideally, commonly needed functionality would be implemented server-side in order to avoid problems typical of "fat" clients and to make it readily available to non-Go clients +* Commands that generate resources, such as `run` or `expose`, should obey the following conventions: + * Flags should be converted to a parameter Go map or json map prior to invoking the generator + * The generator must be versioned so that users depending on a specific behavior may pin to that version, via `--generator=` + * Generation should be decoupled from creation + * `--dry-run` should output the resource that would be created, without creating it +* A command group (e.g., `kubectl config`) may be used to group related non-standard commands, such as custom generators, mutations, and computations + +## Flag conventions + +* Flags are all lowercase, with words separated by hyphens +* Flag names and single-character aliases should have the same meaning across all commands +* Command-line flags corresponding to API fields should accept API enums exactly (e.g., --restart=Always) + +## Output conventions + +* By default, output is intended for humans rather than programs + * However, affordances are made for simple parsing of `get` output +* Only errors should be directed to stderr +* `get` commands should output one row per resource, and one resource per row + * Column titles and values should not contain spaces in order to facilitate commands that break lines into fields: cut, awk, etc. 
+ * By default, `get` output should fit within about 80 columns + * Eventually we could perhaps auto-detect width + * `-o wide` may be used to display additional columns + * The first column should be the resource name, titled `NAME` (may change this to an abbreviation of resource type) + * NAMESPACE should be displayed as the first column when --all-namespaces is specified + * The last default column should be time since creation, titled `AGE` + * `-Lkey` should append a column containing the value of label with key `key`, with `<none>` if not present + * json, yaml, Go template, and jsonpath template formats should be supported and encouraged for subsequent processing + * Users should use --api-version or --output-version to ensure the output uses the version they expect +* `describe` commands may output on multiple lines and may include information from related resources, such as events. Describe should add additional information from related resources that a normal user may need to know - if a user would always run "describe resource1" and then immediately want to run a "get type2" or "describe resource2", consider including that info. Examples: persistent volume claims for pods that reference claims, events for most resources, nodes and the pods scheduled on them. When fetching related resources, a targeted field selector should be used in favor of client-side filtering of related resources. +* Mutations should output TYPE/name verbed by default, where TYPE is singular; `-o name` may be used to just display TYPE/name, which may be used to specify resources in other commands + +## Documentation conventions + +* Commands are documented using Cobra; docs are then auto-generated by hack/run-gendocs.sh.
+ * Use should contain a short usage string for the most common use case(s), not an exhaustive specification + * Short should contain a one-line explanation of what the command does + * Long may contain multiple lines, including additional information about input, output, commonly used flags, etc. + * Example should contain examples + * Start commands with `$` + * A comment should precede each example command, and should begin with `#` +* Use "FILENAME" for filenames +* Use "TYPE" for the particular flavor of resource type accepted by kubectl, rather than "RESOURCE" or "KIND" +* Use "NAME" for resource names + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/kubectl-conventions.md?pixel)]() + -- cgit v1.2.3 From d4d6d71afde5f59e7098c12c14c154cd62930531 Mon Sep 17 00:00:00 2001 From: Mike Danese Date: Fri, 14 Aug 2015 13:54:04 -0700 Subject: remove contrib/submit-queue as it is moving to the contrib repo --- pull-requests.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/pull-requests.md b/pull-requests.md index 6d2eb597..126b8996 100644 --- a/pull-requests.md +++ b/pull-requests.md @@ -52,14 +52,14 @@ Life of a Pull Request Unless in the last few weeks of a milestone when we need to reduce churn and stabilize, we aim to be always accepting pull requests. -Either the [on call](https://github.com/GoogleCloudPlatform/kubernetes/wiki/Kubernetes-on-call-rotation) manually or the [submit queue](../../contrib/submit-queue/) automatically will manage merging PRs. +Either the [on call](https://github.com/GoogleCloudPlatform/kubernetes/wiki/Kubernetes-on-call-rotation) manually or the [submit queue](https://github.com/kubernetes/contrib/tree/master/submit-queue) automatically will manage merging PRs.
There are several requirements for the submit queue to work: * Author must have signed CLA ("cla: yes" label added to PR) * No changes can be made since last lgtm label was applied * k8s-bot must have reported the GCE E2E build and test steps passed (Travis, Shippable and Jenkins build) -Additionally, for infrequent or new contributors, we require the on call to apply the "ok-to-merge" label manually. This is gated by the [whitelist](../../contrib/submit-queue/whitelist.txt). +Additionally, for infrequent or new contributors, we require the on call to apply the "ok-to-merge" label manually. This is gated by the [whitelist](https://github.com/kubernetes/contrib/tree/master/submit-queue/whitelist.txt). -- cgit v1.2.3 From 0eb6b6ec3d1bba2e956e63e6309b3d23b9e6c8c4 Mon Sep 17 00:00:00 2001 From: Eric Paris Date: Fri, 14 Aug 2015 18:50:03 -0400 Subject: TYPO: fix documentation to point at update-generated-docs.sh --- kubectl-conventions.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kubectl-conventions.md b/kubectl-conventions.md index e5d1df75..5739708c 100644 --- a/kubectl-conventions.md +++ b/kubectl-conventions.md @@ -99,7 +99,7 @@ Updated: 8/12/2015 ## Documentation conventions -* Commands are documented using Cobra; docs are then auto-generated by hack/run-gendocs.sh. +* Commands are documented using Cobra; docs are then auto-generated by `hack/update-generated-docs.sh`. * Use should contain a short usage string for the most common use case(s), not an exhaustive specification * Short should contain a one-line explanation of what the command does * Long may contain multiple lines, including additional information about input, output, commonly used flags, etc.
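The Use/Short/Long/Example conventions above map directly onto fields of the same names on Cobra's command type. A sketch of how they fit together — the `command` struct here is a stand-in for illustration (not the real `cobra.Command`), and the command shown is only an example:

```go
package main

import "fmt"

// command is a stand-in struct for illustration only; real kubectl code sets
// the fields of the same names on Cobra's cobra.Command.
type command struct {
	Use     string // short usage string for the most common use case(s)
	Short   string // one-line explanation of what the command does
	Long    string // may span multiple lines with extra detail
	Example string // a `#` comment, then a `$`-prefixed command
}

func main() {
	logs := command{
		Use:   "logs [-f] POD [-c CONTAINER]",
		Short: "Print the logs for a container in a pod.",
		Long:  "Print the logs for a container in a pod.\nIf the pod has only one container, the container name is optional.",
		Example: "# Return snapshot logs from pod nginx with only one container\n" +
			"$ kubectl logs nginx",
	}
	fmt.Println(logs.Short)
}
```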
-- cgit v1.2.3 From abb5b4de722f05ed7c24a3cfc5017c71bb2252f2 Mon Sep 17 00:00:00 2001 From: Patrick Flor Date: Mon, 17 Aug 2015 09:17:03 -0700 Subject: Update dev docs to note new coveralls URL (also noting old URL for interested parties and future historians) --- development.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/development.md b/development.md index db74adaf..a266f7cb 100644 --- a/development.md +++ b/development.md @@ -249,7 +249,7 @@ KUBE_COVER=y hack/test-go.sh pkg/kubectl Multiple arguments can be passed, in which case the coverage results will be combined for all tests run. -Coverage results for the project can also be viewed on [Coveralls](https://coveralls.io/r/GoogleCloudPlatform/kubernetes), and are continuously updated as commits are merged. Additionally, all pull requests which spawn a Travis build will report unit test coverage results to Coveralls. +Coverage results for the project can also be viewed on [Coveralls](https://coveralls.io/r/kubernetes/kubernetes), and are continuously updated as commits are merged. Additionally, all pull requests which spawn a Travis build will report unit test coverage results to Coveralls. Coverage reports from before the Kubernetes Github organization was created can be found [here](https://coveralls.io/r/GoogleCloudPlatform/kubernetes). ## Integration tests -- cgit v1.2.3 From d02af2b80ce483b79cd57e2b5bde3040f12de0d4 Mon Sep 17 00:00:00 2001 From: Brian Grant Date: Tue, 18 Aug 2015 23:29:40 +0000 Subject: Add duration naming conventions. --- api-conventions.md | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/api-conventions.md b/api-conventions.md index 75612820..1730ada8 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -443,7 +443,7 @@ All dates should be serialized as RFC3339 strings. ## Units -Units must either be explicit in the field name (e.g., `timeoutSeconds`), or must be specified as part of the value (e.g., `resource.Quantity`). 
Which approach is preferred is TBD. +Units must either be explicit in the field name (e.g., `timeoutSeconds`), or must be specified as part of the value (e.g., `resource.Quantity`). Which approach is preferred is TBD, though currently we use the `fooSeconds` convention for durations. ## Selecting Fields @@ -681,6 +681,10 @@ Accumulate repeated events in the client, especially for frequent events, to red * `Minion` has been deprecated in favor of `Node`. Use `Node` where referring to the node resource in the context of the cluster. Use `Host` where referring to properties of the individual physical/virtual system, such as `hostname`, `hostPath`, `hostNetwork`, etc. * `FooController` is a deprecated kind naming convention. Name the kind after the thing being controlled instead (e.g., `Job` rather than `JobController`). * The name of a field that specifies the time at which `something` occurs should be called `somethingTime`. Do not use `stamp` (e.g., `creationTimestamp`). +* We use the `fooSeconds` convention for durations, as discussed in the [units subsection](#units). + * `fooPeriodSeconds` is preferred for periodic intervals and other waiting periods (e.g., over `fooIntervalSeconds`). + * `fooTimeoutSeconds` is preferred for inactivity/unresponsiveness deadlines. + * `fooDeadlineSeconds` is preferred for activity completion deadlines. * Do not use abbreviations in the API, except where they are extremely commonly used, such as "id", "args", or "stdin". * Acronyms should similarly only be used when extremely commonly known. All letters in the acronym should have the same case, using the appropriate case for the situation. For example, at the beginning of a field name, the acronym should be all lowercase, such as "httpGet". Where used as a constant, all letters should be uppercase, such as "TCP" or "UDP". 
-- cgit v1.2.3 From 66a9ff2d9b98120b3c9afe832f72e35dc22d301a Mon Sep 17 00:00:00 2001 From: dinghaiyang Date: Thu, 20 Aug 2015 22:15:21 +0800 Subject: Repalce limits with requests in scheduler documentation. Due to #11713 --- scheduler.md | 6 +++--- scheduler_algorithm.md | 6 +++--- 2 files changed, 6 insertions(+), 6 deletions(-) mode change 100644 => 100755 scheduler.md mode change 100644 => 100755 scheduler_algorithm.md diff --git a/scheduler.md b/scheduler.md old mode 100644 new mode 100755 index b2a137d5..c9d32aa4 --- a/scheduler.md +++ b/scheduler.md @@ -42,13 +42,13 @@ indicating where the Pod should be scheduled. The scheduler tries to find a node for each Pod, one at a time, as it notices these Pods via watch. There are three steps. First it applies a set of "predicates" that filter out -inappropriate nodes. For example, if the PodSpec specifies resource limits, then the scheduler +inappropriate nodes. For example, if the PodSpec specifies resource requests, then the scheduler will filter out nodes that don't have at least that much resources available (computed -as the capacity of the node minus the sum of the resource limits of the containers that +as the capacity of the node minus the sum of the resource requests of the containers that are already running on the node). Second, it applies a set of "priority functions" that rank the nodes that weren't filtered out by the predicate check. For example, it tries to spread Pods across nodes while at the same time favoring the least-loaded -nodes (where "load" here is sum of the resource limits of the containers running on the node, +nodes (where "load" here is sum of the resource requests of the containers running on the node, divided by the node's capacity). Finally, the node with the highest priority is chosen (or, if there are multiple such nodes, then one of them is chosen at random). 
The code diff --git a/scheduler_algorithm.md b/scheduler_algorithm.md old mode 100644 new mode 100755 index ab8e69ef..7964ab33 --- a/scheduler_algorithm.md +++ b/scheduler_algorithm.md @@ -37,10 +37,10 @@ For each unscheduled Pod, the Kubernetes scheduler tries to find a node across t ## Filtering the nodes -The purpose of filtering the nodes is to filter out the nodes that do not meet certain requirements of the Pod. For example, if the free resource on a node (measured by the capacity minus the sum of the resource limits of all the Pods that already run on the node) is less than the Pod's required resource, the node should not be considered in the ranking phase so it is filtered out. Currently, there are several "predicates" implementing different filtering policies, including: +The purpose of filtering the nodes is to filter out the nodes that do not meet certain requirements of the Pod. For example, if the free resource on a node (measured by the capacity minus the sum of the resource requests of all the Pods that already run on the node) is less than the Pod's required resource, the node should not be considered in the ranking phase so it is filtered out. Currently, there are several "predicates" implementing different filtering policies, including: - `NoDiskConflict`: Evaluate if a pod can fit due to the volumes it requests, and those that are already mounted. -- `PodFitsResources`: Check if the free resource (CPU and Memory) meets the requirement of the Pod. The free resource is measured by the capacity minus the sum of limits of all Pods on the node. +- `PodFitsResources`: Check if the free resource (CPU and Memory) meets the requirement of the Pod. The free resource is measured by the capacity minus the sum of requests of all Pods on the node. To learn more about the resource QoS in Kubernetes, please check [QoS proposal](../proposals/resource-qos.md). - `PodFitsPorts`: Check if any HostPort required by the Pod is already occupied on the node. 
- `PodFitsHost`: Filter out all nodes except the one specified in the PodSpec's NodeName field. - `PodSelectorMatches`: Check if the labels of the node match the labels specified in the Pod's `nodeSelector` field ([Here](../user-guide/node-selection/) is an example of how to use `nodeSelector` field). @@ -58,7 +58,7 @@ After the scores of all nodes are calculated, the node with highest score is cho Currently, Kubernetes scheduler provides some practical priority functions, including: -- `LeastRequestedPriority`: The node is prioritized based on the fraction of the node that would be free if the new Pod were scheduled onto the node. (In other words, (capacity - sum of limits of all Pods already on the node - limit of Pod that is being scheduled) / capacity). CPU and memory are equally weighted. The node with the highest free fraction is the most preferred. Note that this priority function has the effect of spreading Pods across the nodes with respect to resource consumption. +- `LeastRequestedPriority`: The node is prioritized based on the fraction of the node that would be free if the new Pod were scheduled onto the node. (In other words, (capacity - sum of requests of all Pods already on the node - request of Pod that is being scheduled) / capacity). CPU and memory are equally weighted. The node with the highest free fraction is the most preferred. Note that this priority function has the effect of spreading Pods across the nodes with respect to resource consumption. - `CalculateNodeLabelPriority`: Prefer nodes that have the specified label. - `BalancedResourceAllocation`: This priority function tries to put the Pod on a node such that the CPU and Memory utilization rate is balanced after the Pod is deployed. - `CalculateSpreadPriority`: Spread Pods by minimizing the number of Pods belonging to the same service on the same node. 
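The two calculations described above — the `PodFitsResources` predicate and the `LeastRequestedPriority` score — can be sketched as follows. This is a drastically simplified single-node illustration, not the actual scheduler implementation:

```go
package main

import "fmt"

// resources is a simplified stand-in for a node's capacity, or for the sum of
// a set of pods' resource requests (CPU in millicores, memory in bytes).
type resources struct {
	MilliCPU int64
	Memory   int64
}

// podFitsResources mirrors the predicate described above: the pod fits if
// capacity minus the requests of already-running pods covers its own request.
func podFitsResources(podRequest, capacity, requested resources) bool {
	return podRequest.MilliCPU <= capacity.MilliCPU-requested.MilliCPU &&
		podRequest.Memory <= capacity.Memory-requested.Memory
}

// leastRequestedScore mirrors the priority described above:
// (capacity - sum of requests - request of the pod being scheduled) / capacity,
// with CPU and memory weighted equally, scaled to 0-10 (higher is better).
func leastRequestedScore(podRequest, capacity, requested resources) int64 {
	cpuFree := float64(capacity.MilliCPU-requested.MilliCPU-podRequest.MilliCPU) / float64(capacity.MilliCPU)
	memFree := float64(capacity.Memory-requested.Memory-podRequest.Memory) / float64(capacity.Memory)
	return int64((cpuFree + memFree) / 2 * 10)
}

func main() {
	capacity := resources{MilliCPU: 4000, Memory: 8 << 30}  // 4 cores, 8Gi
	requested := resources{MilliCPU: 1000, Memory: 2 << 30} // already running
	pod := resources{MilliCPU: 1000, Memory: 2 << 30}       // being scheduled
	fmt.Println(podFitsResources(pod, capacity, requested))  // true
	fmt.Println(leastRequestedScore(pod, capacity, requested)) // 5
}
```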
-- cgit v1.2.3 From 96988acedbfedc32087610a04d4b8fb6ead25b4e Mon Sep 17 00:00:00 2001 From: derekwaynecarr Date: Mon, 10 Aug 2015 12:22:44 -0400 Subject: Document need to run generated deep copy --- api_changes.md | 16 ++++++++++++++++ 1 file changed, 16 insertions(+) diff --git a/api_changes.md b/api_changes.md index 687af00a..5c2c4a2a 100644 --- a/api_changes.md +++ b/api_changes.md @@ -297,6 +297,22 @@ generator to create it from scratch. Unsurprisingly, adding manually written conversion also requires you to add tests to `pkg/api//conversion_test.go`. +## Edit deep copy files + +At this point you have both the versioned API changes and the internal +structure changes done. You now need to generate code to handle deep copy +of your versioned api objects. + +The deep copy code resides with each versioned API: + - `pkg/api//deep_copy_generated.go` containing auto-generated copy functions + +To regenerate them: + - run + +```sh +hack/update-generated-deep-copies.sh +``` + ## Update the fuzzer Part of our testing regimen for APIs is to "fuzz" (fill with random values) API -- cgit v1.2.3 From c90e062aec5f1d1deb2f2c384c9dc6e65845e5b2 Mon Sep 17 00:00:00 2001 From: Brian Grant Date: Wed, 19 Aug 2015 18:27:54 +0000 Subject: Added more API conventions. --- api-conventions.md | 7 +++-- api_changes.md | 75 +++++++++++++++++++++++++++++++++++++++++------------- 2 files changed, 63 insertions(+), 19 deletions(-) diff --git a/api-conventions.md b/api-conventions.md index 75612820..e68f53c7 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -33,7 +33,7 @@ Documentation for other releases can be found at API Conventions =============== -Updated: 8/12/2015 +Updated: 8/24/2015 *This document is oriented at users who want a deeper understanding of the Kubernetes API structure, and developers wanting to extend the Kubernetes API. 
An introduction to @@ -219,7 +219,7 @@ Some resources report the `observedGeneration`, which is the `generation` most r References to loosely coupled sets of objects, such as [pods](../user-guide/pods.md) overseen by a [replication controller](../user-guide/replication-controller.md), are usually best referred to using a [label selector](../user-guide/labels.md). In order to ensure that GETs of individual objects remain bounded in time and space, these sets may be queried via separate API queries, but will not be expanded in the referring object's status. -References to specific objects, especially specific resource versions and/or specific fields of those objects, are specified using the `ObjectReference` type. Unlike partial URLs, the ObjectReference type facilitates flexible defaulting of fields from the referring object or other contextual information. +References to specific objects, especially specific resource versions and/or specific fields of those objects, are specified using the `ObjectReference` type (or other types representing strict subsets of it). Unlike partial URLs, the ObjectReference type facilitates flexible defaulting of fields from the referring object or other contextual information. References in the status of the referee to the referrer may be permitted, when the references are one-to-one and do not need to be frequently updated, particularly in an edge-based manner. @@ -678,11 +678,14 @@ Accumulate repeated events in the client, especially for frequent events, to red ## Naming conventions +* Go field names must be CamelCase. JSON field names must be camelCase. Other than capitalization of the initial letter, the two should almost always match. No underscores nor dashes in either. +* Field and resource names should be declarative, not imperative (DoSomething, SomethingDoer). * `Minion` has been deprecated in favor of `Node`. Use `Node` where referring to the node resource in the context of the cluster. 
Use `Host` where referring to properties of the individual physical/virtual system, such as `hostname`, `hostPath`, `hostNetwork`, etc. * `FooController` is a deprecated kind naming convention. Name the kind after the thing being controlled instead (e.g., `Job` rather than `JobController`). * The name of a field that specifies the time at which `something` occurs should be called `somethingTime`. Do not use `stamp` (e.g., `creationTimestamp`). * Do not use abbreviations in the API, except where they are extremely commonly used, such as "id", "args", or "stdin". * Acronyms should similarly only be used when extremely commonly known. All letters in the acronym should have the same case, using the appropriate case for the situation. For example, at the beginning of a field name, the acronym should be all lowercase, such as "httpGet". Where used as a constant, all letters should be uppercase, such as "TCP" or "UDP". +* The name of a field referring to another resource of kind `Foo` by name should be called `fooName`. The name of a field referring to another resource of kind `Foo` by ObjectReference (or subset thereof) should be called `fooRef`. ## Label, selector, and annotation conventions diff --git a/api_changes.md b/api_changes.md index 687af00a..72c38b7f 100644 --- a/api_changes.md +++ b/api_changes.md @@ -33,6 +33,13 @@ Documentation for other releases can be found at # So you want to change the API? +Before attempting a change to the API, you should familiarize yourself +with a number of existing API types and with the [API +conventions](api-conventions.md). If creating a new API +type/resource, we also recommend that you first send a PR containing +just a proposal for the new API types, and that you initially target +the experimental API (pkg/expapi). + The Kubernetes API has two major components - the internal structures and the versioned APIs. 
The versioned APIs are intended to be stable, while the internal structures are implemented to best reflect the needs of the Kubernetes @@ -92,9 +99,12 @@ backward-compatibly. Before talking about how to make API changes, it is worthwhile to clarify what we mean by API compatibility. An API change is considered backward-compatible if it: - * adds new functionality that is not required for correct behavior - * does not change existing semantics - * does not change existing defaults + * adds new functionality that is not required for correct behavior (e.g., + does not add a new required field) + * does not change existing semantics, including: + * default values and behavior + * interpretation of existing API types, fields, and values + * which fields are required and which are not Put another way: @@ -104,11 +114,11 @@ Put another way: degrade behavior) when issued against servers that do not include your change. 3. It must be possible to round-trip your change (convert to different API versions and back) with no loss of information. +4. Existing clients need not be aware of your change in order for them to continue + to function as they did previously, even when your change is utilized If your change does not meet these criteria, it is not considered strictly -compatible. There are times when this might be OK, but mostly we want changes -that meet this definition. If you think you need to break compatibility, you -should talk to the Kubernetes team first. +compatible. Let's consider some examples. In a hypothetical API (assume we're at version v6), the `Frobber` struct looks something like this: @@ -179,14 +189,43 @@ API call might POST an object in API v7beta1 format, which uses the cleaner form (since v7beta1 is "beta"). When the user reads the object back in the v7beta1 API it would be unacceptable to have lost all but `Params[0]`. This means that, even though it is ugly, a compatible change must be made to the v6 -API. +API. 
However, this is very challenging to do correctly. It generally requires +multiple representations of the same information in the same API resource, which +need to be kept in sync in the event that either is changed. However, if +the new representation is more expressive than the old, this breaks +backward compatibility, since clients that only understood the old representation +would not be aware of the new representation nor its semantics. Examples of +proposals that have run into this challenge include [generalized label +selectors](http://issues.k8s.io/341) and [pod-level security +context](http://prs.k8s.io/12823). As another interesting example, enumerated values provide a unique challenge. Adding a new value to an enumerated set is *not* a compatible change. Clients which assume they know how to handle all possible values of a given field will not be able to handle the new values. However, removing a value from an enumerated set *can* be a compatible change, if handled properly (treat the -removed value as deprecated but allowed). This is actually a special case of +a new representation, discussed above. + +## Incompatible API changes + +There are times when incompatible changes might be OK, but mostly we want changes that +meet the definition of compatibility above. If you think you need to break compatibility, +you should talk to the Kubernetes team first. + +Breaking compatibility of a beta or stable API version, such as v1, is unacceptable. +Compatibility for experimental or alpha APIs is not strictly required, but +breaking compatibility should not be done lightly, as it disrupts all users of the +feature. Experimental APIs may be removed. Alpha and beta API versions may be deprecated +and eventually removed wholesale, as described in the [versioning document](../design/versioning.md). +Document incompatible changes across API versions under the [conversion tips](../api.md).
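The "deprecated but allowed" treatment of a removed enumerated value mentioned above can be sketched as follows; the policy names here are invented for illustration:

```go
package main

import "fmt"

// validPolicies enumerates the allowed values of a hypothetical field.
// "Legacy" was removed from the documented set, but validation still accepts
// it (deprecated but allowed) so existing objects and clients keep working.
var validPolicies = map[string]bool{
	"Always": true,
	"Never":  true,
	"Legacy": true, // deprecated: accepted, but never defaulted or documented
}

// validatePolicy rejects only values that were never part of the set.
func validatePolicy(v string) error {
	if !validPolicies[v] {
		return fmt.Errorf("unsupported policy: %q", v)
	}
	return nil
}

func main() {
	fmt.Println(validatePolicy("Legacy"))    // <nil> (still allowed)
	fmt.Println(validatePolicy("Sometimes")) // unsupported policy: "Sometimes"
}
```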
+ +If your change is going to be backward incompatible or might be a breaking change for API +consumers, please send an announcement to `kubernetes-dev@googlegroups.com` before +the change gets in. If you are unsure, ask. Also make sure that the change gets documented in +the release notes for the next release by labeling the PR with the "release-note" github label. + +If you found that your change accidentally broke clients, it should be reverted. ## Changing versioned APIs @@ -199,10 +238,13 @@ before starting "all the rest". ### Edit types.go The struct definitions for each API are in `pkg/api//types.go`. Edit -those files to reflect the change you want to make. Note that all non-online -fields in versioned APIs must have description tags - these are used to generate +those files to reflect the change you want to make. Note that all types and non-inline +fields in versioned APIs must be preceded by descriptive comments - these are used to generate documentation. +Optional fields should have the `,omitempty` json tag; fields are interpreted as being +required otherwise. + ### Edit defaults.go If your change includes new fields for which you will need default values, you @@ -228,6 +270,12 @@ incompatible change you might or might not want to do this now, but you will have to do more later. The files you want are `pkg/api//conversion.go` and `pkg/api//conversion_test.go`. +Note that the conversion machinery doesn't generically handle conversion of values, +such as various kinds of field references and API constants. [The client +library](../../pkg/client/unversioned/request.go) has custom conversion code for +field references. You also need to add a call to api.Scheme.AddFieldLabelConversionFunc +with a mapping function that understands supported translations. 
+ ## Changing the internal structures Now it is time to change the internal structs so your versioned changes can be @@ -365,13 +413,6 @@ hack/update-swagger-spec.sh The API spec changes should be in a commit separate from your other changes. -## Incompatible API changes - -If your change is going to be backward incompatible or might be a breaking change for API -consumers, please send an announcement to `kubernetes-dev@googlegroups.com` before -the change gets in. If you are unsure, ask. Also make sure that the change gets documented in -`CHANGELOG.md` for the next release. - ## Adding new REST objects TODO(smarterclayton): write this. -- cgit v1.2.3 From 1f3791a4b008ee13c4003ce044f1faee6ab88197 Mon Sep 17 00:00:00 2001 From: Jimmi Dyson Date: Wed, 26 Aug 2015 10:59:03 +0100 Subject: Update fabric8 client library location --- client-libraries.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/client-libraries.md b/client-libraries.md index 9e41688c..b63e2d44 100644 --- a/client-libraries.md +++ b/client-libraries.md @@ -41,8 +41,8 @@ Documentation for other releases can be found at *Note: Libraries provided by outside parties are supported by their authors, not the core Kubernetes team* - * [Java (OSGI)](https://bitbucket.org/amdatulabs/amdatu-kubernetes) - * [Java (Fabric8)](https://github.com/fabric8io/fabric8/tree/master/components/kubernetes-api) + * [Java (OSGi)](https://bitbucket.org/amdatulabs/amdatu-kubernetes) + * [Java (Fabric8, OSGi)](https://github.com/fabric8io/kubernetes-client) * [Ruby](https://github.com/Ch00k/kuber) * [Ruby](https://github.com/abonas/kubeclient) * [PHP](https://github.com/devstub/kubernetes-api-php-client) -- cgit v1.2.3 From d2d04300c597902f646e846bbdb762080b6b7eba Mon Sep 17 00:00:00 2001 From: Phillip Wittrock Date: Wed, 26 Aug 2015 17:22:27 -0700 Subject: Update development godep instructions to work for cadvisor and changing transitive deps --- development.md | 13 +++++++++++-- 1 file changed, 11 
insertions(+), 2 deletions(-) diff --git a/development.md b/development.md index a266f7cb..fc1de093 100644 --- a/development.md +++ b/development.md @@ -145,6 +145,8 @@ Here's a quick walkthrough of one way to use godeps to add or update a Kubernete 1) Devote a directory to this endeavor: +_Devoting a separate directory is not required, but it is helpful to separate dependency updates from other changes._ + ```sh export KPATH=$HOME/code/kubernetes mkdir -p $KPATH/src/k8s.io/kubernetes @@ -183,10 +185,17 @@ godep save ./... cd $KPATH/src/k8s.io/kubernetes go get -u path/to/dependency # Change code in Kubernetes accordingly if necessary. -godep update path/to/dependency +godep update path/to/dependency/... ``` -5) Before sending your PR, it's a good idea to sanity check that your Godeps.json file is ok by re-restoring: `godep restore` +_If `go get -u path/to/dependency` fails with compilation errors, instead try `go get -d -u path/to/dependency` +to fetch the dependencies without compiling them. This can happen when updating the cadvisor dependency._ + + +5) Before sending your PR, it's a good idea to sanity check that your Godeps.json file is ok by running hack/verify-godeps.sh + +_If hack/verify-godeps.sh fails after a `godep update`, it is possible that a transitive dependency was added or removed but not +updated by godeps. It then may be necessary to perform a `godep save ./...` to pick up the transitive dependency changes._ It is sometimes expedient to manually fix the /Godeps/godeps.json file to minimize the changes. 
-- cgit v1.2.3 From cda298a5a0c75419481754b80077d77fbf23b7e8 Mon Sep 17 00:00:00 2001 From: Piotr Szczesniak Date: Thu, 27 Aug 2015 10:50:50 +0200 Subject: Revert "LimitRange updates for Resource Requirements Requests" --- api_changes.md | 16 ---------------- 1 file changed, 16 deletions(-) diff --git a/api_changes.md b/api_changes.md index 709f8c2c..72c38b7f 100644 --- a/api_changes.md +++ b/api_changes.md @@ -345,22 +345,6 @@ generator to create it from scratch. Unsurprisingly, adding manually written conversion also requires you to add tests to `pkg/api//conversion_test.go`. -## Edit deep copy files - -At this point you have both the versioned API changes and the internal -structure changes done. You now need to generate code to handle deep copy -of your versioned api objects. - -The deep copy code resides with each versioned API: - - `pkg/api//deep_copy_generated.go` containing auto-generated copy functions - -To regenerate them: - - run - -```sh -hack/update-generated-deep-copies.sh -``` - ## Update the fuzzer Part of our testing regimen for APIs is to "fuzz" (fill with random values) API -- cgit v1.2.3 From 52a0abcbe29b4c98c3d3a6274dd2dc7a9a1b27ed Mon Sep 17 00:00:00 2001 From: Prashanth B Date: Fri, 28 Aug 2015 09:26:36 -0700 Subject: Revert "Revert "LimitRange updates for Resource Requirements Requests"" --- api_changes.md | 16 ++++++++++++++++ 1 file changed, 16 insertions(+) diff --git a/api_changes.md b/api_changes.md index 72c38b7f..709f8c2c 100644 --- a/api_changes.md +++ b/api_changes.md @@ -345,6 +345,22 @@ generator to create it from scratch. Unsurprisingly, adding manually written conversion also requires you to add tests to `pkg/api//conversion_test.go`. +## Edit deep copy files + +At this point you have both the versioned API changes and the internal +structure changes done. You now need to generate code to handle deep copy +of your versioned api objects. 
+ +The deep copy code resides with each versioned API: + - `pkg/api//deep_copy_generated.go` containing auto-generated copy functions + +To regenerate them: + - run + +```sh +hack/update-generated-deep-copies.sh +``` + ## Update the fuzzer Part of our testing regimen for APIs is to "fuzz" (fill with random values) API -- cgit v1.2.3 From 2acd635b4749add50f4e5b9753fe77e14180eb6e Mon Sep 17 00:00:00 2001 From: Harry Zhang Date: Mon, 31 Aug 2015 12:15:05 +0800 Subject: Fix inconsistency path in GOPATH doc we set up $KPATH/src/k8s.io/kubernetes directory, but ask user to `cd` into $KPATH/src/github.com/kubernetes Close this if I made mistaken this --- development.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/development.md b/development.md index fc1de093..65ab981b 100644 --- a/development.md +++ b/development.md @@ -168,7 +168,7 @@ export GOPATH=$KPATH 3) Populate your new GOPATH. ```sh -cd $KPATH/src/github.com/kubernetes/kubernetes +cd $KPATH/src/k8s.io/kubernetes godep restore ``` -- cgit v1.2.3 From ca9f771cf90bd88378a3e6b0cee9f1dcfeea58c7 Mon Sep 17 00:00:00 2001 From: Brian Grant Date: Thu, 27 Aug 2015 21:12:06 +0000 Subject: Start on expanding code expectations (aka "The bar") --- api-conventions.md | 7 ++++ api_changes.md | 90 +++++++++++++++++++++++++++++++++++++++++++++++--- coding-conventions.md | 51 ++++++++++++++++++++++++++-- development.md | 2 ++ faster_reviews.md | 32 +++++++++++++++--- kubectl-conventions.md | 27 ++++++++++++++- 6 files changed, 195 insertions(+), 14 deletions(-) diff --git a/api-conventions.md b/api-conventions.md index f00dde1e..746d56cb 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -713,6 +713,13 @@ Annotations have very different intended usage from labels. We expect them to be In fact, experimental API fields, including to represent fields of newer alpha/beta API versions in the older, stable storage version, may be represented as annotations with the prefix `experimental.kubernetes.io/`. 
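As a concrete sketch of this pattern (the struct, key, and helper below are hypothetical illustrations, not real Kubernetes API types), an experimental field carried as a prefixed annotation might be read like so:

```go
package main

import "fmt"

// ObjectMeta is a pared-down stand-in for real object metadata; only
// the Annotations map matters for this illustration.
type ObjectMeta struct {
	Annotations map[string]string
}

// experimentalKey is a hypothetical annotation key using the
// experimental.kubernetes.io/ prefix described above, with a
// lowercase, dash-separated name.
const experimentalKey = "experimental.kubernetes.io/desired-replicas"

// desiredReplicas reads the experimental value from the annotations,
// falling back to a default when the annotation is absent. Note that
// nothing validates or converts this value automatically -- annotations
// bypass the API conversion machinery.
func desiredReplicas(meta ObjectMeta, def string) string {
	if v, ok := meta.Annotations[experimentalKey]; ok {
		return v
	}
	return def
}

func main() {
	meta := ObjectMeta{Annotations: map[string]string{experimentalKey: "3"}}
	fmt.Println(desiredReplicas(meta, "1")) // prints "3"
}
```

Because the value is just a string in a map, the controller that consumes it is responsible for any parsing and validation the field would otherwise get from the typed API.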
+Other advice regarding use of labels, annotations, and other generic map keys by Kubernetes components and tools: + - Key names should be all lowercase, with words separated by dashes, such as `desired-replicas` + - Prefix the key with `kubernetes.io/` or `foo.kubernetes.io/`, preferably the latter if the label/annotation is specific to `foo` + - For instance, prefer `service-account.kubernetes.io/name` over `kubernetes.io/service-account.name` + - Use annotations to store API extensions that the controller responsible for the resource doesn't need to know about, experimental fields that aren't intended to be generally used API fields, etc. Beware that annotations aren't automatically handled by the API conversion machinery. + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/api-conventions.md?pixel)]() diff --git a/api_changes.md b/api_changes.md index 72c38b7f..289123d5 100644 --- a/api_changes.md +++ b/api_changes.md @@ -189,17 +189,82 @@ API call might POST an object in API v7beta1 format, which uses the cleaner form (since v7beta1 is "beta"). When the user reads the object back in the v7beta1 API it would be unacceptable to have lost all but `Params[0]`. This means that, even though it is ugly, a compatible change must be made to the v6 -API. However, this is very challenging to do correctly. It generally requires +API. + +However, this is very challenging to do correctly. It often requires multiple representations of the same information in the same API resource, which -need to be kept in sync in the event that either is changed. However, if -the new representation is more expressive than the old, this breaks -backward compatibility, since clients that only understood the old representation +need to be kept in sync in the event that either is changed. For example, +let's say you decide to rename a field within the same API version. In this case, +you add units to `height` and `width`. 
You implement this by adding duplicate +fields: + +```go +type Frobber struct { + Height *int `json:"height"` + Width *int `json:"width"` + HeightInInches *int `json:"heightInInches"` + WidthInInches *int `json:"widthInInches"` +} +``` + +You convert all of the fields to pointers in order to distinguish between unset and +set to 0, and then set each corresponding field from the other in the defaulting +pass (e.g., `heightInInches` from `height`, and vice versa), which runs just prior +to conversion. That works fine when the user creates a resource from a hand-written +configuration -- clients can write either field and read either field, but what about +creation or update from the output of GET, or update via PATCH (see +[In-place updates](../user-guide/managing-deployments.md#in-place-updates-of-resources))? +In this case, the two fields will conflict, because only one field would be updated +in the case of an old client that was only aware of the old field (e.g., `height`). + +Say the client creates: + +```json +{ + "height": 10, + "width": 5 +} +``` + +and GETs: + +```json +{ + "height": 10, + "heightInInches": 10, + "width": 5, + "widthInInches": 5 +} +``` + +then PUTs back: + +```json +{ + "height": 13, + "heightInInches": 10, + "width": 5, + "widthInInches": 5 +} +``` + +The update should not fail, because it would have worked before `heightInInches` was added. + +Therefore, when there are duplicate fields, the old field MUST take precedence +over the new, and the new field should be set to match by the server upon write. +A new client would be aware of the old field as well as the new, and so can ensure +that the old field is either unset or is set consistently with the new field. However, +older clients would be unaware of the new field. Please avoid introducing duplicate +fields due to the complexity they incur in the API. 
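To make the precedence rule concrete, here is a minimal sketch (hypothetical code, not the actual Kubernetes defaulting machinery) of how a defaulting pass could reconcile the duplicated `height`/`heightInInches` fields:

```go
package main

import "fmt"

// Frobber mirrors the duplicated-field example above; an illustrative
// sketch only. Pointer fields distinguish "unset" from "set to zero".
type Frobber struct {
	Height         *int `json:"height"`
	HeightInInches *int `json:"heightInInches"`
}

// defaultFrobber is a hypothetical defaulting pass applying the rule
// described here: when the old field (height) is set, it wins and the
// new field is overwritten to match; when only the new field is set,
// the old one is filled in from it.
func defaultFrobber(f *Frobber) {
	switch {
	case f.Height != nil:
		v := *f.Height // old field takes precedence, even on conflict
		f.HeightInInches = &v
	case f.HeightInInches != nil:
		v := *f.HeightInInches // only the new field is set; copy it back
		f.Height = &v
	}
}

func main() {
	oldVal, newVal := 13, 10 // the conflicting PUT from the example above
	f := &Frobber{Height: &oldVal, HeightInInches: &newVal}
	defaultFrobber(f)
	fmt.Println(*f.Height, *f.HeightInInches) // prints "13 13"
}
```

Run against the conflicting PUT above, the stale `heightInInches: 10` written back by the old client is overwritten to 13, so the update succeeds and the two fields never diverge on the server.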
+ +A new representation, even in a new API version, that is more expressive than an old one +breaks backward compatibility, since clients that only understood the old representation would not be aware of the new representation nor its semantics. Examples of proposals that have run into this challenge include [generalized label selectors](http://issues.k8s.io/341) and [pod-level security context](http://prs.k8s.io/12823). -As another interesting example, enumerated values provide a unique challenge. +As another interesting example, enumerated values cause similar challenges. Adding a new value to an enumerated set is *not* a compatible change. Clients which assume they know how to handle all possible values of a given field will not be able to handle the new values. However, removing value from an @@ -227,6 +292,21 @@ the release notes for the next release by labeling the PR with the "release-note If you found that your change accidentally broke clients, it should be reverted. +In short, the expected API evolution is as follows: +* `experimental/v1alpha1` -> +* `newapigroup/v1alpha1` -> ... -> `newapigroup/v1alphaN` -> +* `newapigroup/v1beta1` -> ... -> `newapigroup/v1betaN` -> +* `newapigroup/v1` -> +* `newapigroup/v2alpha1` -> ... + +While in experimental we have no obligation to move forward with the API at all and may delete or break it at any time. + +While in alpha we expect to move forward with it, but may break it. + +Once in beta we will preserve forward compatibility, but may introduce new versions and delete old ones. + +v1 must be backward-compatible for an extended length of time. 
+ ## Changing versioned APIs For most changes, you will probably find it easiest to change the versioned diff --git a/coding-conventions.md b/coding-conventions.md index ac3d353f..1569d1aa 100644 --- a/coding-conventions.md +++ b/coding-conventions.md @@ -30,12 +30,57 @@ Documentation for other releases can be found at -Coding style advice for contributors +Code conventions - Bash - https://google-styleguide.googlecode.com/svn/trunk/shell.xml + - Ensure that build, release, test, and cluster-management scripts run on OS X - Go - - https://github.com/golang/go/wiki/CodeReviewComments - - https://gist.github.com/lavalamp/4bd23295a9f32706a48f + - Ensure your code passes the [presubmit checks](development.md#hooks) + - [Go Code Review Comments](https://github.com/golang/go/wiki/CodeReviewComments) + - [Effective Go](https://golang.org/doc/effective_go.html) + - Comment your code. + - [Go's commenting conventions](http://blog.golang.org/godoc-documenting-go-code) + - If reviewers ask questions about why the code is the way it is, that's a sign that comments might be helpful. + - Command-line flags should use dashes, not underscores + - Naming + - Please consider package name when selecting an interface name, and avoid redundancy. + - e.g.: `storage.Interface` is better than `storage.StorageInterface`. + - Do not use uppercase characters, underscores, or dashes in package names. + - Please consider parent directory name when choosing a package name. + - so pkg/controllers/autoscaler/foo.go should say `package autoscaler` not `package autoscalercontroller`. + - Unless there's a good reason, the `package foo` line should match the name of the directory in which the .go file exists. + - Importers can use a different name if they need to disambiguate. 
+ - API conventions + - [API changes](api_changes.md) + - [API conventions](api-conventions.md) + - [Kubectl conventions](kubectl-conventions.md) + - [Logging conventions](logging.md) + +Testing conventions + - All new packages and most new significant functionality must come with unit tests + - Table-driven tests are preferred for testing multiple scenarios/inputs; for example, see [TestNamespaceAuthorization](../../test/integration/auth_test.go) + - Significant features should come with integration (test/integration) and/or end-to-end (test/e2e) tests + - Including new kubectl commands and major features of existing commands + - Unit tests must pass on OS X and Windows platforms - if you use Linux specific features, your test case must either be skipped on windows or compiled out (skipped is better when running Linux specific commands, compiled out is required when your code does not compile on Windows). + +Directory and file conventions + - Avoid package sprawl. Find an appropriate subdirectory for new packages. (See [#4851](http://issues.k8s.io/4851) for discussion.) + - Libraries with no more appropriate home belong in new package subdirectories of pkg/util + - Avoid general utility packages. Packages called "util" are suspect. Instead, derive a name that describes your desired function. For example, the utility functions dealing with waiting for operations are in the "wait" package and include functionality like Poll. So the full name is wait.Poll + - Go source files and directories use underscores, not dashes + - Package directories should generally avoid using separators as much as possible (when packages are multiple words, they usually should be in nested subdirectories). 
+ - Document directories and filenames should use dashes rather than underscores + - Contrived examples that illustrate system features belong in /docs/user-guide or /docs/admin, depending on whether it is a feature primarily intended for users that deploy applications or cluster administrators, respectively. Actual application examples belong in /examples. + - Examples should also illustrate [best practices for using the system](../user-guide/config-best-practices.md) + - Third-party code + - Third-party Go code is managed using Godeps + - Other third-party code belongs in /third_party + - Third-party code must include licenses + - This includes modified third-party code and excerpts, as well + +Coding advice + - Go + - [Go landmines](https://gist.github.com/lavalamp/4bd23295a9f32706a48f) diff --git a/development.md b/development.md index a266f7cb..44ceee1c 100644 --- a/development.md +++ b/development.md @@ -112,6 +112,8 @@ fixups (e.g. automated doc formatting), use one or more commits for the changes to tooling and a final commit to apply the fixup en masse. This makes reviews much easier. +See [Faster Reviews](faster_reviews.md) for more details. + ## godep and dependency management Kubernetes uses [godep](https://github.com/tools/godep) to manage dependencies. It is not strictly required for building Kubernetes but it is required when managing dependencies under the Godeps/ tree, and is required by a number of the build and test scripts. Please make sure that ``godep`` is installed and in your ``$PATH``. diff --git a/faster_reviews.md b/faster_reviews.md index 3ea030d3..0c70e435 100644 --- a/faster_reviews.md +++ b/faster_reviews.md @@ -53,15 +53,24 @@ later, just as soon as they have more free time (ha!). Let's talk about how to avoid this. +## 0. 
Familiarize yourself with project conventions + +* [Development guide](development.md) +* [Coding conventions](coding-conventions.md) +* [API conventions](api-conventions.md) +* [Kubectl conventions](kubectl-conventions.md) + ## 1. Don't build a cathedral in one PR Are you sure FeatureX is something the Kubernetes team wants or will accept, or that it is implemented to fit with other changes in flight? Are you willing to bet a few days or weeks of work on it? If you have any doubt at all about the -usefulness of your feature or the design - make a proposal doc or a sketch PR -or both. Write or code up just enough to express the idea and the design and -why you made those choices, then get feedback on this. Now, when we ask you to -change a bunch of facets of the design, you don't have to re-write it all. +usefulness of your feature or the design - make a proposal doc (in docs/proposals; +for example [the QoS proposal](http://prs.k8s.io/11713)) or a sketch PR (e.g., just +the API or Go interface) or both. Write or code up just enough to express the idea +and the design and why you made those choices, then get feedback on this. Be clear +about what type of feedback you are asking for. Now, if we ask you to change a +bunch of facets of the design, you won't have to re-write it all. ## 2. Smaller diffs are exponentially better @@ -154,7 +163,20 @@ commit and re-push. Your reviewer can then look at that commit on its own - so much faster to review than starting over. We might still ask you to clean up your commits at the very end, for the sake -of a more readable history. +of a more readable history, but don't do this until asked, typically at the point +where the PR would otherwise be tagged LGTM. + +General squashing guidelines: + +* Sausage => squash + + When there are several commits to fix bugs in the original commit(s), address reviewer feedback, etc. Really we only want to see the end state and commit message for the whole PR. 
+ +* Layers => don't squash + + When there are independent changes layered upon each other to achieve a single goal. For instance, writing a code munger could be one commit, applying it could be another, and adding a precommit check could be a third. One could argue they should be separate PRs, but there's really no way to test/review the munger without seeing it applied, and there needs to be a precommit check to ensure the munged output doesn't immediately get out of date. + +A commit, as much as possible, should be a single logical change. Each commit should always have a good title line (<70 characters) and include an additional description paragraph describing in more detail the change intended. Do not link pull requests by `#` in a commit description, because GitHub creates lots of spam. Instead, reference other PRs via the PR your commit is in. ## 8. KISS, YAGNI, MVP, etc diff --git a/kubectl-conventions.md b/kubectl-conventions.md index 5739708c..a37e5899 100644 --- a/kubectl-conventions.md +++ b/kubectl-conventions.md @@ -34,7 +34,7 @@ Documentation for other releases can be found at Kubectl Conventions =================== -Updated: 8/12/2015 +Updated: 8/27/2015 **Table of Contents** @@ -77,6 +77,31 @@ Updated: 8/12/2015 * Flags are all lowercase, with words separated by hyphens * Flag names and single-character aliases should have the same meaning across all commands * Command-line flags corresponding to API fields should accept API enums exactly (e.g., --restart=Always) +* Do not reuse flags for different semantic purposes, and do not use different flag names for the same semantic purpose -- grep for `"Flags()"` before adding a new flag +* Use short flags sparingly, only for the most frequently used options, prefer lowercase over uppercase for the most common cases, try to stick to well known conventions for UNIX commands and/or Docker, where they exist, and update this list when adding new short flags + * `-f`: Resource file + * also used for `--follow` 
in `logs`, but should be deprecated in favor of `-F` + * `-l`: Label selector + * also used for `--labels` in `expose`, but should be deprecated + * `-L`: Label columns + * `-c`: Container + * also used for `--client` in `version`, but should be deprecated + * `-i`: Attach stdin + * `-t`: Allocate TTY + * also used for `--template`, but deprecated + * `-w`: Watch (currently also used for `--www` in `proxy`, but should be deprecated) + * `-p`: Previous + * also used for `--pod` in `exec`, but deprecated + * also used for `--patch` in `patch`, but should be deprecated + * also used for `--port` in `proxy`, but should be deprecated + * `-P`: Static file prefix in `proxy`, but should be deprecated + * `-r`: Replicas + * `-u`: Unix socket + * `-v`: Verbose logging level +* `--dry-run`: Don't modify the live state; simulate the mutation and display the output +* `--local`: Don't contact the server; just do local read, transformation, generation, etc. and display the output +* `--output-version=...`: Convert the output to a different API group/version +* `--validate`: Validate the resource schema ## Output conventions -- cgit v1.2.3 From f8a0e45ebb98f2d15446906c651962a395f38dbe Mon Sep 17 00:00:00 2001 From: Eric Paris Date: Wed, 2 Sep 2015 18:00:52 -0400 Subject: Fix the link to the submit-queue whitelist --- pull-requests.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pull-requests.md b/pull-requests.md index 126b8996..157646c0 100644 --- a/pull-requests.md +++ b/pull-requests.md @@ -59,7 +59,7 @@ There are several requirements for the submit queue to work: * No changes can be made since last lgtm label was applied * k8s-bot must have reported the GCE E2E build and test steps passed (Travis, Shippable and Jenkins build) -Additionally, for infrequent or new contributors, we require the on call to apply the "ok-to-merge" label manually. This is gated by the [whitelist](https://github.com/contrib/tree/master/submit-queue/whitelist.txt). 
+Additionally, for infrequent or new contributors, we require the on call to apply the "ok-to-merge" label manually. This is gated by the [whitelist](https://github.com/kubernetes/contrib/tree/master/submit-queue/whitelist.txt). -- cgit v1.2.3 From d5abea115d0aef5aae87565c2e31165da70c96da Mon Sep 17 00:00:00 2001 From: Eric Paris Date: Thu, 3 Sep 2015 10:10:11 -0400 Subject: s|github.com/GoogleCloudPlatform/kubernetes|github.com/kubernetes/kubernetes| --- cherry-picks.md | 2 +- cli-roadmap.md | 6 +++--- flaky-tests.md | 2 +- instrumentation.md | 12 ++++++------ issues.md | 2 +- making-release-notes.md | 2 +- pull-requests.md | 2 +- 7 files changed, 14 insertions(+), 14 deletions(-) diff --git a/cherry-picks.md b/cherry-picks.md index 519c73c3..7cb60465 100644 --- a/cherry-picks.md +++ b/cherry-picks.md @@ -62,7 +62,7 @@ conflict***. Now that we've structured cherry picks as PRs, searching for all cherry-picks against a release is a GitHub query: For example, -[this query is all of the v0.21.x cherry-picks](https://github.com/GoogleCloudPlatform/kubernetes/pulls?utf8=%E2%9C%93&q=is%3Apr+%22automated+cherry+pick%22+base%3Arelease-0.21) +[this query is all of the v0.21.x cherry-picks](https://github.com/kubernetes/kubernetes/pulls?utf8=%E2%9C%93&q=is%3Apr+%22automated+cherry+pick%22+base%3Arelease-0.21) diff --git a/cli-roadmap.md b/cli-roadmap.md index 69084555..42784dbc 100644 --- a/cli-roadmap.md +++ b/cli-roadmap.md @@ -34,9 +34,9 @@ Documentation for other releases can be found at # Kubernetes CLI/Configuration Roadmap See github issues with the following labels: -* [area/app-config-deployment](https://github.com/GoogleCloudPlatform/kubernetes/labels/area/app-config-deployment) -* [component/CLI](https://github.com/GoogleCloudPlatform/kubernetes/labels/component/CLI) -* [component/client](https://github.com/GoogleCloudPlatform/kubernetes/labels/component/client) +* 
[area/app-config-deployment](https://github.com/kubernetes/kubernetes/labels/area/app-config-deployment) +* [component/CLI](https://github.com/kubernetes/kubernetes/labels/component/CLI) +* [component/client](https://github.com/kubernetes/kubernetes/labels/component/client) diff --git a/flaky-tests.md b/flaky-tests.md index 9db9e15c..3a7af51e 100644 --- a/flaky-tests.md +++ b/flaky-tests.md @@ -64,7 +64,7 @@ spec: - name: TEST_PACKAGE value: pkg/tools - name: REPO_SPEC - value: https://github.com/GoogleCloudPlatform/kubernetes + value: https://github.com/kubernetes/kubernetes ``` Note that we omit the labels and the selector fields of the replication controller, because they will be populated from the labels field of the pod template by default. diff --git a/instrumentation.md b/instrumentation.md index 8cc9e2b2..683f9d93 100644 --- a/instrumentation.md +++ b/instrumentation.md @@ -44,18 +44,18 @@ We use the Prometheus monitoring system's golang client library for instrumentin 2. Give the metric a name and description. 3. Pick whether you want to distinguish different categories of things using labels on the metric. If so, add "Vec" to the name of the type of metric you want and add a slice of the label names to the definition. - https://github.com/GoogleCloudPlatform/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/apiserver/apiserver.go#L53 - https://github.com/GoogleCloudPlatform/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/kubelet/metrics/metrics.go#L31 + https://github.com/kubernetes/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/apiserver/apiserver.go#L53 + https://github.com/kubernetes/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/kubelet/metrics/metrics.go#L31 3. Register the metric so that prometheus will know to export it. 
- https://github.com/GoogleCloudPlatform/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/kubelet/metrics/metrics.go#L74 - https://github.com/GoogleCloudPlatform/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/apiserver/apiserver.go#L78 + https://github.com/kubernetes/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/kubelet/metrics/metrics.go#L74 + https://github.com/kubernetes/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/apiserver/apiserver.go#L78 4. Use the metric by calling the appropriate method for your metric type (Set, Inc/Add, or Observe, respectively for Gauge, Counter, or Histogram/Summary), first calling WithLabelValues if your metric has any labels - https://github.com/GoogleCloudPlatform/kubernetes/blob/3ce7fe8310ff081dbbd3d95490193e1d5250d2c9/pkg/kubelet/kubelet.go#L1384 - https://github.com/GoogleCloudPlatform/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/apiserver/apiserver.go#L87 + https://github.com/kubernetes/kubernetes/blob/3ce7fe8310ff081dbbd3d95490193e1d5250d2c9/pkg/kubelet/kubelet.go#L1384 + https://github.com/kubernetes/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/apiserver/apiserver.go#L87 These are the metric type definitions if you're curious to learn about them or need more information: diff --git a/issues.md b/issues.md index 46beb9ce..c7bda07b 100644 --- a/issues.md +++ b/issues.md @@ -33,7 +33,7 @@ Documentation for other releases can be found at GitHub Issues for the Kubernetes Project ======================================== -A list quick overview of how we will review and prioritize incoming issues at https://github.com/GoogleCloudPlatform/kubernetes/issues +A list quick overview of how we will review and prioritize incoming issues at https://github.com/kubernetes/kubernetes/issues Priorities ---------- diff --git a/making-release-notes.md b/making-release-notes.md index 1efab1ac..871e65b4 100644 --- a/making-release-notes.md +++ 
b/making-release-notes.md @@ -66,7 +66,7 @@ With the final markdown all set, cut and paste it to the top of `CHANGELOG.md` ### 5) Update the Release page - * Switch to the [releases](https://github.com/GoogleCloudPlatform/kubernetes/releases) page. + * Switch to the [releases](https://github.com/kubernetes/kubernetes/releases) page. * Open up the release you are working on. * Cut and paste the final markdown from above into the release notes * Press Save. diff --git a/pull-requests.md b/pull-requests.md index 157646c0..a81c01c5 100644 --- a/pull-requests.md +++ b/pull-requests.md @@ -52,7 +52,7 @@ Life of a Pull Request Unless in the last few weeks of a milestone when we need to reduce churn and stabilize, we aim to be always accepting pull requests. -Either the [on call](https://github.com/GoogleCloudPlatform/kubernetes/wiki/Kubernetes-on-call-rotation) manually or the [submit queue](https://github.com/contrib/tree/master/submit-queue) automatically will manage merging PRs. +Either the [on call](https://github.com/kubernetes/kubernetes/wiki/Kubernetes-on-call-rotation) manually or the [submit queue](https://github.com/contrib/tree/master/submit-queue) automatically will manage merging PRs. There are several requirements for the submit queue to work: * Author must have signed CLA ("cla: yes" label added to PR) -- cgit v1.2.3 From cf287959ee7840799aefe2e49fd3909b45669dd9 Mon Sep 17 00:00:00 2001 From: Paul Morie Date: Tue, 8 Sep 2015 13:37:12 -0400 Subject: Update api change docs --- api_changes.md | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/api_changes.md b/api_changes.md index d26fdda9..45f0dd4c 100644 --- a/api_changes.md +++ b/api_changes.md @@ -399,6 +399,10 @@ The conversion code resides with each versioned API. 
There are two files: functions - `pkg/api//conversion_generated.go` containing auto-generated conversion functions + - `pkg/expapi//conversion.go` containing manually written conversion + functions + - `pkg/expapi//conversion_generated.go` containing auto-generated + conversion functions Since auto-generated conversion functions are using manually written ones, those manually written should be named with a defined convention, i.e. a function @@ -433,6 +437,7 @@ of your versioned api objects. The deep copy code resides with each versioned API: - `pkg/api//deep_copy_generated.go` containing auto-generated copy functions + - `pkg/expapi//deep_copy_generated.go` containing auto-generated copy functions To regenerate them: - run -- cgit v1.2.3 From 99bb877ce3f282ac5cc0899ae0e0645801e963f3 Mon Sep 17 00:00:00 2001 From: goltermann Date: Wed, 2 Sep 2015 14:51:19 -0700 Subject: Replace IRC with Slack in docs. --- writing-a-getting-started-guide.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/writing-a-getting-started-guide.md b/writing-a-getting-started-guide.md index 7441474a..c9d4e2ca 100644 --- a/writing-a-getting-started-guide.md +++ b/writing-a-getting-started-guide.md @@ -76,7 +76,7 @@ These guidelines say *what* to do. See the Rationale section for *why*. If you have a cluster partially working, but doing all the above steps seems like too much work, we still want to hear from you. We suggest you write a blog post or a Gist, and we will link to it on our wiki page. -Just file an issue or chat us on IRC and one of the committers will link to it from the wiki. +Just file an issue or chat us on [Slack](../troubleshooting.md#slack) and one of the committers will link to it from the wiki. 
## Development Distro Guidelines -- cgit v1.2.3 From 7c58a4a72923659e2a6036eec4eef7cd86b88d28 Mon Sep 17 00:00:00 2001 From: Kevin Date: Thu, 10 Sep 2015 00:22:43 +0800 Subject: fix a typo in development.md and update git_workflow.png --- development.md | 2 +- git_workflow.png | Bin 90004 -> 114745 bytes 2 files changed, 1 insertion(+), 1 deletion(-) diff --git a/development.md b/development.md index fc14333b..75cb2365 100644 --- a/development.md +++ b/development.md @@ -96,7 +96,7 @@ git push -f origin myfeature ### Creating a pull request -1. Visit http://github.com/$YOUR_GITHUB_USERNAME/kubernetes +1. Visit https://github.com/$YOUR_GITHUB_USERNAME/kubernetes 2. Click the "Compare and pull request" button next to your "myfeature" branch. 3. Check out the pull request [process](pull-requests.md) for more details diff --git a/git_workflow.png b/git_workflow.png index e3bd70da..80a66248 100644 Binary files a/git_workflow.png and b/git_workflow.png differ -- cgit v1.2.3 From 94c5155a987711b6cff4a4bdf13d463a6eddb42a Mon Sep 17 00:00:00 2001 From: Clayton Coleman Date: Wed, 9 Sep 2015 18:03:54 -0400 Subject: Define lock coding convention --- coding-conventions.md | 1 + 1 file changed, 1 insertion(+) diff --git a/coding-conventions.md b/coding-conventions.md index 1569d1aa..8ddf000e 100644 --- a/coding-conventions.md +++ b/coding-conventions.md @@ -50,6 +50,7 @@ Code conventions - so pkg/controllers/autoscaler/foo.go should say `package autoscaler` not `package autoscalercontroller`. - Unless there's a good reason, the `package foo` line should match the name of the directory in which the .go file exists. - Importers can use a different name if they need to disambiguate. + - Locks should be called `lock` and should never be embedded (always `lock sync.Mutex`). When multiple locks are present, give each lock a distinct name following Go conventions - `stateLock`, `mapLock` etc. 
- API conventions - [API changes](api_changes.md) - [API conventions](api-conventions.md) -- cgit v1.2.3 From acb2ce01b3f5000553d4cc407efcd046cb5c46de Mon Sep 17 00:00:00 2001 From: Daniel Smith Date: Wed, 9 Sep 2015 16:01:08 -0700 Subject: Fix tooling for apis/experimental's new home * fix package name * add a script to auto-gofmt everything, useful after grep/sed incantations * update conversion/deep copy generation * doc update --- api_changes.md | 27 ++++++++++++++++++++++----- 1 file changed, 22 insertions(+), 5 deletions(-) diff --git a/api_changes.md b/api_changes.md index 45f0dd4c..e0a65fe0 100644 --- a/api_changes.md +++ b/api_changes.md @@ -38,7 +38,7 @@ with a number of existing API types and with the [API conventions](api-conventions.md). If creating a new API type/resource, we also recommend that you first send a PR containing just a proposal for the new API types, and that you initially target -the experimental API (pkg/expapi). +the experimental API (pkg/apis/experimental). The Kubernetes API has two major components - the internal structures and the versioned APIs. The versioned APIs are intended to be stable, while the @@ -399,10 +399,10 @@ The conversion code resides with each versioned API. There are two files: functions - `pkg/api//conversion_generated.go` containing auto-generated conversion functions - - `pkg/expapi//conversion.go` containing manually written conversion - functions - - `pkg/expapi//conversion_generated.go` containing auto-generated + - `pkg/apis/experimental//conversion.go` containing manually written conversion functions + - `pkg/apis/experimental//conversion_generated.go` containing + auto-generated conversion functions Since auto-generated conversion functions are using manually written ones, those manually written should be named with a defined convention, i.e. a function @@ -437,7 +437,7 @@ of your versioned api objects. 
The deep copy code resides with each versioned API: - `pkg/api//deep_copy_generated.go` containing auto-generated copy functions - - `pkg/expapi//deep_copy_generated.go` containing auto-generated copy functions + - `pkg/apis/experimental//deep_copy_generated.go` containing auto-generated copy functions To regenerate them: - run @@ -446,6 +446,23 @@ To regenerate them: hack/update-generated-deep-copies.sh ``` +## Making a new API Group + +This section is under construction, as we make the tooling completely generic. + +At the moment, you'll have to make a new directory under pkg/apis/; copy the +directory structure from pkg/apis/experimental. Add the new group/version to all +of the hack/{verify,update}-generated-{deep-copy,conversions,swagger}.sh files +in the appropriate places--it should just require adding your new group/version +to a bash array. You will also need to make sure your new types are imported by +the generation commands (cmd/gendeepcopy/ & cmd/genconversion). These +instructions may not be complete and will be updated as we gain experience. + +Adding API groups outside of the pkg/apis/ directory is not currently supported, +but is clearly desirable. The deep copy & conversion generators need to work by +parsing go files instead of by reflection; then they will be easy to point at +arbitrary directories: see issue [#13775](http://issue.k8s.io/13775). 
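For intuition, the generated deep-copy functions described above amount to field-by-field copies in which reference-typed fields are freshly allocated. A hand-written sketch for a hypothetical type (illustrative only; the real files are produced by `hack/update-generated-deep-copies.sh`):

```go
package main

import "fmt"

// FlunderSpec is a hypothetical API type with a reference-typed field,
// which is why a field-by-field deep copy (not plain assignment) is needed.
type FlunderSpec struct {
	Name     string
	Replicas []int
}

// deepCopyFlunderSpec mirrors the shape of a generated copy function:
// value fields are assigned, reference fields are freshly allocated.
func deepCopyFlunderSpec(in FlunderSpec, out *FlunderSpec) error {
	out.Name = in.Name
	if in.Replicas != nil {
		out.Replicas = make([]int, len(in.Replicas))
		copy(out.Replicas, in.Replicas)
	} else {
		out.Replicas = nil
	}
	return nil
}

func main() {
	in := FlunderSpec{Name: "a", Replicas: []int{1, 2}}
	var out FlunderSpec
	deepCopyFlunderSpec(in, &out)
	in.Replicas[0] = 99 // mutating the original must not affect the copy
	fmt.Println(out.Replicas[0]) // prints 1
}
```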
+ ## Update the fuzzer Part of our testing regimen for APIs is to "fuzz" (fill with random values) API -- cgit v1.2.3 From 04666c6e834df249cf6d56cd4831477be9f512b1 Mon Sep 17 00:00:00 2001 From: derekwaynecarr Date: Mon, 14 Sep 2015 13:03:11 -0400 Subject: Fix broken link to submit queue --- pull-requests.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pull-requests.md b/pull-requests.md index a81c01c5..1050cd0d 100644 --- a/pull-requests.md +++ b/pull-requests.md @@ -52,7 +52,7 @@ Life of a Pull Request Unless in the last few weeks of a milestone when we need to reduce churn and stabilize, we aim to be always accepting pull requests. -Either the [on call](https://github.com/kubernetes/kubernetes/wiki/Kubernetes-on-call-rotation) manually or the [submit queue](https://github.com/contrib/tree/master/submit-queue) automatically will manage merging PRs. +Either the [on call](https://github.com/kubernetes/kubernetes/wiki/Kubernetes-on-call-rotation) manually or the [submit queue](https://github.com/kubernetes/contrib/tree/master/submit-queue) automatically will manage merging PRs. There are several requirements for the submit queue to work: * Author must have signed CLA ("cla: yes" label added to PR) -- cgit v1.2.3 From 15de2cf23060b648291401e201359d32794885c0 Mon Sep 17 00:00:00 2001 From: Brendan Burns Date: Tue, 8 Sep 2015 11:16:14 -0700 Subject: Add some documentation describing out developer/repository automation. --- README.md | 2 + automation.md | 138 +++++++++++++++++++++++++++++++++++++++++++++++++++++++ pull-requests.md | 6 +++ 3 files changed, 146 insertions(+) create mode 100644 automation.md diff --git a/README.md b/README.md index 267bca23..756846ce 100644 --- a/README.md +++ b/README.md @@ -51,6 +51,8 @@ Guide](../admin/README.md). * **Getting Recent Builds** ([getting-builds.md](getting-builds.md)): How to get recent builds including the latest builds that pass CI. 
+* **Automated Tools** ([automation.md](automation.md)): Descriptions of the automation that is running on our github repository. + ## Setting up your dev environment, coding, and debugging diff --git a/automation.md b/automation.md new file mode 100644 index 00000000..eb36cc63 --- /dev/null +++ b/automation.md @@ -0,0 +1,138 @@ + + + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

PLEASE NOTE: This document applies to the HEAD of the source tree

+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + +The latest 1.0.x release of this document can be found +[here](http://releases.k8s.io/release-1.0/docs/devel/automation.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + + + + +# Kubernetes Development Automation + +## Overview + +Kubernetes uses a variety of automated tools in an attempt to relieve developers of repetitive, low +brain power work. This document attempts to describe these processes. + + +## Submit Queue + +In an effort to + * reduce load on core developers + * maintain e2e stability + * load test GitHub's label feature + +We have added an automated [submit-queue](https://github.com/kubernetes/contrib/tree/master/submit-queue) +for kubernetes. + +The submit-queue does the following: + +```go +for _, pr := range readyToMergePRs() { + if testsAreStable() { + mergePR(pr) + } +} +``` + +The status of the submit-queue is [online.](http://submit-queue.k8s.io/) + +### Ready to merge status + +A PR is considered "ready for merging" if it matches the following: + * it has the `lgtm` label, and that `lgtm` is newer than the latest commit + * it has passed the cla pre-submit and has the `cla:yes` label + * it has passed the travis and shippable pre-submit tests + * one (or all) of + * its author is in kubernetes/contrib/submit-queue/whitelist.txt + * its author is in contributors.txt via the github API. + * the PR has the `ok-to-merge` label + * one (or both) of + * it has passed the Jenkins e2e test + * it has the `e2e-not-required` label + +Note that the combined whitelist/committer list is available at [submit-queue.k8s.io](http://submit-queue.k8s.io) + +### Merge process + +Merges _only_ occur when the `critical builds` (Jenkins e2e for gce, gke, scalability, upgrade) are passing. +We're open to including more builds here; let us know...
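In the same pseudocode spirit as the merge loop above, the readiness criteria can be collapsed into a single predicate (a sketch with illustrative names, not the actual submit-queue code):

```go
package main

import "fmt"

// pullRequest is an illustrative stand-in for the data the submit-queue
// inspects; the real implementation reads labels and statuses from GitHub.
type pullRequest struct {
	labels          map[string]bool
	lgtmAfterCommit bool // `lgtm` was applied after the latest commit
	ciPassed        bool // travis + shippable pre-submit tests
	authorTrusted   bool // whitelist.txt or contributors.txt
	e2ePassed       bool // Jenkins e2e
}

// readyToMerge mirrors the criteria listed above.
func readyToMerge(pr pullRequest) bool {
	return pr.labels["lgtm"] && pr.lgtmAfterCommit &&
		pr.labels["cla:yes"] && pr.ciPassed &&
		(pr.authorTrusted || pr.labels["ok-to-merge"]) &&
		(pr.e2ePassed || pr.labels["e2e-not-required"])
}

func main() {
	pr := pullRequest{
		labels:          map[string]bool{"lgtm": true, "cla:yes": true},
		lgtmAfterCommit: true,
		ciPassed:        true,
		authorTrusted:   true,
		e2ePassed:       true,
	}
	fmt.Println(readyToMerge(pr)) // prints "true"
}
```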
+ +Merges are serialized, so only a single PR is merged at a time, to ensure against races. + +If the PR has the `e2e-not-required` label, it is simply merged. +If the PR does not have this label, e2e tests are re-run; if these new tests pass, the PR is merged. + +If e2e flakes or is currently buggy, the PR will not be merged, but it will be re-run on the following +pass. + +## Github Munger + +We also run a [github "munger"](https://github.com/kubernetes/contrib/tree/master/mungegithub). + +This runs repeatedly over github pulls and issues and runs modular "mungers" similar to "mungedocs". + +Currently this runs: + * blunderbuss - Tries to automatically find an owner for a PR without an owner, uses mapping file here: + https://github.com/kubernetes/contrib/blob/master/mungegithub/blunderbuss.yml + * needs-rebase - Adds `needs-rebase` to PRs that aren't currently mergeable, and removes it from those that are. + * size - Adds `size/xs` - `size/xxl` labels to PRs + * ok-to-test - Adds the `ok-to-test` message to PRs that have an `lgtm` but the e2e-builder would otherwise not test due to whitelist + * ping-ci - Attempts to ping the ci systems (Travis/Shippable) if they are missing from a PR. + * lgtm-after-commit - Removes the `lgtm` label from PRs where there are commits that are newer than the `lgtm` label + +In the works: + * issue-detector - machine learning for determining if an issue that has been filed is a `support` issue, `bug` or `feature` + +Please feel free to unleash your creativity on this tool; send us new mungers that you think will help support the Kubernetes development process. + +## PR builder + +We also run a robotic PR builder that attempts to run e2e tests for each PR. + +Before a PR from an unknown user is run, the PR builder bot (`k8s-bot`) asks for a message from a +contributor confirming that the PR is "ok to test"; the contributor replies with that message.
Contributors can also +add users to the whitelist by replying with the message "add to whitelist" ("please" is optional, but +remember to treat your robots with kindness...) + +If a PR is approved for testing, and tests either haven't run, or need to be re-run, you can ask the +PR builder to re-run the tests. To do this, reply to the PR with a message that begins with `@k8s-bot test this`; this should trigger a re-build/re-test. + + +## FAQ: + +#### How can I ask my PR to be tested again for Jenkins failures? + +Right now you have to ask a contributor (this may be you!) to re-run the test with "@k8s-bot test this" + +#### How can I kick Shippable to re-test on a failure? + +Right now the easiest way is to close and then immediately re-open the PR. + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/automation.md?pixel)]() + diff --git a/pull-requests.md b/pull-requests.md index a81c01c5..a187920e 100644 --- a/pull-requests.md +++ b/pull-requests.md @@ -61,6 +61,12 @@ There are several requirements for the submit queue to work: Additionally, for infrequent or new contributors, we require the on call to apply the "ok-to-merge" label manually. This is gated by the [whitelist](https://github.com/kubernetes/contrib/tree/master/submit-queue/whitelist.txt). +Automation +---------- + +We use a variety of automation to manage pull requests.
This automation is described in detail +[elsewhere.](automation.md) + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/pull-requests.md?pixel)]() -- cgit v1.2.3 From a277212041843009b67f824bbf15ee57fea82b7e Mon Sep 17 00:00:00 2001 From: Zach Loafman Date: Mon, 14 Sep 2015 17:05:05 -0700 Subject: Fix the checkout instructions --- releasing.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/releasing.md b/releasing.md index 9950e6e4..9a73405f 100644 --- a/releasing.md +++ b/releasing.md @@ -139,7 +139,7 @@ manage cherry picks prior to cutting the release. 1. `export VER=x.y` (e.g. `0.20` for v0.20) 1. `export PATCH=Z` where `Z` is the patch level of `vX.Y.Z` 1. cd to the base of the repo -1. `git fetch upstream && git checkout -b upstream/release-${VER}` +1. `git fetch upstream && git checkout -b upstream/release-${VER} release-${VER}` 1. Make sure you don't have any files you care about littering your repo (they better be checked in or outside the repo, or the next step will delete them). 1. `make clean && git reset --hard HEAD && git clean -xdf` -- cgit v1.2.3 From 19ba8e37c486422cafcfcfa8647c22d08e20a981 Mon Sep 17 00:00:00 2001 From: Brian Grant Date: Tue, 15 Sep 2015 18:24:02 +0000 Subject: A couple more naming conventions. --- api-conventions.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/api-conventions.md b/api-conventions.md index 746d56cb..e7b8b4e9 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -679,7 +679,7 @@ Accumulate repeated events in the client, especially for frequent events, to red ## Naming conventions * Go field names must be CamelCase. JSON field names must be camelCase. Other than capitalization of the initial letter, the two should almost always match. No underscores nor dashes in either. -* Field and resource names should be declarative, not imperative (DoSomething, SomethingDoer). 
+* Field and resource names should be declarative, not imperative (DoSomething, SomethingDoer, DoneBy, DoneAt). * `Minion` has been deprecated in favor of `Node`. Use `Node` where referring to the node resource in the context of the cluster. Use `Host` where referring to properties of the individual physical/virtual system, such as `hostname`, `hostPath`, `hostNetwork`, etc. * `FooController` is a deprecated kind naming convention. Name the kind after the thing being controlled instead (e.g., `Job` rather than `JobController`). * The name of a field that specifies the time at which `something` occurs should be called `somethingTime`. Do not use `stamp` (e.g., `creationTimestamp`). @@ -690,6 +690,7 @@ Accumulate repeated events in the client, especially for frequent events, to red * Do not use abbreviations in the API, except where they are extremely commonly used, such as "id", "args", or "stdin". * Acronyms should similarly only be used when extremely commonly known. All letters in the acronym should have the same case, using the appropriate case for the situation. For example, at the beginning of a field name, the acronym should be all lowercase, such as "httpGet". Where used as a constant, all letters should be uppercase, such as "TCP" or "UDP". * The name of a field referring to another resource of kind `Foo` by name should be called `fooName`. The name of a field referring to another resource of kind `Foo` by ObjectReference (or subset thereof) should be called `fooRef`. +* More generally, include the units and/or type in the field name if they could be ambiguous and they are not specified by the value or value type. 
## Label, selector, and annotation conventions -- cgit v1.2.3 From 26b055b78d12a6bf5f59ea717181ead15fa93f74 Mon Sep 17 00:00:00 2001 From: eulerzgy Date: Wed, 16 Sep 2015 02:30:42 +0800 Subject: fix the change of minions to nodes --- developer-guides/vagrant.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md index f451d755..d6a902b2 100644 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -245,7 +245,7 @@ my-nginx-kqdjk 1/1 Waiting 0 33s my-nginx-nyj3x 1/1 Waiting 0 33s ``` -You need to wait for the provisioning to complete, you can monitor the minions by doing: +You need to wait for the provisioning to complete, you can monitor the nodes by doing: ```console $ sudo salt '*minion-1' cmd.run 'docker images' -- cgit v1.2.3 From 09de43d161f9b9545255b9c84ef1e82e5867d67f Mon Sep 17 00:00:00 2001 From: zhengguoyong Date: Wed, 16 Sep 2015 09:10:47 +0800 Subject: Update vagrant.md --- developer-guides/vagrant.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md index d6a902b2..f451d755 100644 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -245,7 +245,7 @@ my-nginx-kqdjk 1/1 Waiting 0 33s my-nginx-nyj3x 1/1 Waiting 0 33s ``` -You need to wait for the provisioning to complete, you can monitor the nodes by doing: +You need to wait for the provisioning to complete, you can monitor the minions by doing: ```console $ sudo salt '*minion-1' cmd.run 'docker images' -- cgit v1.2.3 From 3692d4871fc7c34291fe1cfca9b24b0e8a5ecfd6 Mon Sep 17 00:00:00 2001 From: "Timothy St. Clair" Date: Fri, 11 Sep 2015 16:16:56 -0500 Subject: Add developer documentation on e2e testing. 
--- e2e-tests.md | 145 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 145 insertions(+) create mode 100644 e2e-tests.md diff --git a/e2e-tests.md b/e2e-tests.md new file mode 100644 index 00000000..ca55b901 --- /dev/null +++ b/e2e-tests.md @@ -0,0 +1,145 @@ + + + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

PLEASE NOTE: This document applies to the HEAD of the source tree

+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + +The latest 1.0.x release of this document can be found +[here](http://releases.k8s.io/release-1.0/docs/devel/e2e-tests.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + + + + +# End-2-End Testing in Kubernetes + +## Overview + +The end-2-end tests for kubernetes provide a mechanism to test behavior of the system, and to ensure end user operations match developer specifications. In distributed systems it is not uncommon that a minor change may pass all unit tests, but cause unforeseen changes at the system level. Thus, the primary objectives of the end-2-end tests are to ensure a consistent and reliable behavior of the kubernetes code base, and to catch bugs early. + +The end-2-end tests in kubernetes are built atop [ginkgo](http://onsi.github.io/ginkgo/) and [gomega](http://onsi.github.io/gomega/). There are a host of features that this BDD testing framework provides, and it is recommended that the developer read the documentation prior to diving into the tests. + +The purpose of *this* document is to serve as a primer for developers who are looking to execute, or add tests, using a local development environment. + +## Building and Running the Tests + +**NOTE:** The tests have an array of options. For simplicity, the examples will focus on leveraging the tests on a local cluster using `sudo ./hack/local-up-cluster.sh`. + +### Building the Tests + +The tests are built into a single binary which can be run against any deployed kubernetes system. To build the tests, navigate to your source directory and execute: + +`$ make all` + +The output for the end-2-end tests will be a single binary called `e2e.test` under the default output directory, which is typically `_output/local/bin/linux/amd64/`.
Within the repository there are scripts that are provided under the `./hack` directory that are helpful for automation, but may not apply for a local development purposes. Instead, we recommend familiarizing yourself with the executable options. To obtain the full list of options, run the following: + +`$ ./e2e.test --help` + +### Running the Tests + +For the purposes of brevity, we will look at a subset of the options, which are listed below: + +``` +-ginkgo.dryRun=false: If set, ginkgo will walk the test hierarchy without actually running anything. Best paired with -v. +-ginkgo.failFast=false: If set, ginkgo will stop running a test suite after a failure occurs. +-ginkgo.failOnPending=false: If set, ginkgo will mark the test suite as failed if any specs are pending. +-ginkgo.focus="": If set, ginkgo will only run specs that match this regular expression. +-ginkgo.skip="": If set, ginkgo will only run specs that do not match this regular expression. +-ginkgo.trace=false: If set, default reporter prints out the full stack trace when a failure occurs +-ginkgo.v=false: If set, default reporter print out all specs as they begin. +-host="": The host, or api-server, to connect to +-kubeconfig="": Path to kubeconfig containing embedded authinfo. +-prom-push-gateway="": The URL to prometheus gateway, so that metrics can be pushed during e2es and scraped by prometheus. Typically something like 127.0.0.1:9091. +-provider="": The name of the Kubernetes provider (gce, gke, local, vagrant, etc.) +-repo-root="../../": Root directory of kubernetes repository, for finding test files. +``` + +Prior to running the tests, it is recommended that you first create a simple auth file in your home directory, e.g. `$HOME/.kubernetes_auth` , with the following: + +``` +{ + "User": "root", + "Password": "" +} +``` + +Next, you will need a cluster that you can test against. As mentioned earlier, you will want to execute `sudo ./hack/local-up-cluster.sh`. 
To get a sense of what tests exist, you may want to run: + +`e2e.test --host="127.0.0.1:8080" --provider="local" --ginkgo.v=true -ginkgo.dryRun=true --kubeconfig="$HOME/.kubernetes_auth" --repo-root="$KUBERNETES_SRC_PATH"` + +If you wish to execute a specific set of tests you can use the `-ginkgo.focus=` regex, e.g.: + +`e2e.test ... --ginkgo.focus="DNS|(?i)nodeport(?-i)|kubectl guestbook"` + +Conversely, if you wish to exclude a set of tests, you can run: + +`e2e.test ... --ginkgo.skip="Density|Scale"` + +As mentioned earlier there are a host of other options available, but they are left to the developer to explore. + +**NOTE:** If you are running tests on a local cluster repeatedly, you may need to periodically perform some manual cleanup. +- `rm -rf /var/run/kubernetes`, to clear kube-generated credentials; stale permissions can sometimes cause problems. +- `sudo iptables -F`, to clear iptables rules left by the kube-proxy. + +## Adding a New Test + +As mentioned above, prior to adding a new test, it is a good idea to perform a `-ginkgo.dryRun=true` run on the system, in order to see if a behavior is already being tested, or to determine if it may be possible to augment an existing set of tests for a specific use case. + +If a behavior does not currently have coverage and a developer wishes to add a new e2e test, navigate to the ./test/e2e directory and create a new test using the existing suite as a guide. + +**TODO:** Create a self-documented example which has been disabled, but can be copied to create new tests and outlines the capabilities and libraries used. + +## Performance Evaluation + +Another benefit of the end-2-end tests is the ability to create reproducible loads on the system, which can then be used to determine the responsiveness, or analyze other characteristics of the system. For example, the density tests load the system to 30, 50, and 100 pods per node and measure the different characteristics of the system, such as throughput, api-latency, etc.
+ +For a good overview of how we analyze performance data, please read the following [post](http://blog.kubernetes.io/2015/09/kubernetes-performance-measurements-and.html) + +For developers who are interested in doing their own performance analysis, we recommend setting up [prometheus](http://prometheus.io/) for data collection, and using [promdash](http://prometheus.io/docs/visualization/promdash/) to visualize the data. There also exists the option of pushing your own metrics in from the tests using a [prom-push-gateway](http://prometheus.io/docs/instrumenting/pushing/). Containers for all of these components can be found [here](https://hub.docker.com/u/prom/). + +For more accurate measurements, you may wish to set up prometheus external to kubernetes in an environment where it can access the major system components (api-server, controller-manager, scheduler). This is especially useful when attempting to gather metrics in a load-balanced api-server environment, because all api-servers can be analyzed independently as well as collectively. On startup, configuration file is passed to prometheus that specifies the endpoints that prometheus will scrape, as well as the sampling interval. + +``` +#prometheus.conf +job: { + name: "kubernetes" + scrape_interval: "1s" + target_group: { + # apiserver(s) + target: "http://localhost:8080/metrics" + # scheduler + target: "http://localhost:10251/metrics" + # controller-manager + target: "http://localhost:10252/metrics" + } +``` + +Once prometheus is scraping the kubernetes endpoints, that data can then be plotted using promdash, and alerts can be created against the assortment of metrics that kubernetes provides. 
+ +**HAPPY TESTING!** + + + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/e2e-tests.md?pixel)]() + -- cgit v1.2.3 From c0e44162bc75fe062e183b27fccc578e837c19b2 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Daniel=20Mart=C3=AD?= Date: Thu, 17 Sep 2015 15:21:55 -0700 Subject: Move pkg/util.Time to pkg/api/unversioned.Time Along with our time.Duration wrapper, as suggested by @lavalamp. --- api-conventions.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/api-conventions.md b/api-conventions.md index e7b8b4e9..31225e18 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -189,8 +189,8 @@ The `FooCondition` type for some resource type `Foo` may include a subset of the ```golang Type FooConditionType `json:"type" description:"type of Foo condition"` Status ConditionStatus `json:"status" description:"status of the condition, one of True, False, Unknown"` - LastHeartbeatTime util.Time `json:"lastHeartbeatTime,omitempty" description:"last time we got an update on a given condition"` - LastTransitionTime util.Time `json:"lastTransitionTime,omitempty" description:"last time the condition transit from one status to another"` + LastHeartbeatTime unversioned.Time `json:"lastHeartbeatTime,omitempty" description:"last time we got an update on a given condition"` + LastTransitionTime unversioned.Time `json:"lastTransitionTime,omitempty" description:"last time the condition transit from one status to another"` Reason string `json:"reason,omitempty" description:"one-word CamelCase reason for the condition's last transition"` Message string `json:"message,omitempty" description:"human-readable message indicating details about last transition"` ``` -- cgit v1.2.3 From 292225b77b525a0e55c9f2c5ad6904288e751c7c Mon Sep 17 00:00:00 2001 From: Matt McNaughton Date: Fri, 18 Sep 2015 00:34:25 -0400 Subject: Fix indendation on devel/coding-conventions.md Fixing the indendation means the markdown will now render correcly on 
Github. Signed-off-by: Matt McNaughton --- coding-conventions.md | 26 +++++++++++++------------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/coding-conventions.md b/coding-conventions.md index 8ddf000e..3e3abaf7 100644 --- a/coding-conventions.md +++ b/coding-conventions.md @@ -65,19 +65,19 @@ Testing conventions - Unit tests must pass on OS X and Windows platforms - if you use Linux specific features, your test case must either be skipped on windows or compiled out (skipped is better when running Linux specific commands, compiled out is required when your code does not compile on Windows). Directory and file conventions - - Avoid package sprawl. Find an appropriate subdirectory for new packages. (See [#4851](http://issues.k8s.io/4851) for discussion.) - - Libraries with no more appropriate home belong in new package subdirectories of pkg/util - - Avoid general utility packages. Packages called "util" are suspect. Instead, derive a name that describes your desired function. For example, the utility functions dealing with waiting for operations are in the "wait" package and include functionality like Poll. So the full name is wait.Poll - - Go source files and directories use underscores, not dashes - - Package directories should generally avoid using separators as much as possible (when packages are multiple words, they usually should be in nested subdirectories). - - Document directories and filenames should use dashes rather than underscores - - Contrived examples that illustrate system features belong in /docs/user-guide or /docs/admin, depending on whether it is a feature primarily intended for users that deploy applications or cluster administrators, respectively. Actual application examples belong in /examples. 
- - Examples should also illustrate [best practices for using the system](../user-guide/config-best-practices.md) - - Third-party code - - Third-party Go code is managed using Godeps - - Other third-party code belongs in /third_party - - Third-party code must include licenses - - This includes modified third-party code and excerpts, as well + - Avoid package sprawl. Find an appropriate subdirectory for new packages. (See [#4851](http://issues.k8s.io/4851) for discussion.) + - Libraries with no more appropriate home belong in new package subdirectories of pkg/util + - Avoid general utility packages. Packages called "util" are suspect. Instead, derive a name that describes your desired function. For example, the utility functions dealing with waiting for operations are in the "wait" package and include functionality like Poll. So the full name is wait.Poll + - Go source files and directories use underscores, not dashes + - Package directories should generally avoid using separators as much as possible (when packages are multiple words, they usually should be in nested subdirectories). + - Document directories and filenames should use dashes rather than underscores + - Contrived examples that illustrate system features belong in /docs/user-guide or /docs/admin, depending on whether it is a feature primarily intended for users that deploy applications or cluster administrators, respectively. Actual application examples belong in /examples. 
+ - Examples should also illustrate [best practices for using the system](../user-guide/config-best-practices.md) + - Third-party code + - Third-party Go code is managed using Godeps + - Other third-party code belongs in /third_party + - Third-party code must include licenses + - This includes modified third-party code and excerpts, as well Coding advice - Go -- cgit v1.2.3 From 6d04d610747b27e3fd27ee0c240648024eafe2da Mon Sep 17 00:00:00 2001 From: qiaolei Date: Sat, 19 Sep 2015 09:32:17 +0800 Subject: Change 'params' to 'extraParams' to keep align with naming conventions Go field names must be CamelCase. JSON field names must be camelCase. Other than capitalization of the initial letter, the two should almost always match. No underscores nor dashes in either Please refer 'https://github.com/kubernetes/kubernetes/blob/master/docs/devel/api-conventions.md#naming-conventions' --- api_changes.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/api_changes.md b/api_changes.md index e0a65fe0..a7c4c3c8 100644 --- a/api_changes.md +++ b/api_changes.md @@ -157,7 +157,7 @@ type Frobber struct { Height int `json:"height"` Width int `json:"width"` Param string `json:"param"` // the first param - ExtraParams []string `json:"params"` // additional params + ExtraParams []string `json:"extraParams"` // additional params } ``` -- cgit v1.2.3 From efee6727cd73f876454bbcb7d7f2737f0ea3a0b5 Mon Sep 17 00:00:00 2001 From: Tim Hockin Date: Sun, 20 Sep 2015 21:00:41 -0700 Subject: Clarify experimental annotation format --- api-conventions.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/api-conventions.md b/api-conventions.md index 31225e18..fb7cbe10 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -33,7 +33,7 @@ Documentation for other releases can be found at API Conventions =============== -Updated: 8/24/2015 +Updated: 9/20/2015 *This document is oriented at users who want a deeper understanding of the Kubernetes API structure, and 
developers wanting to extend the Kubernetes API. An introduction to @@ -712,7 +712,7 @@ Therefore, resources supporting auto-generation of unique labels should have a ` Annotations have very different intended usage from labels. We expect them to be primarily generated and consumed by tooling and system extensions. I'm inclined to generalize annotations to permit them to directly store arbitrary json. Rigid names and name prefixes make sense, since they are analogous to API fields. -In fact, experimental API fields, including to represent fields of newer alpha/beta API versions in the older, stable storage version, may be represented as annotations with the prefix `experimental.kubernetes.io/`. +In fact, experimental API fields, including those used to represent fields of newer alpha/beta API versions in the older stable storage version, may be represented as annotations with the form `something.experimental.kubernetes.io/name`. For example `net.experimental.kubernetes.io/policy` might represent an experimental network policy field. Other advice regarding use of labels, annotations, and other generic map keys by Kubernetes components and tools: - Key names should be all lowercase, with words separated by dashes, such as `desired-replicas` -- cgit v1.2.3 From 7b4fa0ae9049528038d68cfdd941b9b40f702334 Mon Sep 17 00:00:00 2001 From: Paul Morie Date: Wed, 23 Sep 2015 14:45:00 -0400 Subject: Add link to dev e2e docs from api_changes doc --- api_changes.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/api_changes.md b/api_changes.md index e0a65fe0..b9fcd392 100644 --- a/api_changes.md +++ b/api_changes.md @@ -508,8 +508,8 @@ doing! ## Write end-to-end tests -This is, sadly, still sort of painful. Talk to us and we'll try to help you -figure out the best way to make sure your cool feature keeps working forever. +Check out the [E2E docs](e2e-tests.md) for detailed information about how to write end-to-end +tests for your feature. 
## Examples and docs -- cgit v1.2.3 From d3d7bf18668c2ea71583ca8a86a2470a5aa46b8f Mon Sep 17 00:00:00 2001 From: feihujiang Date: Wed, 30 Sep 2015 09:49:29 +0800 Subject: Fix wrong URL in cli-roadmap doc --- cli-roadmap.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/cli-roadmap.md b/cli-roadmap.md index 42784dbc..2b713260 100644 --- a/cli-roadmap.md +++ b/cli-roadmap.md @@ -35,8 +35,8 @@ Documentation for other releases can be found at See github issues with the following labels: * [area/app-config-deployment](https://github.com/kubernetes/kubernetes/labels/area/app-config-deployment) -* [component/CLI](https://github.com/kubernetes/kubernetes/labels/component/CLI) -* [component/client](https://github.com/kubernetes/kubernetes/labels/component/client) +* [component/kubectl](https://github.com/kubernetes/kubernetes/labels/component/kubectl) +* [component/clientlib](https://github.com/kubernetes/kubernetes/labels/component/clientlib) -- cgit v1.2.3 From d29c41354ec8aec136df95d3a9a03b6807b7bcd6 Mon Sep 17 00:00:00 2001 From: HaiyangDING Date: Tue, 29 Sep 2015 17:44:26 +0800 Subject: Replace PodFitsPorts with PodFitsHostPorts --- scheduler_algorithm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/scheduler_algorithm.md b/scheduler_algorithm.md index 7964ab33..d6a8b6c5 100755 --- a/scheduler_algorithm.md +++ b/scheduler_algorithm.md @@ -41,7 +41,7 @@ The purpose of filtering the nodes is to filter out the nodes that do not meet c - `NoDiskConflict`: Evaluate if a pod can fit due to the volumes it requests, and those that are already mounted. - `PodFitsResources`: Check if the free resource (CPU and Memory) meets the requirement of the Pod. The free resource is measured by the capacity minus the sum of requests of all Pods on the node. To learn more about the resource QoS in Kubernetes, please check [QoS proposal](../proposals/resource-qos.md). 
-- `PodFitsPorts`: Check if any HostPort required by the Pod is already occupied on the node. +- `PodFitsHostPorts`: Check if any HostPort required by the Pod is already occupied on the node. - `PodFitsHost`: Filter out all nodes except the one specified in the PodSpec's NodeName field. - `PodSelectorMatches`: Check if the labels of the node match the labels specified in the Pod's `nodeSelector` field ([Here](../user-guide/node-selection/) is an example of how to use `nodeSelector` field). - `CheckNodeLabelPresence`: Check if all the specified labels exist on a node or not, regardless of the value. -- cgit v1.2.3 From 8589eb45f56e423954754d2a61c3991d27fd4e5c Mon Sep 17 00:00:00 2001 From: "Madhusudan.C.S" Date: Fri, 2 Oct 2015 12:26:59 -0700 Subject: Move the hooks section to the commit section. It doesn't make much sense to have a separate section for hooks right now because we only have a pre-commit hook at the moment and we should have it setup before making the first commit. We can probably create a separate section for hooks again when we have other types of hooks. --- development.md | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/development.md b/development.md index 75cb2365..87fb02d5 100644 --- a/development.md +++ b/development.md @@ -89,6 +89,16 @@ git remote set-url --push upstream no_push ### Committing changes to your fork +Before committing any changes, please link/copy these pre-commit hooks into your .git +directory. This will keep you from accidentally committing non-gofmt'd go code. + +```sh +cd kubernetes/.git/hooks/ +ln -s ../../hooks/pre-commit . +``` + +Then you can commit your changes and push them to your fork: + ```sh git commit git push -f origin myfeature @@ -203,16 +213,6 @@ It is sometimes expedient to manually fix the /Godeps/godeps.json file to minimi Please send dependency updates in separate commits within your PR, for easier reviewing. 
-## Hooks - -Before committing any changes, please link/copy these hooks into your .git -directory. This will keep you from accidentally committing non-gofmt'd go code. - -```sh -cd kubernetes/.git/hooks/ -ln -s ../../hooks/pre-commit . -``` - ## Unit tests ```sh -- cgit v1.2.3 From b0884c7373c19d12448ecc54675ac98e78c7bdc9 Mon Sep 17 00:00:00 2001 From: Brian Grant Date: Fri, 9 Oct 2015 02:13:28 +0000 Subject: Strengthen wording about status behavior. --- api-conventions.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/api-conventions.md b/api-conventions.md index fb7cbe10..99aa0cf8 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -33,7 +33,7 @@ Documentation for other releases can be found at API Conventions =============== -Updated: 9/20/2015 +Updated: 10/8/2015 *This document is oriented at users who want a deeper understanding of the Kubernetes API structure, and developers wanting to extend the Kubernetes API. An introduction to @@ -172,7 +172,7 @@ When a new version of an object is POSTed or PUT, the "spec" is updated and avai The Kubernetes API also serves as the foundation for the declarative configuration schema for the system. In order to facilitate level-based operation and expression of declarative configuration, fields in the specification should have declarative rather than imperative names and semantics -- they represent the desired state, not actions intended to yield the desired state. -The PUT and POST verbs on objects will ignore the "status" values. A `/status` subresource is provided to enable system components to update statuses of resources they manage. +The PUT and POST verbs on objects MUST ignore the "status" values, to avoid accidentally overwriting the status in read-modify-write scenarios. A `/status` subresource MUST be provided to enable system components to update statuses of resources they manage. Otherwise, PUT expects the whole object to be specified. 
Therefore, if a field is omitted it is assumed that the client wants to clear that field's value. The PUT verb does not accept partial updates. Modification of just part of an object may be achieved by GETting the resource, modifying part of the spec, labels, or annotations, and then PUTting it back. See [concurrency control](#concurrency-control-and-consistency), below, regarding read-modify-write consistency when using this pattern. Some objects may expose alternative resource representations that allow mutation of the status, or performing custom actions on the object. -- cgit v1.2.3 From 499f571b4ec667021e17f2abcf31637738b26b18 Mon Sep 17 00:00:00 2001 From: Clayton Coleman Date: Fri, 11 Sep 2015 16:09:51 -0400 Subject: Expose exec and logs via WebSockets Not all clients and systems can support SPDY protocols. This commit adds support for two new websocket protocols, one to handle streaming of logs from a pod, and the other to allow exec to be tunneled over websocket. Browser support for chunked encoding is still poor, and web consoles that wish to show pod logs may need to make compromises to display the output. The /pods//log endpoint now supports websocket upgrade to the 'binary.k8s.io' subprotocol, which sends chunks of logs as binary to the client. Messages are written as logs are streamed from the container daemon, so flushing should be unaffected. Browser support for raw communication over SPDY is not possible, and some languages lack libraries for it and HTTP/2. The Kubelet supports upgrade to WebSocket instead of SPDY, and will multiplex STDOUT/IN/ERR over websockets by prepending each binary message with a single byte representing the channel (0 for IN, 1 for OUT, and 2 for ERR). Because framing on WebSockets suffers from head-of-line blocking, clients and other server code should ensure that no particular stream blocks. 
An alternative subprotocol 'base64.channel.k8s.io' base64 encodes the body and uses '0'-'9' to represent the channel for ease of use in browsers. --- api-conventions.md | 17 +++++++++++++++++ 1 file changed, 17 insertions(+) diff --git a/api-conventions.md b/api-conventions.md index 99aa0cf8..a23dc270 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -73,6 +73,7 @@ using resources with kubectl can be found in [Working with resources](../user-gu - [Events](#events) - [Naming conventions](#naming-conventions) - [Label, selector, and annotation conventions](#label-selector-and-annotation-conventions) + - [WebSockets and SPDY](#websockets-and-spdy) @@ -721,6 +722,22 @@ Other advice regarding use of labels, annotations, and other generic map keys by - Use annotations to store API extensions that the controller responsible for the resource doesn't need to know about, experimental fields that aren't intended to be generally used API fields, etc. Beware that annotations aren't automatically handled by the API conversion machinery. +## WebSockets and SPDY + +Some of the API operations exposed by Kubernetes involve transfer of binary streams between the client and a container, including attach, exec, portforward, and logging. The API therefore exposes certain operations over upgradeable HTTP connections ([described in RFC 2817](https://tools.ietf.org/html/rfc2817)) via the WebSocket and SPDY protocols. These actions are exposed as subresources with their associated verbs (exec, log, attach, and portforward) and are requested via a GET (to support JavaScript in a browser) and POST (semantically accurate). + +There are two primary protocols in use today: + +1. Streamed channels + + When dealing with multiple independent binary streams of data such as the remote execution of a shell command (writing to STDIN, reading from STDOUT and STDERR) or forwarding multiple ports the streams can be multiplexed onto a single TCP connection. 
Kubernetes supports a SPDY based framing protocol that leverages SPDY channels and a WebSocket framing protocol that multiplexes multiple channels onto the same stream by prefixing each binary chunk with a byte indicating its channel. The WebSocket protocol supports an optional subprotocol that handles base64-encoded bytes from the client and returns base64-encoded bytes from the server and character based channel prefixes ('0', '1', '2') for ease of use from JavaScript in a browser. + +2. Streaming response + + The default log output for a channel of streaming data is an HTTP Chunked Transfer-Encoding, which can return an arbitrary stream of binary data from the server. Browser-based JavaScript is limited in its ability to access the raw data from a chunked response, especially when very large amounts of logs are returned, and in future API calls it may be desirable to transfer large files. The streaming API endpoints support an optional WebSocket upgrade that provides a unidirectional channel from the server to the client and chunks data as binary WebSocket frames. An optional WebSocket subprotocol is exposed that base64 encodes the stream before returning it to the client. + +Clients should use the SPDY protocols if they have native support, or WebSockets as a fallback. Note that WebSockets is susceptible to Head-of-Line blocking and so clients must read and process each message sequentially. In the future, an HTTP/2 implementation will be exposed that deprecates SPDY. 
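The byte-prefixed framing described above is straightforward to sketch. The following is an illustrative demultiplexer for the two subprotocols (channel numbering per the convention above: 0 for STDIN, 1 for STDOUT, 2 for STDERR); it is a sketch, not the actual Kubelet or client implementation:

```python
# Illustrative demultiplexer for channel-prefixed WebSocket frames.
# Assumes each frame arrives as a bytes object.
import base64

STDIN, STDOUT, STDERR = 0, 1, 2

def demux_binary(frames):
    """'binary.k8s.io' style: the first byte of each frame is the channel number."""
    streams = {}
    for frame in frames:
        channel, payload = frame[0], frame[1:]
        streams[channel] = streams.get(channel, b"") + payload
    return streams

def demux_base64(frames):
    """'base64.channel.k8s.io' style: the first byte is an ASCII digit
    ('0'-'9') naming the channel, and the rest is base64-encoded."""
    streams = {}
    for frame in frames:
        channel = int(chr(frame[0]))
        streams[channel] = streams.get(channel, b"") + base64.b64decode(frame[1:])
    return streams
```

Because WebSocket framing is subject to head-of-line blocking, a real client would consume frames in order and hand each payload off to its channel without letting any one stream stall the read loop.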
+ [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/api-conventions.md?pixel)]() -- cgit v1.2.3 From e2dd98e6052420a1198bcb632c5353f10d5b2894 Mon Sep 17 00:00:00 2001 From: Mike Danese Date: Mon, 12 Oct 2015 11:35:30 -0700 Subject: fix incorrect merge MIME type in api-conventions doc --- api-conventions.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/api-conventions.md b/api-conventions.md index a23dc270..7ad1dbc6 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -288,7 +288,7 @@ The API supports three different PATCH operations, determined by their correspon * JSON Patch, `Content-Type: application/json-patch+json` * As defined in [RFC6902](https://tools.ietf.org/html/rfc6902), a JSON Patch is a sequence of operations that are executed on the resource, e.g. `{"op": "add", "path": "/a/b/c", "value": [ "foo", "bar" ]}`. For more details on how to use JSON Patch, see the RFC. -* Merge Patch, `Content-Type: application/merge-json-patch+json` +* Merge Patch, `Content-Type: application/merge-patch+json` * As defined in [RFC7386](https://tools.ietf.org/html/rfc7386), a Merge Patch is essentially a partial representation of the resource. The submitted JSON is "merged" with the current resource to create a new one, then the new one is saved. For more details on how to use Merge Patch, see the RFC. * Strategic Merge Patch, `Content-Type: application/strategic-merge-patch+json` * Strategic Merge Patch is a custom implementation of Merge Patch. For a detailed explanation of how it works and why it needed to be introduced, see below. -- cgit v1.2.3 From d587e3b9f965b6528fbe9b058056f4e7c70ef6f4 Mon Sep 17 00:00:00 2001 From: Jeff Grafton Date: Thu, 8 Oct 2015 15:57:24 -0700 Subject: Update test helpers and dev doc to use etcd v2.0.12. 
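The `application/merge-patch+json` content type corrected in the patch above follows RFC 7386, whose semantics can be sketched in a few lines: objects merge recursively, a `null` value deletes a key, and anything that is not an object replaces the target outright. This is an illustrative re-implementation, not code from the API server:

```python
# Illustrative RFC 7386 JSON Merge Patch; not the API server's implementation.

def json_merge_patch(target, patch):
    if not isinstance(patch, dict):
        return patch                  # non-objects replace the target wholesale
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)     # null deletes the key
        else:
            result[key] = json_merge_patch(result.get(key, {}), value)
    return result
```

Note the contrast with JSON Patch, which is an explicit list of operations, and Strategic Merge Patch, which additionally consults per-field metadata so that lists can be merged rather than replaced.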
--- development.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/development.md b/development.md index 87fb02d5..4375d73e 100644 --- a/development.md +++ b/development.md @@ -264,7 +264,7 @@ Coverage results for the project can also be viewed on [Coveralls](https://cover ## Integration tests -You need an [etcd](https://github.com/coreos/etcd/releases/tag/v2.0.0) in your path, please make sure it is installed and in your ``$PATH``. +You need an [etcd](https://github.com/coreos/etcd/releases/tag/v2.0.12) in your path, please make sure it is installed and in your ``$PATH``. ```sh cd kubernetes -- cgit v1.2.3 From aee2383f9b350d0ea7b5d14b60b5c77fdef08391 Mon Sep 17 00:00:00 2001 From: Jeff Grafton Date: Thu, 8 Oct 2015 17:57:36 -0700 Subject: Update documentation to describe how to install etcd for testing --- development.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/development.md b/development.md index 4375d73e..0b778dd9 100644 --- a/development.md +++ b/development.md @@ -264,7 +264,9 @@ Coverage results for the project can also be viewed on [Coveralls](https://cover ## Integration tests -You need an [etcd](https://github.com/coreos/etcd/releases/tag/v2.0.12) in your path, please make sure it is installed and in your ``$PATH``. +You need an [etcd](https://github.com/coreos/etcd/releases) in your path. To download a copy of the latest version used by Kubernetes, either + * run `hack/install-etcd.sh`, which will download etcd to `third_party/etcd`, and then set your `PATH` to include `third_party/etcd`. + * inspect `cluster/saltbase/salt/etcd/etcd.manifest` for the correct version, and then manually download and install it to some place in your `PATH`. 
```sh cd kubernetes -- cgit v1.2.3 From 576acdb7fb386e5f34104da4042dbe9f728a79b5 Mon Sep 17 00:00:00 2001 From: Eric Tune Date: Thu, 8 Oct 2015 16:29:02 -0700 Subject: Doc: apigroups, alpha, beta, experimental/v1alpha1 --- api_changes.md | 63 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 63 insertions(+) diff --git a/api_changes.md b/api_changes.md index 6b96b0e2..24430f26 100644 --- a/api_changes.md +++ b/api_changes.md @@ -535,6 +535,69 @@ The API spec changes should be in a commit separate from your other changes. TODO(smarterclayton): write this. +## Alpha, Beta, and Stable Versions + +New feature development proceeds through a series of stages of increasing maturity: + +- Development level + - Object Versioning: no convention + - Availability: not committed to main kubernetes repo, and thus not available in official releases + - Audience: other developers closely collaborating on a feature or proof-of-concept + - Upgradeability, Reliability, Completeness, and Support: no requirements or guarantees +- Alpha level + - Object Versioning: API version name contains `alpha` (e.g. 
`v1alpha1`) + - Availability: committed to main kubernetes repo; appears in an official release; feature is + disabled by default, but may be enabled by flag + - Audience: developers and expert users interested in giving early feedback on features + - Completeness: some API operations, CLI commands, or UI support may not be implemented; the API + need not have had an *API review* (an intensive and targeted review of the API, on top of a normal + code review) + - Upgradeability: the object schema and semantics may change in a later software release, without + any provision for preserving objects in an existing cluster; + removing the upgradeability concern allows developers to make rapid progress; in particular, + API versions can increment faster than the minor release cadence and the developer need not + maintain multiple versions; developers should still increment the API version when object schema + or semantics change in an [incompatible way](#on-compatibility) + - Cluster Reliability: because the feature is relatively new, and may lack complete end-to-end + tests, enabling the feature via a flag might expose bugs that destabilize the cluster (e.g. a + bug in a control loop might rapidly create excessive numbers of objects, exhausting API storage). + - Support: there is *no commitment* from the project to complete the feature; the feature may be + dropped entirely in a later software release + - Recommended Use Cases: only in short-lived testing clusters, due to the lack of upgradeability + and long-term support. +- Beta level: + - Object Versioning: API version name contains `beta` (e.g. 
`v2beta3`) + - Availability: in official Kubernetes releases, and enabled by default + - Audience: users interested in providing feedback on features + - Completeness: all API operations, CLI commands, and UI support should be implemented; end-to-end + tests complete; the API has had a thorough API review and is thought to be complete, though use + during beta may frequently turn up API issues not thought of during review + - Upgradeability: the object schema and semantics may change in a later software release; when + this happens, an upgrade path will be documented; in some cases, objects will be automatically + converted to the new version; in other cases, a manual upgrade may be necessary; a manual + upgrade may require downtime for anything relying on the new feature, and may require + manual conversion of objects to the new version; when manual conversion is necessary, the + project will provide documentation on the process (for an example, see [v1 conversion + tips](../api.md)) + - Cluster Reliability: since the feature has e2e tests, enabling the feature via a flag should not + create new bugs in unrelated features; because the feature is new, it may have minor bugs + - Support: the project commits to complete the feature, in some form, in a subsequent Stable + version; typically this will happen within 3 months, but sometimes longer; releases should + simultaneously support two consecutive versions (e.g. `v1beta1` and `v1beta2`; or `v1beta2` and + `v1`) for at least one minor release cycle (typically 3 months) so that users have enough time + to upgrade and migrate objects + - Recommended Use Cases: in short-lived testing clusters; in production clusters as part of a + short-lived evaluation of the feature in order to provide feedback +- Stable level: + - Object Versioning: API version `vX` where `X` is an integer (e.g. 
`v1`) + - Availability: in official Kubernetes releases, and enabled by default + - Audience: all users + - Completeness: same as beta + - Upgradeability: only [strictly compatible](#on-compatibility) changes allowed in subsequent + software releases + - Cluster Reliability: high + - Support: API version will continue to be present for many subsequent software releases; + - Recommended Use Cases: any [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/api_changes.md?pixel)]() -- cgit v1.2.3 From bb2aa8770ff269515fe5f83ce0620f634eb2cadc Mon Sep 17 00:00:00 2001 From: Isaac Hollander McCreery Date: Mon, 12 Oct 2015 16:11:12 -0700 Subject: Upgrades and upgrade tests take versions of the form release/stable instead of stable_release: - Refactor common and gce/upgrade.sh to use arbitrary published releases - Update hack/get-build to use cluster/common code - Use hack/get-build.sh in cluster upgrade test logic --- getting-builds.md | 24 +++++++++++++++++------- 1 file changed, 17 insertions(+), 7 deletions(-) diff --git a/getting-builds.md b/getting-builds.md index bcb981c4..3803c873 100644 --- a/getting-builds.md +++ b/getting-builds.md @@ -35,17 +35,27 @@ Documentation for other releases can be found at You can use [hack/get-build.sh](http://releases.k8s.io/HEAD/hack/get-build.sh) to or use as a reference on how to get the most recent builds with curl. With `get-build.sh` you can grab the most recent stable build, the most recent release candidate, or the most recent build to pass our ci and gce e2e tests (essentially a nightly build). +Run `./hack/get-build.sh -h` for its usage. 
+ +For example, to get a build at a specific version (v1.0.2): + ```console -usage: - ./hack/get-build.sh [stable|release|latest|latest-green] +./hack/get-build.sh v1.0.2 +``` - stable: latest stable version - release: latest release candidate - latest: latest ci build - latest-green: latest ci build to pass gce e2e +Alternatively, to get the latest stable release: + +```console +./hack/get-build.sh release/stable +``` + +Finally, you can just print the latest or stable version: + +```console +./hack/get-build.sh -v ci/latest ``` -You can also use the gsutil tool to explore the Google Cloud Storage release bucket. Here are some examples: +You can also use the gsutil tool to explore the Google Cloud Storage release buckets. Here are some examples: ```sh gsutil cat gs://kubernetes-release/ci/latest.txt # output the latest ci version number -- cgit v1.2.3 From 7707173defcebeb95d061638e0dcfe0ace605d3a Mon Sep 17 00:00:00 2001 From: Tim Hockin Date: Fri, 9 Oct 2015 16:54:49 -0700 Subject: update docs on experimental annotations --- api-conventions.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/api-conventions.md b/api-conventions.md index 7ad1dbc6..2568d952 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -713,7 +713,7 @@ Therefore, resources supporting auto-generation of unique labels should have a ` Annotations have very different intended usage from labels. We expect them to be primarily generated and consumed by tooling and system extensions. I'm inclined to generalize annotations to permit them to directly store arbitrary json. Rigid names and name prefixes make sense, since they are analogous to API fields. -In fact, experimental API fields, including those used to represent fields of newer alpha/beta API versions in the older stable storage version, may be represented as annotations with the form `something.experimental.kubernetes.io/name`. 
For example `net.experimental.kubernetes.io/policy` might represent an experimental network policy field. +In fact, in-development API fields, including those used to represent fields of newer alpha/beta API versions in the older stable storage version, may be represented as annotations with the form `something.alpha.kubernetes.io/name` or `something.beta.kubernetes.io/name` (depending on our confidence in it). For example `net.alpha.kubernetes.io/policy` might represent an experimental network policy field. Other advice regarding use of labels, annotations, and other generic map keys by Kubernetes components and tools: - Key names should be all lowercase, with words separated by dashes, such as `desired-replicas` -- cgit v1.2.3 From 21ea4045ce292f7594efdbe045f1b163ffad90e1 Mon Sep 17 00:00:00 2001 From: Wojciech Tyczynski Date: Mon, 19 Oct 2015 09:29:10 +0200 Subject: api_changes.md changes for json-related code autogeneration. --- api_changes.md | 30 +++++++++++++++++++++++------- 1 file changed, 23 insertions(+), 7 deletions(-) diff --git a/api_changes.md b/api_changes.md index 24430f26..53dfb014 100644 --- a/api_changes.md +++ b/api_changes.md @@ -38,7 +38,7 @@ with a number of existing API types and with the [API conventions](api-conventions.md). If creating a new API type/resource, we also recommend that you first send a PR containing just a proposal for the new API types, and that you initially target -the experimental API (pkg/apis/experimental). +the extensions API (pkg/apis/extensions). The Kubernetes API has two major components - the internal structures and the versioned APIs. The versioned APIs are intended to be stable, while the @@ -293,13 +293,13 @@ the release notes for the next release by labeling the PR with the "release-note If you found that your change accidentally broke clients, it should be reverted. In short, the expected API evolution is as follows: -* `experimental/v1alpha1` -> +* `extensions/v1alpha1` -> * `newapigroup/v1alpha1` -> ... 
-> `newapigroup/v1alphaN` -> * `newapigroup/v1beta1` -> ... -> `newapigroup/v1betaN` -> * `newapigroup/v1` -> * `newapigroup/v2alpha1` -> ... -While in experimental we have no obligation to move forward with the API at all and may delete or break it at any time. +While in extensions we have no obligation to move forward with the API at all and may delete or break it at any time. While in alpha we expect to move forward with it, but may break it. @@ -399,9 +399,9 @@ The conversion code resides with each versioned API. There are two files: functions - `pkg/api//conversion_generated.go` containing auto-generated conversion functions - - `pkg/apis/experimental//conversion.go` containing manually written + - `pkg/apis/extensions//conversion.go` containing manually written conversion functions - - `pkg/apis/experimental//conversion_generated.go` containing + - `pkg/apis/extensions//conversion_generated.go` containing auto-generated conversion functions Since auto-generated conversion functions are using manually written ones, @@ -437,7 +437,7 @@ of your versioned api objects. The deep copy code resides with each versioned API: - `pkg/api//deep_copy_generated.go` containing auto-generated copy functions - - `pkg/apis/experimental//deep_copy_generated.go` containing auto-generated copy functions + - `pkg/apis/extensions//deep_copy_generated.go` containing auto-generated copy functions To regenerate them: - run @@ -446,12 +446,28 @@ To regenerate them: hack/update-generated-deep-copies.sh ``` +## Edit json (un)marshaling code + +We are auto-generating code for marshaling and unmarshaling json representation +of api objects - this is to improve the overall system performance. 
+ +The auto-generated code resides with each versioned API: + - `pkg/api//types.generated.go` + - `pkg/apis/extensions//types.generated.go` + +To regenerate them: + - run + +```sh +hack/update-codecgen.sh +``` + ## Making a new API Group This section is under construction, as we make the tooling completely generic. At the moment, you'll have to make a new directory under pkg/apis/; copy the -directory structure from pkg/apis/experimental. Add the new group/version to all +directory structure from pkg/apis/extensions. Add the new group/version to all of the hack/{verify,update}-generated-{deep-copy,conversions,swagger}.sh files in the appropriate places--it should just require adding your new group/version to a bash array. You will also need to make sure your new types are imported by -- cgit v1.2.3 From 0b10e0b16a0477314a55161fe7422da441c7379d Mon Sep 17 00:00:00 2001 From: Eric Tune Date: Tue, 13 Oct 2015 12:42:49 -0700 Subject: Documented required/optional fields. --- api-conventions.md | 33 +++++++++++++++++++++++++++++++++ 1 file changed, 33 insertions(+) diff --git a/api-conventions.md b/api-conventions.md index 7ad1dbc6..710fff51 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -59,6 +59,7 @@ using resources with kubectl can be found in [Working with resources](../user-gu - [List Operations](#list-operations) - [Map Operations](#map-operations) - [Idempotency](#idempotency) + - [Optional vs Required](#optional-vs-required) - [Defaulting](#defaulting) - [Late Initialization](#late-initialization) - [Concurrency Control and Consistency](#concurrency-control-and-consistency) @@ -370,6 +371,38 @@ All compatible Kubernetes APIs MUST support "name idempotency" and respond with Names generated by the system may be requested using `metadata.generateName`. GenerateName indicates that the name should be made unique by the server prior to persisting it. 
A non-empty value for the field indicates the name will be made unique (and the name returned to the client will be different than the name passed). The value of this field will be combined with a unique suffix on the server if the Name field has not been provided. The provided value must be valid within the rules for Name, and may be truncated by the length of the suffix required to make the value unique on the server. If this field is specified, and Name is not present, the server will NOT return a 409 if the generated name exists - instead, it will either return 201 Created or 504 with Reason `ServerTimeout` indicating a unique name could not be found in the time allotted, and the client should retry (optionally after the time indicated in the Retry-After header). + +## Optional vs Required + +Fields must be either optional or required. + +Optional fields have the following properties: + +- They have the `omitempty` struct tag in Go. +- They are a pointer type in the Go definition (e.g. `awesomeFlag *bool`). +- The API server should allow POSTing and PUTing a resource with this field unset. + +Required fields have the opposite properties, namely: + +- They do not have an `omitempty` struct tag. +- They are not a pointer type in the Go definition (e.g. `otherFlag bool`). +- The API server should not allow POSTing or PUTing a resource with this field unset. + +Using the `omitempty` tag causes swagger documentation to reflect that the field is optional. + +Using a pointer allows distinguishing unset from the zero value for that type. +There are some cases where, in principle, a pointer is not needed for an optional field +since the zero value is forbidden, and thus implies unset. There are examples of this in the 
However: + +- it can be difficult for implementors to anticipate all cases where an empty value might need to be + distinguished from a zero value +- structs are not omitted from encoder output even where omitempty is specified, which is messy; +- having a pointer consistently imply optional is clearer for users of the Go language client, and any + other clients that use corresponding types + +Therefore, we ask that pointers always be used with optional fields. + + ## Defaulting Default resource values are API version-specific, and they are applied during -- cgit v1.2.3 From 6579078d9ddafb933fd765304208a4a67af3d32e Mon Sep 17 00:00:00 2001 From: dingh Date: Fri, 23 Oct 2015 13:46:32 +0800 Subject: fix typo in api-converntions.md --- api-conventions.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/api-conventions.md b/api-conventions.md index 75c1cf51..35ae7bb6 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -728,7 +728,7 @@ Accumulate repeated events in the client, especially for frequent events, to red ## Label, selector, and annotation conventions -Labels are the domain of users. They are intended to facilitate organization and management of API resources using attributes that are meaningful to users, as opposed to meaningful to the system. Think of them as user-created mp3 or email inbox labels, as opposed to the directory structure used by a program to store its data. The former is enables the user to apply an arbitrary ontology, whereas the latter is implementation-centric and inflexible. Users will use labels to select resources to operate on, display label values in CLI/UI columns, etc. Users should always retain full power and flexibility over the label schemas they apply to labels in their namespaces. +Labels are the domain of users. They are intended to facilitate organization and management of API resources using attributes that are meaningful to users, as opposed to meaningful to the system. 
Think of them as user-created mp3 or email inbox labels, as opposed to the directory structure used by a program to store its data. The former enables the user to apply an arbitrary ontology, whereas the latter is implementation-centric and inflexible. Users will use labels to select resources to operate on, display label values in CLI/UI columns, etc. Users should always retain full power and flexibility over the label schemas they apply to labels in their namespaces. However, we should support conveniences for common cases by default. For example, what we now do in ReplicationController is automatically set the RC's selector and labels to the labels in the pod template by default, if they are not already set. That ensures that the selector will match the template, and that the RC can be managed using the same labels as the pods it creates. Note that once we generalize selectors, it won't necessarily be possible to unambiguously generate labels that match an arbitrary selector. -- cgit v1.2.3 From 27b87616c75fe7506feff77dbb2de29b327652c6 Mon Sep 17 00:00:00 2001 From: Tim Hockin Date: Fri, 23 Oct 2015 15:08:27 -0700 Subject: syntax is 'go' not 'golang' --- api-conventions.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/api-conventions.md b/api-conventions.md index 75c1cf51..69d1f740 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -188,7 +188,7 @@ Objects that contain both spec and status should not contain additional top-leve The `FooCondition` type for some resource type `Foo` may include a subset of the following fields, but must contain at least `type` and `status` fields: -```golang +```go Type FooConditionType `json:"type" description:"type of Foo condition"` Status ConditionStatus `json:"status" description:"status of the condition, one of True, False, Unknown"` LastHeartbeatTime unversioned.Time `json:"lastHeartbeatTime,omitempty" description:"last time we got an update on a given condition"` -- cgit v1.2.3 From 
2df426d3f2657c059d0dfa99863dd1bacbaba323 Mon Sep 17 00:00:00 2001 From: Eric Tune Date: Fri, 23 Oct 2015 15:41:49 -0700 Subject: In devel docs, refer to .kube/config not .kubernetes_auth --- e2e-tests.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/e2e-tests.md b/e2e-tests.md index ca55b901..882da396 100644 --- a/e2e-tests.md +++ b/e2e-tests.md @@ -74,7 +74,7 @@ For the purposes of brevity, we will look at a subset of the options, which are -repo-root="../../": Root directory of kubernetes repository, for finding test files. ``` -Prior to running the tests, it is recommended that you first create a simple auth file in your home directory, e.g. `$HOME/.kubernetes_auth` , with the following: +Prior to running the tests, it is recommended that you first create a simple auth file in your home directory, e.g. `$HOME/.kube/config` , with the following: ``` { @@ -85,7 +85,7 @@ Prior to running the tests, it is recommended that you first create a simple aut Next, you will need a cluster that you can test against. As mentioned earlier, you will want to execute `sudo ./hack/local-up-cluster.sh`. To get a sense of what tests exist, you may want to run: -`e2e.test --host="127.0.0.1:8080" --provider="local" --ginkgo.v=true -ginkgo.dryRun=true --kubeconfig="$HOME/.kubernetes_auth" --repo-root="$KUBERNETES_SRC_PATH"` +`e2e.test --host="127.0.0.1:8080" --provider="local" --ginkgo.v=true -ginkgo.dryRun=true --kubeconfig="$HOME/.kube/config" --repo-root="$KUBERNETES_SRC_PATH"` If you wish to execute a specific set of tests you can use the `-ginkgo.focus=` regex, e.g.: -- cgit v1.2.3 From 202e7b6567f7aefd2abd9058ae3cdc7ffb4bfe5d Mon Sep 17 00:00:00 2001 From: Robert Wehner Date: Sat, 24 Oct 2015 20:02:54 -0600 Subject: Fix dead links to submit-queue * https://github.com/kubernetes/contrib/pull/122 merged submit-queue into mungegithub. This fixes links to the old submit-queue location. * Standardized to use "submit-queue" instead of "submit queue". 
Just picked one since both were used. * Fixes dead link to on-call wiki. --- automation.md | 4 ++-- pull-requests.md | 6 +++--- 2 files changed, 5 insertions(+), 5 deletions(-) diff --git a/automation.md b/automation.md index eb36cc63..f01b6158 100644 --- a/automation.md +++ b/automation.md @@ -46,8 +46,8 @@ In an effort to * maintain e2e stability * load test githubs label feature -We have added an automated [submit-queue](https://github.com/kubernetes/contrib/tree/master/submit-queue) -for kubernetes. +We have added an automated [submit-queue](https://github.com/kubernetes/contrib/blob/master/mungegithub/pulls/submit-queue.go) to the +[github "munger"](https://github.com/kubernetes/contrib/tree/master/mungegithub) for kubernetes. The submit-queue does the following: diff --git a/pull-requests.md b/pull-requests.md index 7b955b3d..15a0f447 100644 --- a/pull-requests.md +++ b/pull-requests.md @@ -52,14 +52,14 @@ Life of a Pull Request Unless in the last few weeks of a milestone when we need to reduce churn and stabilize, we aim to be always accepting pull requests. -Either the [on call](https://github.com/kubernetes/kubernetes/wiki/Kubernetes-on-call-rotation) manually or the [submit queue](https://github.com/kubernetes/contrib/tree/master/submit-queue) automatically will manage merging PRs. +Either the [on call](https://github.com/kubernetes/kubernetes/wiki/Kubernetes-on-call-rotations) manually or the [github "munger"](https://github.com/kubernetes/contrib/tree/master/mungegithub) submit-queue plugin automatically will manage merging PRs. 
-There are several requirements for the submit queue to work: +There are several requirements for the submit-queue to work: * Author must have signed CLA ("cla: yes" label added to PR) * No changes can be made since last lgtm label was applied * k8s-bot must have reported the GCE E2E build and test steps passed (Travis, Shippable and Jenkins build) -Additionally, for infrequent or new contributors, we require the on call to apply the "ok-to-merge" label manually. This is gated by the [whitelist](https://github.com/kubernetes/contrib/tree/master/submit-queue/whitelist.txt). +Additionally, for infrequent or new contributors, we require the on call to apply the "ok-to-merge" label manually. This is gated by the [whitelist](https://github.com/kubernetes/contrib/blob/master/mungegithub/whitelist.txt). Automation ---------- -- cgit v1.2.3 From 6f050771d96266a78901d8cb9946b159c70fb14f Mon Sep 17 00:00:00 2001 From: hurf Date: Fri, 30 Oct 2015 14:12:20 +0800 Subject: Remove trace of "kubectl stop" Remove doc and use of "kubectl stop" since it's deprecated. --- flaky-tests.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/flaky-tests.md b/flaky-tests.md index 3a7af51e..2470a815 100644 --- a/flaky-tests.md +++ b/flaky-tests.md @@ -87,10 +87,10 @@ done grep "Exited ([^0])" output.txt ``` -Eventually you will have sufficient runs for your purposes. At that point you can stop and delete the replication controller by running: +Eventually you will have sufficient runs for your purposes. At that point you can delete the replication controller by running: ```sh -kubectl stop replicationcontroller flakecontroller +kubectl delete replicationcontroller flakecontroller ``` If you do a final check for flakes with `docker ps -a`, ignore tasks that exited -1, since that's what happens when you stop the replication controller. 
-- cgit v1.2.3 From 6a9d36de0ba742897ec9239118f5dd32c5daaefa Mon Sep 17 00:00:00 2001 From: Isaac Hollander McCreery Date: Thu, 22 Oct 2015 05:24:34 -0700 Subject: Remove out-of-date information about releasing --- releasing.md | 74 ------------------------------------------------------------ 1 file changed, 74 deletions(-) diff --git a/releasing.md b/releasing.md index 9a73405f..6ff8e862 100644 --- a/releasing.md +++ b/releasing.md @@ -249,80 +249,6 @@ can, for instance, tell it to override `gitVersion` and set it to `v0.4-13-g4567bcdef6789-dirty` and set `gitCommit` to `4567bcdef6789...` which is the complete SHA1 of the (dirty) tree used at build time. -## Handling Official Versions - -Handling official versions from git is easy, as long as there is an annotated -git tag pointing to a specific version then `git describe` will return that tag -exactly which will match the idea of an official version (e.g. `v0.5`). - -Handling it on tarballs is a bit harder since the exact version string must be -present in `pkg/version/base.go` for it to get embedded into the binaries. But -simply creating a commit with `v0.5` on its own would mean that the commits -coming after it would also get the `v0.5` version when built from tarball or `go -get` while in fact they do not match `v0.5` (the one that was tagged) exactly. - -To handle that case, creating a new release should involve creating two adjacent -commits where the first of them will set the version to `v0.5` and the second -will set it to `v0.5-dev`. In that case, even in the presence of merges, there -will be a single commit where the exact `v0.5` version will be used and all -others around it will either have `v0.4-dev` or `v0.5-dev`. - -The diagram below illustrates it. - -![Diagram of git commits involved in the release](releasing.png) - -After working on `v0.4-dev` and merging PR 99 we decide it is time to release -`v0.5`. 
So we start a new branch, create one commit to update -`pkg/version/base.go` to include `gitVersion = "v0.5"` and `git commit` it. - -We test it and make sure everything is working as expected. - -Before sending a PR for it, we create a second commit on that same branch, -updating `pkg/version/base.go` to include `gitVersion = "v0.5-dev"`. That will -ensure that further builds (from tarball or `go install`) on that tree will -always include the `-dev` prefix and will not have a `v0.5` version (since they -do not match the official `v0.5` exactly.) - -We then send PR 100 with both commits in it. - -Once the PR is accepted, we can use `git tag -a` to create an annotated tag -*pointing to the one commit* that has `v0.5` in `pkg/version/base.go` and push -it to GitHub. (Unfortunately GitHub tags/releases are not annotated tags, so -this needs to be done from a git client and pushed to GitHub using SSH or -HTTPS.) - -## Parallel Commits - -While we are working on releasing `v0.5`, other development takes place and -other PRs get merged. For instance, in the example above, PRs 101 and 102 get -merged to the master branch before the versioning PR gets merged. - -This is not a problem, it is only slightly inaccurate that checking out the tree -at commit `012abc` or commit `345cde` or at the commit of the merges of PR 101 -or 102 will yield a version of `v0.4-dev` *but* those commits are not present in -`v0.5`. - -In that sense, there is a small window in which commits will get a -`v0.4-dev` or `v0.4-N-gXXX` label and while they're indeed later than `v0.4` -but they are not really before `v0.5` in that `v0.5` does not contain those -commits. - -Unfortunately, there is not much we can do about it. On the other hand, other -projects seem to live with that and it does not really become a large problem. 
- -As an example, Docker commit a327d9b91edf has a `v1.1.1-N-gXXX` label but it is -not present in Docker `v1.2.0`: - -```console -$ git describe a327d9b91edf -v1.1.1-822-ga327d9b91edf - -$ git log --oneline v1.2.0..a327d9b91edf -a327d9b91edf Fix data space reporting from Kb/Mb to KB/MB - -(Non-empty output here means the commit is not present on v1.2.0.) -``` - ## Release Notes No official release should be made final without properly matching release notes. -- cgit v1.2.3 From 87e5266e0aecf72c5f018c363fabecf4c422d824 Mon Sep 17 00:00:00 2001 From: Isaac Hollander McCreery Date: Thu, 22 Oct 2015 05:25:35 -0700 Subject: TODOs --- releasing.md | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/releasing.md b/releasing.md index 6ff8e862..7366d999 100644 --- a/releasing.md +++ b/releasing.md @@ -177,6 +177,8 @@ include everything in the release notes. ## Origin of the Sources +TODO(ihmccreery) update this + Kubernetes may be built from either a git tree (using `hack/build-go.sh`) or from a tarball (using either `hack/build-go.sh` or `go install`) or directly by the Go native build system (using `go get`). @@ -193,6 +195,8 @@ between releases (e.g. at some point in development between v0.3 and v0.4). ## Version Number Format +TODO(ihmccreery) update this + In order to account for these use cases, there are some specific formats that may end up representing the Kubernetes version. Here are a few examples: @@ -251,6 +255,8 @@ is the complete SHA1 of the (dirty) tree used at build time. ## Release Notes +TODO(ihmccreery) update this + No official release should be made final without properly matching release notes. 
There should be made available, per release, a small summary, preamble, of the -- cgit v1.2.3 From 2650762ee43dea96a2a166527f4a3c9c1615f930 Mon Sep 17 00:00:00 2001 From: Isaac Hollander McCreery Date: Thu, 22 Oct 2015 08:37:26 -0700 Subject: Proposed design for release infra --- releasing.md | 255 +++++++++++++++++++++++++++++++++-------------------------- 1 file changed, 145 insertions(+), 110 deletions(-) diff --git a/releasing.md b/releasing.md index 7366d999..acb46a34 100644 --- a/releasing.md +++ b/releasing.md @@ -42,31 +42,53 @@ after the first section. Regardless of whether you are cutting a major or minor version, cutting a release breaks down into four pieces: -1. Selecting release components. -1. Tagging and merging the release in Git. -1. Building and pushing the binaries. -1. Writing release notes. +1. selecting release components; +1. cutting/branching the release; +1. publishing binaries and release notes. You should progress in this strict order. -### Building a New Major/Minor Version (`vX.Y.0`) +### Selecting release components + +First, figure out what kind of release you're doing, what branch you're cutting +from, and other prerequisites. + +* Alpha releases (`vX.Y.0-alpha.W`) are cut directly from `master`. + * Alpha releases don't require anything besides green tests, (see below). +* Official releases (`vX.Y.Z`) are cut from their respective release branch, + `release-X.Y`. + * Make sure all necessary cherry picks have been resolved. You should ensure + that all outstanding cherry picks have been reviewed and merged and the + branch validated on Jenkins. See [Cherry Picks](cherry-picks.md) for more + information on how to manage cherry picks prior to cutting the release. + * Official releases also require green tests, (see below). +* New release series are also cut direclty from `master`. 
+ * **This is a big deal!** If you're reading this doc for the first time, you + probably shouldn't be doing this release, and should talk to someone on the + release team. + * New release series cut a new release branch, `release-X.Y`, off of + `master`, and also release the first beta in the series, `vX.Y.0-beta`. + * Every change in the `vX.Y` series from this point on will have to be + cherry picked, so be sure you want to do this before proceeding. + * You should still look for green tests, (see below). + +No matter what you're cutting, you're going to want to look at +[Jenkins](http://go/k8s-test/). Figure out what branch you're cutting from, +(see above,) and look at the critical jobs building from that branch. First +glance through builds and look for nice solid rows of green builds, and then +check temporally with the other critical builds to make sure they're solid +around then as well. Once you find some greens, you can find the Git hash for a +build by looking at the Full Console Output and searching for `githash=`. You +should see a line: -#### Selecting Release Components +```console +githash=v1.2.0-alpha.2.164+b44c7d79d6c9bb +``` -When cutting a major/minor release, your first job is to find the branch -point. We cut `vX.Y.0` releases directly from `master`, which is also the -branch that we have most continuous validation on. Go first to [the main GCE -Jenkins end-to-end job](http://go/k8s-test/job/kubernetes-e2e-gce) and next to [the -Critical Builds page](http://go/k8s-test/view/Critical%20Builds) and hopefully find a -recent Git hash that looks stable across at least `kubernetes-e2e-gce` and -`kubernetes-e2e-gke-ci`. First glance through builds and look for nice solid -rows of green builds, and then check temporally with the other Critical Builds -to make sure they're solid around then as well. Once you find some greens, you -can find the Git hash for a build by looking at the "Console Log", then look for -`githash=`. 
You should see a line line: +Or, if you're cutting from a release branch (i.e. doing an official release), ```console -+ githash=v0.20.2-322-g974377b +githash=v1.1.0-beta.567+d79d6c9bbb44c7 ``` Because Jenkins builds frequently, if you're looking between jobs @@ -81,99 +103,112 @@ oncall. Before proceeding to the next step: ```sh -export BRANCHPOINT=v0.20.2-322-g974377b +export GITHASH=v1.2.0-alpha.2.164+b44c7d79d6c9bb +``` + +Where `v1.2.0-alpha.2.164+b44c7d79d6c9bb` is the Git hash you decided on. This +will become your release point. + +### Cutting/branching the release + +You'll need the latest version of the releasing tools: + +```console +git clone git@github.com:kubernetes/contrib.git +cd contrib/release +``` + +#### Cutting an alpha release (`vX.Y.0-alpha.W`) + +Figure out what version you're cutting, and + +```console +export VER=vX.Y.0-alpha.W +``` + +then, from `contrib/release`, run + +```console +cut-alpha.sh "${VER}" "${GITHASH}" +``` + +This will: + +1. clone a temporary copy of the [kubernetes repo](https://github.com/kubernetes/kubernetes); +1. mark the `vX.Y.0-alpha.W` tag at the given Git hash; +1. push the tag to GitHub; +1. build the release binaries at the given Git hash; +1. publish the binaries to GCS; +1. prompt you to do the remainder of the work. + +#### Cutting an official release (`vX.Y.Z`) + +Figure out what version you're cutting, and + +```console +export VER=vX.Y.Z +``` + +then, from `contrib/release`, run + +```console +cut-official.sh "${VER}" "${GITHASH}" ``` -Where `v0.20.2-322-g974377b` is the git hash you decided on. This will become -our (retroactive) branch point. - -#### Branching, Tagging and Merging - -Do the following: - -1. `export VER=x.y` (e.g. `0.20` for v0.20) -1. cd to the base of the repo -1. `git fetch upstream && git checkout -b release-${VER} ${BRANCHPOINT}` (you did set `${BRANCHPOINT}`, right?) -1. 
Make sure you don't have any files you care about littering your repo (they - better be checked in or outside the repo, or the next step will delete them). -1. `make clean && git reset --hard HEAD && git clean -xdf` -1. `make` (TBD: you really shouldn't have to do this, but the swagger output step requires it right now) -1. `./build/mark-new-version.sh v${VER}.0` to mark the new release and get further - instructions. This creates a series of commits on the branch you're working - on (`release-${VER}`), including forking our documentation for the release, - the release version commit (which is then tagged), and the post-release - version commit. -1. Follow the instructions given to you by that script. They are canon for the - remainder of the Git process. If you don't understand something in that - process, please ask! - -**TODO**: how to fix tags, etc., if you have to shift the release branchpoint. - -#### Building and Pushing Binaries - -In your git repo (you still have `${VER}` set from above right?): - -1. `git checkout upstream/master && build/build-official-release.sh v${VER}.0` (the `build-official-release.sh` script is version agnostic, so it's best to run it off `master` directly). -1. Follow the instructions given to you by that script. -1. At this point, you've done all the Git bits, you've got all the binary bits pushed, and you've got the template for the release started on GitHub. - -#### Writing Release Notes - -[This helpful guide](making-release-notes.md) describes how to write release -notes for a major/minor release. In the release template on GitHub, leave the -last PR number that the tool finds for the `.0` release, so the next releaser -doesn't have to hunt. - -### Building a New Patch Release (`vX.Y.Z` for `Z > 0`) - -#### Selecting Release Components - -We cut `vX.Y.Z` releases from the `release-vX.Y` branch after all cherry picks -to the branch have been resolved. 
You should ensure all outstanding cherry picks -have been reviewed and merged and the branch validated on Jenkins (validation -TBD). See the [Cherry Picks](cherry-picks.md) for more information on how to -manage cherry picks prior to cutting the release. - -#### Tagging and Merging - -1. `export VER=x.y` (e.g. `0.20` for v0.20) -1. `export PATCH=Z` where `Z` is the patch level of `vX.Y.Z` -1. cd to the base of the repo -1. `git fetch upstream && git checkout -b upstream/release-${VER} release-${VER}` -1. Make sure you don't have any files you care about littering your repo (they - better be checked in or outside the repo, or the next step will delete them). -1. `make clean && git reset --hard HEAD && git clean -xdf` -1. `make` (TBD: you really shouldn't have to do this, but the swagger output step requires it right now) -1. `./build/mark-new-version.sh v${VER}.${PATCH}` to mark the new release and get further - instructions. This creates a series of commits on the branch you're working - on (`release-${VER}`), including forking our documentation for the release, - the release version commit (which is then tagged), and the post-release - version commit. -1. Follow the instructions given to you by that script. They are canon for the - remainder of the Git process. If you don't understand something in that - process, please ask! When proposing PRs, you can pre-fill the body with - `hack/cherry_pick_list.sh upstream/release-${VER}` to inform people of what - is already on the branch. - -**TODO**: how to fix tags, etc., if the release is changed. - -#### Building and Pushing Binaries - -In your git repo (you still have `${VER}` and `${PATCH}` set from above right?): - -1. `git checkout upstream/master && build/build-official-release.sh - v${VER}.${PATCH}` (the `build-official-release.sh` script is version - agnostic, so it's best to run it off `master` directly). -1. Follow the instructions given to you by that script. 
At this point, you've - done all the Git bits, you've got all the binary bits pushed, and you've got - the template for the release started on GitHub. - -#### Writing Release Notes - -Run `hack/cherry_pick_list.sh ${VER}.${PATCH}~1` to get the release notes for -the patch release you just created. Feel free to prune anything internal, like -you would for a major release, but typically for patch releases we tend to -include everything in the release notes. +This will: + +1. clone a temporary copy of the [kubernetes repo](https://github.com/kubernetes/kubernetes); +1. do a series of commits on the branch, including forking the documentation + and doing the release version commit; + * TODO(ihmccreery) it's not yet clear what exactly this is going to look like. +1. mark both the `vX.Y.Z` and `vX.Y.(Z+1)-beta` tags at the given Git hash; +1. push the tags to GitHub; +1. build the release binaries at the given Git hash (on the appropriate + branch); +1. publish the binaries to GCS; +1. prompt you to do the remainder of the work. + +#### Branching a new release series (`vX.Y`) + +Once again, **this is a big deal!** If you're reading this doc for the first +time, you probably shouldn't be doing this release, and should talk to someone +on the release team. + +Figure out what series you're cutting, and + +```console +export VER=vX.Y +``` + +then, from `contrib/release`, run + +```console +branch-series.sh "${VER}" "${GITHASH}" +``` + +This will: + +1. clone a temporary copy of the [kubernetes repo](https://github.com/kubernetes/kubernetes); +1. mark the `vX.(Y+1).0-alpha.0` tag at the given Git hash on `master`; +1. fork a new branch `release-X.Y` off of `master` at the Given Git hash; +1. do a series of commits on the branch, including forking the documentation + and doing the release version commit; + * TODO(ihmccreery) it's not yet clear what exactly this is going to look like. +1. mark the `vX.Y.0-beta` tag at the appropriate commit on the new `release-X.Y` branch; +1. 
push the tags to GitHub; +1. build the release binaries at the appropriate Git hash on the appropriate + branches, (for both the new alpha and beta releases); +1. publish the binaries to GCS; +1. prompt you to do the remainder of the work. + +**TODO(ihmccreery)**: can we fix tags, etc., if you have to shift the release branchpoint? + +### Publishing binaries and release notes + +Whichever script you ran above will prompt you to take any remaining steps, +including publishing binaries and release notes. + +**TODO(ihmccreery)**: deal with the `making-release-notes` doc in `docs/devel`. ## Origin of the Sources @@ -195,7 +230,7 @@ between releases (e.g. at some point in development between v0.3 and v0.4). ## Version Number Format -TODO(ihmccreery) update this +TODO(ihmccreery) update everything below here In order to account for these use cases, there are some specific formats that may end up representing the Kubernetes version. Here are a few examples: -- cgit v1.2.3 From b264864ea601b33b938c1fc2429a2170779356e1 Mon Sep 17 00:00:00 2001 From: Isaac Hollander McCreery Date: Wed, 28 Oct 2015 11:08:55 -0700 Subject: Doc fixup to reflect script reality --- releasing.md | 68 +++++++++++++++++++++++++----------------------------------- 1 file changed, 28 insertions(+), 40 deletions(-) diff --git a/releasing.md b/releasing.md index acb46a34..971c2878 100644 --- a/releasing.md +++ b/releasing.md @@ -44,6 +44,7 @@ release breaks down into four pieces: 1. selecting release components; 1. cutting/branching the release; +1. building and pushing the binaries; and 1. publishing binaries and release notes. You should progress in this strict order. @@ -77,7 +78,7 @@ No matter what you're cutting, you're going to want to look at (see above,) and look at the critical jobs building from that branch. First glance through builds and look for nice solid rows of green builds, and then check temporally with the other critical builds to make sure they're solid -around then as well. 
Once you find some greens, you can find the Git hash for a +around then as well. Once you find some greens, you can find the git hash for a build by looking at the Full Console Output and searching for `githash=`. You should see a line: @@ -106,7 +107,7 @@ Before proceeding to the next step: export GITHASH=v1.2.0-alpha.2.164+b44c7d79d6c9bb ``` -Where `v1.2.0-alpha.2.164+b44c7d79d6c9bb` is the Git hash you decided on. This +Where `v1.2.0-alpha.2.164+b44c7d79d6c9bb` is the git hash you decided on. This will become your release point. ### Cutting/branching the release @@ -123,50 +124,46 @@ cd contrib/release Figure out what version you're cutting, and ```console -export VER=vX.Y.0-alpha.W +export VER="vX.Y.0-alpha.W" ``` then, from `contrib/release`, run ```console -cut-alpha.sh "${VER}" "${GITHASH}" +cut.sh "${VER}" "${GITHASH}" ``` This will: -1. clone a temporary copy of the [kubernetes repo](https://github.com/kubernetes/kubernetes); -1. mark the `vX.Y.0-alpha.W` tag at the given Git hash; -1. push the tag to GitHub; -1. build the release binaries at the given Git hash; -1. publish the binaries to GCS; -1. prompt you to do the remainder of the work. +1. mark the `vX.Y.0-alpha.W` tag at the given git hash; +1. prompt you to do the remainder of the work, including building the + appropriate binaries and pushing them to the appropriate places. #### Cutting an official release (`vX.Y.Z`) Figure out what version you're cutting, and ```console -export VER=vX.Y.Z +export VER="vX.Y.Z" ``` then, from `contrib/release`, run ```console -cut-official.sh "${VER}" "${GITHASH}" +cut.sh "${VER}" "${GITHASH}" ``` This will: -1. clone a temporary copy of the [kubernetes repo](https://github.com/kubernetes/kubernetes); -1. do a series of commits on the branch, including forking the documentation - and doing the release version commit; - * TODO(ihmccreery) it's not yet clear what exactly this is going to look like. -1. 
mark both the `vX.Y.Z` and `vX.Y.(Z+1)-beta` tags at the given Git hash; -1. push the tags to GitHub; -1. build the release binaries at the given Git hash (on the appropriate - branch); -1. publish the binaries to GCS; -1. prompt you to do the remainder of the work. +1. do a series of commits on the branch for `vX.Y.Z`, including versionizing + the documentation and doing the release version commit; +1. mark the `vX.Y.Z` tag at the release version commit; +1. do a series of commits on the branch for `vX.Y.(Z+1)-beta` on top of the + previous commits, including versionizing the documentation and doing the + beta version commit; +1. mark the `vX.Y.(Z+1)-beta` tag at the release version commit; +1. prompt you to do the remainder of the work, including building the + appropriate binaries and pushing them to the appropriate places. #### Branching a new release series (`vX.Y`) @@ -177,39 +174,30 @@ on the release team. Figure out what series you're cutting, and ```console -export VER=vX.Y +export VER="vX.Y" ``` then, from `contrib/release`, run ```console -branch-series.sh "${VER}" "${GITHASH}" +cut.sh "${VER}" "${GITHASH}" ``` This will: -1. clone a temporary copy of the [kubernetes repo](https://github.com/kubernetes/kubernetes); -1. mark the `vX.(Y+1).0-alpha.0` tag at the given Git hash on `master`; -1. fork a new branch `release-X.Y` off of `master` at the Given Git hash; -1. do a series of commits on the branch, including forking the documentation - and doing the release version commit; - * TODO(ihmccreery) it's not yet clear what exactly this is going to look like. -1. mark the `vX.Y.0-beta` tag at the appropriate commit on the new `release-X.Y` branch; -1. push the tags to GitHub; -1. build the release binaries at the appropriate Git hash on the appropriate - branches, (for both the new alpha and beta releases); -1. publish the binaries to GCS; -1. prompt you to do the remainder of the work. 
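The version arithmetic behind these steps — an official `vX.Y.Z` cut immediately followed by a `vX.Y.(Z+1)-beta` tag — can be sketched in plain POSIX shell. The input version here is a made-up example; the cut script itself is the authoritative source for how tags are actually derived.

```shell
# Illustrative only: derive the follow-up beta tag vX.Y.(Z+1)-beta from a
# hypothetical official release version vX.Y.Z.
ver="v1.1.3"
IFS=. read -r major minor patch <<EOF
${ver#v}
EOF
beta="v${major}.${minor}.$((patch + 1))-beta"
echo "${beta}"   # v1.1.4-beta
```

Splitting on `.` after stripping the leading `v` keeps the sketch dependency-free; a real implementation would also need to validate that the input is a well-formed `vX.Y.Z` string before bumping the patch number.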
- -**TODO(ihmccreery)**: can we fix tags, etc., if you have to shift the release branchpoint? +1. mark the `vX.(Y+1).0-alpha.0` tag at the given git hash on `master`; +1. fork a new branch `release-X.Y` off of `master` at the given git hash; +1. do a series of commits on the branch for `vX.Y.0-beta`, including versionizing + the documentation and doing the release version commit; +1. mark the `vX.Y.(Z+1)-beta` tag at the beta version commit; +1. prompt you to do the remainder of the work, including building the + appropriate binaries and pushing them to the appropriate places. ### Publishing binaries and release notes Whichever script you ran above will prompt you to take any remaining steps, including publishing binaries and release notes. -**TODO(ihmccreery)**: deal with the `making-release-notes` doc in `docs/devel`. - ## Origin of the Sources TODO(ihmccreery) update this -- cgit v1.2.3 From a17031110e8725d3944979fc52cc2540e75531f9 Mon Sep 17 00:00:00 2001 From: Isaac Hollander McCreery Date: Thu, 29 Oct 2015 15:10:00 -0700 Subject: Fixups of docs and scripts --- releasing.md | 90 ++++++++++++++++++------------------------------------------ 1 file changed, 27 insertions(+), 63 deletions(-) diff --git a/releasing.md b/releasing.md index 971c2878..2ba88bd3 100644 --- a/releasing.md +++ b/releasing.md @@ -78,9 +78,16 @@ No matter what you're cutting, you're going to want to look at (see above,) and look at the critical jobs building from that branch. First glance through builds and look for nice solid rows of green builds, and then check temporally with the other critical builds to make sure they're solid -around then as well. Once you find some greens, you can find the git hash for a -build by looking at the Full Console Output and searching for `githash=`. You -should see a line: +around then as well. + +If you're doing an alpha release or cutting a new release series, you can +choose an arbitrary build. 
If you are doing an official release, you have to +release from HEAD of the branch, (because you have to do some version-rev +commits,) so choose the latest build on the release branch. (Remember, that +branch should be frozen.) + +Once you find some greens, you can find the git hash for a build by looking at +the Full Console Output and searching for `githash=`. You should see a line: ```console githash=v1.2.0-alpha.2.164+b44c7d79d6c9bb @@ -115,10 +122,12 @@ will become your release point. You'll need the latest version of the releasing tools: ```console -git clone git@github.com:kubernetes/contrib.git -cd contrib/release +git clone git@github.com:kubernetes/kubernetes.git +cd kubernetes ``` +or `git checkout upstream/master` from an existing repo. + #### Cutting an alpha release (`vX.Y.0-alpha.W`) Figure out what version you're cutting, and @@ -127,10 +136,10 @@ Figure out what version you're cutting, and export VER="vX.Y.0-alpha.W" ``` -then, from `contrib/release`, run +then, run ```console -cut.sh "${VER}" "${GITHASH}" +build/cut-official-release.sh "${VER}" "${GITHASH}" ``` This will: @@ -147,10 +156,10 @@ Figure out what version you're cutting, and export VER="vX.Y.Z" ``` -then, from `contrib/release`, run +then, run ```console -cut.sh "${VER}" "${GITHASH}" +build/cut-official-release.sh "${VER}" "${GITHASH}" ``` This will: @@ -161,7 +170,7 @@ This will: 1. do a series of commits on the branch for `vX.Y.(Z+1)-beta` on top of the previous commits, including versionizing the documentation and doing the beta version commit; -1. mark the `vX.Y.(Z+1)-beta` tag at the release version commit; +1. mark the `vX.Y.(Z+1)-beta` tag at the beta version commit; 1. prompt you to do the remainder of the work, including building the appropriate binaries and pushing them to the appropriate places. 
@@ -177,10 +186,10 @@ Figure out what series you're cutting, and export VER="vX.Y" ``` -then, from `contrib/release`, run +then, run ```console -cut.sh "${VER}" "${GITHASH}" +build/cut-official-release.sh "${VER}" "${GITHASH}" ``` This will: @@ -189,18 +198,19 @@ This will: 1. fork a new branch `release-X.Y` off of `master` at the given git hash; 1. do a series of commits on the branch for `vX.Y.0-beta`, including versionizing the documentation and doing the release version commit; -1. mark the `vX.Y.(Z+1)-beta` tag at the beta version commit; +1. mark the `vX.Y.0-beta` tag at the beta version commit; 1. prompt you to do the remainder of the work, including building the appropriate binaries and pushing them to the appropriate places. ### Publishing binaries and release notes -Whichever script you ran above will prompt you to take any remaining steps, -including publishing binaries and release notes. +The script you ran above will prompt you to take any remaining steps, including +publishing binaries and release notes. -## Origin of the Sources +## Injecting Version into Binaries -TODO(ihmccreery) update this +*Please note that this information may be out of date. The scripts are the +authoritative source on how version injection works.* Kubernetes may be built from either a git tree (using `hack/build-go.sh`) or from a tarball (using either `hack/build-go.sh` or `go install`) or directly by @@ -216,36 +226,6 @@ access to the information about the git tree, but we still want to be able to tell whether this build corresponds to an exact release (e.g. v0.3) or is between releases (e.g. at some point in development between v0.3 and v0.4). -## Version Number Format - -TODO(ihmccreery) update everything below here - -In order to account for these use cases, there are some specific formats that -may end up representing the Kubernetes version. 
Here are a few examples: - -- **v0.5**: This is official version 0.5 and this version will only be used - when building from a clean git tree at the v0.5 git tag, or from a tree - extracted from the tarball corresponding to that specific release. -- **v0.5-15-g0123abcd4567**: This is the `git describe` output and it indicates - that we are 15 commits past the v0.5 release and that the SHA1 of the commit - where the binaries were built was `0123abcd4567`. It is only possible to have - this level of detail in the version information when building from git, not - when building from a tarball. -- **v0.5-15-g0123abcd4567-dirty** or **v0.5-dirty**: The extra `-dirty` prefix - means that the tree had local modifications or untracked files at the time of - the build, so there's no guarantee that the source code matches exactly the - state of the tree at the `0123abcd4567` commit or at the `v0.5` git tag - (resp.) -- **v0.5-dev**: This means we are building from a tarball or using `go get` or, - if we have a git tree, we are using `go install` directly, so it is not - possible to inject the git version into the build information. Additionally, - this is not an official release, so the `-dev` prefix indicates that the - version we are building is after `v0.5` but before `v0.6`. (There is actually - an exception where a commit with `v0.5-dev` is not present on `v0.6`, see - later for details.) - -## Injecting Version into Binaries - In order to cover the different build cases, we start by providing information that can be used when using only Go build tools or when we do not have the git version information available. @@ -276,22 +256,6 @@ can, for instance, tell it to override `gitVersion` and set it to `v0.4-13-g4567bcdef6789-dirty` and set `gitCommit` to `4567bcdef6789...` which is the complete SHA1 of the (dirty) tree used at build time. -## Release Notes - -TODO(ihmccreery) update this - -No official release should be made final without properly matching release notes. 
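The `git describe`-style strings discussed above decompose mechanically. An illustrative shell sketch (for real builds, rely on `git describe` itself rather than string surgery):

```shell
# Decompose a git-describe-style version like v0.4-13-g4567bcdef6789-dirty.
DESC="v0.4-13-g4567bcdef6789-dirty"

BASE="${DESC%%-*}"    # the tag this build is based on: v0.4
case "${DESC}" in
  *-dirty) DIRTY=yes ;;  # tree had local modifications at build time
  *)       DIRTY=no ;;
esac

echo "base=${BASE} dirty=${DIRTY}"   # → base=v0.4 dirty=yes
```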
- -There should be made available, per release, a small summary, preamble, of the -major changes, both in terms of feature improvements/bug fixes and notes about -functional feature changes (if any) regarding the previous released version so -that the BOM regarding updating to it gets as obvious and trouble free as possible. - -After this summary, preamble, all the relevant PRs/issues that got in that -version should be listed and linked together with a small summary understandable -by plain mortals (in a perfect world PR/issue's title would be enough but often -it is just too cryptic/geeky/domain-specific that it isn't). - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/releasing.md?pixel)]() -- cgit v1.2.3 From 1e19e8e1c8137d11c6f9eab041990fa32299bd14 Mon Sep 17 00:00:00 2001 From: Isaac Hollander McCreery Date: Thu, 29 Oct 2015 15:14:13 -0700 Subject: Move to release/ --- releasing.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/releasing.md b/releasing.md index 2ba88bd3..fad957b6 100644 --- a/releasing.md +++ b/releasing.md @@ -139,7 +139,7 @@ export VER="vX.Y.0-alpha.W" then, run ```console -build/cut-official-release.sh "${VER}" "${GITHASH}" +release/cut-official-release.sh "${VER}" "${GITHASH}" ``` This will: @@ -159,7 +159,7 @@ export VER="vX.Y.Z" then, run ```console -build/cut-official-release.sh "${VER}" "${GITHASH}" +release/cut-official-release.sh "${VER}" "${GITHASH}" ``` This will: @@ -189,7 +189,7 @@ export VER="vX.Y" then, run ```console -build/cut-official-release.sh "${VER}" "${GITHASH}" +release/cut-official-release.sh "${VER}" "${GITHASH}" ``` This will: -- cgit v1.2.3 From bea654021f08ac105301afac36662cf487354cdb Mon Sep 17 00:00:00 2001 From: Isaac Hollander McCreery Date: Thu, 29 Oct 2015 15:21:35 -0700 Subject: Remove old releasing illustrations --- releasing.dot | 113 ---------------------------------------------------------- releasing.png | Bin 30693 -> 0 bytes releasing.svg | 
113 ---------------------------------------------------------- 3 files changed, 226 deletions(-) delete mode 100644 releasing.dot delete mode 100644 releasing.png delete mode 100644 releasing.svg diff --git a/releasing.dot b/releasing.dot deleted file mode 100644 index fe8124c3..00000000 --- a/releasing.dot +++ /dev/null @@ -1,113 +0,0 @@ -// Build it with: -// $ dot -Tsvg releasing.dot >releasing.svg - -digraph tagged_release { - size = "5,5" - // Arrows go up. - rankdir = BT - subgraph left { - // Group the left nodes together. - ci012abc -> pr101 -> ci345cde -> pr102 - style = invis - } - subgraph right { - // Group the right nodes together. - version_commit -> dev_commit - style = invis - } - { // Align the version commit and the info about it. - rank = same - // Align them with pr101 - pr101 - version_commit - // release_info shows the change in the commit. - release_info - } - { // Align the dev commit and the info about it. - rank = same - // Align them with 345cde - ci345cde - dev_commit - dev_info - } - // Join the nodes from subgraph left. - pr99 -> ci012abc - pr102 -> pr100 - // Do the version node. 
- pr99 -> version_commit - dev_commit -> pr100 - tag -> version_commit - pr99 [ - label = "Merge PR #99" - shape = box - fillcolor = "#ccccff" - style = "filled" - fontname = "Helvetica Neue, Helvetica, Segoe UI, Arial, freesans, sans-serif" - ]; - ci012abc [ - label = "012abc" - shape = circle - fillcolor = "#ffffcc" - style = "filled" - fontname = "Consolas, Liberation Mono, Menlo, Courier, monospace" - ]; - pr101 [ - label = "Merge PR #101" - shape = box - fillcolor = "#ccccff" - style = "filled" - fontname = "Helvetica Neue, Helvetica, Segoe UI, Arial, freesans, sans-serif" - ]; - ci345cde [ - label = "345cde" - shape = circle - fillcolor = "#ffffcc" - style = "filled" - fontname = "Consolas, Liberation Mono, Menlo, Courier, monospace" - ]; - pr102 [ - label = "Merge PR #102" - shape = box - fillcolor = "#ccccff" - style = "filled" - fontname = "Helvetica Neue, Helvetica, Segoe UI, Arial, freesans, sans-serif" - ]; - version_commit [ - label = "678fed" - shape = circle - fillcolor = "#ccffcc" - style = "filled" - fontname = "Consolas, Liberation Mono, Menlo, Courier, monospace" - ]; - dev_commit [ - label = "456dcb" - shape = circle - fillcolor = "#ffffcc" - style = "filled" - fontname = "Consolas, Liberation Mono, Menlo, Courier, monospace" - ]; - pr100 [ - label = "Merge PR #100" - shape = box - fillcolor = "#ccccff" - style = "filled" - fontname = "Helvetica Neue, Helvetica, Segoe UI, Arial, freesans, sans-serif" - ]; - release_info [ - label = "pkg/version/base.go:\ngitVersion = \"v0.5\";" - shape = none - fontname = "Helvetica Neue, Helvetica, Segoe UI, Arial, freesans, sans-serif" - ]; - dev_info [ - label = "pkg/version/base.go:\ngitVersion = \"v0.5-dev\";" - shape = none - fontname = "Helvetica Neue, Helvetica, Segoe UI, Arial, freesans, sans-serif" - ]; - tag [ - label = "$ git tag -a v0.5" - fillcolor = "#ffcccc" - style = "filled" - fontname = "Helvetica Neue, Helvetica, Segoe UI, Arial, freesans, sans-serif" - ]; -} - diff --git a/releasing.png 
b/releasing.png deleted file mode 100644 index 935628de..00000000 Binary files a/releasing.png and /dev/null differ diff --git a/releasing.svg b/releasing.svg deleted file mode 100644 index f703e6e2..00000000 --- a/releasing.svg +++ /dev/null @@ -1,113 +0,0 @@ - - - - - - -tagged_release - - -ci012abc - -012abc - - -pr101 - -Merge PR #101 - - -ci012abc->pr101 - - - - -ci345cde - -345cde - - -pr101->ci345cde - - - - -pr102 - -Merge PR #102 - - -ci345cde->pr102 - - - - -pr100 - -Merge PR #100 - - -pr102->pr100 - - - - -version_commit - -678fed - - -dev_commit - -456dcb - - -version_commit->dev_commit - - - - -dev_commit->pr100 - - - - -release_info -pkg/version/base.go: -gitVersion = "v0.5"; - - -dev_info -pkg/version/base.go: -gitVersion = "v0.5-dev"; - - -pr99 - -Merge PR #99 - - -pr99->ci012abc - - - - -pr99->version_commit - - - - -tag - -$ git tag -a v0.5 - - -tag->version_commit - - - - - -- cgit v1.2.3 From 37361519a63b34464956e92cd78ed426c68ff325 Mon Sep 17 00:00:00 2001 From: Isaac Hollander McCreery Date: Mon, 2 Nov 2015 14:54:11 -0800 Subject: Versioned beta releases --- releasing.md | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-) diff --git a/releasing.md b/releasing.md index fad957b6..f430b3a3 100644 --- a/releasing.md +++ b/releasing.md @@ -56,14 +56,20 @@ from, and other prerequisites. * Alpha releases (`vX.Y.0-alpha.W`) are cut directly from `master`. * Alpha releases don't require anything besides green tests, (see below). -* Official releases (`vX.Y.Z`) are cut from their respective release branch, +* Beta releases (`vX.Y.Z-beta.W`) are cut from their respective release branch, `release-X.Y`. * Make sure all necessary cherry picks have been resolved. You should ensure that all outstanding cherry picks have been reviewed and merged and the branch validated on Jenkins. See [Cherry Picks](cherry-picks.md) for more information on how to manage cherry picks prior to cutting the release. 
+ * Beta releases also require green tests, (see below). +* Official releases (`vX.Y.Z`) are cut from their respective release branch, + `release-X.Y`. + * Official releases should be similar or identical to their respective beta + releases, so have a look at the cherry picks that have been merged since + the beta release and question everything you find. * Official releases also require green tests, (see below). -* New release series are also cut direclty from `master`. +* New release series are also cut directly from `master`. * **This is a big deal!** If you're reading this doc for the first time, you probably shouldn't be doing this release, and should talk to someone on the release team. -- cgit v1.2.3 From ab925644290c51d9491cb698bc1f4991af5d6039 Mon Sep 17 00:00:00 2001 From: Isaac Hollander McCreery Date: Mon, 2 Nov 2015 15:38:57 -0800 Subject: Update docs and prompts for better dry-runs and no more versionizing docs --- releasing.md | 68 ++++++++++++++++++++++++++++++++++++++++++++++++++++-------- 1 file changed, 59 insertions(+), 9 deletions(-) diff --git a/releasing.md b/releasing.md index f430b3a3..671ab8af 100644 --- a/releasing.md +++ b/releasing.md @@ -148,12 +148,49 @@ then, run release/cut-official-release.sh "${VER}" "${GITHASH}" ``` -This will: +This will do a dry run of: 1. mark the `vX.Y.0-alpha.W` tag at the given git hash; 1. prompt you to do the remainder of the work, including building the appropriate binaries and pushing them to the appropriate places. +If you're satisfied with the result, run + +```console +release/cut-official-release.sh "${VER}" "${GITHASH}" --no-dry-run +``` + +and follow the instructions. + +#### Cutting an beta release (`vX.Y.Z-beta.W`) + +Figure out what version you're cutting, and + +```console +export VER="vX.Y.Z-beta.W" +``` + +then, run + +```console +release/cut-official-release.sh "${VER}" "${GITHASH}" +``` + +This will do a dry run of: + +1. do a series of commits on the release branch for `vX.Y.Z-beta`; +1. 
mark the `vX.Y.Z-beta` tag at the beta version commit; +1. prompt you to do the remainder of the work, including building the + appropriate binaries and pushing them to the appropriate places. + +If you're satisfied with the result, run + +```console +release/cut-official-release.sh "${VER}" "${GITHASH}" --no-dry-run +``` + +and follow the instructions. + #### Cutting an official release (`vX.Y.Z`) Figure out what version you're cutting, and @@ -168,18 +205,24 @@ then, run release/cut-official-release.sh "${VER}" "${GITHASH}" ``` -This will: +This will do a dry run of: -1. do a series of commits on the branch for `vX.Y.Z`, including versionizing - the documentation and doing the release version commit; +1. do a series of commits on the branch for `vX.Y.Z`; 1. mark the `vX.Y.Z` tag at the release version commit; 1. do a series of commits on the branch for `vX.Y.(Z+1)-beta` on top of the - previous commits, including versionizing the documentation and doing the - beta version commit; + previous commits; 1. mark the `vX.Y.(Z+1)-beta` tag at the beta version commit; 1. prompt you to do the remainder of the work, including building the appropriate binaries and pushing them to the appropriate places. +If you're satisfied with the result, run + +```console +release/cut-official-release.sh "${VER}" "${GITHASH}" --no-dry-run +``` + +and follow the instructions. + #### Branching a new release series (`vX.Y`) Once again, **this is a big deal!** If you're reading this doc for the first @@ -198,16 +241,23 @@ then, run release/cut-official-release.sh "${VER}" "${GITHASH}" ``` -This will: +This will do a dry run of: 1. mark the `vX.(Y+1).0-alpha.0` tag at the given git hash on `master`; 1. fork a new branch `release-X.Y` off of `master` at the given git hash; -1. do a series of commits on the branch for `vX.Y.0-beta`, including versionizing - the documentation and doing the release version commit; +1. do a series of commits on the branch for `vX.Y.0-beta`; 1. 
mark the `vX.Y.0-beta` tag at the beta version commit; 1. prompt you to do the remainder of the work, including building the appropriate binaries and pushing them to the appropriate places. +If you're satisfied with the result, run + +```console +release/cut-official-release.sh "${VER}" "${GITHASH}" --no-dry-run +``` + +and follow the instructions. + ### Publishing binaries and release notes The script you ran above will prompt you to take any remaining steps, including -- cgit v1.2.3 From b6745eb538ee4515b08a87a7573a9319b3d2460c Mon Sep 17 00:00:00 2001 From: Isaac Hollander McCreery Date: Tue, 3 Nov 2015 09:26:01 -0800 Subject: Fix releasing clause about cutting beta releases --- releasing.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/releasing.md b/releasing.md index 671ab8af..238f3791 100644 --- a/releasing.md +++ b/releasing.md @@ -74,7 +74,7 @@ from, and other prerequisites. probably shouldn't be doing this release, and should talk to someone on the release team. * New release series cut a new release branch, `release-X.Y`, off of - `master`, and also release the first beta in the series, `vX.Y.0-beta`. + `master`, and also release the first beta in the series, `vX.Y.0-beta.0`. * Every change in the `vX.Y` series from this point on will have to be cherry picked, so be sure you want to do this before proceeding. * You should still look for green tests, (see below). -- cgit v1.2.3 From 33c86129c946fa5cf023494608b305404ea771a9 Mon Sep 17 00:00:00 2001 From: Chao Xu Date: Wed, 4 Nov 2015 14:07:23 -0800 Subject: add a guide on how to create an API group --- adding-an-APIGroup.md | 81 +++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 81 insertions(+) create mode 100644 adding-an-APIGroup.md diff --git a/adding-an-APIGroup.md b/adding-an-APIGroup.md new file mode 100644 index 00000000..6db23198 --- /dev/null +++ b/adding-an-APIGroup.md @@ -0,0 +1,81 @@ + + + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + +The latest 1.0.x release of this document can be found +[here](http://releases.k8s.io/release-1.0/docs/devel/adding-an-APIGroup.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + + + + +Adding an API Group +=============== + +This document includes the steps to add an API group. You may also want to take a look at PR [#16621](https://github.com/kubernetes/kubernetes/pull/16621) and PR [#13146](https://github.com/kubernetes/kubernetes/pull/13146), which add API groups. + +Please also read about [API conventions](api-conventions.md) and [API changes](api_changes.md) before adding an API group. + +### Your core group package: + +1. creaet a folder in pkg/apis to hold you group. Create types.go in pkg/apis/\/ and pkg/apis/\/\/ to define API objects in your group. + +2. create pkg/apis/\/{register.go, \/register.go} to register this group's API objects to the scheme; + +3. add a pkg/apis/\/install/install.go, which is responsible for adding the group to the `latest` package, so that other packages can access the group's meta through `latest.Group`. You need to import this `install` package in {pkg/master, pkg/client/unversioned, cmd/kube-version-change}/import_known_versions.go, if you want to make your group accessible to other packages in the kube-apiserver binary, binaries that uses the client package, or the kube-version-change tool. + +### Scripts changes and auto-generated code: + +1. Generate conversions and deep-copies: + + 1. add your "group/" or "group/version" into hack/after-build/{update-generated-conversions.sh, update-generated-deep-copies.sh, verify-generated-conversions.sh, verify-generated-deep-copies.sh}; + 2. run hack/update-generated-conversions.sh, hack/update-generated-deep-copies.sh. + +2. Generate files for Ugorji codec: + + 1. 
touch types.generated.go in pkg/apis/\{/, \}, and run hack/update-codecgen.sh. + +### Client (optional): + +We are overhauling pkg/client, so this section might be outdated. Currently, to add your group to the client package, you need to + +1. create pkg/client/unversioned/\.go, define a group client interface and implement the client. You can take pkg/client/unversioned/extensions.go as a reference. + +2. add the group client interface to the `Interface` in pkg/client/unversioned/client.go and add method to fetch the interface. Again, you can take how we add the Extensions group there as an example. + +3. if you need to support the group in kubectl, you'll also need to modify pkg/kubectl/cmd/util/factory.go. + +### Make the group/version selectable in unit tests (optional): + +1. add your group in pkg/api/testapi/testapi.go, then you can access the group in tests through testapi.\; + +2. add your "group/version" to `KUBE_API_VERSIONS` and `KUBE_TEST_API_VERSIONS` in hack/test-go.sh. + + + + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/adding-an-APIGroup.md?pixel)]() + -- cgit v1.2.3 From 342265e8c14392f135363c04b2a7bacbbc715068 Mon Sep 17 00:00:00 2001 From: Chao Xu Date: Wed, 4 Nov 2015 15:52:18 -0800 Subject: address lavalamp's comment --- adding-an-APIGroup.md | 28 ++++++++++++++++------------ 1 file changed, 16 insertions(+), 12 deletions(-) diff --git a/adding-an-APIGroup.md b/adding-an-APIGroup.md index 6db23198..f6bf99a2 100644 --- a/adding-an-APIGroup.md +++ b/adding-an-APIGroup.md @@ -40,38 +40,42 @@ Please also read about [API conventions](api-conventions.md) and [API changes](a ### Your core group package: -1. creaet a folder in pkg/apis to hold you group. Create types.go in pkg/apis/\/ and pkg/apis/\/\/ to define API objects in your group. 
+We plan on improving the way the types are factored in the future; see [#16062](https://github.com/kubernetes/kubernetes/pull/16062) for the directions in which this might evolve. -2. create pkg/apis/\/{register.go, \/register.go} to register this group's API objects to the scheme; +1. Create a folder in pkg/apis to hold you group. Create types.go in pkg/apis/``/ and pkg/apis/``/``/ to define API objects in your group; -3. add a pkg/apis/\/install/install.go, which is responsible for adding the group to the `latest` package, so that other packages can access the group's meta through `latest.Group`. You need to import this `install` package in {pkg/master, pkg/client/unversioned, cmd/kube-version-change}/import_known_versions.go, if you want to make your group accessible to other packages in the kube-apiserver binary, binaries that uses the client package, or the kube-version-change tool. +2. Create pkg/apis/``/{register.go, ``/register.go} to register this group's API objects to the encoding/decoding scheme; + +3. Add a pkg/apis/``/install/install.go, which is responsible for adding the group to the `latest` package, so that other packages can access the group's meta through `latest.Group`. You need to import this `install` package in {pkg/master, pkg/client/unversioned, cmd/kube-version-change}/import_known_versions.go, if you want to make your group accessible to other packages in the kube-apiserver binary, binaries that uses the client package, or the kube-version-change tool. + +Step 2 and 3 are mechanical, we plan on autogenerate these using the cmd/libs/go2idl/ tool. ### Scripts changes and auto-generated code: 1. Generate conversions and deep-copies: - 1. add your "group/" or "group/version" into hack/after-build/{update-generated-conversions.sh, update-generated-deep-copies.sh, verify-generated-conversions.sh, verify-generated-deep-copies.sh}; - 2. run hack/update-generated-conversions.sh, hack/update-generated-deep-copies.sh. + 1. 
Add your "group/" or "group/version" into hack/after-build/{update-generated-conversions.sh, update-generated-deep-copies.sh, verify-generated-conversions.sh, verify-generated-deep-copies.sh}; + 2. Run hack/update-generated-conversions.sh, hack/update-generated-deep-copies.sh. 2. Generate files for Ugorji codec: - 1. touch types.generated.go in pkg/apis/\{/, \}, and run hack/update-codecgen.sh. + 1. Touch types.generated.go in pkg/apis/``{/, ``}, and run hack/update-codecgen.sh. ### Client (optional): -We are overhauling pkg/client, so this section might be outdated. Currently, to add your group to the client package, you need to +We are overhauling pkg/client, so this section might be outdated; see [#15730](https://github.com/kubernetes/kubernetes/pull/15730) for how the client package might evolve. Currently, to add your group to the client package, you need to -1. create pkg/client/unversioned/\.go, define a group client interface and implement the client. You can take pkg/client/unversioned/extensions.go as a reference. +1. Create pkg/client/unversioned/``.go, define a group client interface and implement the client. You can take pkg/client/unversioned/extensions.go as a reference. -2. add the group client interface to the `Interface` in pkg/client/unversioned/client.go and add method to fetch the interface. Again, you can take how we add the Extensions group there as an example. +2. Add the group client interface to the `Interface` in pkg/client/unversioned/client.go and add method to fetch the interface. Again, you can take how we add the Extensions group there as an example. -3. if you need to support the group in kubectl, you'll also need to modify pkg/kubectl/cmd/util/factory.go. +3. If you need to support the group in kubectl, you'll also need to modify pkg/kubectl/cmd/util/factory.go. ### Make the group/version selectable in unit tests (optional): -1. 
add your group in pkg/api/testapi/testapi.go, then you can access the group in tests through testapi.\; +1. Add your group in pkg/api/testapi/testapi.go, then you can access the group in tests through testapi.``; -2. add your "group/version" to `KUBE_API_VERSIONS` and `KUBE_TEST_API_VERSIONS` in hack/test-go.sh. +2. Add your "group/version" to `KUBE_API_VERSIONS` and `KUBE_TEST_API_VERSIONS` in hack/test-go.sh. -- cgit v1.2.3 From 3f40d2080709f696a506daf99af5bccdc2bd0f54 Mon Sep 17 00:00:00 2001 From: Chao Xu Date: Thu, 5 Nov 2015 15:44:20 -0800 Subject: address timstclair's comments --- adding-an-APIGroup.md | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/adding-an-APIGroup.md b/adding-an-APIGroup.md index f6bf99a2..e5f08552 100644 --- a/adding-an-APIGroup.md +++ b/adding-an-APIGroup.md @@ -44,9 +44,9 @@ We plan on improving the way the types are factored in the future; see [#16062]( 1. Create a folder in pkg/apis to hold you group. Create types.go in pkg/apis/``/ and pkg/apis/``/``/ to define API objects in your group; -2. Create pkg/apis/``/{register.go, ``/register.go} to register this group's API objects to the encoding/decoding scheme; +2. Create pkg/apis/``/{register.go, ``/register.go} to register this group's API objects to the encoding/decoding scheme (e.g., [pkg/apis/extensions/register.go](../../pkg/apis/extensions/register.go) and [pkg/apis/extensions/v1beta1/register.go](../../pkg/apis/extensions/v1beta1/register.go); -3. Add a pkg/apis/``/install/install.go, which is responsible for adding the group to the `latest` package, so that other packages can access the group's meta through `latest.Group`. You need to import this `install` package in {pkg/master, pkg/client/unversioned, cmd/kube-version-change}/import_known_versions.go, if you want to make your group accessible to other packages in the kube-apiserver binary, binaries that uses the client package, or the kube-version-change tool. +3. 
Add a pkg/apis/``/install/install.go, which is responsible for adding the group to the `latest` package, so that other packages can access the group's meta through `latest.Group`. You probably only need to change the name of group and version in the [example](../../pkg/apis/extensions/install/install.go)). You need to import this `install` package in {pkg/master, pkg/client/unversioned, cmd/kube-version-change}/import_known_versions.go, if you want to make your group accessible to other packages in the kube-apiserver binary, binaries that uses the client package, or the kube-version-change tool. Step 2 and 3 are mechanical, we plan on autogenerate these using the cmd/libs/go2idl/ tool. @@ -59,7 +59,8 @@ Step 2 and 3 are mechanical, we plan on autogenerate these using the cmd/libs/go 2. Generate files for Ugorji codec: - 1. Touch types.generated.go in pkg/apis/``{/, ``}, and run hack/update-codecgen.sh. + 1. Touch types.generated.go in pkg/apis/``{/, ``}; + 2. Run hack/update-codecgen.sh. ### Client (optional): -- cgit v1.2.3 From 5978c29f80989ee1e10599cfd1adca012bc89f67 Mon Sep 17 00:00:00 2001 From: Isaac Hollander McCreery Date: Fri, 6 Nov 2015 10:32:05 -0800 Subject: Use ./ notation --- releasing.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/releasing.md b/releasing.md index 238f3791..60609f0d 100644 --- a/releasing.md +++ b/releasing.md @@ -145,7 +145,7 @@ export VER="vX.Y.0-alpha.W" then, run ```console -release/cut-official-release.sh "${VER}" "${GITHASH}" +./release/cut-official-release.sh "${VER}" "${GITHASH}" ``` This will do a dry run of: @@ -157,7 +157,7 @@ This will do a dry run of: If you're satisfied with the result, run ```console -release/cut-official-release.sh "${VER}" "${GITHASH}" --no-dry-run +./release/cut-official-release.sh "${VER}" "${GITHASH}" --no-dry-run ``` and follow the instructions. 
@@ -173,7 +173,7 @@ export VER="vX.Y.Z-beta.W" then, run ```console -release/cut-official-release.sh "${VER}" "${GITHASH}" +./release/cut-official-release.sh "${VER}" "${GITHASH}" ``` This will do a dry run of: @@ -186,7 +186,7 @@ This will do a dry run of: If you're satisfied with the result, run ```console -release/cut-official-release.sh "${VER}" "${GITHASH}" --no-dry-run +./release/cut-official-release.sh "${VER}" "${GITHASH}" --no-dry-run ``` and follow the instructions. @@ -202,7 +202,7 @@ export VER="vX.Y.Z" then, run ```console -release/cut-official-release.sh "${VER}" "${GITHASH}" +./release/cut-official-release.sh "${VER}" "${GITHASH}" ``` This will do a dry run of: @@ -218,7 +218,7 @@ This will do a dry run of: If you're satisfied with the result, run ```console -release/cut-official-release.sh "${VER}" "${GITHASH}" --no-dry-run +./release/cut-official-release.sh "${VER}" "${GITHASH}" --no-dry-run ``` and follow the instructions. @@ -238,7 +238,7 @@ export VER="vX.Y" then, run ```console -release/cut-official-release.sh "${VER}" "${GITHASH}" +./release/cut-official-release.sh "${VER}" "${GITHASH}" ``` This will do a dry run of: @@ -253,7 +253,7 @@ This will do a dry run of: If you're satisfied with the result, run ```console -release/cut-official-release.sh "${VER}" "${GITHASH}" --no-dry-run +./release/cut-official-release.sh "${VER}" "${GITHASH}" --no-dry-run ``` and follow the instructions. -- cgit v1.2.3 From 889fa90febe3e133f0bbfda871e3a7ff45e25d02 Mon Sep 17 00:00:00 2001 From: Isaac Hollander McCreery Date: Fri, 6 Nov 2015 11:35:16 -0800 Subject: Cleanup for versioning --- releasing.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/releasing.md b/releasing.md index 238f3791..aef0e168 100644 --- a/releasing.md +++ b/releasing.md @@ -178,8 +178,8 @@ release/cut-official-release.sh "${VER}" "${GITHASH}" This will do a dry run of: -1. do a series of commits on the release branch for `vX.Y.Z-beta`; -1. 
mark the `vX.Y.Z-beta` tag at the beta version commit; +1. do a series of commits on the release branch for `vX.Y.Z-beta.W`; +1. mark the `vX.Y.Z-beta.W` tag at the beta version commit; 1. prompt you to do the remainder of the work, including building the appropriate binaries and pushing them to the appropriate places. @@ -209,9 +209,9 @@ This will do a dry run of: 1. do a series of commits on the branch for `vX.Y.Z`; 1. mark the `vX.Y.Z` tag at the release version commit; -1. do a series of commits on the branch for `vX.Y.(Z+1)-beta` on top of the +1. do a series of commits on the branch for `vX.Y.(Z+1)-beta.0` on top of the previous commits; -1. mark the `vX.Y.(Z+1)-beta` tag at the beta version commit; +1. mark the `vX.Y.(Z+1)-beta.0` tag at the beta version commit; 1. prompt you to do the remainder of the work, including building the appropriate binaries and pushing them to the appropriate places. @@ -245,8 +245,8 @@ This will do a dry run of: 1. mark the `vX.(Y+1).0-alpha.0` tag at the given git hash on `master`; 1. fork a new branch `release-X.Y` off of `master` at the given git hash; -1. do a series of commits on the branch for `vX.Y.0-beta`; -1. mark the `vX.Y.0-beta` tag at the beta version commit; +1. do a series of commits on the branch for `vX.Y.0-beta.0`; +1. mark the `vX.Y.0-beta.0` tag at the beta version commit; 1. prompt you to do the remainder of the work, including building the appropriate binaries and pushing them to the appropriate places. 
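Taken together, every release type above follows the same two-pass flow. An illustrative transcript (substitute the version and git hash for your release):

```console
export VER="vX.Y.Z"
export GITHASH="v1.2.0-alpha.2.164+b44c7d79d6c9bb"
./release/cut-official-release.sh "${VER}" "${GITHASH}"               # dry run first
# inspect the tags and commits the dry run reports, then:
./release/cut-official-release.sh "${VER}" "${GITHASH}" --no-dry-run
```

Per the instructions above, the dry-run pass is meant to be repeated until you are satisfied with the reported result before running with `--no-dry-run`.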
-- cgit v1.2.3 From 4a45a50ed69b428987ae2e946f829cfd3ac09c24 Mon Sep 17 00:00:00 2001 From: Janet Kuo Date: Tue, 3 Nov 2015 13:27:24 -0800 Subject: Document how to document --- how-to-doc.md | 171 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 171 insertions(+) create mode 100644 how-to-doc.md diff --git a/how-to-doc.md b/how-to-doc.md new file mode 100644 index 00000000..718aa8c0 --- /dev/null +++ b/how-to-doc.md @@ -0,0 +1,171 @@ + + + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + +The latest 1.0.x release of this document can be found +[here](http://releases.k8s.io/release-1.0/docs/devel/how-to-doc.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + + + + +Document Conventions +==================== + +Updated: 11/3/2015 + +*This document is oriented at users and developers who want to write documents for Kubernetes.* + +**Table of Contents** + + + - [What Are Mungers?](#what-are-mungers) + - [Table of Contents](#table-of-contents) + - [Writing Examples](#writing-examples) + - [Adding Links](#adding-links) + - [Auto-added Mungers](#auto-added-mungers) + - [Unversioned Warning](#unversioned-warning) + - [Is Versioned](#is-versioned) + - [Generate Analytics](#generate-analytics) + + + +## What Are Mungers? + +Mungers are like gofmt for md docs which we use to format documents. To use it, simply place + +``` + + +``` + +in your md files. Note that xxxx is the placeholder for a specific munger. Appropriate content will be generated and inserted between two brackets after you run `hack/update-generated-docs.sh`. See [munger document](../../cmd/mungedocs/) for more details. + + +## Table of Contents + +Instead of writing table of contents by hand, use the TOC munger: + +``` + + +``` + +## Writing Examples + +Sometimes you may want to show the content of certain example files. Use EXAMPLE munger whenever possible: + +``` + + +``` + +This way, you save the time to do the copy-and-paste; what's better, the content won't become out-of-date everytime you update the example file. 
+ +For example, the following munger: + +``` + + +``` + +generates + + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: nginx + labels: + app: nginx +spec: + containers: + - name: nginx + image: nginx + ports: + - containerPort: 80 +``` + +[Download example](../user-guide/pod.yaml?raw=true) + + +## Adding Links + +Use inline link instead of url at all times. When you add internal links from `docs/` to `docs/` or `examples/`, use relative links; otherwise, use `http://releases.k8s.io/HEAD/`. For example, use: + +``` +[GCE](../getting-started-guides/gce.md) # note that it's under docs/ +[Kubernetes package](http://releases.k8s.io/HEAD/pkg/) # note that it's under pkg/ +[Kubernetes](http://kubernetes.io/) +``` + +and avoid using: + +``` +[GCE](https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/gce.md) +[Kubernetes package](../../pkg/) +http://kubernetes.io/ +``` + +## Auto-added Mungers + +Some mungers are auto-added. You don't have to add them manually, and `hack/update-generated-docs.sh` does that for you. It's recommended to just read this section as a reference instead of messing up with the following mungers. + +### Unversioned Warning + +UNVERSIONED_WARNING munger inserts unversioned warning which warns the users when they're reading the document from HEAD and informs them where to find the corresponding document for a specific release. + +``` + + + + + + +``` + +### Is Versioned + +IS_VERSIONED munger inserts `IS_VERSIONED` tag in documents in each release, which stops UNVERSIONED_WARNING munger from inserting warning messages. + +``` + + + +``` + +### Generate Analytics + +ANALYTICS munger inserts a Google Anaylytics link for this page. 
+ +``` + + +``` + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/how-to-doc.md?pixel)]() + -- cgit v1.2.3 From 0f458971f63a60aa749ba0133e0e762b58bdca6c Mon Sep 17 00:00:00 2001 From: Janet Kuo Date: Fri, 6 Nov 2015 17:19:21 -0800 Subject: address comments --- how-to-doc.md | 103 +++++++++++++++++++++++++++++++++++++++++----------------- 1 file changed, 74 insertions(+), 29 deletions(-) diff --git a/how-to-doc.md b/how-to-doc.md index 718aa8c0..283cab1f 100644 --- a/how-to-doc.md +++ b/how-to-doc.md @@ -31,8 +31,7 @@ Documentation for other releases can be found at -Document Conventions -==================== +# Document Conventions Updated: 11/3/2015 @@ -41,10 +40,16 @@ Updated: 11/3/2015 **Table of Contents** +- [Document Conventions](#document-conventions) + - [General Concepts](#general-concepts) + - [How to Get a Table of Contents](#how-to-get-a-table-of-contents) + - [How to Write Links](#how-to-write-links) + - [How to Include an Example](#how-to-include-an-example) + - [Misc.](#misc) + - [Code formatting](#code-formatting) + - [Syntax Highlighting](#syntax-highlighting) + - [Headings](#headings) - [What Are Mungers?](#what-are-mungers) - - [Table of Contents](#table-of-contents) - - [Writing Examples](#writing-examples) - - [Adding Links](#adding-links) - [Auto-added Mungers](#auto-added-mungers) - [Unversioned Warning](#unversioned-warning) - [Is Versioned](#is-versioned) @@ -52,46 +57,63 @@ Updated: 11/3/2015 -## What Are Mungers? +## General Concepts -Mungers are like gofmt for md docs which we use to format documents. To use it, simply place +Each document needs to be munged to ensure its format is correct, links are valid, etc. To munge a document, simply run `hack/update-generated-docs.sh`. We verify that all documents have been munged using `hack/verify-generated-docs.sh`. 
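To make the munge step concrete, here is a minimal sketch of what it does: regenerate the content between a pair of munge comment tags and leave the rest of the file untouched. This is illustrative Python only; the real mungers live in `cmd/mungedocs` and are written in Go, and the exact tag format used here is an assumption.

```python
import re

def munge(text, name, generated):
    """Replace the block between paired munge comment tags with freshly
    generated content. Illustrative sketch, not the real mungedocs tool."""
    pattern = re.compile(
        r"(<!-- BEGIN MUNGE: {0} -->).*?(<!-- END MUNGE: {0} -->)".format(re.escape(name)),
        re.DOTALL,
    )
    # \1 and \2 keep the tags in place; only the content between them changes.
    return pattern.sub(r"\1\n" + generated + r"\n\2", text)

doc = "intro\n<!-- BEGIN MUNGE: GENERATED_TOC -->\nstale\n<!-- END MUNGE: GENERATED_TOC -->\n"
print(munge(doc, "GENERATED_TOC", "- [Title](#title)"))
```

Running `hack/update-generated-docs.sh` amounts to applying every registered munger to every doc in this fashion.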
The scripts for munging documents are called mungers, see the [mungers section](#what-are-mungers) below if you're curious about how mungers are implemented or if you want to write one. + +## How to Get a Table of Contents + +Instead of writing table of contents by hand, insert the following code in your md file: ``` - - + + ``` -in your md files. Note that xxxx is the placeholder for a specific munger. Appropriate content will be generated and inserted between two brackets after you run `hack/update-generated-docs.sh`. See [munger document](../../cmd/mungedocs/) for more details. +After running `hack/update-generated-docs.sh`, you'll see a table of contents generated for you, layered based on the headings. +## How to Write Links -## Table of Contents +It's important to follow the rules when writing links. It helps us correctly versionize documents for each release. -Instead of writing table of contents by hand, use the TOC munger: +Use inline links instead of urls at all times. When you add internal links to `docs/` or `examples/`, use relative links; otherwise, use `http://releases.k8s.io/HEAD/`. For example, avoid using: ``` - - +[GCE](https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/gce.md) # note that it's under docs/ +[Kubernetes package](../../pkg/) # note that it's under pkg/ +http://kubernetes.io/ # external link ``` -## Writing Examples +Instead, use: -Sometimes you may want to show the content of certain example files. Use EXAMPLE munger whenever possible: +``` +[GCE](../getting-started-guides/gce.md) # note that it's under docs/ +[Kubernetes package](http://releases.k8s.io/HEAD/pkg/) # note that it's under pkg/ +[Kubernetes](http://kubernetes.io/) # external link +``` + +The above example generates the following links: [GCE](../getting-started-guides/gce.md), [Kubernetes package](http://releases.k8s.io/HEAD/pkg/), and [Kubernetes](http://kubernetes.io/). 
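The link rules above are mechanical enough to check in code. The sketch below is illustrative only (`link_style_ok` is a made-up helper, not part of the real verification scripts); it encodes just the two "avoid" patterns shown above.

```python
def link_style_ok(target):
    """Return True if a markdown link target follows the conventions above.
    Illustrative sketch covering only the two discouraged patterns."""
    # Don't pin internal docs to the GitHub master branch; relative links
    # can be rewritten per release, absolute branch links cannot.
    if target.startswith("https://github.com/kubernetes/kubernetes/blob/master/"):
        return False
    # Links that climb out of docs/ (e.g. ../../pkg/) should use
    # http://releases.k8s.io/HEAD/ instead.
    if target.startswith("../../"):
        return False
    return True

# The recommended forms pass:
assert link_style_ok("../getting-started-guides/gce.md")
assert link_style_ok("http://releases.k8s.io/HEAD/pkg/")
assert link_style_ok("http://kubernetes.io/")
# The discouraged forms do not:
assert not link_style_ok("https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/gce.md")
assert not link_style_ok("../../pkg/")
```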
+ +## How to Include an Example + +While writing examples, you may want to show the content of certain example files (e.g. [pod.yaml](../user-guide/pod.yaml)). In this case, insert the following code in the md file: ``` ``` -This way, you save the time to do the copy-and-paste; what's better, the content won't become out-of-date everytime you update the example file. +Note that you should replace `path/to/file` with the relative path to the example file. Then `hack/update-generated-docs.sh` will generate a code block with the content of the specified file, and a link to download it. This way, you save the time to do the copy-and-paste; what's better, the content won't become out-of-date every time you update the example file. -For example, the following munger: +For example, the following: ``` ``` -generates +generates the following after `hack/update-generated-docs.sh`: + ```yaml @@ -112,27 +134,50 @@ spec: [Download example](../user-guide/pod.yaml?raw=true) -## Adding Links +## Misc. -Use inline link instead of url at all times. When you add internal links from `docs/` to `docs/` or `examples/`, use relative links; otherwise, use `http://releases.k8s.io/HEAD/`. For example, use: +### Code formatting + +Wrap a span of code with single backticks (`` ` ``). To format multiple lines of code as its own code block, use triple backticks (```` ``` ````). + +### Syntax Highlighting + +Adding syntax highlighting to code blocks improves readability. To do so, in your fenced block, add an optional language identifier. Some useful identifier includes `yaml`, `console` (for console output), and `sh` (for shell quote format). Note that in a console output, put `$ ` at the beginning of each command and put nothing at the beginning of the output. 
Here's an example of console code block: ``` -[GCE](../getting-started-guides/gce.md) # note that it's under docs/ -[Kubernetes package](http://releases.k8s.io/HEAD/pkg/) # note that it's under pkg/ -[Kubernetes](http://kubernetes.io/) +```console + +$ kubectl create -f docs/user-guide/pod.yaml +pod "foo" created + +```  +``` + +which renders as: + +```console +$ kubectl create -f docs/user-guide/pod.yaml +pod "foo" created ``` -and avoid using: +### Headings + +Add a single `#` before the document title to create a title heading, and add `##` to the next level of section title, and so on. Note that the number of `#` will determine the size of the heading. + +## What Are Mungers? + +Mungers are like gofmt for md docs which we use to format documents. To use it, simply place ``` -[GCE](https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/gce.md) -[Kubernetes package](../../pkg/) -http://kubernetes.io/ + + ``` +in your md files. Note that xxxx is the placeholder for a specific munger. Appropriate content will be generated and inserted between two brackets after you run `hack/update-generated-docs.sh`. See [munger document](http://releases.k8s.io/HEAD/cmd/mungedocs/) for more details. + ## Auto-added Mungers -Some mungers are auto-added. You don't have to add them manually, and `hack/update-generated-docs.sh` does that for you. It's recommended to just read this section as a reference instead of messing up with the following mungers. +After running `hack/update-generated-docs.sh`, you may see some code / mungers in your md file that are auto-added. You don't have to add them manually. It's recommended to just read this section as a reference instead of messing up with the following mungers. 
### Unversioned Warning -- cgit v1.2.3 From ff8cdfcde813e5287a833539541a9b5f3206fe5d Mon Sep 17 00:00:00 2001 From: Tim Hockin Date: Tue, 3 Nov 2015 10:17:57 -0800 Subject: Run update-gendocs --- README.md | 4 ++-- adding-an-APIGroup.md | 4 ++-- api-conventions.md | 4 ++-- api_changes.md | 4 ++-- automation.md | 4 ++-- cherry-picks.md | 4 ++-- cli-roadmap.md | 4 ++-- client-libraries.md | 4 ++-- coding-conventions.md | 4 ++-- collab.md | 4 ++-- developer-guides/vagrant.md | 4 ++-- development.md | 4 ++-- e2e-tests.md | 4 ++-- faster_reviews.md | 4 ++-- flaky-tests.md | 4 ++-- getting-builds.md | 4 ++-- how-to-doc.md | 4 ++-- instrumentation.md | 4 ++-- issues.md | 4 ++-- kubectl-conventions.md | 4 ++-- logging.md | 4 ++-- making-release-notes.md | 4 ++-- profiling.md | 4 ++-- pull-requests.md | 4 ++-- releasing.md | 4 ++-- scheduler.md | 4 ++-- scheduler_algorithm.md | 4 ++-- writing-a-getting-started-guide.md | 4 ++-- 28 files changed, 56 insertions(+), 56 deletions(-) diff --git a/README.md b/README.md index 756846ce..87ede398 100644 --- a/README.md +++ b/README.md @@ -19,8 +19,8 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. -The latest 1.0.x release of this document can be found -[here](http://releases.k8s.io/release-1.0/docs/devel/README.md). +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.1/docs/devel/README.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/adding-an-APIGroup.md b/adding-an-APIGroup.md index e5f08552..afef1456 100644 --- a/adding-an-APIGroup.md +++ b/adding-an-APIGroup.md @@ -19,8 +19,8 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. -The latest 1.0.x release of this document can be found -[here](http://releases.k8s.io/release-1.0/docs/devel/adding-an-APIGroup.md). 
+The latest release of this document can be found +[here](http://releases.k8s.io/release-1.1/docs/devel/adding-an-APIGroup.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/api-conventions.md b/api-conventions.md index cf389231..e8aaf612 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -19,8 +19,8 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. -The latest 1.0.x release of this document can be found -[here](http://releases.k8s.io/release-1.0/docs/devel/api-conventions.md). +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.1/docs/devel/api-conventions.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/api_changes.md b/api_changes.md index 53dfb014..4bbb5bd4 100644 --- a/api_changes.md +++ b/api_changes.md @@ -19,8 +19,8 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. -The latest 1.0.x release of this document can be found -[here](http://releases.k8s.io/release-1.0/docs/devel/api_changes.md). +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.1/docs/devel/api_changes.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/automation.md b/automation.md index f01b6158..c21f4ed6 100644 --- a/automation.md +++ b/automation.md @@ -19,8 +19,8 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. -The latest 1.0.x release of this document can be found -[here](http://releases.k8s.io/release-1.0/docs/devel/automation.md). +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.1/docs/devel/automation.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). 
diff --git a/cherry-picks.md b/cherry-picks.md index 7cb60465..f407c949 100644 --- a/cherry-picks.md +++ b/cherry-picks.md @@ -19,8 +19,8 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. -The latest 1.0.x release of this document can be found -[here](http://releases.k8s.io/release-1.0/docs/devel/cherry-picks.md). +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.1/docs/devel/cherry-picks.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/cli-roadmap.md b/cli-roadmap.md index 2b713260..de2f4a43 100644 --- a/cli-roadmap.md +++ b/cli-roadmap.md @@ -19,8 +19,8 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. -The latest 1.0.x release of this document can be found -[here](http://releases.k8s.io/release-1.0/docs/devel/cli-roadmap.md). +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.1/docs/devel/cli-roadmap.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/client-libraries.md b/client-libraries.md index b63e2d44..22a59d06 100644 --- a/client-libraries.md +++ b/client-libraries.md @@ -19,8 +19,8 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. -The latest 1.0.x release of this document can be found -[here](http://releases.k8s.io/release-1.0/docs/devel/client-libraries.md). +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.1/docs/devel/client-libraries.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). 
diff --git a/coding-conventions.md b/coding-conventions.md index 3e3abaf7..df9f63e7 100644 --- a/coding-conventions.md +++ b/coding-conventions.md @@ -19,8 +19,8 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. -The latest 1.0.x release of this document can be found -[here](http://releases.k8s.io/release-1.0/docs/devel/coding-conventions.md). +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.1/docs/devel/coding-conventions.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/collab.md b/collab.md index 624b3bcb..de2ce10c 100644 --- a/collab.md +++ b/collab.md @@ -19,8 +19,8 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. -The latest 1.0.x release of this document can be found -[here](http://releases.k8s.io/release-1.0/docs/devel/collab.md). +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.1/docs/devel/collab.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md index f451d755..61560db7 100644 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -19,8 +19,8 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. -The latest 1.0.x release of this document can be found -[here](http://releases.k8s.io/release-1.0/docs/devel/developer-guides/vagrant.md). +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.1/docs/devel/developer-guides/vagrant.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). 
diff --git a/development.md b/development.md index 0b778dd9..09abe1e7 100644 --- a/development.md +++ b/development.md @@ -19,8 +19,8 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. -The latest 1.0.x release of this document can be found -[here](http://releases.k8s.io/release-1.0/docs/devel/development.md). +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.1/docs/devel/development.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/e2e-tests.md b/e2e-tests.md index 882da396..d1f909dc 100644 --- a/e2e-tests.md +++ b/e2e-tests.md @@ -19,8 +19,8 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. -The latest 1.0.x release of this document can be found -[here](http://releases.k8s.io/release-1.0/docs/devel/e2e-tests.md). +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.1/docs/devel/e2e-tests.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/faster_reviews.md b/faster_reviews.md index 0c70e435..f0cb159c 100644 --- a/faster_reviews.md +++ b/faster_reviews.md @@ -19,8 +19,8 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. -The latest 1.0.x release of this document can be found -[here](http://releases.k8s.io/release-1.0/docs/devel/faster_reviews.md). +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.1/docs/devel/faster_reviews.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). 
diff --git a/flaky-tests.md b/flaky-tests.md index 2470a815..27c788aa 100644 --- a/flaky-tests.md +++ b/flaky-tests.md @@ -19,8 +19,8 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. -The latest 1.0.x release of this document can be found -[here](http://releases.k8s.io/release-1.0/docs/devel/flaky-tests.md). +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.1/docs/devel/flaky-tests.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/getting-builds.md b/getting-builds.md index 3803c873..375a1fac 100644 --- a/getting-builds.md +++ b/getting-builds.md @@ -19,8 +19,8 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. -The latest 1.0.x release of this document can be found -[here](http://releases.k8s.io/release-1.0/docs/devel/getting-builds.md). +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.1/docs/devel/getting-builds.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/how-to-doc.md b/how-to-doc.md index 283cab1f..7f1d30ba 100644 --- a/how-to-doc.md +++ b/how-to-doc.md @@ -19,8 +19,8 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. -The latest 1.0.x release of this document can be found -[here](http://releases.k8s.io/release-1.0/docs/devel/how-to-doc.md). +The latest 1.1.x release of this document can be found +[here](http://releases.k8s.io/release-1.1/docs/devel/how-to-doc.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). 
diff --git a/instrumentation.md b/instrumentation.md index 683f9d93..49f1f077 100644 --- a/instrumentation.md +++ b/instrumentation.md @@ -19,8 +19,8 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. -The latest 1.0.x release of this document can be found -[here](http://releases.k8s.io/release-1.0/docs/devel/instrumentation.md). +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.1/docs/devel/instrumentation.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/issues.md b/issues.md index c7bda07b..f2ce6949 100644 --- a/issues.md +++ b/issues.md @@ -19,8 +19,8 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. -The latest 1.0.x release of this document can be found -[here](http://releases.k8s.io/release-1.0/docs/devel/issues.md). +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.1/docs/devel/issues.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/kubectl-conventions.md b/kubectl-conventions.md index a37e5899..3775c0b3 100644 --- a/kubectl-conventions.md +++ b/kubectl-conventions.md @@ -19,8 +19,8 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. -The latest 1.0.x release of this document can be found -[here](http://releases.k8s.io/release-1.0/docs/devel/kubectl-conventions.md). +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.1/docs/devel/kubectl-conventions.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). 
diff --git a/logging.md b/logging.md index 3870c4c3..3dc22ca5 100644 --- a/logging.md +++ b/logging.md @@ -19,8 +19,8 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. -The latest 1.0.x release of this document can be found -[here](http://releases.k8s.io/release-1.0/docs/devel/logging.md). +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.1/docs/devel/logging.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/making-release-notes.md b/making-release-notes.md index 871e65b4..7a2d73c0 100644 --- a/making-release-notes.md +++ b/making-release-notes.md @@ -19,8 +19,8 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. -The latest 1.0.x release of this document can be found -[here](http://releases.k8s.io/release-1.0/docs/devel/making-release-notes.md). +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.1/docs/devel/making-release-notes.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/profiling.md b/profiling.md index f563ce0a..f05b9d74 100644 --- a/profiling.md +++ b/profiling.md @@ -19,8 +19,8 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. -The latest 1.0.x release of this document can be found -[here](http://releases.k8s.io/release-1.0/docs/devel/profiling.md). +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.1/docs/devel/profiling.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). 
diff --git a/pull-requests.md b/pull-requests.md index 15a0f447..b97da36e 100644 --- a/pull-requests.md +++ b/pull-requests.md @@ -19,8 +19,8 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. -The latest 1.0.x release of this document can be found -[here](http://releases.k8s.io/release-1.0/docs/devel/pull-requests.md). +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.1/docs/devel/pull-requests.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/releasing.md b/releasing.md index a41568e0..01f185bd 100644 --- a/releasing.md +++ b/releasing.md @@ -19,8 +19,8 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. -The latest 1.0.x release of this document can be found -[here](http://releases.k8s.io/release-1.0/docs/devel/releasing.md). +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.1/docs/devel/releasing.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/scheduler.md b/scheduler.md index c9d32aa4..ffc73ca1 100755 --- a/scheduler.md +++ b/scheduler.md @@ -19,8 +19,8 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. -The latest 1.0.x release of this document can be found -[here](http://releases.k8s.io/release-1.0/docs/devel/scheduler.md). +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.1/docs/devel/scheduler.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). 
diff --git a/scheduler_algorithm.md b/scheduler_algorithm.md index d6a8b6c5..c8790af9 100755 --- a/scheduler_algorithm.md +++ b/scheduler_algorithm.md @@ -19,8 +19,8 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. -The latest 1.0.x release of this document can be found -[here](http://releases.k8s.io/release-1.0/docs/devel/scheduler_algorithm.md). +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.1/docs/devel/scheduler_algorithm.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/writing-a-getting-started-guide.md b/writing-a-getting-started-guide.md index c9d4e2ca..a82691a8 100644 --- a/writing-a-getting-started-guide.md +++ b/writing-a-getting-started-guide.md @@ -19,8 +19,8 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. -The latest 1.0.x release of this document can be found -[here](http://releases.k8s.io/release-1.0/docs/devel/writing-a-getting-started-guide.md). +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.1/docs/devel/writing-a-getting-started-guide.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). -- cgit v1.2.3 From 87d268af9bf5c17f24ff3dfd52e9391847ce57fe Mon Sep 17 00:00:00 2001 From: Tim Hockin Date: Mon, 16 Nov 2015 10:52:26 -0800 Subject: clarify experimental annotations doc --- api-conventions.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/api-conventions.md b/api-conventions.md index e8aaf612..6781fcae 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -746,7 +746,7 @@ Therefore, resources supporting auto-generation of unique labels should have a ` Annotations have very different intended usage from labels. We expect them to be primarily generated and consumed by tooling and system extensions. 
I'm inclined to generalize annotations to permit them to directly store arbitrary json. Rigid names and name prefixes make sense, since they are analogous to API fields. -In fact, in-development API fields, including those used to represent fields of newer alpha/beta API versions in the older stable storage version, may be represented as annotations with the form `something.alpha.kubernetes.io/name` or `something.beta.kubernetes.io/name` (depending on our confidence in it). For example `net.alpha.kubernetes.io/policy` might represent an experimental network policy field. +In fact, in-development API fields, including those used to represent fields of newer alpha/beta API versions in the older stable storage version, may be represented as annotations with the form `something.alpha.kubernetes.io/name` or `something.beta.kubernetes.io/name` (depending on our confidence in it). For example `net.alpha.kubernetes.io/policy` might represent an experimental network policy field. The "name" portion of the annotation should follow the below conventions for annotations. When an annotation gets promoted to a field, the name transformation should then be mechanical: `foo-bar` becomes `fooBar`. Other advice regarding use of labels, annotations, and other generic map keys by Kubernetes components and tools: - Key names should be all lowercase, with words separated by dashes, such as `desired-replicas` -- cgit v1.2.3 From 31832344a1c2b32ba9813f032213944543ac1767 Mon Sep 17 00:00:00 2001 From: Brian Grant Date: Tue, 17 Nov 2015 15:18:17 +0000 Subject: Add conventions about primitive types. 
--- api-conventions.md | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/api-conventions.md b/api-conventions.md index e8aaf612..18e2ddb9 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -50,6 +50,7 @@ using resources with kubectl can be found in [Working with resources](../user-gu - [Typical status properties](#typical-status-properties) - [References to related objects](#references-to-related-objects) - [Lists of named subobjects preferred over maps](#lists-of-named-subobjects-preferred-over-maps) + - [Primitive types](#primitive-types) - [Constants](#constants) - [Lists and Simple kinds](#lists-and-simple-kinds) - [Differing Representations](#differing-representations) @@ -247,6 +248,14 @@ ports: This rule maintains the invariant that all JSON/YAML keys are fields in API objects. The only exceptions are pure maps in the API (currently, labels, selectors, annotations, data), as opposed to sets of subobjects. +#### Primitive types + +* Avoid floating-point values as much as possible, and never use them in spec. Floating-point values cannot be reliably round-tripped (encoded and re-decoded) without changing, and have varying precision and representations across languages and architectures. +* Do not use unsigned integers. Similarly, not all languages (e.g., Javascript) support unsigned integers. +* int64 is converted to float by Javascript and some other languages, so they also need to be accepted as strings. +* Do not use enums. Use aliases for string instead (e.g., `NodeConditionType`). +* Look at similar fields in the API (e.g., ports, durations) and follow the conventions of existing fields. + #### Constants Some fields will have a list of allowed values (enumerations). These values will be strings, and they will be in CamelCase, with an initial uppercase letter. Examples: "ClusterFirst", "Pending", "ClientIP". 
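Two of the conventions above are mechanical enough to sketch in code: accepting int64 values as either integers or strings (since JavaScript decodes large integers as floats), and the annotation-to-field name promotion from the previous section (`foo-bar` becomes `fooBar`). Both helpers below are illustrative sketches, not Kubernetes library code.

```python
def parse_int64(v):
    """Accept an int64 field encoded as either an int or a decimal string."""
    if isinstance(v, bool):
        raise TypeError("booleans are not int64 values")
    if isinstance(v, int):
        return v
    if isinstance(v, str):
        return int(v, 10)
    raise TypeError("expected int or string, got %s" % type(v).__name__)

def promote_key(name):
    """Mechanically turn an annotation-style key into a field name."""
    head, *rest = name.split("-")
    return head + "".join(word.capitalize() for word in rest)

# 2**53 + 1 cannot be represented exactly as a JavaScript number,
# which is why the string form must also be accepted.
assert parse_int64("9007199254740993") == 2**53 + 1
assert promote_key("desired-replicas") == "desiredReplicas"
```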
-- cgit v1.2.3 From 3ec5bdafb97feaeb40135b4268548d33095a247c Mon Sep 17 00:00:00 2001 From: gmarek Date: Mon, 12 Oct 2015 11:48:05 +0200 Subject: Add Kubemark User Guide --- kubemark-guide.md | 175 ++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 175 insertions(+) create mode 100644 kubemark-guide.md diff --git a/kubemark-guide.md b/kubemark-guide.md new file mode 100644 index 00000000..7a68f4e6 --- /dev/null +++ b/kubemark-guide.md @@ -0,0 +1,175 @@ + + + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

PLEASE NOTE: This document applies to the HEAD of the source tree

If you are using a released version of Kubernetes, you should
refer to the docs that go with that version.


The latest release of this document can be found
[here](http://releases.k8s.io/release-1.1/docs/devel/kubemark-guide.md).

Documentation for other releases can be found at
[releases.k8s.io](http://releases.k8s.io).

--

# Kubemark User Guide

## Introduction

Kubemark is a performance testing tool that lets users run experiments on simulated clusters. The primary use case is scalability testing, since simulated clusters can be much bigger than real ones. The objective is to expose problems with the master components (API server, controller manager, or scheduler) that appear only on bigger clusters (e.g. small memory leaks).

This document serves as a primer on what Kubemark is, what it is not, and how to use it.

## Architecture

At a very high level, a Kubemark cluster consists of two parts: real master components and a set of "Hollow" Nodes. The prefix "Hollow" means an implementation/instantiation of a component with all "moving" parts mocked out. The best example is HollowKubelet, which pretends to be an ordinary Kubelet but does not start anything and does not mount any volumes - it just pretends that it does. More detailed design and implementation notes are at the end of this document.

Currently the master components run on a dedicated machine (or machines), while HollowNodes run on an 'external' Kubernetes cluster. Compared with running the master components on the external cluster as well, this design has the slight advantage of completely isolating master resources from everything else.

## Requirements

To run Kubemark you need a Kubernetes cluster to run all your HollowNodes on, and a dedicated machine for the master. The master machine has to be directly routable from the HollowNodes. You also need access to a Docker repository.
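Before moving on to the workflow: the "Hollow" idea from the Architecture section can be illustrated with a minimal sketch. A hollow component exposes the same interface as the real one but mocks out every "moving" part. The class below matches the concept only, not the actual Go implementation.

```python
class HollowKubelet:
    """Pretends to be a Kubelet: accepts pods and reports them as
    Running without starting containers or mounting volumes.
    Illustrative sketch only."""

    def __init__(self):
        self.pod_status = {}

    def run_pod(self, pod_name):
        # A real Kubelet would pull images, mount volumes and start
        # containers here; the hollow version only records the claim.
        self.pod_status[pod_name] = "Running"
        return self.pod_status[pod_name]

kubelet = HollowKubelet()
assert kubelet.run_pod("nginx") == "Running"
assert kubelet.pod_status == {"nginx": "Running"}
```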
+ +Currently the scripts are written to be easily usable on GCE, but it should be relatively straightforward to port them to different providers or bare metal. + +## Common use cases and helper scripts + +The common workflow for Kubemark is: +- starting a Kubemark cluster (on GCE) +- running e2e tests on the Kubemark cluster +- monitoring test execution and debugging problems +- turning down the Kubemark cluster + +The descriptions below include comments helpful for anyone who wants to port Kubemark to a different provider. + +### Starting a Kubemark cluster + +To start a Kubemark cluster on GCE you need to create an external cluster (it can be GCE, GKE or any other cluster) by yourself, build a kubernetes release (e.g. by running
`make quick-release`) and run `test/kubemark/start-kubemark.sh` script. This script will create a VM for master components, Pods for HollowNodes and do all the setup necessary
to let them talk to each other. It will use the configuration stored in `cluster/kubemark/config-default.sh` - you can tweak it however you want, but note that some features
may not be implemented yet, as implementation of Hollow components/mocks will probably be lagging behind ‘real’ one. For performance tests interesting variables are
`NUM_MINIONS` and `MASTER_SIZE`. After start-kubemark script is finished you’ll have a ready Kubemark cluster, a kubeconfig file for talking to the Kubemark
cluster is stored in `test/kubemark/kubeconfig.loc`. + +Currently we're running HollowNode with limit of 0.05 a CPU core and ~60MB or memory, which taking into account default cluster addons and fluentD running on an 'external'
cluster, allows running ~17.5 HollowNodes per core. 
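The per-core arithmetic above can be sanity-checked quickly (a rough sketch; the 0.05-core limit is simply the figure quoted in this guide and may change):

```shell
# Each HollowNode is capped at 0.05 of a CPU core, so in theory one core fits:
awk 'BEGIN { printf "%.0f HollowNodes per core\n", 1 / 0.05 }'
# The observed ~17.5 per core is lower because the default cluster addons and
# fluentD on the 'external' cluster consume part of each core.
```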
+ +#### Behind-the-scenes details: + +The start-kubemark script does quite a lot of things: +- Creates a master machine called hollow-cluster-master and a PD for it (*uses gcloud, should be easy to do outside of GCE*) +- Creates a firewall rule which opens port 443\* on the master machine (*uses gcloud, should be easy to do outside of GCE*) +- Builds a Docker image for HollowNode from the current repository and pushes it to the Docker repository (*GCR for us, using scripts from `cluster/gce/util.sh` - it may get +tricky outside of GCE*) +- Generates certificates and kubeconfig files, writes a kubeconfig locally to `test/kubemark/kubeconfig.loc` and creates a Secret which stores kubeconfig for HollowKubelet/ +HollowProxy use (*uses gcloud to transfer files to the master, should be easy to do outside of GCE*). +- Creates a ReplicationController for HollowNodes and starts them up. (*will work exactly the same everywhere as long as MASTER_IP is populated correctly, but you’ll need +to update the Docker image address if you’re not using GCR and the default image name*) +- Waits until all HollowNodes are in the Running phase (*will work exactly the same everywhere*) + +\* Port 443 is a secured port on the master machine which is used for all external communication with the API server. In the last sentence *external* means all traffic +coming from other machines, including all the Nodes, not only from outside of the cluster. Currently the local components, i.e. the ControllerManager and Scheduler, talk to the API server over the insecure port 8080. + +### Running e2e tests on Kubemark cluster + +To run standard e2e test on your Kubemark cluster created in the previous step you execute `test/kubemark/run-e2e-tests.sh` script. It will configure ginkgo to +use Kubemark cluster instead of something else and start an e2e test. This script should not need any changes to work on other cloud providers. + +By default (if nothig will be passed to it) the script will run a Density '30 test. 
If you want to run a different e2e test you just need to provide flags you want to be
passed to `hack/ginkgo-e2e.sh` script, e.g. `--ginkgo.focus="Load"` to run the Load test. + +### Monitoring test execution and debugging problems + +Run-e2e-tests prints the same output on Kubemark as on ordinary e2e cluster, but if you need to dig deeper you need to learn how to debug HollowNodes and how Master +machine (currently) differs from the ordinary one. + +If you need to debug the master machine you can do similar things to what you would do on an ordinary master. The difference between the Kubemark setup and an ordinary setup is that in Kubemark
etcd runs as a plain Docker container, and all master components run as normal processes. There’s no Kubelet overseeing them. Logs are stored in exactly the same place,
i.e. the `/var/logs/` directory. Because the binaries are not supervised by anything, they won't be restarted in the case of a crash.

To help you with debugging from inside the cluster, the startup script puts a `~/configure-kubectl.sh` script on the master. It downloads the `gcloud` and `kubectl` tools and configures
kubectl to use the unsecured master port (useful if there are problems with security). After the script is run you can use the kubectl command from the master machine to play with
the cluster.

Debugging HollowNodes is a bit trickier: if you experience a problem on one of them, you need to work out which hollow-node pod corresponds to a given HollowNode known by
the Master. During self-registration HollowNodes provide their cluster IPs as Names, which means that if you need to find a HollowNode named `10.2.4.5` you just need to find a
Pod in the external cluster with this cluster IP. There’s a helper script `test/kubemark/get-real-pod-for-hollow-node.sh` that does this for you. 
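On providers without the helper script, the matching logic can be sketched in plain shell. This is a hypothetical sketch (the sample pod listing below stands in for real `kubectl get pods -o wide`-style output; the real script's implementation may differ):

```shell
# Match a HollowNode name (which is its cluster IP) against pod IPs taken from
# the external cluster's pod listing.
pods="hollow-node-1234 10.2.4.5
hollow-node-5678 10.2.4.6"
node_name="10.2.4.5"
printf '%s\n' "$pods" | awk -v ip="$node_name" '$2 == ip { print $1 }'
# prints: hollow-node-1234
```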
+ +When you have a Pod name you can use `kubectl logs` on the external cluster to get logs, or use a `kubectl describe pod` call to find the external Node on which this particular +HollowNode is running so you can ssh to it. + +E.g. suppose you want to see the logs of the HollowKubelet on which pod `my-pod` is running. To do so you can execute: + +``` +$ kubectl --kubeconfig=kubernetes/test/kubemark/kubeconfig.loc describe pod my-pod +``` + +This outputs the pod description, which includes a line: + +``` +Node: 1.2.3.4/1.2.3.4 +``` + +To find the `hollow-node` pod corresponding to node `1.2.3.4` you use the aforementioned script: + +``` +$ kubernetes/test/kubemark/get-real-pod-for-hollow-node.sh 1.2.3.4 +``` + +which will output the line: + +``` +hollow-node-1234 +``` + +Now you just use an ordinary kubectl command to get the logs: + +``` +kubectl --namespace=kubemark logs hollow-node-1234 +``` + +All those things should work exactly the same on all cloud providers. + +### Turning down Kubemark cluster + +On GCE you just need to execute `test/kubemark/stop-kubemark.sh` script, which will delete the HollowNode ReplicationController and all the resources for you. On other providers +you’ll need to delete all this stuff by yourself. + +## Some current implementation details + +The Kubemark master uses exactly the same binaries as ordinary Kubernetes does. This means that it will never be out of date. On the other hand, HollowNodes use an existing fake +Kubelet (called SimpleKubelet), which mocks its runtime manager with `pkg/kubelet/fake-docker-manager.go`, where most of the logic sits. Because there’s no easy way of mocking other +managers (e.g. VolumeManager), they are not supported in Kubemark (e.g. we can’t schedule Pods with volumes in them yet). + +As time passes more fakes will probably be plugged into HollowNodes, but it’s crucial to keep them as simple as possible to allow running a big number of HollowNodes on a single +core. 
+ + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/kubemark-guide.md?pixel)]() + -- cgit v1.2.3 From 1344366406a84910c7a9a85de737410b1a3b9761 Mon Sep 17 00:00:00 2001 From: Brian Grant Date: Wed, 18 Nov 2015 17:30:19 +0000 Subject: Address feedback --- api-conventions.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/api-conventions.md b/api-conventions.md index 18e2ddb9..d710cca2 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -251,8 +251,8 @@ This rule maintains the invariant that all JSON/YAML keys are fields in API obje #### Primitive types * Avoid floating-point values as much as possible, and never use them in spec. Floating-point values cannot be reliably round-tripped (encoded and re-decoded) without changing, and have varying precision and representations across languages and architectures. -* Do not use unsigned integers. Similarly, not all languages (e.g., Javascript) support unsigned integers. -* int64 is converted to float by Javascript and some other languages, so they also need to be accepted as strings. +* All numbers (e.g., uint32, int64) are converted to float64 by Javascript and some other languages, so any field which is expected to exceed that either in magnitude or in precision (specifically integer values > 53 bits) should be serialized and accepted as strings. +* Do not use unsigned integers, due to inconsistent support across languages and libraries. Just validate that the integer is non-negative if that's the case. * Do not use enums. Use aliases for string instead (e.g., `NodeConditionType`). * Look at similar fields in the API (e.g., ports, durations) and follow the conventions of existing fields. 
-- cgit v1.2.3 From eac73b42443d1930364e5dccd3f375b57772e5d5 Mon Sep 17 00:00:00 2001 From: Isaac Hollander McCreery Date: Wed, 18 Nov 2015 09:50:56 -0800 Subject: Defer release notes to announcement of release, and move instructions for release notes back into docs and away from scripts --- releasing.md | 165 +++++++++++++++++++---------------------------------------- 1 file changed, 53 insertions(+), 112 deletions(-) diff --git a/releasing.md b/releasing.md index 01f185bd..4805481f 100644 --- a/releasing.md +++ b/releasing.md @@ -134,123 +134,22 @@ cd kubernetes or `git checkout upstream/master` from an existing repo. -#### Cutting an alpha release (`vX.Y.0-alpha.W`) +Decide what version you're cutting and export it: -Figure out what version you're cutting, and +- alpha release: `export VER="vX.Y.0-alpha.W"`; +- beta release: `export VER="vX.Y.Z-beta.W"`; +- official release: `export VER="vX.Y.Z"`; +- new release series: `export VER="vX.Y"`. -```console -export VER="vX.Y.0-alpha.W" -``` - -then, run - -```console -./release/cut-official-release.sh "${VER}" "${GITHASH}" -``` - -This will do a dry run of: - -1. mark the `vX.Y.0-alpha.W` tag at the given git hash; -1. prompt you to do the remainder of the work, including building the - appropriate binaries and pushing them to the appropriate places. - -If you're satisfied with the result, run - -```console -./release/cut-official-release.sh "${VER}" "${GITHASH}" --no-dry-run -``` - -and follow the instructions. - -#### Cutting an beta release (`vX.Y.Z-beta.W`) - -Figure out what version you're cutting, and - -```console -export VER="vX.Y.Z-beta.W" -``` - -then, run +Then, run ```console ./release/cut-official-release.sh "${VER}" "${GITHASH}" ``` -This will do a dry run of: - -1. do a series of commits on the release branch for `vX.Y.Z-beta.W`; -1. mark the `vX.Y.Z-beta.W` tag at the beta version commit; -1. 
prompt you to do the remainder of the work, including building the - appropriate binaries and pushing them to the appropriate places. - -If you're satisfied with the result, run - -```console -./release/cut-official-release.sh "${VER}" "${GITHASH}" --no-dry-run -``` - -and follow the instructions. - -#### Cutting an official release (`vX.Y.Z`) - -Figure out what version you're cutting, and - -```console -export VER="vX.Y.Z" -``` - -then, run - -```console -./release/cut-official-release.sh "${VER}" "${GITHASH}" -``` - -This will do a dry run of: - -1. do a series of commits on the branch for `vX.Y.Z`; -1. mark the `vX.Y.Z` tag at the release version commit; -1. do a series of commits on the branch for `vX.Y.(Z+1)-beta.0` on top of the - previous commits; -1. mark the `vX.Y.(Z+1)-beta.0` tag at the beta version commit; -1. prompt you to do the remainder of the work, including building the - appropriate binaries and pushing them to the appropriate places. - -If you're satisfied with the result, run - -```console -./release/cut-official-release.sh "${VER}" "${GITHASH}" --no-dry-run -``` - -and follow the instructions. - -#### Branching a new release series (`vX.Y`) - -Once again, **this is a big deal!** If you're reading this doc for the first -time, you probably shouldn't be doing this release, and should talk to someone -on the release team. - -Figure out what series you're cutting, and - -```console -export VER="vX.Y" -``` - -then, run - -```console -./release/cut-official-release.sh "${VER}" "${GITHASH}" -``` - -This will do a dry run of: - -1. mark the `vX.(Y+1).0-alpha.0` tag at the given git hash on `master`; -1. fork a new branch `release-X.Y` off of `master` at the given git hash; -1. do a series of commits on the branch for `vX.Y.0-beta.0`; -1. mark the `vX.Y.0-beta.0` tag at the beta version commit; -1. prompt you to do the remainder of the work, including building the - appropriate binaries and pushing them to the appropriate places. 
- -If you're satisfied with the result, run +This will do a dry run of the release. It will give you instructions at the +end for `pushd`ing into the dry-run directory and having a look around. If +you're satisfied with the result, run ```console ./release/cut-official-release.sh "${VER}" "${GITHASH}" --no-dry-run @@ -260,8 +159,50 @@ and follow the instructions. ### Publishing binaries and release notes -The script you ran above will prompt you to take any remaining steps, including -publishing binaries and release notes. +The script you ran above will prompt you to take any remaining steps to push +tars, and will also give you a template for the release notes. Compose an +email to the team with the template, and use `build/make-release-notes.sh` +and/or `release-notes/release-notes.go` in +[kubernetes/contrib](https://github.com/kubernetes/contrib) to make the release +notes, (see #17444 for more info). + +- Alpha release: + - Figure out what the PR numbers for this release and last release are, and + get an api-token from GitHub (https://github.com/settings/tokens). From a + clone of kubernetes/contrib at upstream/master, + go run release-notes/release-notes.go --last-release-pr= --current-release-pr= --api-token= + Feel free to prune. +- Beta release: + - Only publish a beta release if it's a standalone pre-release. (We create + beta tags after we do official releases to maintain proper semantic + versioning, *we don't publish these beta releases*.) Use + `./hack/cherry_pick_list.sh ${VER}` to get release notes for such a + release. +- Official release: + - From your clone of upstream/master, run `./hack/cherry_pick_list.sh ${VER}` + to get the release notes for the patch release you just created. Feel free + to prune anything internal, but typically for patch releases we tend to + include everything in the release notes. 
+ - If this is a first official release (vX.Y.0), look through the release + notes for all of the alpha releases since the last cycle, and include + anything important in release notes. + +Send the email out, letting people know these are the draft release notes. If +they want to change anything, they should update the appropriate PRs with the +`release-note` label. + +When we're ready to announce the release, [create a GitHub +release](https://github.com/kubernetes/kubernetes/releases/new): + +1. pick the appropriate tag; +1. check "This is a pre-release" if it's an alpha or beta release; +1. fill in the release title from the draft; +1. re-run the appropriate release notes tool(s) to pick up any changes people + have made; +1. find the appropriate `kubernetes.tar.gz` in GCS, download it, double check + the hash (compare to what you had in the release notes draft), and attach it + to the release; and +1. publish! ## Injecting Version into Binaries -- cgit v1.2.3 From ce41098cb3fe5dfd2a1aeade718ed37307257293 Mon Sep 17 00:00:00 2001 From: Brendan Burns Date: Fri, 13 Nov 2015 16:54:45 -0800 Subject: Add a description of the proposed owners file system for the repo --- owners.md | 131 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 131 insertions(+) create mode 100644 owners.md diff --git a/owners.md b/owners.md new file mode 100644 index 00000000..22bb2fef --- /dev/null +++ b/owners.md @@ -0,0 +1,131 @@ + + + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

PLEASE NOTE: This document applies to the HEAD of the source tree

+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.1/docs/devel/owners.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + + + + +# Owners files + +_Note_: This is a design for a feature that is not yet implemented. + +## Overview + +We want to establish owners for different parts of the code in the Kubernetes codebase. These owners
will serve as the approvers for code to be submitted to these parts of the repository. Notably, owners
are not necessarily expected to do the first code review for all commits to these areas, but they are
required to approve changes before they can be merged.

## High Level flow

### Step One: A PR is submitted

After a PR is submitted, the automated Kubernetes PR robot will append a message to the PR indicating the owners
that are required for the PR to be submitted.

Subsequently, a user can also request the approval message from the robot by writing:

```
@k8s-bot approvers
```

into a comment.

In either case, the automation replies with an annotation that indicates
the owners required to approve. The annotation is a comment that is applied to the PR.
This comment will say:

```
Approval is required from <owner-1> OR <owner-2>, AND <owner-3> OR <owner-4>, AND ...
```

The set of required owners is drawn from the OWNERS files in the repository (see below). For each file
there should be multiple different OWNERS; these owners are listed in the `OR` clause(s). Because
it is possible that a PR may cover different directories, with disjoint sets of OWNERS, a PR may require
approval from more than one person; this is where the `AND` clauses come from.

`<owner>` should be the github user id of the owner _without_ a leading `@` symbol to prevent the owner
from being cc'd into the PR by email. 
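The AND-of-OR shape of the annotation can be sketched mechanically. The helper below is hypothetical (its name, input format, and output wording are invented for illustration; the real munger may format things differently), taking one comma-separated owner list per OWNERS scope the PR touches:

```shell
# Hypothetical sketch: each owner list becomes an OR clause; clauses for
# disjoint OWNERS scopes are joined with AND.
render_annotation() {
  out=""
  for owners in "$@"; do
    clause=$(printf '%s' "$owners" | sed 's/,/ OR /g')
    if [ -z "$out" ]; then out="$clause"; else out="$out, AND $clause"; fi
  done
  printf 'Approval is required from %s\n' "$out"
}
render_annotation "alice,bob" "carol,dave"
# prints: Approval is required from alice OR bob, AND carol OR dave
```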
+ +### Step Two: A PR is LGTM'd + +Once a PR is reviewed and LGTM'd it is eligible for submission. However, for it to be submitted,
an owner of each of the files changed in the PR has to 'approve' the PR. A user is an owner for a
file if they are included in the OWNERS hierarchy (see below) for that file.

Owner approval comes in two forms:

 * An owner adds a comment to the PR saying "I approve" or "approved"
 * An owner is the original author of the PR

In the case of a comment-based approval, the same rules as for the 'lgtm' label apply. If the PR is
changed by pushing new commits to the PR, the previous approval is invalidated, and the owner(s) must
approve again. Because of this, it is recommended that PR authors squash their PRs prior to getting approval
from owners.

### Step Three: A PR is merged

Once a PR is LGTM'd and all required owners have approved, it is eligible for merge. The merge bot takes care of
the actual merging.

## Design details

We need to build new features into the existing GitHub munger in order to accomplish this. Additionally
we need to add owners files to the repository.

### Approval Munger

We need to add a munger that adds comments to PRs indicating whose approval they require. This munger will
look for PRs that do not have approvers already present in the comments, or where approvers have been
requested, and add an appropriate comment to the PR.


### Status Munger

GitHub has a [status api](https://developer.github.com/v3/repos/statuses/); we will add a status munger that pushes an approval status onto each PR. This status will only be marked successful if the relevant
approvers have approved the PR. 
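The comment-based approval rule from Step Two can be sketched as a tiny predicate (hypothetical; the real munger's matching may be more lenient about case and whitespace):

```shell
# Hypothetical: classify whether a PR comment counts as an owner approval.
is_approval() {
  case "$1" in
    "I approve"|"approved") echo "approval" ;;
    *) echo "not an approval" ;;
  esac
}
is_approval "I approve"   # prints: approval
is_approval "lgtm"        # prints: not an approval
```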
+ +### Requiring approval status + +Github has the ability to [require status checks prior to merging](https://help.github.com/articles/enabling-required-status-checks/). + +Once we have the status check munger described above implemented, we will add this required status check +to our main branch as well as any release branches. + +### Adding owners files + +In each directory in the repository we may add an OWNERS file. This file will contain the github OWNERS
for that directory. Ownership is hierarchical, so if a directory does not contain an OWNERS file, its
parent's OWNERS file is used instead. There will be a top-level OWNERS file to back-stop the system.

Obviously changing the OWNERS file requires OWNERS permission.


[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/owners.md?pixel)]()
 -- cgit v1.2.3 From 8f8914f3ded0953fda8d532f1865bcc342b8e477 Mon Sep 17 00:00:00 2001 From: Isaac Hollander McCreery Date: Wed, 18 Nov 2015 16:19:01 -0800 Subject: Add sanity checks for release --- releasing.md | 13 +++++++++++-- 1 file changed, 11 insertions(+), 2 deletions(-) diff --git a/releasing.md b/releasing.md index 4805481f..d4347ce4 100644 --- a/releasing.md +++ b/releasing.md @@ -148,8 +148,17 @@ ``` This will do a dry run of the release. It will give you instructions at the -end for `pushd`ing into the dry-run directory and having a look around. +end for `pushd`ing into the dry-run directory and having a look around. +`pushd` into the directory and make sure everything looks as you expect: + +```console +git log "${VER}" # do you see the commit you expect? 
+make release +./cluster/kubectl.sh version -c +``` + +If you're satisfied with the result of the script, go back to `upstream/master` and +run ```console ./release/cut-official-release.sh "${VER}" "${GITHASH}" --no-dry-run -- cgit v1.2.3 From ccd0d84dc966dfe49553fcc88efd5c4c7c0fbac6 Mon Sep 17 00:00:00 2001 From: Hongchao Deng Date: Fri, 20 Nov 2015 10:30:50 -0800 Subject: Kubemark guide: add paragraph to describe '--delete-namespace=false' --- kubemark-guide.md | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/kubemark-guide.md b/kubemark-guide.md index 7a68f4e6..758963de 100644 --- a/kubemark-guide.md +++ b/kubemark-guide.md @@ -100,9 +100,15 @@ comming from other machines, including all the Nodes, not only from outside of t To run standard e2e test on your Kubemark cluster created in the previous step you execute `test/kubemark/run-e2e-tests.sh` script. It will configure ginkgo to use Kubemark cluster instead of something else and start an e2e test. This script should not need any changes to work on other cloud providers. -By default (if nothig will be passed to it) the script will run a Density '30 test. If you want to run a different e2e test you just need to provide flags you want to be +By default (if nothing will be passed to it) the script will run a Density '30 test. If you want to run a different e2e test you just need to provide flags you want to be passed to `hack/ginkgo-e2e.sh` script, e.g. `--ginkgo.focus="Load"` to run the Load test. +By default, at the end of each test, it will delete namespaces and everything under them (e.g. events, replication controllers) on the Kubemark master, which takes a lot of time. +Such work isn't needed in most cases: if you delete your Kubemark cluster after running `run-e2e-tests.sh`; +you don't care about namespace deletion performance, specifically related to etcd; etc. +There is a flag that enables you to avoid namespace deletion: `--delete-namespace=false`. 
+Adding the flag should let you see in logs: `Found DeleteNamespace=false, skipping namespace deletion!` + ### Monitoring test execution and debugging problems Run-e2e-tests prints the same output on Kubemark as on ordinary e2e cluster, but if you need to dig deeper you need to learn how to debug HollowNodes and how Master -- cgit v1.2.3 From e1ded93ff37ab654682ff38c0e77e47c6a7681e6 Mon Sep 17 00:00:00 2001 From: "Tim St. Clair" Date: Mon, 23 Nov 2015 18:06:23 -0800 Subject: Clarify when pointers are used for optional types --- api-conventions.md | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/api-conventions.md b/api-conventions.md index 6628e998..43550903 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -387,7 +387,8 @@ Fields must be either optional or required. Optional fields have the following properties: - They have `omitempty` struct tag in Go. -- They are a pointer type in the Go definition (e.g. `bool *awesomeFlag`). +- They are a pointer type in the Go definition (e.g. `bool *awesomeFlag`) or have a built-in `nil` + value (e.g. maps and slices). - The API server should allow POSTing and PUTing a resource with this field unset. Required fields have the opposite properties, namely: @@ -409,7 +410,8 @@ codebase. However: - having a pointer consistently imply optional is clearer for users of the Go language client, and any other clients that use corresponding types -Therefore, we ask that pointers always be used with optional fields. +Therefore, we ask that pointers always be used with optional fields that do not have a built-in +`nil` value. 
## Defaulting -- cgit v1.2.3 From ceef4793e3e31ba883c02ea88d9891acd01a80d2 Mon Sep 17 00:00:00 2001 From: Brad Erickson Date: Mon, 23 Nov 2015 19:01:03 -0800 Subject: Minion->Node rename: KUBERNETES_NODE_MEMORY, VAGRANT_NODE_NAMES, etc ENABLE_NODE_PUBLIC_IP NODE_ADDRESS NODE_BLOCK_DEVICE_MAPPINGS NODE_CONTAINER_ADDRS NODE_CONTAINER_NETMASKS NODE_CONTAINER_SUBNET_BASE NODE_CONTAINER_SUBNETS NODE_CPU --- developer-guides/vagrant.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md index 61560db7..291b85bc 100644 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -369,7 +369,7 @@ If you need more granular control, you can set the amount of memory for the mast ```sh export KUBERNETES_MASTER_MEMORY=1536 -export KUBERNETES_MINION_MEMORY=2048 +export KUBERNETES_NODE_MEMORY=2048 ``` #### I ran vagrant suspend and nothing works! -- cgit v1.2.3 From bc465c1d0f7b0f1c7d405cf1c287f255172ce151 Mon Sep 17 00:00:00 2001 From: Brad Erickson Date: Mon, 23 Nov 2015 19:06:36 -0800 Subject: Minion->Node rename: NUM_NODES --- developer-guides/vagrant.md | 6 +++--- kubemark-guide.md | 2 +- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md index 291b85bc..2d628abb 100644 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -301,7 +301,7 @@ Congratulations! The following will run all of the end-to-end testing scenarios assuming you set your environment in `cluster/kube-env.sh`: ```sh -NUM_MINIONS=3 hack/e2e-test.sh +NUM_NODES=3 hack/e2e-test.sh ``` ### Troubleshooting @@ -350,10 +350,10 @@ Are you sure you built a release first? Did you install `net-tools`? For more cl #### I want to change the number of nodes! -You can control the number of nodes that are instantiated via the environment variable `NUM_MINIONS` on your host machine. 
If you plan to work with replicas, we strongly encourage you to work with enough nodes to satisfy your largest intended replica size. If you do not plan to work with replicas, you can save some system resources by running with a single node. You do this, by setting `NUM_MINIONS` to 1 like so: +You can control the number of nodes that are instantiated via the environment variable `NUM_NODES` on your host machine. If you plan to work with replicas, we strongly encourage you to work with enough nodes to satisfy your largest intended replica size. If you do not plan to work with replicas, you can save some system resources by running with a single node. You do this, by setting `NUM_NODES` to 1 like so: ```sh -export NUM_MINIONS=1 +export NUM_NODES=1 ``` #### I want my VMs to have more memory! diff --git a/kubemark-guide.md b/kubemark-guide.md index 758963de..df0ecb96 100644 --- a/kubemark-guide.md +++ b/kubemark-guide.md @@ -73,7 +73,7 @@ To start a Kubemark cluster on GCE you need to create an external cluster (it ca `make quick-release`) and run `test/kubemark/start-kubemark.sh` script. This script will create a VM for master components, Pods for HollowNodes and do all the setup necessary to let them talk to each other. It will use the configuration stored in `cluster/kubemark/config-default.sh` - you can tweak it however you want, but note that some features may not be implemented yet, as implementation of Hollow components/mocks will probably be lagging behind ‘real’ one. For performance tests interesting variables are -`NUM_MINIONS` and `MASTER_SIZE`. After start-kubemark script is finished you’ll have a ready Kubemark cluster, a kubeconfig file for talking to the Kubemark +`NUM_NODES` and `MASTER_SIZE`. After start-kubemark script is finished you’ll have a ready Kubemark cluster, a kubeconfig file for talking to the Kubemark cluster is stored in `test/kubemark/kubeconfig.loc`. 
Currently we're running HollowNode with limit of 0.05 a CPU core and ~60MB or memory, which taking into account default cluster addons and fluentD running on an 'external' -- cgit v1.2.3 From b0542299ca51e3cbefd0c36b042a392ca407c098 Mon Sep 17 00:00:00 2001 From: Chao Xu Date: Fri, 30 Oct 2015 15:32:44 -0700 Subject: change the "too old resource version" error from InternalError to 410 Gone. --- api-conventions.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/api-conventions.md b/api-conventions.md index cf389231..9a71fe1c 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -547,6 +547,10 @@ The following HTTP status codes may be returned by the API. * * If updating an existing resource: * See `Conflict` from the `status` response section below on how to retrieve more information about the nature of the conflict. * GET and compare the fields in the pre-existing object, merge changes (if still valid according to preconditions), and retry with the updated request (including `ResourceVersion`). +* `410 StatusGone` + * Indicates that the item is no longer available at the server and no forwarding address is known. + * Suggested client recovery behavior + * Do not retry. Fix the request. * `422 StatusUnprocessableEntity` * Indicates that the requested create or update operation cannot be completed due to invalid data provided as part of the request. 
* Suggested client recovery behavior -- cgit v1.2.3 From a608d8c1bd88bee419ca4ab64bb174f670ec90d7 Mon Sep 17 00:00:00 2001 From: Brad Erickson Date: Sun, 8 Nov 2015 23:08:58 -0800 Subject: Minion->Name rename: cluster/vagrant, docs and Vagrantfile --- developer-guides/vagrant.md | 42 +++++++++++++++++++++--------------------- 1 file changed, 21 insertions(+), 21 deletions(-) diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md index 2d628abb..74e29e3a 100644 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -47,7 +47,7 @@ Running kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/deve ### Setup -By default, the Vagrant setup will create a single master VM (called kubernetes-master) and one node (called kubernetes-minion-1). Each VM will take 1 GB, so make sure you have at least 2GB to 4GB of free memory (plus appropriate free disk space). To start your local cluster, open a shell and run: +By default, the Vagrant setup will create a single master VM (called kubernetes-master) and one node (called kubernetes-node-1). Each VM will take 1 GB, so make sure you have at least 2GB to 4GB of free memory (plus appropriate free disk space). 
To start your local cluster, open a shell and run: ```sh cd kubernetes @@ -74,14 +74,14 @@ To access the master or any node: ```sh vagrant ssh master -vagrant ssh minion-1 +vagrant ssh node-1 ``` If you are running more than one nodes, you can access the others by: ```sh -vagrant ssh minion-2 -vagrant ssh minion-3 +vagrant ssh node-2 +vagrant ssh node-3 ``` To view the service status and/or logs on the kubernetes-master: @@ -101,11 +101,11 @@ $ vagrant ssh master To view the services on any of the nodes: ```console -$ vagrant ssh minion-1 -[vagrant@kubernetes-minion-1] $ sudo systemctl status docker -[vagrant@kubernetes-minion-1] $ sudo journalctl -r -u docker -[vagrant@kubernetes-minion-1] $ sudo systemctl status kubelet -[vagrant@kubernetes-minion-1] $ sudo journalctl -r -u kubelet +$ vagrant ssh node-1 +[vagrant@kubernetes-node-1] $ sudo systemctl status docker +[vagrant@kubernetes-node-1] $ sudo journalctl -r -u docker +[vagrant@kubernetes-node-1] $ sudo systemctl status kubelet +[vagrant@kubernetes-node-1] $ sudo journalctl -r -u kubelet ``` ### Interacting with your Kubernetes cluster with Vagrant. @@ -139,9 +139,9 @@ You may need to build the binaries first, you can do this with `make` $ ./cluster/kubectl.sh get nodes NAME LABELS STATUS -kubernetes-minion-0whl kubernetes.io/hostname=kubernetes-minion-0whl Ready -kubernetes-minion-4jdf kubernetes.io/hostname=kubernetes-minion-4jdf Ready -kubernetes-minion-epbe kubernetes.io/hostname=kubernetes-minion-epbe Ready +kubernetes-node-0whl kubernetes.io/hostname=kubernetes-node-0whl Ready +kubernetes-node-4jdf kubernetes.io/hostname=kubernetes-node-4jdf Ready +kubernetes-node-epbe kubernetes.io/hostname=kubernetes-node-epbe Ready ``` ### Interacting with your Kubernetes cluster with the `kube-*` scripts. 
@@ -206,9 +206,9 @@ Your cluster is running, you can list the nodes in your cluster: $ ./cluster/kubectl.sh get nodes NAME LABELS STATUS -kubernetes-minion-0whl kubernetes.io/hostname=kubernetes-minion-0whl Ready -kubernetes-minion-4jdf kubernetes.io/hostname=kubernetes-minion-4jdf Ready -kubernetes-minion-epbe kubernetes.io/hostname=kubernetes-minion-epbe Ready +kubernetes-node-0whl kubernetes.io/hostname=kubernetes-node-0whl Ready +kubernetes-node-4jdf kubernetes.io/hostname=kubernetes-node-4jdf Ready +kubernetes-node-epbe kubernetes.io/hostname=kubernetes-node-epbe Ready ``` Now start running some containers! @@ -245,11 +245,11 @@ my-nginx-kqdjk 1/1 Waiting 0 33s my-nginx-nyj3x 1/1 Waiting 0 33s ``` -You need to wait for the provisioning to complete, you can monitor the minions by doing: +You need to wait for the provisioning to complete, you can monitor the nodes by doing: ```console -$ sudo salt '*minion-1' cmd.run 'docker images' -kubernetes-minion-1: +$ sudo salt '*node-1' cmd.run 'docker images' +kubernetes-node-1: REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE 96864a7d2df3 26 hours ago 204.4 MB kubernetes/pause latest 6c4579af347b 8 weeks ago 239.8 kB @@ -258,8 +258,8 @@ kubernetes-minion-1: Once the docker image for nginx has been downloaded, the container will start and you can list it: ```console -$ sudo salt '*minion-1' cmd.run 'docker ps' -kubernetes-minion-1: +$ sudo salt '*node-1' cmd.run 'docker ps' +kubernetes-node-1: CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES dbe79bf6e25b nginx:latest "nginx" 21 seconds ago Up 19 seconds k8s--mynginx.8c5b8a3a--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--fcfa837f fa0e29c94501 kubernetes/pause:latest "/pause" 8 minutes ago Up 8 minutes 0.0.0.0:8080->80/tcp k8s--net.a90e7ce4--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--baf5b21b @@ -346,7 +346,7 @@ It's very likely you see a build error due to an error 
in your source files! #### I have brought Vagrant up but the nodes won't validate! -Are you sure you built a release first? Did you install `net-tools`? For more clues, login to one of the nodes (`vagrant ssh minion-1`) and inspect the salt minion log (`sudo cat /var/log/salt/minion`). +Are you sure you built a release first? Did you install `net-tools`? For more clues, login to one of the nodes (`vagrant ssh node-1`) and inspect the salt minion log (`sudo cat /var/log/salt/minion`). #### I want to change the number of nodes! -- cgit v1.2.3 From 77f62c05d2577f3eae2c07fa513d7334e8241e98 Mon Sep 17 00:00:00 2001 From: Brad Erickson Date: Thu, 3 Dec 2015 15:42:10 -0800 Subject: Minion->Node rename: docs/ machine names only, except gce/aws --- developer-guides/vagrant.md | 12 ++++++------ flaky-tests.md | 6 +++--- 2 files changed, 9 insertions(+), 9 deletions(-) diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md index 74e29e3a..14ccfe6b 100644 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -139,9 +139,9 @@ You may need to build the binaries first, you can do this with `make` $ ./cluster/kubectl.sh get nodes NAME LABELS STATUS -kubernetes-node-0whl kubernetes.io/hostname=kubernetes-node-0whl Ready -kubernetes-node-4jdf kubernetes.io/hostname=kubernetes-node-4jdf Ready -kubernetes-node-epbe kubernetes.io/hostname=kubernetes-node-epbe Ready +kubernetes-node-0whl kubernetes.io/hostname=kubernetes-node-0whl Ready +kubernetes-node-4jdf kubernetes.io/hostname=kubernetes-node-4jdf Ready +kubernetes-node-epbe kubernetes.io/hostname=kubernetes-node-epbe Ready ``` ### Interacting with your Kubernetes cluster with the `kube-*` scripts. 
@@ -206,9 +206,9 @@ Your cluster is running, you can list the nodes in your cluster: $ ./cluster/kubectl.sh get nodes NAME LABELS STATUS -kubernetes-node-0whl kubernetes.io/hostname=kubernetes-node-0whl Ready -kubernetes-node-4jdf kubernetes.io/hostname=kubernetes-node-4jdf Ready -kubernetes-node-epbe kubernetes.io/hostname=kubernetes-node-epbe Ready +kubernetes-node-0whl kubernetes.io/hostname=kubernetes-node-0whl Ready +kubernetes-node-4jdf kubernetes.io/hostname=kubernetes-node-4jdf Ready +kubernetes-node-epbe kubernetes.io/hostname=kubernetes-node-epbe Ready ``` Now start running some containers! diff --git a/flaky-tests.md b/flaky-tests.md index 27c788aa..d5cc6a45 100644 --- a/flaky-tests.md +++ b/flaky-tests.md @@ -80,9 +80,9 @@ You can use this script to automate checking for failures, assuming your cluster ```sh echo "" > output.txt for i in {1..4}; do - echo "Checking kubernetes-minion-${i}" - echo "kubernetes-minion-${i}:" >> output.txt - gcloud compute ssh "kubernetes-minion-${i}" --command="sudo docker ps -a" >> output.txt + echo "Checking kubernetes-node-${i}" + echo "kubernetes-node-${i}:" >> output.txt + gcloud compute ssh "kubernetes-node-${i}" --command="sudo docker ps -a" >> output.txt done grep "Exited ([^0])" output.txt ``` -- cgit v1.2.3 From ee875e93eb126d20731d0799d99b9ec95bf0f8fd Mon Sep 17 00:00:00 2001 From: Jon Eisen Date: Fri, 4 Dec 2015 13:47:30 -0700 Subject: Add new clojure api bindings library https://github.com/yanatan16/clj-kubernetes-api --- client-libraries.md | 1 + 1 file changed, 1 insertion(+) diff --git a/client-libraries.md b/client-libraries.md index 22a59d06..a6f3e6ff 100644 --- a/client-libraries.md +++ b/client-libraries.md @@ -49,6 +49,7 @@ Documentation for other releases can be found at * [PHP](https://github.com/maclof/kubernetes-client) * [Node.js](https://github.com/tenxcloud/node-kubernetes-client) * [Perl](https://metacpan.org/pod/Net::Kubernetes) + * [Clojure](https://github.com/yanatan16/clj-kubernetes-api) -- 
cgit v1.2.3 From 9b60d8c88083958918bb92b6104b1fe8d4e9b9ec Mon Sep 17 00:00:00 2001 From: Tamer Tas Date: Mon, 7 Dec 2015 06:16:01 +0200 Subject: Rename githash to build_version and version to release_version --- releasing.md | 34 ++++++++++++++++++---------------- 1 file changed, 18 insertions(+), 16 deletions(-) diff --git a/releasing.md b/releasing.md index d4347ce4..757048ad 100644 --- a/releasing.md +++ b/releasing.md @@ -92,22 +92,24 @@ release from HEAD of the branch, (because you have to do some version-rev commits,) so choose the latest build on the release branch. (Remember, that branch should be frozen.) -Once you find some greens, you can find the git hash for a build by looking at -the Full Console Output and searching for `githash=`. You should see a line: +Once you find some greens, you can find the build hash for a build by looking at +the Full Console Output and searching for `build_version=`. You should see a line: ```console -githash=v1.2.0-alpha.2.164+b44c7d79d6c9bb +build_version=v1.2.0-alpha.2.164+b44c7d79d6c9bb ``` Or, if you're cutting from a release branch (i.e. doing an official release), ```console -githash=v1.1.0-beta.567+d79d6c9bbb44c7 +build_version=v1.1.0-beta.567+d79d6c9bbb44c7 ``` +Please note that `build_version` was called `githash` in versions prior to v1.2. + Because Jenkins builds frequently, if you're looking between jobs (e.g. `kubernetes-e2e-gke-ci` and `kubernetes-e2e-gce`), there may be no single -`githash` that's been run on both jobs. In that case, take the a green +`build_version` that's been run on both jobs. In that case, take a green `kubernetes-e2e-gce` build (but please check that it corresponds to a temporally similar build that's green on `kubernetes-e2e-gke-ci`). Lastly, if you're having trouble understanding why the GKE continuous integration clusters are failing
Before proceeding to the next step: ```sh -export GITHASH=v1.2.0-alpha.2.164+b44c7d79d6c9bb +export BUILD_VERSION=v1.2.0-alpha.2.164+b44c7d79d6c9bb ``` -Where `v1.2.0-alpha.2.164+b44c7d79d6c9bb` is the git hash you decided on. This +Where `v1.2.0-alpha.2.164+b44c7d79d6c9bb` is the build hash you decided on. This will become your release point. ### Cutting/branching the release @@ -136,15 +138,15 @@ or `git checkout upstream/master` from an existing repo. Decide what version you're cutting and export it: -- alpha release: `export VER="vX.Y.0-alpha.W"`; -- beta release: `export VER="vX.Y.Z-beta.W"`; -- official release: `export VER="vX.Y.Z"`; -- new release series: `export VER="vX.Y"`. +- alpha release: `export RELEASE_VERSION="vX.Y.0-alpha.W"`; +- beta release: `export RELEASE_VERSION="vX.Y.Z-beta.W"`; +- official release: `export RELEASE_VERSION="vX.Y.Z"`; +- new release series: `export RELEASE_VERSION="vX.Y"`. Then, run ```console -./release/cut-official-release.sh "${VER}" "${GITHASH}" +./release/cut-official-release.sh "${RELEASE_VERSION}" "${BUILD_VERSION}" ``` This will do a dry run of the release. It will give you instructions at the @@ -152,7 +154,7 @@ end for `pushd`ing into the dry-run directory and having a look around. `pushd` into the directory and make sure everythig looks as you expect: ```console -git log "${VER}" # do you see the commit you expect? +git log "${RELEASE_VERSION}" # do you see the commit you expect? make release ./cluster/kubectl.sh version -c ``` @@ -161,7 +163,7 @@ If you're satisfied with the result of the script, go back to `upstream/master` run ```console -./release/cut-official-release.sh "${VER}" "${GITHASH}" --no-dry-run +./release/cut-official-release.sh "${RELEASE_VERSION}" "${BUILD_VERSION}" --no-dry-run ``` and follow the instructions. @@ -185,10 +187,10 @@ notes, (see #17444 for more info). - Only publish a beta release if it's a standalone pre-release. 
(We create beta tags after we do official releases to maintain proper semantic versioning, *we don't publish these beta releases*.) Use - `./hack/cherry_pick_list.sh ${VER}` to get release notes for such a + `./hack/cherry_pick_list.sh ${RELEASE_VERSION}` to get release notes for such a release. - Official release: - - From your clone of upstream/master, run `./hack/cherry_pick_list.sh ${VER}` + - From your clone of upstream/master, run `./hack/cherry_pick_list.sh ${RELEASE_VERSION}` to get the release notes for the patch release you just created. Feel free to prune anything internal, but typically for patch releases we tend to include everything in the release notes. -- cgit v1.2.3 From 7da888eee00d0ca825059b7ceaf05dbdecceaf38 Mon Sep 17 00:00:00 2001 From: Filip Grzadkowski Date: Wed, 25 Nov 2015 14:50:46 +0100 Subject: Update documents for release process --- releasing.md | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/releasing.md b/releasing.md index 757048ad..3cefb725 100644 --- a/releasing.md +++ b/releasing.md @@ -134,7 +134,7 @@ git clone git@github.com:kubernetes/kubernetes.git cd kubernetes ``` -or `git checkout upstream/master` from an existing repo. +or `git fetch upstream && git checkout upstream/master` from an existing repo. Decide what version you're cutting and export it: @@ -210,9 +210,10 @@ release](https://github.com/kubernetes/kubernetes/releases/new): 1. fill in the release title from the draft; 1. re-run the appropriate release notes tool(s) to pick up any changes people have made; -1. find the appropriate `kubernetes.tar.gz` in GCS, download it, double check - the hash (compare to what you had in the release notes draft), and attach it - to the release; and +1. 
find the appropriate `kubernetes.tar.gz` in [GCS bucket](https:// +console.developers.google.com/storage/browser/kubernetes-release/release/), + download it, double check the hash (compare to what you had in the release + notes draft), and attach it to the release; and 1. publish! ## Injecting Version into Binaries -- cgit v1.2.3 From 458e489bbdd5a92cf483836c722cfdfca497c0d5 Mon Sep 17 00:00:00 2001 From: Tim Hockin Date: Wed, 2 Dec 2015 09:54:21 -0800 Subject: Make go version requirements clearer --- development.md | 20 +++++++++++++++++--- 1 file changed, 17 insertions(+), 3 deletions(-) diff --git a/development.md b/development.md index 09abe1e7..3b5443bc 100644 --- a/development.md +++ b/development.md @@ -33,15 +33,29 @@ Documentation for other releases can be found at # Development Guide -# Releases and Official Builds +This document is intended to be the canonical source of truth for things like +supported toolchain versions for building Kubernetes. If you find a +requirement that this doc does not capture, please file a bug. If you find +other docs with references to requirements that are not simply links to this +doc, please file a bug. + +This document is intended to be relative to the branch in which it is found. +It is guaranteed that requirements will change over time for the development +branch, but release branches of Kubernetes should not change. + +## Releases and Official Builds Official releases are built in Docker containers. Details are [here](http://releases.k8s.io/HEAD/build/README.md). You can do simple builds and development with just a local Docker installation. If you want to build Go locally outside of Docker, please continue below. ## Go development environment -Kubernetes is written in [Go](http://golang.org) programming language. If you haven't set up Go development environment, please follow [this instruction](http://golang.org/doc/code.html) to install go tool and set up GOPATH. Ensure your version of Go is at least 1.3.
+Kubernetes is written in the [Go](http://golang.org) programming language. If you haven't set up a Go development environment, please follow [these instructions](http://golang.org/doc/code.html) to install the go tools and set up a GOPATH. + +### Go versions + +Requires Go version 1.4.x or 1.5.x -## Git Setup +## Git setup Below, we outline one of the more common git workflows that core developers use. Other git workflows are also valid. -- cgit v1.2.3 From 343a552e67f238effd78a96be4979762e101a864 Mon Sep 17 00:00:00 2001 From: Justin Santa Barbara Date: Sat, 5 Dec 2015 22:30:46 -0500 Subject: Zone scheduler: Update scheduler docs There's not a huge amount of detail in the docs as to how the scheduler actually works, which is probably a good thing both for readability and because it makes it easier to tweak the zone-spreading approach in the future, but we should include some information that we do spread across zones if zone information is present on the nodes. --- scheduler.md | 2 +- scheduler_algorithm.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/scheduler.md b/scheduler.md index ffc73ca1..2bdb4c16 100755 --- a/scheduler.md +++ b/scheduler.md @@ -47,7 +47,7 @@ will filter out nodes that don't have at least that much resources available (co as the capacity of the node minus the sum of the resource requests of the containers that are already running on the node). Second, it applies a set of "priority functions" that rank the nodes that weren't filtered out by the predicate check. For example, -it tries to spread Pods across nodes while at the same time favoring the least-loaded +it tries to spread Pods across nodes and zones while at the same time favoring the least-loaded nodes (where "load" here is sum of the resource requests of the containers running on the node, divided by the node's capacity). 
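The "load" computation described above can be sketched numerically. This is an illustration of the idea only, not the scheduler's actual code; the node names and figures are invented, and the real `LeastRequestedPriority` scoring lives in the Go sources:

```shell
# Free-fraction score in the spirit of LeastRequestedPriority:
# (capacity - sum of requests) / capacity, scaled to 0-10.
awk 'BEGIN {
  print score("node-a", 4000, 1000)   # 75% of CPU capacity free
  print score("node-b", 4000, 3500)   # 12.5% of CPU capacity free
}
function score(name, capacity, requested) {
  return name ": " int(((capacity - requested) / capacity) * 10)
}'
```

node-a scores 7 and node-b scores 1, so the less-loaded node wins, all else being equal.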
Finally, the node with the highest priority is chosen diff --git a/scheduler_algorithm.md b/scheduler_algorithm.md index c8790af9..3888786c 100755 --- a/scheduler_algorithm.md +++ b/scheduler_algorithm.md @@ -61,7 +61,7 @@ Currently, Kubernetes scheduler provides some practical priority functions, incl - `LeastRequestedPriority`: The node is prioritized based on the fraction of the node that would be free if the new Pod were scheduled onto the node. (In other words, (capacity - sum of requests of all Pods already on the node - request of Pod that is being scheduled) / capacity). CPU and memory are equally weighted. The node with the highest free fraction is the most preferred. Note that this priority function has the effect of spreading Pods across the nodes with respect to resource consumption. - `CalculateNodeLabelPriority`: Prefer nodes that have the specified label. - `BalancedResourceAllocation`: This priority function tries to put the Pod on a node such that the CPU and Memory utilization rate is balanced after the Pod is deployed. -- `CalculateSpreadPriority`: Spread Pods by minimizing the number of Pods belonging to the same service on the same node. +- `CalculateSpreadPriority`: Spread Pods by minimizing the number of Pods belonging to the same service on the same node. If zone information is present on the nodes, the priority will be adjusted so that pods are spread across zones and nodes. - `CalculateAntiAffinityPriority`: Spread Pods by minimizing the number of Pods belonging to the same service on nodes with the same value for a particular label. The details of the above priority functions can be found in [plugin/pkg/scheduler/algorithm/priorities](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/priorities/). Kubernetes uses some, but not all, of these priority functions by default. 
You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). As with predicates, you can combine the above priority functions and assign weight factors (positive number) to them as you want (check [scheduler.md](scheduler.md) for how to customize). -- cgit v1.2.3 From ad6bfda32161984d88dd14e8c3c43a739f4db2d4 Mon Sep 17 00:00:00 2001 From: Paul Morie Date: Mon, 14 Dec 2015 15:03:21 -0500 Subject: Add note about type comments to API changes doc --- api_changes.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/api_changes.md b/api_changes.md index 4bbb5bd4..d2f0aea7 100644 --- a/api_changes.md +++ b/api_changes.md @@ -320,7 +320,8 @@ before starting "all the rest". The struct definitions for each API are in `pkg/api//types.go`. Edit those files to reflect the change you want to make. Note that all types and non-inline fields in versioned APIs must be preceded by descriptive comments - these are used to generate -documentation. +documentation. Comments for types should not contain the type name; API documentation is +generated from these comments and end-users should not be exposed to golang type names. Optional fields should have the `,omitempty` json tag; fields are interpreted as being required otherwise. -- cgit v1.2.3 From 2743354deee6a23c24c668b936c2a5729ae67f8f Mon Sep 17 00:00:00 2001 From: Tim Hockin Date: Sat, 14 Nov 2015 10:22:42 -0800 Subject: api-conventions: Namespace is label, not subdomain --- api-conventions.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/api-conventions.md b/api-conventions.md index cd64435a..a6314f0b 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -152,7 +152,7 @@ These fields are required for proper decoding of the object. They may be populat
They may be populat Every object kind MUST have the following metadata in a nested object field called "metadata": -* namespace: a namespace is a DNS compatible subdomain that objects are subdivided into. The default namespace is 'default'. See [docs/user-guide/namespaces.md](../user-guide/namespaces.md) for more. +* namespace: a namespace is a DNS compatible label that objects are subdivided into. The default namespace is 'default'. See [docs/user-guide/namespaces.md](../user-guide/namespaces.md) for more. * name: a string that uniquely identifies this object within the current namespace (see [docs/user-guide/identifiers.md](../user-guide/identifiers.md)). This value is used in the path when retrieving an individual object. * uid: a unique in time and space value (typically an RFC 4122 generated identifier, see [docs/user-guide/identifiers.md](../user-guide/identifiers.md)) used to distinguish between objects with the same name that have been deleted and recreated -- cgit v1.2.3 From 8ecb41df7e8e98a90413409a13054ead8c04eb20 Mon Sep 17 00:00:00 2001 From: Isaac Hollander McCreery Date: Fri, 11 Dec 2015 14:03:41 -0800 Subject: Mark a release as stable when we announce it, and stop using cherry_pick_list.sh --- releasing.md | 52 ++++++++++++++++++++++++++-------------------------- 1 file changed, 26 insertions(+), 26 deletions(-) diff --git a/releasing.md b/releasing.md index 3cefb725..8ab678ef 100644 --- a/releasing.md +++ b/releasing.md @@ -170,39 +170,35 @@ and follow the instructions. ### Publishing binaries and release notes +Only publish a beta release if it's a standalone pre-release (*not* +vX.Y.Z-beta.0). We create beta tags after we do official releases to +maintain proper semantic versioning, but we don't publish these beta releases. + The script you ran above will prompt you to take any remaining steps to push tars, and will also give you a template for the release notes. 
Compose an -email to the team with the template, and use `build/make-release-notes.sh` -and/or `release-notes/release-notes.go` in -[kubernetes/contrib](https://github.com/kubernetes/contrib) to make the release -notes, (see #17444 for more info). - -- Alpha release: - - Figure out what the PR numbers for this release and last release are, and - get an api-token from GitHub (https://github.com/settings/tokens). From a - clone of kubernetes/contrib at upstream/master, - go run release-notes/release-notes.go --last-release-pr=<number> --current-release-pr=<number> --api-token=<token> - Feel free to prune. -- Beta release: - - Only publish a beta release if it's a standalone pre-release. (We create - beta tags after we do official releases to maintain proper semantic - versioning, *we don't publish these beta releases*.) Use - `./hack/cherry_pick_list.sh ${RELEASE_VERSION}` to get release notes for such a - release. -- Official release: - - From your clone of upstream/master, run `./hack/cherry_pick_list.sh ${RELEASE_VERSION}` - to get the release notes for the patch release you just created. Feel free - to prune anything internal, but typically for patch releases we tend to - include everything in the release notes. - - If this is a first official release (vX.Y.0), look through the release - notes for all of the alpha releases since the last cycle, and include - anything important in release notes. +email to the team with the template. Figure out what the PR numbers for this +release and last release are, and get an api-token from GitHub +(https://github.com/settings/tokens). From a clone of +[kubernetes/contrib](https://github.com/kubernetes/contrib), + +``` +go run release-notes/release-notes.go --last-release-pr=<number> --current-release-pr=<number> --api-token=<token> --base=<branch> +``` + +where `<branch>` is `master` for alpha releases and `release-X.Y` for beta and official releases.
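The version shapes used above (vX.Y.0-alpha.W, vX.Y.Z-beta.W, vX.Y.Z) can be told apart mechanically. A sketch for illustration only, not part of the release tooling, with made-up version strings:

```shell
# Classify a release version string by the shapes described above.
classify() {
  case "$1" in
    v[0-9]*-alpha.*)        echo alpha ;;
    v[0-9]*-beta.*)         echo beta ;;
    v[0-9]*.[0-9]*.[0-9]*)  echo official ;;
    *)                      echo unknown ;;
  esac
}

classify "v1.2.0-alpha.2"   # alpha
classify "v1.1.1-beta.1"    # beta
classify "v1.1.2"           # official
```

Ordering matters: the pre-release patterns are checked first, since an alpha or beta string would also match the looser official pattern.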
+ +**If this is a first official release (vX.Y.0)**, look through the release +notes for all of the alpha releases since the last cycle, and include anything +important in release notes. + +Feel free to edit the notes, (e.g. cherry picks should generally just have the +same title as the original PR). Send the email out, letting people know these are the draft release notes. If they want to change anything, they should update the appropriate PRs with the `release-note` label. -When we're ready to announce the release, [create a GitHub +When you're ready to announce the release, [create a GitHub release](https://github.com/kubernetes/kubernetes/releases/new): 1. pick the appropriate tag; @@ -216,6 +212,10 @@ console.developers.google.com/storage/browser/kubernetes-release/release/), notes draft), and attach it to the release; and 1. publish! +Finally, from a clone of upstream/master, *make sure* you still have +`RELEASE_VERSION` set correctly, and run `./build/mark-stable-release.sh +${RELEASE_VERSION}`. + ## Injecting Version into Binaries *Please note that this information may be out of date. The scripts are the -- cgit v1.2.3 From 12e5ddcbac266b547c34e85e9a09f6e0acf30580 Mon Sep 17 00:00:00 2001 From: Amy Unruh Date: Thu, 3 Dec 2015 15:53:33 -0800 Subject: config best practices doc edits --- coding-conventions.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/coding-conventions.md b/coding-conventions.md index df9f63e7..d51278be 100644 --- a/coding-conventions.md +++ b/coding-conventions.md @@ -72,7 +72,8 @@ Directory and file conventions - Package directories should generally avoid using separators as much as possible (when packages are multiple words, they usually should be in nested subdirectories). 
- Document directories and filenames should use dashes rather than underscores - Contrived examples that illustrate system features belong in /docs/user-guide or /docs/admin, depending on whether it is a feature primarily intended for users that deploy applications or cluster administrators, respectively. Actual application examples belong in /examples. - - Examples should also illustrate [best practices for using the system](../user-guide/config-best-practices.md) + - Examples should also illustrate + [best practices for configuration and using the system](../user-guide/config-best-practices.md) - Third-party code - Third-party Go code is managed using Godeps - Other third-party code belongs in /third_party -- cgit v1.2.3 From a2ccb32f3e46b2d79e28f12a7f0feb5d75a7a7c4 Mon Sep 17 00:00:00 2001 From: Quinton Hoole Date: Wed, 16 Dec 2015 09:47:12 -0800 Subject: Addressed thockin's comments. --- issue-priorities.md | 6 ++++++ 1 file changed, 6 insertions(+) create mode 100644 issue-priorities.md diff --git a/issue-priorities.md b/issue-priorities.md new file mode 100644 index 00000000..8b6e69f5 --- /dev/null +++ b/issue-priorities.md @@ -0,0 +1,6 @@ +These are the meanings of the labels priority/P0 ... priority/P3 that we apply to issues in order to try to prioritize them relative to each other. We try to apply these priority labels consistently across the entire project, but if you notice an issue that you believe to be misprioritized, please do let us know and we will evaluate your counter-proposal. + +- **priority/P0**: Must be actively worked on as someone's top priority right now. Stuff is burning. If it's not being actively worked on, someone is expected to drop what they're doing immediately to work on it. TL's of teams are responsible for making sure that all P0's in their area are being actively worked on. Examples include user-visible bugs in core features, broken builds or tests and critical security issues. 
+- **priority/P1**: Must be staffed and worked on either currently, or very soon, ideally in time for the next release. +- **priority/P2**: There appears to be general agreement that this would be good to have, but we don't have anyone available to work on it right now or in the immediate future. Community contributions would be most welcome in the mean time (although it might take a while to get them reviewed if reviewers are fully occupied with higher priority issues, for example immediately before a release). +- **priority/P3**: Probably useful, but not yet enough support to actually get it done. These are mostly place-holders for potentially good ideas, so that they don't get completely forgotten, and can be referenced/deduped every time they come up. -- cgit v1.2.3 From 081c9100c770c34894536a0321eb6126771ac06e Mon Sep 17 00:00:00 2001 From: Quinton Hoole Date: Wed, 16 Dec 2015 10:39:02 -0800 Subject: Moved to existing documentation about issue priorities. --- issue-priorities.md | 6 ------ issues.md | 20 +++++++++----------- 2 files changed, 9 insertions(+), 17 deletions(-) delete mode 100644 issue-priorities.md diff --git a/issue-priorities.md b/issue-priorities.md deleted file mode 100644 index 8b6e69f5..00000000 --- a/issue-priorities.md +++ /dev/null @@ -1,6 +0,0 @@ -These are the meanings of the labels priority/P0 ... priority/P3 that we apply to issues in order to try to prioritize them relative to each other. We try to apply these priority labels consistently across the entire project, but if you notice an issue that you believe to be misprioritized, please do let us know and we will evaluate your counter-proposal. - -- **priority/P0**: Must be actively worked on as someone's top priority right now. Stuff is burning. If it's not being actively worked on, someone is expected to drop what they're doing immediately to work on it. TL's of teams are responsible for making sure that all P0's in their area are being actively worked on. 
Examples include user-visible bugs in core features, broken builds or tests and critical security issues. -- **priority/P1**: Must be staffed and worked on either currently, or very soon, ideally in time for the next release. -- **priority/P2**: There appears to be general agreement that this would be good to have, but we don't have anyone available to work on it right now or in the immediate future. Community contributions would be most welcome in the mean time (although it might take a while to get them reviewed if reviewers are fully occupied with higher priority issues, for example immediately before a release). -- **priority/P3**: Probably useful, but not yet enough support to actually get it done. These are mostly place-holders for potentially good ideas, so that they don't get completely forgotten, and can be referenced/deduped every time they come up. diff --git a/issues.md b/issues.md index f2ce6949..cbad9517 100644 --- a/issues.md +++ b/issues.md @@ -33,23 +33,21 @@ Documentation for other releases can be found at GitHub Issues for the Kubernetes Project ======================================== -A list quick overview of how we will review and prioritize incoming issues at https://github.com/kubernetes/kubernetes/issues +A quick overview of how we will review and prioritize incoming issues at https://github.com/kubernetes/kubernetes/issues Priorities ---------- -We will use GitHub issue labels for prioritization. The absence of a priority label means the bug has not been reviewed and prioritized yet. +We use GitHub issue labels for prioritization. The absence of a +priority label means the bug has not been reviewed and prioritized +yet. -Definitions ------------ -* P0 - something broken for users, build broken, or critical security issue. Someone must drop everything and work on it. 
-* P1 - must fix for earliest possible binary release (every two weeks) -* P2 - should be fixed in next major release version -* P3 - default priority for lower importance bugs that we still want to track and plan to fix at some point -* design - priority/design is for issues that are used to track design discussions -* support - priority/support is used for issues tracking user support requests -* untriaged - anything without a priority/X label will be considered untriaged +We try to apply these priority labels consistently across the entire project, but if you notice an issue that you believe to be misprioritized, please do let us know and we will evaluate your counter-proposal. +- **priority/P0**: Must be actively worked on as someone's top priority right now. Stuff is burning. If it's not being actively worked on, someone is expected to drop what they're doing immediately to work on it. TL's of teams are responsible for making sure that all P0's in their area are being actively worked on. Examples include user-visible bugs in core features, broken builds or tests and critical security issues. +- **priority/P1**: Must be staffed and worked on either currently, or very soon, ideally in time for the next release. +- **priority/P2**: There appears to be general agreement that this would be good to have, but we don't have anyone available to work on it right now or in the immediate future. Community contributions would be most welcome in the mean time (although it might take a while to get them reviewed if reviewers are fully occupied with higher priority issues, for example immediately before a release). +- **priority/P3**: Possibly useful, but not yet enough support to actually get it done. These are mostly place-holders for potentially good ideas, so that they don't get completely forgotten, and can be referenced/deduped every time they come up.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/issues.md?pixel)]() -- cgit v1.2.3 From 9def9b378e36820a87d284e56039041bc642884a Mon Sep 17 00:00:00 2001 From: Chao Xu Date: Thu, 17 Dec 2015 10:57:55 -0800 Subject: add the required changes in master to devel/releasing.md --- releasing.md | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/releasing.md b/releasing.md index 8ab678ef..d47202f2 100644 --- a/releasing.md +++ b/releasing.md @@ -46,6 +46,7 @@ release breaks down into four pieces: 1. cutting/branching the release; 1. building and pushing the binaries; and 1. publishing binaries and release notes. +1. updating the master branch. You should progress in this strict order. @@ -216,6 +217,15 @@ Finally, from a clone of upstream/master, *make sure* you still have `RELEASE_VERSION` set correctly, and run `./build/mark-stable-release.sh ${RELEASE_VERSION}`. +### Updating the master branch + +If you are cutting a new release series, please also update the master branch: +change the `latestReleaseBranch` in `cmd/mungedocs/mungedocs.go` to the new +release branch (`release-X.Y`), run `hack/update-generated-docs.sh`. This will +let the unversioned warning in docs point to the latest release series. Please +send the changes as a PR titled "Update the latestReleaseBranch to release-X.Y +in the munger". + ## Injecting Version into Binaries *Please note that this information may be out of date. The scripts are the -- cgit v1.2.3 From 88882f06f45b07117ed96f6136b25c93f75aad4c Mon Sep 17 00:00:00 2001 From: Tim Hockin Date: Sat, 14 Nov 2015 12:26:04 -0800 Subject: Clean up and document validation strings Also add a detail string for Required and Forbidden. Fix tests. 
--- api-conventions.md | 30 ++++++++++++++++++++++++++++++ 1 file changed, 30 insertions(+) diff --git a/api-conventions.md b/api-conventions.md index a6314f0b..1fe165a6 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -76,6 +76,7 @@ using resources with kubectl can be found in [Working with resources](../user-gu - [Naming conventions](#naming-conventions) - [Label, selector, and annotation conventions](#label-selector-and-annotation-conventions) - [WebSockets and SPDY](#websockets-and-spdy) + - [Validation](#validation) @@ -787,6 +788,35 @@ There are two primary protocols in use today: Clients should use the SPDY protocols if their clients have native support, or WebSockets as a fallback. Note that WebSockets is susceptible to Head-of-Line blocking and so clients must read and process each message sequentially. In the future, an HTTP/2 implementation will be exposed that deprecates SPDY. +## Validation + +API objects are validated upon receipt by the apiserver. Validation errors are +flagged and returned to the caller in a `Failure` status with `reason` set to +`Invalid`. In order to facilitate consistent error messages, we ask that +validation logic adhere to the following guidelines whenever possible (though +exceptional cases will exist). + +* Be as precise as possible. +* Telling users what they CAN do is more useful than telling them what they + CANNOT do. +* When asserting a requirement in the positive, use "must". Examples: "must be + greater than 0", "must match regex '[a-z]+'". Words like "should" imply that + the assertion is optional, and must be avoided. +* When asserting a formatting requirement in the negative, use "must not". + Example: "must not contain '..'". Words like "should not" imply that the + assertion is optional, and must be avoided. +* When asserting a behavioral requirement in the negative, use "may not". + Examples: "may not be specified when otherField is empty", "only `name` may be + specified".
+* When referencing a literal string value, indicate the literal in + single-quotes. Example: "must not contain '..'". +* When referencing another field name, indicate the name in back-quotes. + Example: "must be greater than `request`". +* When specifying inequalities, use words rather than symbols. Examples: "must + be less than 256", "must be greater than or equal to 0". Do not use words + like "larger than", "bigger than", "more than", "higher than", etc. +* When specifying numeric ranges, use inclusive ranges when possible. + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/api-conventions.md?pixel)]() -- cgit v1.2.3 From 50e6624e2bafaf29d658a779a9b2940400cecab3 Mon Sep 17 00:00:00 2001 From: nikhiljindal Date: Mon, 30 Nov 2015 13:17:08 -0800 Subject: Adding a doc to explain the process of updating release docs --- update-release-docs.md | 148 +++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 148 insertions(+) create mode 100644 update-release-docs.md diff --git a/update-release-docs.md b/update-release-docs.md new file mode 100644 index 00000000..ea8a9b48 --- /dev/null +++ b/update-release-docs.md @@ -0,0 +1,148 @@ + + + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

PLEASE NOTE: This document applies to the HEAD of the source tree

+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.1/docs/devel/update-release-docs.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + + + + +# Table of Contents + + + +- [Table of Contents](#table-of-contents) +- [Overview](#overview) +- [Adding a new docs collection for a release](#adding-a-new-docs-collection-for-a-release) +- [Updating docs in an existing collection](#updating-docs-in-an-existing-collection) + - [Updating docs on HEAD](#updating-docs-on-head) + - [Updating docs in release branch](#updating-docs-in-release-branch) + - [Updating docs in gh-pages branch](#updating-docs-in-gh-pages-branch) + + + +# Overview + +This document explains how to update Kubernetes release docs hosted at http://kubernetes.io/docs/. + +http://kubernetes.io is served using the [gh-pages +branch](https://github.com/kubernetes/kubernetes/tree/gh-pages) of the Kubernetes repo on GitHub. +Updating docs in that branch will update http://kubernetes.io. + +There are two scenarios that require updating docs: +* Adding a new docs collection for a release. +* Updating docs in an existing collection. + +# Adding a new docs collection for a release + +Whenever a new release series (`release-X.Y`) is cut from `master`, we push the +corresponding set of docs to `http://kubernetes.io/vX.Y/docs`. The steps are as follows: + +* Create a `_vX.Y` folder in the `gh-pages` branch. +* Add `vX.Y` as a valid collection in [_config.yml](https://github.com/kubernetes/kubernetes/blob/gh-pages/_config.yml). +* Create a new `_includes/nav_vX.Y.html` file with the navigation menu. This can + be a copy of `_includes/nav_vX.Y-1.html` with links to new docs added and links + to deleted docs removed.
Update [_layouts/docwithnav.html](https://github.com/kubernetes/kubernetes/blob/gh-pages/_layouts/docwithnav.html) + to include this new navigation HTML file. Example PR: [#16143](https://github.com/kubernetes/kubernetes/pull/16143). +* [Pull docs from release branch](#updating-docs-in-gh-pages-branch) in `_vX.Y` + folder. + +Once these changes have been submitted, you should be able to reach the docs at +`http://kubernetes.io/vX.Y/docs/` where you can test them. + +To make `X.Y` the default version of docs: + +* Update [_config.yml](https://github.com/kubernetes/kubernetes/blob/gh-pages/_config.yml) + and [_docs/index.md](https://github.com/kubernetes/kubernetes/blob/gh-pages/_docs/index.md) + to point to the new version. Example PR: [#16416](https://github.com/kubernetes/kubernetes/pull/16416). +* Update [_includes/docversionselector.html](https://github.com/kubernetes/kubernetes/blob/gh-pages/_includes/docversionselector.html) + to make `vX.Y` the default version. +* Add "Disallow: /vX.Y-1/" to the existing [robots.txt](https://github.com/kubernetes/kubernetes/blob/gh-pages/robots.txt) + file to hide old content from web crawlers and focus SEO on new docs. Example PR: + [#16388](https://github.com/kubernetes/kubernetes/pull/16388). +* Regenerate [sitemap.xml](https://github.com/kubernetes/kubernetes/blob/gh-pages/sitemap.xml) + so that it now contains `vX.Y` links. The sitemap can be regenerated using + https://www.xml-sitemaps.com. Example PR: [#17126](https://github.com/kubernetes/kubernetes/pull/17126). +* Resubmit the updated sitemap file to [Google + webmasters](https://www.google.com/webmasters/tools/sitemap-list?siteUrl=http://kubernetes.io/) for Google to index the new links.
+* Update [_layouts/docwithnav.html](https://github.com/kubernetes/kubernetes/blob/gh-pages/_layouts/docwithnav.html) + to include [_includes/archivedocnotice.html](https://github.com/kubernetes/kubernetes/blob/gh-pages/_includes/archivedocnotice.html) + for `vX.Y-1` docs that need to be archived. +* Ping @thockin to update docs.k8s.io to redirect to `http://kubernetes.io/vX.Y/`. [#18788](https://github.com/kubernetes/kubernetes/issues/18788). + +http://kubernetes.io/docs/ should now be redirecting to `http://kubernetes.io/vX.Y/`. + +# Updating docs in an existing collection + +The high-level steps to update docs in an existing collection are: + +1. Update docs on `HEAD` (the master branch). +2. Cherry-pick the change into the relevant release branch. +3. Update docs on `gh-pages`. + +## Updating docs on HEAD + +[Development guide](development.md) provides general instructions on how to contribute to the Kubernetes GitHub repo. +[Docs how-to guide](how-to-doc.md) provides conventions to follow while writing docs. + +## Updating docs in release branch + +Once docs have been updated in the master branch, the changes need to be +cherrypicked into the latest release branch. +[Cherrypick guide](cherry-picks.md) has more details on how to cherrypick your change. + +## Updating docs in gh-pages branch + +Once the release branch has all the relevant changes, we can pull in the latest docs +into the `gh-pages` branch. +Run the following command in the `gh-pages` branch to update docs for release `X.Y`: + +``` +_tools/import_docs vX.Y _vX.Y release-X.Y release-X.Y +``` + +For example, to pull in docs for release 1.1, run: + +``` +_tools/import_docs v1.1 _v1.1 release-1.1 release-1.1 +``` + +Apart from copying over the docs, `_tools/release_docs` also does some post-processing +(like updating the links to docs to point to http://kubernetes.io/docs/ instead of to the GitHub repo).
+Note that we always pull in the docs from the release branch and not from master (pulling docs +from master requires some extra processing like versionizing the links and removing unversioned warnings). + +We delete all existing docs before pulling in new ones to ensure that deleted +docs go away. + +If the change added or deleted a doc, then update the corresponding `_includes/nav_vX.Y.html` file as well. + + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/update-release-docs.md?pixel)]() + -- cgit v1.2.3 From 3d4cf50dd255c732440474b1ddf70e96a65c8f77 Mon Sep 17 00:00:00 2001 From: nikhiljindal Date: Thu, 17 Dec 2015 15:04:42 -0800 Subject: Add instructions to run versionize-docs in cherrypick doc --- cherry-picks.md | 17 +++++++++++++++++ 1 file changed, 17 insertions(+) diff --git a/cherry-picks.md b/cherry-picks.md index f407c949..6fae778f 100644 --- a/cherry-picks.md +++ b/cherry-picks.md @@ -47,6 +47,23 @@ hack/cherry_pick_pull.sh upstream/release-3.14 98765 This will walk you through the steps to propose an automated cherry pick of pull #98765 for remote branch `upstream/release-3.14`. +### Cherrypicking a doc change + +If you are cherrypicking a change that adds a doc, then you also need to run +`build/versionize-docs.sh` in the release branch to versionize that doc.
+Ideally, just running `hack/cherry_pick_pull.sh` should be enough, but we are not there +yet: [#18861](https://github.com/kubernetes/kubernetes/issues/18861) + +To cherrypick PR 123456 to release-1.1, run the following commands after running `hack/cherry_pick_pull.sh` and before merging the PR: + +``` +$ git checkout -b automated-cherry-pick-of-#123456-upstream-release-1.1 + origin/automated-cherry-pick-of-#123456-upstream-release-1.1 +$ ./build/versionize-docs.sh release-1.1 +$ git commit -a -m "Running versionize docs" +$ git push origin automated-cherry-pick-of-#123456-upstream-release-1.1 +``` + ## Cherry Pick Review Cherry pick pull requests are reviewed differently than normal pull requests. In -- cgit v1.2.3 From ecc0cc2d5b47258f834a82fda4219767c1b0e3f8 Mon Sep 17 00:00:00 2001 From: Clayton Coleman Date: Sun, 20 Dec 2015 14:36:34 -0500 Subject: Document that int32 and int64 must be used in external types --- api-conventions.md | 1 + 1 file changed, 1 insertion(+) diff --git a/api-conventions.md b/api-conventions.md index 1fe165a6..ab049694 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -256,6 +256,7 @@ This rule maintains the invariant that all JSON/YAML keys are fields in API obje * Do not use unsigned integers, due to inconsistent support across languages and libraries. Just validate that the integer is non-negative if that's the case. * Do not use enums. Use aliases for string instead (e.g., `NodeConditionType`). * Look at similar fields in the API (e.g., ports, durations) and follow the conventions of existing fields. +* All public integer fields MUST use the Go `(u)int32` or Go `(u)int64` types, not `(u)int` (which is ambiguous depending on target platform). Internal types may use `(u)int`. 
#### Constants -- cgit v1.2.3 From f43cec8f19af7a9a2701d507bc152c44a7eb1528 Mon Sep 17 00:00:00 2001 From: Clayton Coleman Date: Sun, 20 Dec 2015 14:38:34 -0500 Subject: Document lowercase filenames --- coding-conventions.md | 1 + 1 file changed, 1 insertion(+) diff --git a/coding-conventions.md b/coding-conventions.md index d51278be..e1708633 100644 --- a/coding-conventions.md +++ b/coding-conventions.md @@ -68,6 +68,7 @@ Directory and file conventions - Avoid package sprawl. Find an appropriate subdirectory for new packages. (See [#4851](http://issues.k8s.io/4851) for discussion.) - Libraries with no more appropriate home belong in new package subdirectories of pkg/util - Avoid general utility packages. Packages called "util" are suspect. Instead, derive a name that describes your desired function. For example, the utility functions dealing with waiting for operations are in the "wait" package and include functionality like Poll. So the full name is wait.Poll + - All filenames should be lowercase - Go source files and directories use underscores, not dashes - Package directories should generally avoid using separators as much as possible (when packages are multiple words, they usually should be in nested subdirectories). 
- Document directories and filenames should use dashes rather than underscores -- cgit v1.2.3 From 83db13cc2e582365a830b196a582fa9ff4d5a534 Mon Sep 17 00:00:00 2001 From: Chao Xu Date: Mon, 14 Dec 2015 10:37:38 -0800 Subject: run hack/update-generated-docs.sh --- README.md | 1 + adding-an-APIGroup.md | 4 ---- api-conventions.md | 1 + api_changes.md | 1 + automation.md | 1 + cherry-picks.md | 1 + cli-roadmap.md | 1 + client-libraries.md | 1 + coding-conventions.md | 1 + collab.md | 1 + developer-guides/vagrant.md | 1 + development.md | 1 + e2e-tests.md | 1 + faster_reviews.md | 1 + flaky-tests.md | 1 + getting-builds.md | 1 + instrumentation.md | 1 + issues.md | 1 + kubectl-conventions.md | 1 + kubemark-guide.md | 4 ---- logging.md | 1 + making-release-notes.md | 1 + owners.md | 4 ---- profiling.md | 1 + pull-requests.md | 1 + releasing.md | 1 + scheduler.md | 1 + scheduler_algorithm.md | 1 + update-release-docs.md | 4 ---- writing-a-getting-started-guide.md | 1 + 30 files changed, 26 insertions(+), 16 deletions(-) diff --git a/README.md b/README.md index 87ede398..ed586cd0 100644 --- a/README.md +++ b/README.md @@ -18,6 +18,7 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + The latest release of this document can be found [here](http://releases.k8s.io/release-1.1/docs/devel/README.md). diff --git a/adding-an-APIGroup.md b/adding-an-APIGroup.md index afef1456..8f67a0ab 100644 --- a/adding-an-APIGroup.md +++ b/adding-an-APIGroup.md @@ -18,10 +18,6 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.1/docs/devel/adding-an-APIGroup.md). - Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). 
diff --git a/api-conventions.md b/api-conventions.md index 1fe165a6..17cda1eb 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -18,6 +18,7 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + The latest release of this document can be found [here](http://releases.k8s.io/release-1.1/docs/devel/api-conventions.md). diff --git a/api_changes.md b/api_changes.md index d2f0aea7..f5ffbd46 100644 --- a/api_changes.md +++ b/api_changes.md @@ -18,6 +18,7 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + The latest release of this document can be found [here](http://releases.k8s.io/release-1.1/docs/devel/api_changes.md). diff --git a/automation.md b/automation.md index c21f4ed6..d7cdaef1 100644 --- a/automation.md +++ b/automation.md @@ -18,6 +18,7 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + The latest release of this document can be found [here](http://releases.k8s.io/release-1.1/docs/devel/automation.md). diff --git a/cherry-picks.md b/cherry-picks.md index f407c949..711f1233 100644 --- a/cherry-picks.md +++ b/cherry-picks.md @@ -18,6 +18,7 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + The latest release of this document can be found [here](http://releases.k8s.io/release-1.1/docs/devel/cherry-picks.md). diff --git a/cli-roadmap.md b/cli-roadmap.md index de2f4a43..b2ea1894 100644 --- a/cli-roadmap.md +++ b/cli-roadmap.md @@ -18,6 +18,7 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + The latest release of this document can be found [here](http://releases.k8s.io/release-1.1/docs/devel/cli-roadmap.md). 
diff --git a/client-libraries.md b/client-libraries.md index a6f3e6ff..fb7cdf6b 100644 --- a/client-libraries.md +++ b/client-libraries.md @@ -18,6 +18,7 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + The latest release of this document can be found [here](http://releases.k8s.io/release-1.1/docs/devel/client-libraries.md). diff --git a/coding-conventions.md b/coding-conventions.md index e1708633..8b264395 100644 --- a/coding-conventions.md +++ b/coding-conventions.md @@ -18,6 +18,7 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + The latest release of this document can be found [here](http://releases.k8s.io/release-1.1/docs/devel/coding-conventions.md). diff --git a/collab.md b/collab.md index de2ce10c..28de1035 100644 --- a/collab.md +++ b/collab.md @@ -18,6 +18,7 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + The latest release of this document can be found [here](http://releases.k8s.io/release-1.1/docs/devel/collab.md). diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md index 14ccfe6b..ebb12ab1 100644 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -18,6 +18,7 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + The latest release of this document can be found [here](http://releases.k8s.io/release-1.1/docs/devel/developer-guides/vagrant.md). diff --git a/development.md b/development.md index 3b5443bc..27ce1b8a 100644 --- a/development.md +++ b/development.md @@ -18,6 +18,7 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + The latest release of this document can be found [here](http://releases.k8s.io/release-1.1/docs/devel/development.md). 
diff --git a/e2e-tests.md b/e2e-tests.md index d1f909dc..902ba1c1 100644 --- a/e2e-tests.md +++ b/e2e-tests.md @@ -18,6 +18,7 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + The latest release of this document can be found [here](http://releases.k8s.io/release-1.1/docs/devel/e2e-tests.md). diff --git a/faster_reviews.md b/faster_reviews.md index f0cb159c..18a01fe9 100644 --- a/faster_reviews.md +++ b/faster_reviews.md @@ -18,6 +18,7 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + The latest release of this document can be found [here](http://releases.k8s.io/release-1.1/docs/devel/faster_reviews.md). diff --git a/flaky-tests.md b/flaky-tests.md index d5cc6a45..51f8bcac 100644 --- a/flaky-tests.md +++ b/flaky-tests.md @@ -18,6 +18,7 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + The latest release of this document can be found [here](http://releases.k8s.io/release-1.1/docs/devel/flaky-tests.md). diff --git a/getting-builds.md b/getting-builds.md index 375a1fac..0caacb34 100644 --- a/getting-builds.md +++ b/getting-builds.md @@ -18,6 +18,7 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + The latest release of this document can be found [here](http://releases.k8s.io/release-1.1/docs/devel/getting-builds.md). diff --git a/instrumentation.md b/instrumentation.md index 49f1f077..bfd74026 100644 --- a/instrumentation.md +++ b/instrumentation.md @@ -18,6 +18,7 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + The latest release of this document can be found [here](http://releases.k8s.io/release-1.1/docs/devel/instrumentation.md). 
diff --git a/issues.md b/issues.md index cbad9517..483747a1 100644 --- a/issues.md +++ b/issues.md @@ -18,6 +18,7 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + The latest release of this document can be found [here](http://releases.k8s.io/release-1.1/docs/devel/issues.md). diff --git a/kubectl-conventions.md b/kubectl-conventions.md index 3775c0b3..a3a7b6f6 100644 --- a/kubectl-conventions.md +++ b/kubectl-conventions.md @@ -18,6 +18,7 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + The latest release of this document can be found [here](http://releases.k8s.io/release-1.1/docs/devel/kubectl-conventions.md). diff --git a/kubemark-guide.md b/kubemark-guide.md index df0ecb96..c2addc8f 100644 --- a/kubemark-guide.md +++ b/kubemark-guide.md @@ -18,10 +18,6 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.1/docs/devel/kubemark-guide.md). - Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/logging.md b/logging.md index 3dc22ca5..8dca0a9f 100644 --- a/logging.md +++ b/logging.md @@ -18,6 +18,7 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + The latest release of this document can be found [here](http://releases.k8s.io/release-1.1/docs/devel/logging.md). diff --git a/making-release-notes.md b/making-release-notes.md index 7a2d73c0..48c7d72f 100644 --- a/making-release-notes.md +++ b/making-release-notes.md @@ -18,6 +18,7 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + The latest release of this document can be found [here](http://releases.k8s.io/release-1.1/docs/devel/making-release-notes.md). 
diff --git a/owners.md b/owners.md index 22bb2fef..3b5a1aca 100644 --- a/owners.md +++ b/owners.md @@ -18,10 +18,6 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.1/docs/devel/owners.md). - Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/profiling.md b/profiling.md index f05b9d74..18c87f41 100644 --- a/profiling.md +++ b/profiling.md @@ -18,6 +18,7 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + The latest release of this document can be found [here](http://releases.k8s.io/release-1.1/docs/devel/profiling.md). diff --git a/pull-requests.md b/pull-requests.md index b97da36e..eaffce23 100644 --- a/pull-requests.md +++ b/pull-requests.md @@ -18,6 +18,7 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + The latest release of this document can be found [here](http://releases.k8s.io/release-1.1/docs/devel/pull-requests.md). diff --git a/releasing.md b/releasing.md index d47202f2..d43a20cd 100644 --- a/releasing.md +++ b/releasing.md @@ -18,6 +18,7 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + The latest release of this document can be found [here](http://releases.k8s.io/release-1.1/docs/devel/releasing.md). diff --git a/scheduler.md b/scheduler.md index 2bdb4c16..5051bfed 100755 --- a/scheduler.md +++ b/scheduler.md @@ -18,6 +18,7 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + The latest release of this document can be found [here](http://releases.k8s.io/release-1.1/docs/devel/scheduler.md). 
diff --git a/scheduler_algorithm.md b/scheduler_algorithm.md index 3888786c..06c482fd 100755 --- a/scheduler_algorithm.md +++ b/scheduler_algorithm.md @@ -18,6 +18,7 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + The latest release of this document can be found [here](http://releases.k8s.io/release-1.1/docs/devel/scheduler_algorithm.md). diff --git a/update-release-docs.md b/update-release-docs.md index ea8a9b48..e94c5442 100644 --- a/update-release-docs.md +++ b/update-release-docs.md @@ -18,10 +18,6 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.1/docs/devel/update-release-docs.md). - Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/writing-a-getting-started-guide.md b/writing-a-getting-started-guide.md index a82691a8..f6b2a4b1 100644 --- a/writing-a-getting-started-guide.md +++ b/writing-a-getting-started-guide.md @@ -18,6 +18,7 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + The latest release of this document can be found [here](http://releases.k8s.io/release-1.1/docs/devel/writing-a-getting-started-guide.md). -- cgit v1.2.3 From b3849ceb4436cc722929bd742e6614678835a3ce Mon Sep 17 00:00:00 2001 From: Ed Costello Date: Thu, 29 Oct 2015 14:36:29 -0400 Subject: Copy edits for typos --- api-conventions.md | 2 +- api_changes.md | 4 ++-- automation.md | 2 +- 3 files changed, 4 insertions(+), 4 deletions(-) diff --git a/api-conventions.md b/api-conventions.md index ab049694..00c2ec62 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -403,7 +403,7 @@ Using the `omitempty` tag causes swagger documentation to reflect that the field Using a pointer allows distinguishing unset from the zero value for that type. 
There are some cases where, in principle, a pointer is not needed for an optional field -since the zero value is forbidden, and thus imples unset. There are examples of this in the +since the zero value is forbidden, and thus implies unset. There are examples of this in the codebase. However: - it can be difficult for implementors to anticipate all cases where an empty value might need to be diff --git a/api_changes.md b/api_changes.md index d2f0aea7..015bab3e 100644 --- a/api_changes.md +++ b/api_changes.md @@ -558,7 +558,7 @@ New feature development proceeds through a series of stages of increasing maturi - Development level - Object Versioning: no convention - - Availability: not commited to main kubernetes repo, and thus not available in offical releases + - Availability: not committed to main kubernetes repo, and thus not available in official releases - Audience: other developers closely collaborating on a feature or proof-of-concept - Upgradeability, Reliability, Completeness, and Support: no requirements or guarantees - Alpha level @@ -590,7 +590,7 @@ New feature development proceeds through a series of stages of increasing maturi tests complete; the API has had a thorough API review and is thought to be complete, though use during beta may frequently turn up API issues not thought of during review - Upgradeability: the object schema and semantics may change in a later software release; when - this happens, an upgrade path will be documentedr; in some cases, objects will be automatically + this happens, an upgrade path will be documented; in some cases, objects will be automatically converted to the new version; in other cases, a manual upgrade may be necessary; a manual upgrade may require downtime for anything relying on the new feature, and may require manual conversion of objects to the new version; when manual conversion is necessary, the diff --git a/automation.md b/automation.md index c21f4ed6..5b77425a 100644 --- a/automation.md +++ b/automation.md 
@@ -35,7 +35,7 @@ Documentation for other releases can be found at ## Overview -Kubernetes uses a variety of automated tools in an attempt to relieve developers of repeptitive, low +Kubernetes uses a variety of automated tools in an attempt to relieve developers of repetitive, low brain power work. This document attempts to describe these processes. -- cgit v1.2.3 From d8b1f8d6aed960aa01683a736eeee0ff91dbb2b3 Mon Sep 17 00:00:00 2001 From: hurf Date: Sat, 10 Oct 2015 09:51:09 +0800 Subject: Clean up standalone conversion tool Remove kube-version-change for all its functionalities are covered by kubectl convert command. Also changed the related docs. --- adding-an-APIGroup.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/adding-an-APIGroup.md b/adding-an-APIGroup.md index 8f67a0ab..0541af61 100644 --- a/adding-an-APIGroup.md +++ b/adding-an-APIGroup.md @@ -42,7 +42,7 @@ We plan on improving the way the types are factored in the future; see [#16062]( 2. Create pkg/apis/``/{register.go, ``/register.go} to register this group's API objects to the encoding/decoding scheme (e.g., [pkg/apis/extensions/register.go](../../pkg/apis/extensions/register.go) and [pkg/apis/extensions/v1beta1/register.go](../../pkg/apis/extensions/v1beta1/register.go); -3. Add a pkg/apis/``/install/install.go, which is responsible for adding the group to the `latest` package, so that other packages can access the group's meta through `latest.Group`. You probably only need to change the name of group and version in the [example](../../pkg/apis/extensions/install/install.go)). You need to import this `install` package in {pkg/master, pkg/client/unversioned, cmd/kube-version-change}/import_known_versions.go, if you want to make your group accessible to other packages in the kube-apiserver binary, binaries that uses the client package, or the kube-version-change tool. +3. 
Add a pkg/apis/``/install/install.go, which is responsible for adding the group to the `latest` package, so that other packages can access the group's meta through `latest.Group`. You probably only need to change the name of the group and version in the [example](../../pkg/apis/extensions/install/install.go). You need to import this `install` package in {pkg/master, pkg/client/unversioned}/import_known_versions.go, if you want to make your group accessible to other packages in the kube-apiserver binary, or in binaries that use the client package. Steps 2 and 3 are mechanical; we plan to autogenerate these using the cmd/libs/go2idl/ tool. -- cgit v1.2.3 From 4f4703bb1ad27a90b4d6263d34843159b126fd7c Mon Sep 17 00:00:00 2001 From: Justin Santa Barbara Date: Sun, 29 Nov 2015 14:00:49 -0500 Subject: Ubernetes Lite: Volumes can dictate zone scheduling For AWS EBS, a volume can only be attached to a node in the same AZ. The scheduler must therefore detect if a volume is being attached to a pod, and ensure that the pod is scheduled on a node in the same AZ as the volume. So that the scheduler need not query the cloud provider every time, and to support decoupled operation (e.g. bare metal) we tag the volume with our placement labels. This is done automatically by means of an admission controller on AWS when a PersistentVolume is created backed by an EBS volume. Support for tagging GCE PVs will follow. Pods that specify a volume directly (i.e. without using a PersistentVolumeClaim) will not currently be scheduled correctly (i.e. they will be scheduled without zone-awareness).
--- scheduler_algorithm.md | 1 + 1 file changed, 1 insertion(+) diff --git a/scheduler_algorithm.md b/scheduler_algorithm.md index 06c482fd..00a812a5 100755 --- a/scheduler_algorithm.md +++ b/scheduler_algorithm.md @@ -41,6 +41,7 @@ For each unscheduled Pod, the Kubernetes scheduler tries to find a node across t The purpose of filtering the nodes is to filter out the nodes that do not meet certain requirements of the Pod. For example, if the free resource on a node (measured by the capacity minus the sum of the resource requests of all the Pods that already run on the node) is less than the Pod's required resource, the node should not be considered in the ranking phase so it is filtered out. Currently, there are several "predicates" implementing different filtering policies, including: - `NoDiskConflict`: Evaluate if a pod can fit due to the volumes it requests, and those that are already mounted. +- `NoVolumeZoneConflict`: Evaluate if the volumes a pod requests are available on the node, given the Zone restrictions. - `PodFitsResources`: Check if the free resource (CPU and Memory) meets the requirement of the Pod. The free resource is measured by the capacity minus the sum of requests of all Pods on the node. To learn more about the resource QoS in Kubernetes, please check [QoS proposal](../proposals/resource-qos.md). - `PodFitsHostPorts`: Check if any HostPort required by the Pod is already occupied on the node. - `PodFitsHost`: Filter out all nodes except the one specified in the PodSpec's NodeName field. 
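The filtering phase described above can be sketched as a chain of predicate functions, each voting on whether a node can run the pod. Everything here (`Node`, `Pod`, `Predicate`, `filterNodes`) is a deliberately simplified invention for illustration; the real scheduler types carry far more state:

```go
package main

import "fmt"

// Node and Pod are hypothetical, minimal stand-ins for the scheduler's types.
type Node struct {
	Name    string
	FreeCPU int // millicores
	FreeMem int // MiB
}

type Pod struct {
	CPURequest int
	MemRequest int
	NodeName   string // non-empty pins the pod to one node (cf. PodFitsHost)
}

// A Predicate filters out nodes that cannot run the pod.
type Predicate func(pod Pod, node Node) bool

// podFitsResources mirrors the PodFitsResources idea: free capacity must
// cover the pod's requests.
func podFitsResources(pod Pod, node Node) bool {
	return node.FreeCPU >= pod.CPURequest && node.FreeMem >= pod.MemRequest
}

// podFitsHost mirrors PodFitsHost: an explicit NodeName excludes all others.
func podFitsHost(pod Pod, node Node) bool {
	return pod.NodeName == "" || pod.NodeName == node.Name
}

// filterNodes keeps only the nodes on which every predicate passes; the
// survivors would then proceed to the ranking phase.
func filterNodes(pod Pod, nodes []Node, preds []Predicate) []Node {
	var fit []Node
	for _, n := range nodes {
		ok := true
		for _, p := range preds {
			if !p(pod, n) {
				ok = false
				break
			}
		}
		if ok {
			fit = append(fit, n)
		}
	}
	return fit
}

func main() {
	nodes := []Node{
		{Name: "a", FreeCPU: 500, FreeMem: 1024},
		{Name: "b", FreeCPU: 4000, FreeMem: 8192},
	}
	pod := Pod{CPURequest: 1000, MemRequest: 2048}
	for _, n := range filterNodes(pod, nodes, []Predicate{podFitsResources, podFitsHost}) {
		fmt.Println(n.Name) // only "b" has enough free CPU and memory
	}
}
```

A zone-aware predicate like `NoVolumeZoneConflict` would slot into the same chain, comparing the node's zone labels against those on the pod's volumes.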
-- cgit v1.2.3 From 7ea61a4e4c2d63a9e7f2dc10a5b79b4e530e7396 Mon Sep 17 00:00:00 2001 From: David O'Riordan Date: Sun, 3 Jan 2016 14:37:15 +0000 Subject: Add Scala to client library list --- client-libraries.md | 1 + 1 file changed, 1 insertion(+) diff --git a/client-libraries.md b/client-libraries.md index a6f3e6ff..94453c17 100644 --- a/client-libraries.md +++ b/client-libraries.md @@ -50,6 +50,7 @@ Documentation for other releases can be found at * [Node.js](https://github.com/tenxcloud/node-kubernetes-client) * [Perl](https://metacpan.org/pod/Net::Kubernetes) * [Clojure](https://github.com/yanatan16/clj-kubernetes-api) + * [Scala](https://github.com/doriordan/skuber) -- cgit v1.2.3 From 0e671553a511e9eb1a8728e03cf39a8751fdca58 Mon Sep 17 00:00:00 2001 From: Mike Danese Date: Wed, 30 Dec 2015 11:39:57 -0800 Subject: docs: move local getting started guide to docs/devel/ Signed-off-by: Mike Danese --- README.md | 2 + running-locally.md | 176 +++++++++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 178 insertions(+) create mode 100644 running-locally.md diff --git a/README.md b/README.md index ed586cd0..8a01a8d6 100644 --- a/README.md +++ b/README.md @@ -73,6 +73,8 @@ Guide](../admin/README.md). * **Coding Conventions** ([coding-conventions.md](coding-conventions.md)): Coding style advice for contributors. +* **Running a cluster locally** ([running-locally.md](running-locally.md)): + A fast and lightweight local cluster deployment for development. ## Developing against the Kubernetes API diff --git a/running-locally.md b/running-locally.md new file mode 100644 index 00000000..257b2522 --- /dev/null +++ b/running-locally.md @@ -0,0 +1,176 @@ + + + + +WARNING +WARNING +WARNING +WARNING +WARNING + 

PLEASE NOTE: This document applies to the HEAD of the source tree

+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). +
+-- + + + + Getting started locally ----------------------- **Table of Contents** - [Requirements](#requirements) - [Linux](#linux) - [Docker](#docker) - [etcd](#etcd) - [go](#go) - [Clone the repository](#clone-the-repository) - [Starting the cluster](#starting-the-cluster) - [Running a container](#running-a-container) - [Running a user defined pod](#running-a-user-defined-pod) - [Troubleshooting](#troubleshooting) - [I cannot reach service IPs on the network.](#i-cannot-reach-service-ips-on-the-network) - [I cannot create a replication controller with replica size greater than 1! What gives?](#i-cannot-create-a-replication-controller-with-replica-size-greater-than-1--what-gives) - [I changed Kubernetes code, how do I run it?](#i-changed-kubernetes-code-how-do-i-run-it) - [kubectl claims to start a container but `get pods` and `docker ps` don't show it.](#kubectl-claims-to-start-a-container-but-get-pods-and-docker-ps-dont-show-it) - [The pods fail to connect to the services by host names](#the-pods-fail-to-connect-to-the-services-by-host-names) + +### Requirements + +#### Linux + +Not running Linux? Consider running Linux in a local virtual machine with [Vagrant](../getting-started-guides/vagrant.md), or on a cloud provider like [Google Compute Engine](../getting-started-guides/gce.md). + +#### Docker + +[Docker](https://docs.docker.com/installation/#installation) version +1.3 or later. Ensure the Docker daemon is running and can be contacted (try `docker +ps`). Some of the Kubernetes components need to run as root, which normally +works fine with Docker. + +#### etcd + +You need [etcd](https://github.com/coreos/etcd/releases); make sure it is installed and in your ``$PATH``. + +#### go + +You need [go](https://golang.org/doc/install) (see [here](development.md#go-versions) for supported versions); make sure it is installed and in your ``$PATH``.
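As a convenience, the three requirements above can be checked with a short Go program. This helper is purely illustrative and not part of the Kubernetes repository:

```go
package main

import (
	"fmt"
	"os/exec"
)

// missingBinaries reports which of the required tools are not on $PATH.
// Illustrative convenience only; not part of the repo.
func missingBinaries(names []string) []string {
	var missing []string
	for _, name := range names {
		if _, err := exec.LookPath(name); err != nil {
			missing = append(missing, name)
		}
	}
	return missing
}

func main() {
	// The guide requires docker, etcd, and go to be installed and in $PATH.
	if m := missingBinaries([]string{"docker", "etcd", "go"}); len(m) > 0 {
		fmt.Println("missing from $PATH:", m)
	} else {
		fmt.Println("all requirements found")
	}
}
```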
+ +### Clone the repository + +In order to run Kubernetes you must have the Kubernetes code on the local machine. Cloning this repository is sufficient. + +```$ git clone --depth=1 https://github.com/kubernetes/kubernetes.git``` + +The `--depth=1` parameter is optional and will ensure a smaller download. + +### Starting the cluster + +In a separate tab of your terminal, run the following (since one needs sudo access to start/stop Kubernetes daemons, it is easier to run the entire script as root): + +```sh +cd kubernetes +hack/local-up-cluster.sh +``` + +This will build and start a lightweight local cluster, consisting of a master +and a single node. Type Control-C to shut it down. + +You can use the cluster/kubectl.sh script to interact with the local cluster. hack/local-up-cluster.sh will +print the commands to run to point kubectl at the local cluster. + + +### Running a container + +Your cluster is running, and you want to start running containers! + +You can now use any of the cluster/kubectl.sh commands to interact with your local setup. + +```sh +cluster/kubectl.sh get pods +cluster/kubectl.sh get services +cluster/kubectl.sh get replicationcontrollers +cluster/kubectl.sh run my-nginx --image=nginx --replicas=2 --port=80 + + +## begin wait for provision to complete; you can monitor the docker pull by opening a new terminal + sudo docker images + ## you should see it pulling the nginx image; once the above command returns, run + sudo docker ps + ## you should see your container running! + exit +## end wait + +## introspect Kubernetes! +cluster/kubectl.sh get pods +cluster/kubectl.sh get services +cluster/kubectl.sh get replicationcontrollers +``` + + +### Running a user defined pod + +Note the difference between a [container](../user-guide/containers.md) +and a [pod](../user-guide/pods.md). Since you only asked for the former, Kubernetes will create a wrapper pod for you. +However, you cannot view the nginx start page on localhost.
To verify that nginx is running you need to run `curl` within the Docker container (try `docker exec`). + +You can control the specifications of a pod via a user-defined manifest, and reach nginx through your browser on the port specified therein: + +```sh +cluster/kubectl.sh create -f docs/user-guide/pod.yaml +``` + +Congratulations! + +### Troubleshooting + +#### I cannot reach service IPs on the network. + +Some firewall software that uses iptables may not interact well with +Kubernetes. If you have trouble around networking, try disabling any +firewall or other iptables-using systems first. Also, you can check +if SELinux is blocking anything by running a command such as `journalctl --since yesterday | grep avc`. + +By default the IP range for service cluster IPs is 10.0.*.*; depending on your +Docker installation, this may conflict with IPs for containers. If you find +containers running with IPs in this range, edit hack/local-up-cluster.sh and +change the `service-cluster-ip-range` flag to something else. + +#### I cannot create a replication controller with replica size greater than 1! What gives? + +You are running a single-node setup. This has the limitation of only supporting a single replica of a given pod. If you are interested in running with larger replica sizes, we encourage you to try the local vagrant setup or one of the cloud providers. + +#### I changed Kubernetes code, how do I run it? + +```sh +cd kubernetes +hack/build-go.sh +hack/local-up-cluster.sh +``` + +#### kubectl claims to start a container but `get pods` and `docker ps` don't show it. + +One or more of the Kubernetes daemons might have crashed. Tail the logs of each in /tmp. + +#### The pods fail to connect to the services by host names + +The local-up-cluster.sh script doesn't start a DNS service. A similar situation is described [here](http://issue.k8s.io/6667). You can start one manually.
Related documents can be found [here](../../cluster/addons/dns/#how-do-i-configure-it) + + + + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/running-locally.md?pixel)]() + -- cgit v1.2.3 From 957857f1e082addf2a2013dfae8921bd4eb96a36 Mon Sep 17 00:00:00 2001 From: Haoran Wang Date: Wed, 6 Jan 2016 13:09:43 +0800 Subject: fix wrong submit-queue.go link --- automation.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/automation.md b/automation.md index c1851e84..99688de1 100644 --- a/automation.md +++ b/automation.md @@ -47,7 +47,7 @@ In an effort to * maintain e2e stability * load test github's label feature -We have added an automated [submit-queue](https://github.com/kubernetes/contrib/blob/master/mungegithub/pulls/submit-queue.go) to the +We have added an automated [submit-queue](https://github.com/kubernetes/contrib/blob/master/mungegithub/mungers/submit-queue.go) to the [github "munger"](https://github.com/kubernetes/contrib/tree/master/mungegithub) for kubernetes. The submit-queue does the following: -- cgit v1.2.3 From a751615a47b630dfb5accb0108a90a34020644c9 Mon Sep 17 00:00:00 2001 From: "Tim St. Clair" Date: Wed, 6 Jan 2016 15:19:05 -0800 Subject: Add node performance measuring guide Add a development guide for measuring performance of node components. The purpose of this guide is threefold: 1. Document the nuances of measuring kubelet performance so we don't forget or need to reinvent the wheel. 2. Make it easier for new contributors to analyze performance. 3. Share tips and tricks that current team members might not be aware of. 
--- node-performance-testing.md | 147 ++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 147 insertions(+) create mode 100644 node-performance-testing.md diff --git a/node-performance-testing.md b/node-performance-testing.md new file mode 100644 index 00000000..8a14eedc --- /dev/null +++ b/node-performance-testing.md @@ -0,0 +1,147 @@ + + + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

PLEASE NOTE: This document applies to the HEAD of the source tree

+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). +
+-- + + + + + +# Measuring Node Performance + +This document outlines the issues and pitfalls of measuring Node performance, as well as the tools +available. + +## Cluster Set-up + +There are lots of factors that can affect node performance numbers, so care must be taken in +setting up the cluster to make the intended measurements. In addition to taking the following steps +into consideration, it is important to document precisely which setup was used. For example, +performance can vary wildly from commit-to-commit, so it is very important to **document which commit +or version** of Kubernetes was used, which Docker version was used, etc. + +### Addon pods + +Be aware of which addon pods are running on which nodes. By default Kubernetes runs 8 addon pods, +plus another 2 per node (`fluentd-elasticsearch` and `kube-proxy`) in the `kube-system` +namespace. The addon pods can be disabled for more consistent results, but doing so can also have +performance implications. + +For example, Heapster polls each node regularly to collect stats data. Disabling Heapster will hide +the performance cost of serving those stats in the Kubelet. + +#### Disabling Add-ons + +Disabling addons is simple. Just ssh into the Kubernetes master and move the addon from +`/etc/kubernetes/addons/` to a backup location. More details [here](../../cluster/addons/). + +### Which / how many pods? + +Performance will vary a lot between a node with 0 pods and a node with 100 pods. In many cases +you'll want to make measurements with several different numbers of pods. On a single-node cluster, +scaling a replication controller makes this easy; just make sure the system reaches a steady state +before starting the measurement. E.g. `kubectl scale replicationcontroller pause --replicas=100` + +In most cases pause pods will yield the most consistent measurements since the system will not be +affected by pod load.
However, in some special cases Kubernetes has been tuned to optimize pods that +are not doing anything, such as the cAdvisor housekeeping (stats gathering). In these cases, +performing a very light task (such as a simple network ping) can make a difference. + +Finally, you should also consider which features your pods should be using. For example, if you +want to measure performance with probing, you should obviously use pods with liveness or readiness +probes configured. Likewise for volumes, number of containers, etc. + +### Other Tips + +**Number of nodes** - On the one hand, it can be easier to manage logs, pods, environment etc. with + a single node to worry about. On the other hand, having multiple nodes will let you gather more + data in parallel for more robust sampling. + +## E2E Performance Test + +There is an end-to-end test for collecting overall resource usage of node components: +[kubelet_perf.go](../../test/e2e/kubelet_perf.go). To +run the test, simply make sure you have an e2e cluster running (`go run hack/e2e.go -up`) and +[set up](#cluster-set-up) correctly. + +Run the test with `go run hack/e2e.go -v -test +--test_args="--ginkgo.focus=resource\susage\stracking"`. You may also wish to customise the number of +pods or other parameters of the test (remember to rerun `make WHAT=test/e2e/e2e.test` after you do). + +## Profiling + +The Kubelet installs the [go pprof handlers](https://golang.org/pkg/net/http/pprof/), which can be +queried for CPU profiles: + +```console +$ kubectl proxy & +Starting to serve on 127.0.0.1:8001 +$ curl -G "http://localhost:8001/api/v1/proxy/nodes/${NODE}:10250/debug/pprof/profile?seconds=${DURATION_SECONDS}" > $OUTPUT +$ KUBELET_BIN=_output/dockerized/bin/linux/amd64/kubelet +$ go tool pprof -web $KUBELET_BIN $OUTPUT +``` + +`pprof` can also provide heap usage, from the `/debug/pprof/heap` endpoint +(e.g. `http://localhost:8001/api/v1/proxy/nodes/${NODE}:10250/debug/pprof/heap`).
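If you want to script profile collection (for example, grabbing a profile at each pod count), the proxy URL used in the curl command above can be built programmatically. A sketch, assuming `kubectl proxy` is listening on its default 127.0.0.1:8001; the node name and duration are illustrative:

```go
package main

import "fmt"

// profileURL builds the kubelet pprof URL used in the curl command above,
// routed through the proxy that `kubectl proxy` starts on 127.0.0.1:8001.
func profileURL(node string, seconds int) string {
	return fmt.Sprintf(
		"http://localhost:8001/api/v1/proxy/nodes/%s:10250/debug/pprof/profile?seconds=%d",
		node, seconds)
}

func main() {
	// Fetch this URL (e.g. with net/http) and feed the result to `go tool pprof`.
	fmt.Println(profileURL("node-1", 30))
}
```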
+ +More information on go profiling can be found [here](http://blog.golang.org/profiling-go-programs). + +## Benchmarks + +Before jumping through all the hoops to measure a live Kubernetes node in a real cluster, it is +worth considering whether the data you need can be gathered through a Benchmark test. Go provides a +really simple benchmarking mechanism, just add a unit test of the form: + +```go +// In foo_test.go +func BenchmarkFoo(b *testing.B) { + b.StopTimer() + setupFoo() // Perform any global setup + b.StartTimer() + for i := 0; i < b.N; i++ { + foo() // Functionality to measure + } +} +``` + +Then: + +```console +$ go test -bench=. -benchtime=${SECONDS}s foo_test.go +``` + +More details on benchmarking [here](https://golang.org/pkg/testing/). + +## TODO + +- (taotao) Measuring docker performance +- Expand cluster set-up section +- (vishh) Measuring disk usage +- (yujuhong) Measuring memory usage +- Add section on monitoring kubelet metrics (e.g. with prometheus) + + + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/node-performance-testing.md?pixel)]() + -- cgit v1.2.3 From 629e14cd12f0568c4316f1948d41eb064215dc99 Mon Sep 17 00:00:00 2001 From: Greg Taylor Date: Sun, 27 Dec 2015 11:50:04 -0800 Subject: Alphabetize user contributed libraries list. 
--- client-libraries.md | 11 +++++------ 1 file changed, 5 insertions(+), 6 deletions(-) diff --git a/client-libraries.md b/client-libraries.md index a8a3f613..69661ff4 100644 --- a/client-libraries.md +++ b/client-libraries.md @@ -42,18 +42,17 @@ Documentation for other releases can be found at *Note: Libraries provided by outside parties are supported by their authors, not the core Kubernetes team* + * [Clojure](https://github.com/yanatan16/clj-kubernetes-api) * [Java (OSGi)](https://bitbucket.org/amdatulabs/amdatu-kubernetes) * [Java (Fabric8, OSGi)](https://github.com/fabric8io/kubernetes-client) - * [Ruby](https://github.com/Ch00k/kuber) - * [Ruby](https://github.com/abonas/kubeclient) - * [PHP](https://github.com/devstub/kubernetes-api-php-client) - * [PHP](https://github.com/maclof/kubernetes-client) * [Node.js](https://github.com/tenxcloud/node-kubernetes-client) * [Perl](https://metacpan.org/pod/Net::Kubernetes) - * [Clojure](https://github.com/yanatan16/clj-kubernetes-api) + * [PHP](https://github.com/devstub/kubernetes-api-php-client) + * [PHP](https://github.com/maclof/kubernetes-client) + * [Ruby](https://github.com/Ch00k/kuber) + * [Ruby](https://github.com/abonas/kubeclient) * [Scala](https://github.com/doriordan/skuber) - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/client-libraries.md?pixel)]() -- cgit v1.2.3 From a32d464cc1b97a0ead97d83d541929679dc949fa Mon Sep 17 00:00:00 2001 From: Karl Isenberg Date: Tue, 8 Dec 2015 13:23:59 -0800 Subject: Add hack/update-godep-licenses.sh to generate Godeps/LICENSES.md - Add Godeps/LICENSES.md - Add verify-godep-licenses to verify that Godeps/LICENSES.md is up to date - Trigger verify-godep-licenses in the pre-commit hook only if the Godeps dir has changed - Exclude verify-godep-licenses in verify-all - Add verify-godep-licenses to make verify (used by travis) - Add verify-godep-licenses to shippable - Update dev docs to mention update-godep-licenses --- development.md | 6 
+++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/development.md b/development.md index 27ce1b8a..95dccaa9 100644 --- a/development.md +++ b/development.md @@ -219,7 +219,7 @@ _If `go get -u path/to/dependency` fails with compilation errors, instead try `g to fetch the dependencies without compiling them. This can happen when updating the cadvisor dependency._ -5) Before sending your PR, it's a good idea to sanity check that your Godeps.json file is ok by running hack/verify-godeps.sh +5) Before sending your PR, it's a good idea to sanity check that your Godeps.json file is ok by running `hack/verify-godeps.sh` _If hack/verify-godeps.sh fails after a `godep update`, it is possible that a transitive dependency was added or removed but not updated by godeps. It then may be necessary to perform a `godep save ./...` to pick up the transitive dependency changes._ @@ -228,6 +228,10 @@ It is sometimes expedient to manually fix the /Godeps/godeps.json file to minimi Please send dependency updates in separate commits within your PR, for easier reviewing. +6) If you updated the Godeps, please also update `Godeps/LICENSES.md` by running `hack/update-godep-licenses.sh`. + +_If Godep does not automatically vendor the proper license file for a new dependency, be sure to add an exception entry to `hack/update-godep-licenses.sh`._ + ## Unit tests ```sh -- cgit v1.2.3 From 4843e36ddf99f472c10b9268cb795b0156e21169 Mon Sep 17 00:00:00 2001 From: Isaac Hollander McCreery Date: Fri, 8 Jan 2016 11:35:30 -0800 Subject: Add documentation for test labels --- e2e-tests.md | 14 ++++++++++++++ 1 file changed, 14 insertions(+) diff --git a/e2e-tests.md b/e2e-tests.md index 902ba1c1..6eae78cf 100644 --- a/e2e-tests.md +++ b/e2e-tests.md @@ -102,6 +102,20 @@ As mentioned earlier there are a host of other options that are available, but a - `rm -rf /var/run/kubernetes`, clear kube generated credentials, sometimes stale permissions can cause problems. 
- `sudo iptables -F`, clear ip tables rules left by the kube-proxy. +## Kinds of tests + +We are working on implementing clearer partitioning of our e2e tests to make running a known set of tests easier (#10548). Tests can be labeled with any of the following labels, in order of increasing precedence (that is, each label listed below supersedes the previous ones): + +- If a test has no labels, it is expected to run fast (under five minutes), be able to be run in parallel, and be consistent. +- `[Slow]`: If a test takes more than five minutes to run (by itself or in parallel with many other tests), it is labeled `[Slow]`. This partition allows us to run almost all of our tests quickly in parallel, without waiting for the stragglers to finish. +- `[Serial]`: If a test cannot be run in parallel with other tests (e.g. it takes too many resources or restarts nodes), it is labeled `[Serial]`, and should be run in serial as part of a separate suite. +- `[Disruptive]`: If a test restarts components that might cause other tests to fail or break the cluster completely, it is labeled `[Disruptive]`. Any `[Disruptive]` test is also assumed to qualify for the `[Serial]` label, but need not be labeled as both. These tests are not run against soak clusters to avoid restarting components. +- `[Flaky]`: If a test is found to be flaky, it receives the `[Flaky]` label until it is fixed. A `[Flaky]` label should be accompanied with a reference to the issue for de-flaking the test, because while a test remains labeled `[Flaky]`, it is not monitored closely in CI. `[Flaky]` tests are by default not run, unless a `focus` or `skip` argument is explicitly given. +- `[Skipped]`: `[Skipped]` is a legacy label that we're phasing out. If a test is marked `[Skipped]`, there should be an issue open to label it properly. `[Skipped]` tests are by default not run, unless a `focus` or `skip` argument is explicitly given. 
+- `[Feature:...]`: If a test has non-default requirements to run or targets some non-core functionality, and thus should not be run as part of the standard suite, it receives a `[Feature:...]` label, e.g. `[Feature:Performance]` or `[Feature:Ingress]`. `[Feature:...]` tests are not run in our core suites, instead running in custom suites. + +Finally, `[Conformance]` tests are tests we expect to pass on **any** Kubernetes cluster. The `[Conformance]` label does not supersede any other labels. `[Conformance]` test policies are a work-in-progress; see #18162. + ## Adding a New Test As mentioned above, prior to adding a new test, it is a good idea to perform a `-ginkgo.dryRun=true` on the system, in order to see if a behavior is already being tested, or to determine if it may be possible to augment an existing set of tests for a specific use case. -- cgit v1.2.3 From 85f44c87d7606fe8ea59f69a9ccda0214bfeffad Mon Sep 17 00:00:00 2001 From: Isaac Hollander McCreery Date: Tue, 19 Jan 2016 16:45:15 -0800 Subject: Add docs about [Feature:...] tests for experimental, beta, and GA features --- e2e-tests.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/e2e-tests.md b/e2e-tests.md index 6eae78cf..0fc6bcd7 100644 --- a/e2e-tests.md +++ b/e2e-tests.md @@ -112,7 +112,9 @@ We are working on implementing clearer partitioning of our e2e tests to make run - `[Disruptive]`: If a test restarts components that might cause other tests to fail or break the cluster completely, it is labeled `[Disruptive]`. Any `[Disruptive]` test is also assumed to qualify for the `[Serial]` label, but need not be labeled as both. These tests are not run against soak clusters to avoid restarting components. - `[Flaky]`: If a test is found to be flaky, it receives the `[Flaky]` label until it is fixed. A `[Flaky]` label should be accompanied with a reference to the issue for de-flaking the test, because while a test remains labeled `[Flaky]`, it is not monitored closely in CI. 
`[Flaky]` tests are by default not run, unless a `focus` or `skip` argument is explicitly given. - `[Skipped]`: `[Skipped]` is a legacy label that we're phasing out. If a test is marked `[Skipped]`, there should be an issue open to label it properly. `[Skipped]` tests are by default not run, unless a `focus` or `skip` argument is explicitly given. -- `[Feature:...]`: If a test has non-default requirements to run or targets some non-core functionality, and thus should not be run as part of the standard suite, it receives a `[Feature:...]` label, e.g. `[Feature:Performance]` or `[Feature:Ingress]`. `[Feature:...]` tests are not run in our core suites, instead running in custom suites. +- `[Feature:...]`: If a test has non-default requirements to run or targets some non-core functionality, and thus should not be run as part of the standard suite, it receives a `[Feature:...]` label, e.g. `[Feature:Performance]` or `[Feature:Ingress]`. `[Feature:...]` tests are not run in our core suites, instead running in custom suites. There are a few use-cases for `[Feature:...]` tests: + - If a feature is experimental (i.e. in the `experimental` API or otherwise experimental), it should *not* block the merge-queue, and thus should run in some separate test suites owned by the feature owner(s). + - If a feature is in beta or GA, it *should* block the merge-queue. In moving from experimental to beta or GA, tests that are expected to pass by default should simply remove the `[Feature:...]` label, and will be incorporated into our core suites. If tests are not expected to pass by default, (e.g. they require a special environment such as added quota,) they should remain with the `[Feature:...]` label, and the suites that run them should be incorporated into our merge-queue, owned by the Build Cop. Finally, `[Conformance]` tests are tests we expect to pass on **any** Kubernetes cluster. The `[Conformance]` label does not supersede any other labels. 
`[Conformance]` test policies are a work-in-progress; see #18162. -- cgit v1.2.3 From b48824a8053fa2ff9fef2635a8b63b366eeeb319 Mon Sep 17 00:00:00 2001 From: Isaac Hollander McCreery Date: Fri, 22 Jan 2016 16:03:08 -0800 Subject: Change wording about experimental API --- e2e-tests.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/e2e-tests.md b/e2e-tests.md index 0fc6bcd7..388e25f0 100644 --- a/e2e-tests.md +++ b/e2e-tests.md @@ -113,7 +113,7 @@ We are working on implementing clearer partitioning of our e2e tests to make run - `[Flaky]`: If a test is found to be flaky, it receives the `[Flaky]` label until it is fixed. A `[Flaky]` label should be accompanied with a reference to the issue for de-flaking the test, because while a test remains labeled `[Flaky]`, it is not monitored closely in CI. `[Flaky]` tests are by default not run, unless a `focus` or `skip` argument is explicitly given. - `[Skipped]`: `[Skipped]` is a legacy label that we're phasing out. If a test is marked `[Skipped]`, there should be an issue open to label it properly. `[Skipped]` tests are by default not run, unless a `focus` or `skip` argument is explicitly given. - `[Feature:...]`: If a test has non-default requirements to run or targets some non-core functionality, and thus should not be run as part of the standard suite, it receives a `[Feature:...]` label, e.g. `[Feature:Performance]` or `[Feature:Ingress]`. `[Feature:...]` tests are not run in our core suites, instead running in custom suites. There are a few use-cases for `[Feature:...]` tests: - - If a feature is experimental (i.e. in the `experimental` API or otherwise experimental), it should *not* block the merge-queue, and thus should run in some separate test suites owned by the feature owner(s). 
+ - If a feature is experimental or alpha and is not enabled by default due to being incomplete or potentially subject to breaking changes, it should *not* block the merge-queue, and thus should run in some separate test suites owned by the feature owner(s). - If a feature is in beta or GA, it *should* block the merge-queue. In moving from experimental to beta or GA, tests that are expected to pass by default should simply remove the `[Feature:...]` label, and will be incorporated into our core suites. If tests are not expected to pass by default, (e.g. they require a special environment such as added quota,) they should remain with the `[Feature:...]` label, and the suites that run them should be incorporated into our merge-queue, owned by the Build Cop. Finally, `[Conformance]` tests are tests we expect to pass on **any** Kubernetes cluster. The `[Conformance]` label does not supersede any other labels. `[Conformance]` test policies are a work-in-progress; see #18162. -- cgit v1.2.3 From 4060b4eed4c73076794502ac0fdd714db593cae2 Mon Sep 17 00:00:00 2001 From: David Oppenheimer Date: Thu, 17 Dec 2015 23:00:11 -0800 Subject: How to build Mesos/Omega-style frameworks on Kubernetes. --- mesos-style.md | 169 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 169 insertions(+) create mode 100644 mesos-style.md diff --git a/mesos-style.md b/mesos-style.md new file mode 100644 index 00000000..c8d096be --- /dev/null +++ b/mesos-style.md @@ -0,0 +1,169 @@ + + + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

PLEASE NOTE: This document applies to the HEAD of the source tree

+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). +
+-- + + + + + +# Building Mesos/Omega-style frameworks on Kubernetes + +## Introduction + +We have observed two different cluster management architectures, which can be categorized as "Borg-style" and "Mesos/Omega-style." +(In the remainder of this document, we will abbreviate the latter as "Mesos-style.") +Although out-of-the-box Kubernetes uses a Borg-style architecture, it can also be configured in a Mesos-style architecture, +and in fact can support both styles at the same time. This document describes the two approaches and explains how +to deploy a Mesos-style architecture on Kubernetes. + +(As an aside, the converse is also true: one can deploy a Borg/Kubernetes-style architecture on Mesos.) + +This document is NOT intended to provide a comprehensive comparison of Borg and Mesos. For example, we omit discussion +of the tradeoffs between scheduling with full knowledge of cluster state vs. scheduling using the "offer" model. +(That issue is discussed in some detail in the Omega paper (see references section at the end of this doc).) + + +## What is a Borg-style architecture? + +A Borg-style architecture is characterized by: +* a single logical API endpoint for clients, where some amount of processing is done on requests, such as admission control and applying defaults +* generic (non-application-specific) collection abstractions described declaratively +* generic controllers/state machines that manage the lifecycle of the collection abstractions and the containers spawned from them +* a generic scheduler + +For example, Borg's primary collection abstraction is a Job, and every application that runs on Borg--whether it's a user-facing +service like the GMail front-end, a batch job like a MapReduce, or an infrastructure service like GFS--must represent itself as +a Job. Borg has corresponding state machine logic for managing Jobs and their instances, and a scheduler that's responsible +for assigning the instances to machines.
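The idea of a declarative collection abstraction can be reduced to a toy Go declaration. All names here are invented for illustration; real Borg and Kubernetes objects are far richer:

```go
package main

import "fmt"

// Job is a toy stand-in for a Borg-style collection abstraction: a
// declarative spec plus a replica count, from which a controller
// stamps out instances. Field names are invented for illustration.
type Job struct {
	Name     string
	Replicas int
	Command  string
}

// instances expands the declarative collection into its underlying
// instance names, the way a collection state machine would.
func (j Job) instances() []string {
	out := make([]string, j.Replicas)
	for i := range out {
		out[i] = fmt.Sprintf("%s/%d", j.Name, i)
	}
	return out
}

func main() {
	j := Job{Name: "gmail-frontend", Replicas: 3, Command: "serve"}
	fmt.Println(j.instances()) // [gmail-frontend/0 gmail-frontend/1 gmail-frontend/2]
}
```

A controller would watch such objects and drive the actual containers toward the declared state; the scheduler then decides where each expanded instance runs.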
+ +The flow of a request in Borg is: + +1. Client submits a collection object to the Borgmaster API endpoint +1. Admission control, quota, applying defaults, etc. run on the collection +1. If the collection is admitted, it is persisted, and the collection state machine creates the underlying instances +1. The scheduler assigns a hostname to the instance, and tells the Borglet to start the instance's container(s) +1. Borglet starts the container(s) +1. The instance state machine manages the instances and the collection state machine manages the collection during their lifetimes + +Out-of-the-box Kubernetes has *workload-specific* abstractions (ReplicaSet, Job, DaemonSet, etc.) and corresponding controllers, +and in the future may have [workload-specific schedulers](../../docs/proposals/multiple-schedulers.md), +e.g. different schedulers for long-running services vs. short-running batch. But these abstractions, controllers, and +schedulers are not *application-specific*. + +The usual request flow in Kubernetes is very similar, namely: + +1. Client submits a collection object (e.g. ReplicaSet, Job, ...) to the API server +1. Admission control, quota, applying defaults, etc. run on the collection +1. If the collection is admitted, it is persisted, and the corresponding collection controller creates the underlying pods +1. Admission control, quota, applying defaults, etc. run on each pod; if there are multiple schedulers, one of the admission +controllers will write the scheduler name as an annotation based on a policy +1. If a pod is admitted, it is persisted +1. The appropriate scheduler assigns a nodeName to the instance, which triggers the Kubelet to start the pod's container(s) +1. Kubelet starts the container(s) +1. The controller corresponding to the collection manages the pod and the collection during their lifetime + +In the Borg model, application-level scheduling and cluster-level scheduling are handled by separate +components.
For example, a MapReduce master might request Borg to create a job with a certain number of instances +with a particular resource shape, where each instance corresponds to a MapReduce worker; the MapReduce master would +then schedule individual units of work onto those workers. + +## What is a Mesos-style architecture? + +Mesos is fundamentally designed to support multiple application-specific "frameworks." A framework is +composed of a "framework scheduler" and a "framework executor." We will abbreviate "framework scheduler" +as "framework" since "scheduler" means something very different in Kubernetes (something that just +assigns pods to nodes). + +Unlike Borg and Kubernetes, where there is a single logical endpoint that receives all API requests (the Borgmaster and API server, +respectively), in Mesos every framework is a separate API endpoint. Mesos does not have any standard set of +collection abstractions, controllers/state machines, or schedulers; the logic for all of these things is contained +in each [application-specific framework](http://mesos.apache.org/documentation/latest/frameworks/) individually. +(Note that the notion of application-specific does sometimes blur into the realm of workload-specific, +for example [Chronos](https://github.com/mesos/chronos) is a generic framework for batch jobs. +However, regardless of what set of Mesos frameworks you are using, the key properties remain: each +framework is its own API endpoint with its own client-facing and internal abstractions, state machines, and scheduler). + +A Mesos framework can integrate application-level scheduling and cluster-level scheduling into a single component. + +Note: Although Mesos frameworks expose their own API endpoints to clients, they consume a common +infrastructure via a common API endpoint for controlling tasks (launching, detecting failure, etc.) and learning about available +cluster resources. 
More details [here](http://mesos.apache.org/documentation/latest/scheduler-http-api/). + +## Building a Mesos-style framework on Kubernetes + +Implementing the Mesos model on Kubernetes boils down to enabling application-specific collection abstractions, +controllers/state machines, and scheduling. There are just three steps: +* Use API plugins to create API resources for your new application-specific collection abstraction(s) +* Implement controllers for the new abstractions (and for managing the lifecycle of the pods the controllers generate) +* Implement a scheduler with the application-specific scheduling logic + +Note that the last two can be combined: a Kubernetes controller can do the scheduling for the pods it creates, +by writing node name to the pods when it creates them. + +Once you've done this, you end up with an architecture that is extremely similar to the Mesos-style--the +Kubernetes controller is effectively a Mesos framework. The remaining differences are +* In Kubernetes, all API operations go through a single logical endpoint, the API server (we say logical because the API server can be replicated). +In contrast, in Mesos, API operations go to a particular framework. However, the Kubernetes API plugin model makes this difference fairly small. +* In Kubernetes, application-specific admission control, quota, defaulting, etc. rules can be implemented +in the API server rather than in the controller. Of course you can choose to make these operations be no-ops for +your application-specific collection abstractions, and handle them in your controller. +* On the node level, Mesos allows application-specific executors, whereas Kubernetes only has +executors for Docker and Rocket containers. + +The end-to-end flow is + +1. Client submits an application-specific collection object to the API server +2. The API server plugin for that collection object forwards the request to the API server that handles that collection type +3. 
Admission control, quota, applying defaults, etc. runs on the collection object +4. If the collection is admitted, it is persisted +5. The collection controller sees the collection object and in response creates the underlying pods and chooses which nodes they will run on by setting node name +6. Kubelet sees the pods with node name set and starts the container(s) +7. The collection controller manages the pods and the collection during their lifetimes + +(Note that if the controller and scheduler are separated, then step 5 breaks down into multiple steps: +(5a) The collection controller creates pods with empty node name. (5b) API server admission control, quota, defaulting, +etc. runs on the pods; one of the admission controller steps writes the scheduler name as an annotation on each pod +(see #18262 for more details). +(5c) The corresponding application-specific scheduler chooses a node and writes node name, which triggers the Kubelet to start the pod's container(s).) + +As a final note, the Kubernetes model allows multiple levels of iterative refinement of runtime abstractions, +as long as the lowest level is the pod. For example, clients of application Foo might create a `FooSet`, +which is picked up by the FooController, which in turn creates `BatchFooSet` and `ServiceFooSet` objects, +which are picked up by the BatchFoo controller and ServiceFoo controller respectively, which in turn +create pods. In between each of these steps there is an opportunity for object-specific admission control, +quota, and defaulting to run in the API server, though these can instead be handled by the controllers. + + +## References + +Mesos is described [here](https://www.usenix.org/legacy/event/nsdi11/tech/full_papers/Hindman_new.pdf). +Omega is described [here](http://research.google.com/pubs/pub41684.html). +Borg is described [here](http://research.google.com/pubs/pub43438.html). 
+ + + + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/mesos-style.md?pixel)]() + -- cgit v1.2.3 From 959efa553489849f4f61f925559b24aa969064cc Mon Sep 17 00:00:00 2001 From: Daniel Smith Date: Fri, 22 Jan 2016 16:10:13 -0800 Subject: add expectations for flaky test issues --- flaky-tests.md | 88 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++- 1 file changed, 87 insertions(+), 1 deletion(-) diff --git a/flaky-tests.md b/flaky-tests.md index 51f8bcac..4047a7f9 100644 --- a/flaky-tests.md +++ b/flaky-tests.md @@ -32,7 +32,93 @@ Documentation for other releases can be found at -# Hunting flaky tests in Kubernetes +# Flaky tests + +Any test that fails occasionally is "flaky". Since our merges only proceed when +all tests are green, and we have a number of different CI systems running the +tests in various combinations, even a small percentage of flakes results in a +lot of pain for people waiting for their PRs to merge. + +Therefore, it's very important that we write tests defensively. Situations that +"almost never happen" happen with some regularity when run thousands of times in +resource-constrained environments. Since flakes can often be quite hard to +reproduce while still being common enough to block merges occasionally, it's +additionally important that the test logs be useful for narrowing down exactly +what caused the failure. + +Note that flakes can occur in unit tests, integration tests, or end-to-end +tests, but probably occur most commonly in end-to-end tests. + +## Filing issues for flaky tests + +Because flakes may be rare, it's very important that all relevant logs be +discoverable from the issue. + +1. Search for the test name. If you find an open issue and you're 90% sure the + flake is exactly the same, add a comment instead of making a new issue. +2. If you make a new issue, you should title it with the test name, prefixed by + "e2e/unit/integration flake:" (whichever is appropriate) +3. 
Reference any old issues you found in step one. +4. Paste, in block quotes, the entire log of the individual failing test, not + just the failure line. +5. Link to durable storage with the rest of the logs. This means (for all the + tests that Google runs) the GCS link is mandatory! The Jenkins test result + link is nice but strictly optional: not only does it expire more quickly, + it's not accessible to non-Googlers. + +## Expectations when a flaky test is assigned to you + +Note that we won't randomly assign these issues to you unless you've opted in or +you're part of a group that has opted in. We are more than happy to accept help +from anyone in fixing these, but due to the severity of the problem when merges +are blocked, we need reasonably quick turn-around time on test flakes. Therefore +we have the following guidelines: + +1. If a flaky test is assigned to you, it's more important than anything else + you're doing unless you can get a special dispensation (in which case it will + be reassigned). If you have too many flaky tests assigned to you, or you + have such a dispensation, then it's *still* your responsibility to find new + owners (this may just mean giving stuff back to the relevant Team or SIG Lead). +2. You should make a reasonable effort to reproduce it. Somewhere between an + hour and half a day of concentrated effort is "reasonable". It is perfectly + reasonable to ask for help! +3. If you can reproduce it (or it's obvious from the logs what happened), you + should then be able to fix it, or in the case where someone is clearly more + qualified to fix it, reassign it with very clear instructions. +4. If you can't reproduce it: __don't just close it!__ Every time a flake comes + back, at least 2 hours of merge time is wasted. So we need to make monotonic + progress towards narrowing it down every time a flake occurs. If you can't + figure it out from the logs, add log messages that would have helped you figure + it out. 
+ +# Reproducing unit test flakes + +Try the [stress command](https://godoc.org/golang.org/x/tools/cmd/stress). + +First, install it: + +``` +$ go install golang.org/x/tools/cmd/stress +``` + +Then build your test binary: + +``` +$ godep go test -c -race +``` + +Then run it under stress: + +``` +$ stress ./package.test -test.run=FlakyTest +``` + +It runs the command repeatedly and writes output to `/tmp/gostress-*` files when it fails. +It periodically reports run counts. Be careful with tests that use the +`net/http/httptest` package; they could exhaust the available ports on your +system! + +# Hunting flaky unit tests in Kubernetes Sometimes unit tests are flaky. This means that due to (usually) race conditions, they will occasionally fail, even though most of the time they pass. -- cgit v1.2.3 From 1256768871a22dbb88d8111c690441ec6b09b0db Mon Sep 17 00:00:00 2001 From: Chao Xu Date: Sat, 9 Jan 2016 17:03:39 -0800 Subject: linkchecker tool now visits the URL to determine if it's valid --- how-to-doc.md | 4 ---- 1 file changed, 4 deletions(-) diff --git a/how-to-doc.md b/how-to-doc.md index 7f1d30ba..2c508611 100644 --- a/how-to-doc.md +++ b/how-to-doc.md @@ -18,10 +18,6 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. - -The latest 1.1.x release of this document can be found -[here](http://releases.k8s.io/release-1.1/docs/devel/how-to-doc.md). - Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). -- cgit v1.2.3 From 52c6c3c1ada6aaff5a3d37b5cff2bec4999c5c4c Mon Sep 17 00:00:00 2001 From: Clayton Coleman Date: Tue, 12 Jan 2016 21:10:38 -0500 Subject: Add benchmarks for watch over websocket and http ... 
and a quick doc on how to run them ``` $ godep go test ./pkg/apiserver -benchmem -run=XXX -bench=BenchmarkWatch PASS BenchmarkWatchHTTP-8 20000 95669 ns/op 15053 B/op 196 allocs/op BenchmarkWatchWebsocket-8 10000 102871 ns/op 18430 B/op 204 allocs/op ``` --- development.md | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/development.md b/development.md index 95dccaa9..06aa870a 100644 --- a/development.md +++ b/development.md @@ -374,6 +374,15 @@ See [conformance-test.sh](http://releases.k8s.io/HEAD/hack/conformance-test.sh). [Instructions here](flaky-tests.md) +## Benchmarking + +To run benchmark tests, you'll typically use something like: + + $ godep go test ./pkg/apiserver -benchmem -run=XXX -bench=BenchmarkWatch + +The `-run=XXX` prevents normal unit tests from running, while `-bench` is a regexp for selecting which benchmarks to run. +See `go test -h` for more instructions on generating profiles from benchmarks. + ## Regenerating the CLI documentation ```sh -- cgit v1.2.3 From 20a715d36fdb09d7686c8e7e8f9432e7a1784956 Mon Sep 17 00:00:00 2001 From: Janet Kuo Date: Thu, 28 Jan 2016 19:22:16 -0800 Subject: Update kubectl convention about title column --- kubectl-conventions.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kubectl-conventions.md b/kubectl-conventions.md index a3a7b6f6..126fd71a 100644 --- a/kubectl-conventions.md +++ b/kubectl-conventions.md @@ -110,7 +110,7 @@ Updated: 8/27/2015 * However, affordances are made for simple parsing of `get` output * Only errors should be directed to stderr * `get` commands should output one row per resource, and one resource per row - * Column titles and values should not contain spaces in order to facilitate commands that break lines into fields: cut, awk, etc. + * Column titles and values should not contain spaces in order to facilitate commands that break lines into fields: cut, awk, etc. Instead, use `-` as the word separator. 
* By default, `get` output should fit within about 80 columns + * Eventually we could perhaps auto-detect width + * `-o wide` may be used to display additional columns -- cgit v1.2.3 From 7816368e07cbb396873223947ae5b9d7d26c96ff Mon Sep 17 00:00:00 2001 From: Paul Morie Date: Fri, 29 Jan 2016 00:00:34 -0500 Subject: Expand coding convention information on third party code --- coding-conventions.md | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/coding-conventions.md b/coding-conventions.md index 8b264395..e0a1e146 100644 --- a/coding-conventions.md +++ b/coding-conventions.md @@ -77,8 +77,10 @@ Directory and file conventions - Examples should also illustrate [best practices for configuration and using the system](../user-guide/config-best-practices.md) - Third-party code - - Third-party Go code is managed using Godeps - - Other third-party code belongs in /third_party + - Go code for normal third-party dependencies is managed using [Godeps](https://github.com/tools/godep) + - Other third-party code belongs in `/third_party` + - Forked third-party Go code goes in `/third_party/forked` + - Forked _golang stdlib_ code goes in `/third_party/golang` - Third-party code must include licenses - This includes modified third-party code and excerpts, as well -- cgit v1.2.3 From 4ba595e7f6ae4394f5d1bd4ae3be0f86c5881191 Mon Sep 17 00:00:00 2001 From: Daniel Smith Date: Fri, 29 Jan 2016 11:47:33 -0800 Subject: add instructions to ease follow-up --- flaky-tests.md | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/flaky-tests.md b/flaky-tests.md index 4047a7f9..ce838915 100644 --- a/flaky-tests.md +++ b/flaky-tests.md @@ -58,7 +58,10 @@ discoverable from the issue. flake is exactly the same, add a comment instead of making a new issue. 2. If you make a new issue, you should title it with the test name, prefixed by "e2e/unit/integration flake:" (whichever is appropriate) -3. Reference any old issues you found in step one. +3. 
Reference any old issues you found in step one. Also, make a comment in the + old issue referencing your new issue, because people monitoring only their + email do not see the backlinks github adds. Alternatively, tag the person or + people who most recently worked on it. 4. Paste, in block quotes, the entire log of the individual failing test, not just the failure line. 5. Link to durable storage with the rest of the logs. This means (for all the -- cgit v1.2.3 From 7b2a9050f3fa5b47949aeb2b3d2f1e4afc578602 Mon Sep 17 00:00:00 2001 From: Aaron Crickenberger Date: Fri, 29 Jan 2016 12:14:55 -0800 Subject: Copy-paste on-call docs out of wiki Changed links to docs/wiki where appropriate. No content changes aside from explicitly calling out FAQ's as living in the wiki. Ran `hack/update-generated-docs.sh` --- README.md | 2 + on-call-build-cop.md | 105 ++++++++++++++++++++++++++++++++++++++++++++++++ on-call-rotations.md | 52 ++++++++++++++++++++++++ on-call-user-support.md | 83 ++++++++++++++++++++++++++++++++++++++ pull-requests.md | 2 +- 5 files changed, 243 insertions(+), 1 deletion(-) create mode 100644 on-call-build-cop.md create mode 100644 on-call-rotations.md create mode 100644 on-call-user-support.md diff --git a/README.md b/README.md index 8a01a8d6..4a049888 100644 --- a/README.md +++ b/README.md @@ -48,6 +48,8 @@ Guide](../admin/README.md). * **Pull Request Process** ([pull-requests.md](pull-requests.md)): When and why pull requests are closed. +* **Kubernetes On-Call Rotations** ([on-call-rotations.md](on-call-rotations.md)): Descriptions of on-call rotations for build and end-user support + * **Faster PR reviews** ([faster_reviews.md](faster_reviews.md)): How to get faster PR reviews. * **Getting Recent Builds** ([getting-builds.md](getting-builds.md)): How to get recent builds including the latest builds that pass CI. 
diff --git a/on-call-build-cop.md b/on-call-build-cop.md new file mode 100644 index 00000000..7530963e --- /dev/null +++ b/on-call-build-cop.md @@ -0,0 +1,105 @@ + + + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

PLEASE NOTE: This document applies to the HEAD of the source tree

+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + + + +Kubernetes "Github and Build-cop" Rotation +========================================== + +Prerequisites +-------------- + +* Ensure you have [write access to http://github.com/kubernetes/kubernetes](https://github.com/orgs/kubernetes/teams/kubernetes-maintainers) + * Test your admin access by e.g. adding a label to an issue. + +Traffic sources and responsibilities +------------------------------------ + +* GitHub [https://github.com/kubernetes/kubernetes/issues](https://github.com/kubernetes/kubernetes/issues) and [https://github.com/kubernetes/kubernetes/pulls](https://github.com/kubernetes/kubernetes/pulls): Your job is to be the first responder to all new issues and PRs. If you are not equipped to do this (which is fine!), it is your job to seek guidance! + * Support issues should be closed and redirected to Stackoverflow (see example response below). + * All incoming issues should be tagged with a team label (team/{api,ux,control-plane,node,cluster,csi,redhat,mesosphere,gke,release-infra,test-infra,none}); for issues that overlap teams, you can use multiple team labels + * There is a related concept of "Github teams" which allow you to @ mention a set of people; feel free to @ mention a Github team if you wish, but this is not a substitute for adding a team/* label, which is required + * [Google teams](https://github.com/orgs/kubernetes/teams?utf8=%E2%9C%93&query=goog-) + * [Redhat teams](https://github.com/orgs/kubernetes/teams?utf8=%E2%9C%93&query=rh-) + * [SIGs](https://github.com/orgs/kubernetes/teams?utf8=%E2%9C%93&query=sig-) + * If the issue is reporting broken builds, broken e2e tests, or other obvious P0 issues, label the issue with priority/P0 and assign it to someone. 
This is the only situation in which you should add a priority/* label + * non-P0 issues do not need a reviewer assigned initially + * Assign any issues related to Vagrant to @derekwaynecarr (and @mention him in the issue) + * All incoming PRs should be assigned a reviewer. + * unless it is a WIP (Work in Progress), RFC (Request for Comments), or design proposal. + * An auto-assigner [should do this for you](https://github.com/kubernetes/kubernetes/pull/12365/files) + * When in doubt, choose a TL or team maintainer of the most relevant team; they can delegate + * Keep in mind that you can @ mention people in an issue/PR to bring it to their attention without assigning it to them. You can also @ mention github teams, such as @kubernetes/goog-ux or @kubernetes/kubectl + * If you need help triaging an issue or PR, consult with (or assign it to) @brendandburns, @thockin, @bgrant0607, @quinton-hoole, @davidopp, @dchen1107, @lavalamp (all U.S. Pacific Time) or @fgrzadkowski (Central European Time). + * At the beginning of your shift, please add team/* labels to any issues that have fallen through the cracks and don't have one. Likewise, be fair to the next person in rotation: try to ensure that every issue that gets filed while you are on duty is handled. The Github query to find issues with no team/* label is [here](https://github.com/kubernetes/kubernetes/issues?utf8=%E2%9C%93&q=is%3Aopen+is%3Aissue+-label%3Ateam%2Fcontrol-plane+-label%3Ateam%2Fmesosphere+-label%3Ateam%2Fredhat+-label%3Ateam%2Frelease-infra+-label%3Ateam%2Fnone+-label%3Ateam%2Fnode+-label%3Ateam%2Fcluster+-label%3Ateam%2Fux+-label%3Ateam%2Fcsi+-label%3Ateam%2Fapi+-label%3Ateam%2Ftest-infra+). + +Example response for support issues: + + Please re-post your question to [stackoverflow](http://stackoverflow.com/questions/tagged/kubernetes). 
+ + We are trying to consolidate the channels to which questions for help/support are posted so that we can improve our efficiency in responding to your requests, and to make it easier for you to find answers to frequently asked questions and how to address common use cases. + + We regularly see messages posted in multiple forums, with the full response thread only in one place or, worse, spread across multiple forums. Also, the large volume of support issues on github is making it difficult for us to use issues to identify real bugs. + + The Kubernetes team scans stackoverflow on a regular basis, and will try to ensure your questions don't go unanswered. + + Before posting a new question, please search stackoverflow for answers to similar questions, and also familiarize yourself with: + * [the user guide](http://kubernetes.io/v1.0/) + * [the troubleshooting guide](http://kubernetes.io/v1.0/docs/troubleshooting.html) + + Again, thanks for using Kubernetes. + + The Kubernetes Team + +Build-copping +------------- + +* The [merge-bot submit queue](http://submit-queue.k8s.io/) ([source](https://github.com/kubernetes/contrib/tree/master/submit-queue)) should auto-merge all eligible PRs for you once they've passed all the relevant checks mentioned below and all [critical e2e tests](https://goto.google.com/k8s-test/view/Critical%20Builds/) are passing. If the merge-bot has been disabled for some reason, or tests are failing, you might need to do some manual merging to get things back on track. +* Once a day or so, look at the [flaky test builds](https://goto.google.com/k8s-test/view/Flaky/); if they are timing out, clusters are failing to start, or tests are consistently failing (instead of just flaking), file an issue to get things back on track. +* Jobs that are not in [critical e2e tests](https://goto.google.com/k8s-test/view/Critical%20Builds/) or [flaky test builds](https://goto.google.com/k8s-test/view/Flaky/) are not your responsibility to monitor. 
The `Test owner:` in the job description will be automatically emailed if the job is failing. +* If you are a weekday oncall, ensure that PRs conforming to the following pre-requisites are being merged at a reasonable rate: + * [Have been LGTMd](https://github.com/kubernetes/kubernetes/labels/lgtm) + * Pass Travis and Shippable. + * Author has signed CLA if applicable. +* If you are a weekend oncall, [never merge PRs manually](collab.md); instead, add the label "lgtm" to the PRs once they have been LGTMd and passed Travis and Shippable; this will cause merge-bot to merge them automatically (or make them easy to find by the next oncall, who will merge them). +* When the build is broken, roll back the responsible PRs ASAP. +* When E2E tests are unstable, a "merge freeze" may be instituted. During a merge freeze: + * Oncall should slowly merge LGTMd changes throughout the day while monitoring E2E to ensure stability. + * Ideally the E2E run should be green, but some tests are flaky and can fail randomly (not as a result of a particular change). + * If a large number of tests fail, or tests that normally pass fail, that is an indication that one or more of the PR(s) in that build might be problematic (and should be reverted). + * Use the Test Results Analyzer to see individual test history over time. +* Flake mitigation + * Tests that flake (fail a small percentage of the time) need an issue filed against them. Please read [this](https://github.com/kubernetes/kubernetes/blob/doc-flaky-test/docs/devel/flaky-tests.md#filing-issues-for-flaky-tests); the build cop is expected to file issues for any flaky tests they encounter. + * It's reasonable to manually merge PRs that fix a flake or otherwise mitigate it. + +Contact information +------------------- + +[@k8s-oncall](https://github.com/k8s-oncall) will reach the current person on call. 
+ + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/on-call-build-cop.md?pixel)]() + diff --git a/on-call-rotations.md b/on-call-rotations.md new file mode 100644 index 00000000..9544db51 --- /dev/null +++ b/on-call-rotations.md @@ -0,0 +1,52 @@ + + + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

PLEASE NOTE: This document applies to the HEAD of the source tree

+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + + + +Kubernetes On-Call Rotations +============================ + +Kubernetes "first responder" rotations +-------------------------------------- + +Kubernetes has generated a lot of public traffic: email, pull-requests, bugs, etc. So much traffic that it's becoming impossible to keep up with it all! This is a fantastic problem to have. In order to be sure that SOMEONE, but not EVERYONE on the team is paying attention to public traffic, we have instituted two "first responder" rotations, listed below. Please read this page before proceeding to the pages linked below, which are specific to each rotation. + +Please also read our [notes on OSS collaboration](collab.md), particularly the bits about hours. Specifically, each rotation is expected to be active primarily during work hours, less so off hours. + +During regular workday work hours of your shift, your primary responsibility is to monitor the traffic sources specific to your rotation. You can check traffic in the evenings if you feel so inclined, but it is not expected to be as highly focused as work hours. For weekends, you should check traffic very occasionally (e.g. once or twice a day). Again, it is not expected to be as highly focused as workdays. It is assumed that over time, everyone will get weekday and weekend shifts, so the workload will balance out. + +If you cannot serve your shift, and you know this ahead of time, it is your responsibility to find someone to cover and to change the rotation. If you have an emergency, your responsibilities fall on the primary of the other rotation, who acts as your secondary. If you need help covering all of the tasks, reach out to partners with on-call rotations (e.g., [Redhat](https://github.com/orgs/kubernetes/teams/rh-oncall)). 
+ +If you are not on duty you DO NOT need to do these things. You are free to focus on "real work". + +Note that Kubernetes will occasionally enter code slush/freeze, prior to milestones. When it does, there might be changes in the instructions (assigning milestones, for instance). + +* [Github and Build Cop Rotation](on-call-build-cop.md) +* [User Support Rotation](on-call-user-support.md) + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/on-call-rotations.md?pixel)]() + diff --git a/on-call-user-support.md b/on-call-user-support.md new file mode 100644 index 00000000..ceea9c76 --- /dev/null +++ b/on-call-user-support.md @@ -0,0 +1,83 @@ + + + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

PLEASE NOTE: This document applies to the HEAD of the source tree

+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + + + +Kubernetes "User Support" Rotation +================================== + +Traffic sources and responsibilities +------------------------------------ + +* [StackOverflow](http://stackoverflow.com/questions/tagged/kubernetes) and [ServerFault](http://serverfault.com/questions/tagged/google-kubernetes): Respond to any thread that has no responses and is more than 6 hours old (over time we will lengthen this timeout to allow community responses). If you are not equipped to respond, it is your job to redirect to someone who can. + * [Query for unanswered Kubernetes StackOverflow questions](http://stackoverflow.com/search?q=%5Bkubernetes%5D+answers%3A0) + * [Query for unanswered Kubernetes ServerFault questions](http://serverfault.com/questions/tagged/google-kubernetes?sort=unanswered&pageSize=15) + * Direct poorly formulated questions to [stackoverflow's tips about how to ask](http://stackoverflow.com/help/how-to-ask) + * Direct off-topic questions to [stackoverflow's policy](http://stackoverflow.com/help/on-topic) +* [Slack](https://kubernetes.slack.com) ([registration](http://slack.k8s.io)): Your job is to be on Slack, watching for questions and answering or redirecting as needed. Also check out the [Slack Archive](http://kubernetes.slackarchive.io/). +* [Email/Groups](https://groups.google.com/forum/#!forum/google-containers): Respond to any thread that has no responses and is more than 6 hours old (over time we will lengthen this timeout to allow community responses). If you are not equipped to respond, it is your job to redirect to someone who can. +* [Legacy] [IRC](irc://irc.freenode.net/#google-containers) (irc.freenode.net #google-containers): watch IRC for questions and try to redirect users to Slack. 
Also check out the [IRC logs](https://botbot.me/freenode/google-containers/). + +In general, try to direct support questions to: + +1. Documentation, such as the [user guide](../user-guide/README.md) and [troubleshooting guide](../troubleshooting.md) +2. Stackoverflow + +If you see questions on a forum other than Stackoverflow, try to redirect them to Stackoverflow. Example response: + + Please re-post your question to [stackoverflow](http://stackoverflow.com/questions/tagged/kubernetes). + + We are trying to consolidate the channels to which questions for help/support are posted so that we can improve our efficiency in responding to your requests, and to make it easier for you to find answers to frequently asked questions and how to address common use cases. + + We regularly see messages posted in multiple forums, with the full response thread only in one place or, worse, spread across multiple forums. Also, the large volume of support issues on github is making it difficult for us to use issues to identify real bugs. + + The Kubernetes team scans stackoverflow on a regular basis, and will try to ensure your questions don't go unanswered. + + Before posting a new question, please search stackoverflow for answers to similar questions, and also familiarize yourself with: + * [the user guide](http://kubernetes.io/v1.1/) + * [the troubleshooting guide](http://kubernetes.io/v1.1/docs/troubleshooting.html) + + Again, thanks for using Kubernetes. + + The Kubernetes Team + +If you answer a question (in any of the above forums) that you think might be useful for someone else in the future, *please add it to one of the FAQs in the wiki*: +* [User FAQ](https://github.com/kubernetes/kubernetes/wiki/User-FAQ) +* [Developer FAQ](https://github.com/kubernetes/kubernetes/wiki/Developer-FAQ) +* [Debugging FAQ](https://github.com/kubernetes/kubernetes/wiki/Debugging-FAQ). + +Getting it into the FAQ is more important than polish. 
Please indicate the date it was added, so people can judge the likelihood that it is out-of-date (and please correct any FAQ entries that you see contain out-of-date information). + +Contact information +------------------- + +[@k8s-support-oncall](https://github.com/k8s-support-oncall) will reach the current person on call. + + + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/on-call-user-support.md?pixel)]() + diff --git a/pull-requests.md b/pull-requests.md index eaffce23..5394d574 100644 --- a/pull-requests.md +++ b/pull-requests.md @@ -53,7 +53,7 @@ Life of a Pull Request Unless in the last few weeks of a milestone when we need to reduce churn and stabilize, we aim to be always accepting pull requests. -Either the [on call](https://github.com/kubernetes/kubernetes/wiki/Kubernetes-on-call-rotations) manually or the [github "munger"](https://github.com/kubernetes/contrib/tree/master/mungegithub) submit-queue plugin automatically will manage merging PRs. +Either the [on call](on-call-rotations.md) manually or the [github "munger"](https://github.com/kubernetes/contrib/tree/master/mungegithub) submit-queue plugin automatically will manage merging PRs. There are several requirements for the submit-queue to work: * Author must have signed CLA ("cla: yes" label added to PR) -- cgit v1.2.3 From 965f504b7419177a82e140bf4485857ed3d7e187 Mon Sep 17 00:00:00 2001 From: Aaron Crickenberger Date: Fri, 29 Jan 2016 12:19:46 -0800 Subject: Link how-to-doc.md in devel/README.md --- README.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/README.md b/README.md index 4a049888..4128e001 100644 --- a/README.md +++ b/README.md @@ -75,6 +75,9 @@ Guide](../admin/README.md). * **Coding Conventions** ([coding-conventions.md](coding-conventions.md)): Coding style advice for contributors. +* **Document Conventions** ([how-to-doc.md](how-to-doc.md)) + Document style advice for contributors. 
+
 * **Running a cluster locally** ([running-locally.md](running-locally.md)): A fast and lightweight local cluster deployment for development.
--
cgit v1.2.3


From dae3fa25e4f49e3a3f67341edb0d565faf69f7af Mon Sep 17 00:00:00 2001
From: Prashanth Balasubramanian
Date: Wed, 25 Nov 2015 10:50:44 -0800
Subject: [wip] CI testing guidelines

---
 testing.md | 66 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 66 insertions(+)
 create mode 100644 testing.md

diff --git a/testing.md b/testing.md
new file mode 100644
index 00000000..ed51dcc6
--- /dev/null
+++ b/testing.md
@@ -0,0 +1,66 @@
+
+
+
+
+WARNING
+WARNING
+WARNING
+WARNING
+WARNING
+
+

PLEASE NOTE: This document applies to the HEAD of the source tree

+
+If you are using a released version of Kubernetes, you should
+refer to the docs that go with that version.
+
+
+The latest release of this document can be found
+[here](http://releases.k8s.io/release-1.1/docs/devel/testing.md).
+
+Documentation for other releases can be found at
+[releases.k8s.io](http://releases.k8s.io).
+
+--
+
+
+
+
+Kubernetes Commit Queue Testing
+===============================
+
+A quick overview of how we add, remove and recycle tests from CI.
+
+## What is CI?
+
+Throughout this document we will refer to CI as any suite of e2e tests that can potentially hold up the submit queue; this means Kubernetes PRs must pass these tests prior to getting merged.
+
+## Adding a test to CI
+
+When first adding a test it should *not* go straight into CI, because failures block ordinary development. A test should only be added to CI after it has been running in some non-CI suite long enough to establish a track record showing that the test does not fail when run against *working* software. A suite named `flaky` exists, and can be overloaded to mean `experimental` and used for this reason (can it really?). In addition to this track record, consider the following as requirements:
+* The test must be short (20m?)
+* Failures must indicate that the product is unfit (TODO: establish a firmer bar, what I'm trying to say is testing random controller X in CI doesn't help anyone, but maybe it does if controller X is a cluster addon, or our largest customer wants X, or what?)
+* Failures must reliably indicate a bug in the product, not a bug in the test
+
+(TODO: is there a parallelism requirement here?)
+
+## Moving a test out of CI
+
+Do *not* move a test to flaky as soon as it starts failing just to clear up the submit queue; this risks introducing more bugs and compounding the problem even further (TODO: or do this? but why). The build cop can use their best judgement to call a test `flaky`.
This means it fails for presumably random reasons, once in X runs (TODO: is X == 0?). Move flaky tests out of CI, create a P0/1 bug and try to triage it to the right person. Adding the `kind/flake` label on github will grab the attention of the grumpy CI shamer bot, which will include the bug in its daily report.
+
+If your test got moved to flaky, it must demonstrate the run of greens required for getting added to CI once again (or nah?).
+
+## Non CI channels for testing
+
+If you want to test against Kubernetes but your test doesn't meet the above requirements, peruse [this list](../../hack/jenkins/e2e.sh) and add your test in whatever makes sense (TODO: What about release-lists, shouldn't they mirror other lists?).
+
+
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/testing.md?pixel)]()
+
--
cgit v1.2.3


From 8326bbc4f6ba090207364a41bfa2b1f0feca466f Mon Sep 17 00:00:00 2001
From: Isaac Hollander McCreery
Date: Fri, 29 Jan 2016 16:20:53 -0800
Subject: CI testing guidelines redux

---
 e2e-tests.md | 45 ++++++++++++++++++++++++++++++-----
 testing.md   | 66 ------------------------------------------------------------
 2 files changed, 40 insertions(+), 71 deletions(-)
 delete mode 100644 testing.md

diff --git a/e2e-tests.md b/e2e-tests.md
index 388e25f0..63ebd16e 100644
--- a/e2e-tests.md
+++ b/e2e-tests.md
@@ -112,19 +112,54 @@ We are working on implementing clearer partitioning of our e2e tests to make run
 - `[Disruptive]`: If a test restarts components that might cause other tests to fail or break the cluster completely, it is labeled `[Disruptive]`. Any `[Disruptive]` test is also assumed to qualify for the `[Serial]` label, but need not be labeled as both. These tests are not run against soak clusters to avoid restarting components.
 - `[Flaky]`: If a test is found to be flaky, it receives the `[Flaky]` label until it is fixed.
A `[Flaky]` label should be accompanied with a reference to the issue for de-flaking the test, because while a test remains labeled `[Flaky]`, it is not monitored closely in CI. `[Flaky]` tests are by default not run, unless a `focus` or `skip` argument is explicitly given. - `[Skipped]`: `[Skipped]` is a legacy label that we're phasing out. If a test is marked `[Skipped]`, there should be an issue open to label it properly. `[Skipped]` tests are by default not run, unless a `focus` or `skip` argument is explicitly given. -- `[Feature:...]`: If a test has non-default requirements to run or targets some non-core functionality, and thus should not be run as part of the standard suite, it receives a `[Feature:...]` label, e.g. `[Feature:Performance]` or `[Feature:Ingress]`. `[Feature:...]` tests are not run in our core suites, instead running in custom suites. There are a few use-cases for `[Feature:...]` tests: - - If a feature is experimental or alpha and is not enabled by default due to being incomplete or potentially subject to breaking changes, it should *not* block the merge-queue, and thus should run in some separate test suites owned by the feature owner(s). - - If a feature is in beta or GA, it *should* block the merge-queue. In moving from experimental to beta or GA, tests that are expected to pass by default should simply remove the `[Feature:...]` label, and will be incorporated into our core suites. If tests are not expected to pass by default, (e.g. they require a special environment such as added quota,) they should remain with the `[Feature:...]` label, and the suites that run them should be incorporated into our merge-queue, owned by the Build Cop. +- `[Feature:.+]`: If a test has non-default requirements to run or targets some non-core functionality, and thus should not be run as part of the standard suite, it receives a `[Feature:.+]` label, e.g. `[Feature:Performance]` or `[Feature:Ingress]`. 
`[Feature:.+]` tests are not run in our core suites, instead running in custom suites. There are a few use-cases for `[Feature:.+]` tests:
+  - If a feature is experimental or alpha and is not enabled by default due to being incomplete or potentially subject to breaking changes, it does *not* block the merge-queue, and thus should run in some separate test suites owned by the feature owner(s) (see #continuous_integration below).
 
 Finally, `[Conformance]` tests are tests we expect to pass on **any** Kubernetes cluster. The `[Conformance]` label does not supersede any other labels. `[Conformance]` test policies are a work-in-progress; see #18162.
 
-## Adding a New Test
+## Continuous Integration
+
+A quick overview of how we run e2e CI on Kubernetes.
+
+### What is CI?
+
+We run a battery of `e2e` tests against `HEAD` of the master branch on a continuous basis, and block merges via the [submit queue](http://submit-queue.k8s.io/) on a subset of those tests if they fail (the subset is defined in the [munger config](https://github.com/kubernetes/contrib/blob/master/mungegithub/mungers/submit-queue.go) via the `jenkins-jobs` flag; note we also block on `kubernetes-build` and `kubernetes-test-go` jobs for build, unit, and integration tests).
+
+CI results can be found at [ci-test.k8s.io](ci-test.k8s.io), e.g. [ci-test.k8s.io/kubernetes-e2e-gce/10594](ci-test.k8s.io/kubernetes-e2e-gce/10594).
+
+### What runs in CI?
+
+We run all default tests (those that aren't marked `[Flaky]` or `[Feature:.+]`) against GCE and GKE. To minimize the time from regression-to-green-run, we partition tests across different jobs:
+
+- `kubernetes-<suffix>` runs all non-`[Slow]`, non-`[Serial]`, non-`[Disruptive]`, non-`[Flaky]`, non-`[Feature:.+]` tests in parallel.
+- `kubernetes-<suffix>-slow` runs all `[Slow]`, non-`[Serial]`, non-`[Disruptive]`, non-`[Flaky]`, non-`[Feature:.+]` tests in parallel.
+- `kubernetes-<suffix>-serial` runs all `[Serial]` and `[Disruptive]`, non-`[Flaky]`, non-`[Feature:.+]` tests in serial.
+
+We also run non-default tests if the tests exercise general-availability ("GA") features that require a special environment to run in, e.g. `kubernetes-e2e-gce-scalability` and `kubernetes-kubemark-gce`, which test for Kubernetes performance.
+
+#### Non-default tests
+
+Many `[Feature:.+]` tests are not run in CI. These tests are for features that are experimental (often in the `experimental` API), and aren't enabled by default.
+
+### Adding a test to CI
 
 As mentioned above, prior to adding a new test, it is a good idea to perform a `-ginkgo.dryRun=true` on the system, in order to see if a behavior is already being tested, or to determine if it may be possible to augment an existing set of tests for a specific use case.
 
 If a behavior does not currently have coverage and a developer wishes to add a new e2e test, navigate to the ./test/e2e directory and create a new test using the existing suite as a guide.
 
-**TODO:** Create a self-documented example which has been disabled, but can be copied to create new tests and outlines the capabilities and libraries used.
+TODO(#20357): Create a self-documented example which has been disabled, but can be copied to create new tests and outlines the capabilities and libraries used.
+
+When writing a test, consult #kinds_of_tests above to determine how your test should be marked (e.g. `[Slow]`, `[Serial]`; remember, by default we assume a test can run in parallel with other tests!).
+
+When first adding a test it should *not* go straight into CI, because failures block ordinary development. A test should only be added to CI after it has been running in some non-CI suite long enough to establish a track record showing that the test does not fail when run against *working* software.
+
+Generally, a feature starts as `experimental`, and will be run in some suite owned by the team developing the feature.
If a feature is in beta or GA, it *should* block the merge-queue. In moving from experimental to beta or GA, tests that are expected to pass by default should simply remove the `[Feature:.+]` label, and will be incorporated into our core suites. If tests are not expected to pass by default, (e.g. they require a special environment such as added quota,) they should remain with the `[Feature:.+]` label, and the suites that run them should be incorporated into the [munger config](https://github.com/kubernetes/contrib/blob/master/mungegithub/mungers/submit-queue.go) via the `jenkins-jobs` flag. + +Occasionally, we'll want to add tests to better exercise features that are already GA. These tests also shouldn't go straight to CI. They should begin by being marked as `[Flaky]` to be run outside of CI, and once a track-record for them is established, they may be promoted out of `[Flaky]`. + +### Moving a test out of CI + +TODO(ihmccreery) do we want to keep the `[Flaky]` label at all? ## Performance Evaluation diff --git a/testing.md b/testing.md deleted file mode 100644 index ed51dcc6..00000000 --- a/testing.md +++ /dev/null @@ -1,66 +0,0 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

PLEASE NOTE: This document applies to the HEAD of the source tree

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.1/docs/devel/testing.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - -Kubernetes Commit Queue Testing -=============================== - -A quick overview of how we add, remove and recycle tests from CI. - -## What is CI? - -Throughout this document we will refer to CI as any suite of e2e tests that can potentially hold up the submit queue, this means Kubernetes PRs must pass these tests prior to getting merged. - -## Adding a test to CI - -When first adding a test it should *not* go straight into CI, because failures block ordinary development. A test should only be added to CI after is has been running in some non-CI suite long enought to establish a track record showing that the test does not fail when run against *working* software. A suite named `flaky` exists, and can be overloaded to mean `experimental` and used for this reason (can it really?). In addition to this track record, consider the following as requirements: -* The test must be short (20m?) -* Failures must indicate that the product is unfit (TODO: establish a firmer bar, what I'm trying to say is testing random controller X in CI doesn't help anyone, but maybe it does if controller X is a cluster addon, or our largest customer wants X, or what?) -* Failures must reliably indicate a bug in the product, not a bug in the test - -(TODO: is there a parallelism requirement here?) - -## Moving a test out of CI - -Do *not* move a test to flaky as soon as it starts failing just to clear up the submit queue, this risks introducing more bugs and compounding the problem even further (TODO: or do this? but why). Build cop can use their better judgement to call a test `flaky`. 
This means it fails for presumably random reasons, once in X runs (TODO: is X == 0?). Move flaky tests out of CI, create a P0/1 bug and try to triage it along to the right person. Adding the `kind/flake` label on github will grab the attention of the grumpy CI shamer bot, which will include the bug in its daily report.
-
-If your test got moved to flaky, it must demonstrate the run of greens required for getting added to CI once again (or nah?).
-
-## Non CI channels for testing
-
-If you want to test against Kubernetes but your test doesn't meet the following requirements, peruse [this list](../../hack/jenkins/e2e.sh) and add your test in whatever makes sense (TODO: What about release-lists, shouldn't they mirror other lists?).
-
-
-
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/testing.md?pixel)]()
-
--
cgit v1.2.3


From d4d8eb42e09e50e41d71a698d21b7dc2ef348b0b Mon Sep 17 00:00:00 2001
From: Isaac Hollander McCreery
Date: Fri, 29 Jan 2016 16:23:59 -0800
Subject: Add docs about the PR builder

---
 e2e-tests.md | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/e2e-tests.md b/e2e-tests.md
index 63ebd16e..12915543 100644
--- a/e2e-tests.md
+++ b/e2e-tests.md
@@ -141,6 +141,10 @@ We also run non-default tests if the tests exercise general-availability ("GA")
 
 Many `[Feature:.+]` tests we don't run in CI. These tests are for features that are experimental (often in the `experimental` API), and aren't enabled by default.
 
+### The PR-builder
+
+We also run a battery of tests against every PR before we merge it. These tests are equivalent to `kubernetes-gce`: they run all non-`[Slow]`, non-`[Serial]`, non-`[Disruptive]`, non-`[Flaky]`, non-`[Feature:.+]` tests in parallel. These tests are considered "smoke tests" to give a decent signal that the PR doesn't break most functionality. Results for your PR can be found at [pr-test.k8s.io](pr-test.k8s.io), e.g. [pr-test.k8s.io/20354](pr-test.k8s.io/20354) for #20354.
+ ### Adding a test to CI As mentioned above, prior to adding a new test, it is a good idea to perform a `-ginkgo.dryRun=true` on the system, in order to see if a behavior is already being tested, or to determine if it may be possible to augment an existing set of tests for a specific use case. -- cgit v1.2.3 From d37c7d1df600ba12d7205f32685724cc4abb88b4 Mon Sep 17 00:00:00 2001 From: Paul Morie Date: Sat, 30 Jan 2016 00:06:34 -0500 Subject: Add link to e2e docs from coding conventions --- coding-conventions.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/coding-conventions.md b/coding-conventions.md index e0a1e146..6af7c40e 100644 --- a/coding-conventions.md +++ b/coding-conventions.md @@ -61,7 +61,7 @@ Code conventions Testing conventions - All new packages and most new significant functionality must come with unit tests - Table-driven tests are preferred for testing multiple scenarios/inputs; for example, see [TestNamespaceAuthorization](../../test/integration/auth_test.go) - - Significant features should come with integration (test/integration) and/or end-to-end (test/e2e) tests + - Significant features should come with integration (test/integration) and/or [end-to-end (test/e2e) tests](e2e-tests.md) - Including new kubectl commands and major features of existing commands - Unit tests must pass on OS X and Windows platforms - if you use Linux specific features, your test case must either be skipped on windows or compiled out (skipped is better when running Linux specific commands, compiled out is required when your code does not compile on Windows). 
--
cgit v1.2.3


From f9eba89426a0343fede5ce66134227372c1db371 Mon Sep 17 00:00:00 2001
From: Paul Morie
Date: Sat, 30 Jan 2016 00:13:13 -0500
Subject: Add basic doc on local cluster to dev guide

---
 development.md | 30 +++++++++++++++++++++++++++++-
 1 file changed, 29 insertions(+), 1 deletion(-)

diff --git a/development.md b/development.md
index 95dccaa9..c92ecf94 100644
--- a/development.md
+++ b/development.md
@@ -46,7 +46,7 @@ branch, but release branches of Kubernetes should not change.
 
 ## Releases and Official Builds
 
-Official releases are built in Docker containers. Details are [here](http://releases.k8s.io/HEAD/build/README.md). You can do simple builds and development with just a local Docker installation. If want to build go locally outside of docker, please continue below.
+Official releases are built in Docker containers. Details are [here](http://releases.k8s.io/HEAD/build/README.md). You can do simple builds and development with just a local Docker installation. If you want to build go code locally outside of docker, please continue below.
 
 ## Go development environment
 
@@ -358,6 +358,34 @@ go run hack/e2e.go -v -ctl='get events'
 go run hack/e2e.go -v -ctl='delete pod foobar'
 ```
 
+## Local clusters
+
+It can be much faster to iterate on a local cluster instead of a cloud-based one. To start a local cluster, you can run:
+
+```sh
+# The PATH construction is needed because PATH is one of the special-cased
+# environment variables not passed by sudo -E
+sudo PATH=$PATH hack/local-up-cluster.sh
+```
+
+This will start a single-node Kubernetes cluster that runs pods using the local docker daemon. Press Control-C to stop the cluster.
+
+### E2E tests against local clusters
+
+In order to run an E2E test against a locally running cluster, use the `e2e.test` binary built by `hack/build-go.sh`
+directly:
+
+```sh
+export KUBECONFIG=/path/to/kubeconfig
+e2e.test --host=http://127.0.0.1:8080
+```
+
+To control the tests that are run:
+
+```sh
+e2e.test --host=http://127.0.0.1:8080 --ginkgo.focus="Secrets"
+```
+
 ## Conformance testing
 
 End-to-end testing, as described above, is for [development
--
cgit v1.2.3


From a4a50507a11f6f407c554bc8f4f19bc0a268a32a Mon Sep 17 00:00:00 2001
From: Thibault Serot
Date: Wed, 30 Dec 2015 23:48:00 +0100
Subject: Enabling DNS support in local-up-cluster.sh

---
 running-locally.md | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/running-locally.md b/running-locally.md
index 257b2522..dc9120fd 100644
--- a/running-locally.md
+++ b/running-locally.md
@@ -166,7 +166,16 @@ One or more of the Kubernetes daemons might've crashed. Tail the logs of each in
 
 #### The pods fail to connect to the services by host names
 
-The local-up-cluster.sh script doesn't start a DNS service. Similar situation can be found [here](http://issue.k8s.io/6667). You can start a manually. Related documents can be found [here](../../cluster/addons/dns/#how-do-i-configure-it)
+To start the DNS service, you need to set the following variables:
+
+```sh
+KUBE_ENABLE_CLUSTER_DNS=true
+KUBE_DNS_SERVER_IP="10.0.0.10"
+KUBE_DNS_DOMAIN="cluster.local"
+KUBE_DNS_REPLICAS=1
+```
+
+To learn more about the DNS service, see [here](http://issue.k8s.io/6667).
Related documents can be found [here](../../cluster/addons/dns/#how-do-i-configure-it) -- cgit v1.2.3 From 73c482e1c56f6986d6e466ae705adcff203e7f25 Mon Sep 17 00:00:00 2001 From: Thibault Serot Date: Sat, 30 Jan 2016 19:46:56 +0100 Subject: Fixing conflict --- running-locally.md | 2 -- 1 file changed, 2 deletions(-) diff --git a/running-locally.md b/running-locally.md index dc9120fd..f5a5e85b 100644 --- a/running-locally.md +++ b/running-locally.md @@ -178,8 +178,6 @@ KUBE_DNS_REPLICAS=1 To know more on DNS service you can look [here](http://issue.k8s.io/6667). Related documents can be found [here](../../cluster/addons/dns/#how-do-i-configure-it) - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/running-locally.md?pixel)]() -- cgit v1.2.3 From 12ddea1828f62604de8839cc666ccba1fce90b34 Mon Sep 17 00:00:00 2001 From: Michail Kargakis Date: Wed, 27 Jan 2016 16:17:17 +0100 Subject: docs: kubectl command structure and generators conventions --- kubectl-conventions.md | 136 +++++++++++++++++++++++++++++++++++++++++++++---- 1 file changed, 126 insertions(+), 10 deletions(-) diff --git a/kubectl-conventions.md b/kubectl-conventions.md index 126fd71a..ba72d6fb 100644 --- a/kubectl-conventions.md +++ b/kubectl-conventions.md @@ -45,6 +45,8 @@ Updated: 8/27/2015 - [Flag conventions](#flag-conventions) - [Output conventions](#output-conventions) - [Documentation conventions](#documentation-conventions) + - [Command implementation conventions](#command-implementation-conventions) + - [Generators](#generators) @@ -59,19 +61,15 @@ Updated: 8/27/2015 ## Command conventions * Command names are all lowercase, and hyphenated if multiple words. -* kubectl VERB NOUNs for commands that apply to multiple resource types -* NOUNs may be specified as TYPE name1 name2 ... 
or TYPE/name1 TYPE/name2; TYPE is omitted when only a single type is expected -* Resource types are all lowercase, with no hyphens; both singular and plural forms are accepted +* kubectl VERB NOUNs for commands that apply to multiple resource types. +* NOUNs may be specified as `TYPE name1 name2` or `TYPE/name1 TYPE/name2` or `TYPE1,TYPE2,TYPE3/name1`; TYPE is omitted when only a single type is expected. +* Resource types are all lowercase, with no hyphens; both singular and plural forms are accepted. * NOUNs may also be specified by one or more file arguments: -f file1 -f file2 ... * Resource types may have 2- or 3-letter aliases. * Business logic should be decoupled from the command framework, so that it can be reused independently of kubectl, cobra, etc. - * Ideally, commonly needed functionality would be implemented server-side in order to avoid problems typical of "fat" clients and to make it readily available to non-Go clients -* Commands that generate resources, such as `run` or `expose`, should obey the following conventions: - * Flags should be converted to a parameter Go map or json map prior to invoking the generator - * The generator must be versioned so that users depending on a specific behavior may pin to that version, via `--generator=` - * Generation should be decoupled from creation - * `--dry-run` should output the resource that would be created, without creating it -* A command group (e.g., `kubectl config`) may be used to group related non-standard commands, such as custom generators, mutations, and computations + * Ideally, commonly needed functionality would be implemented server-side in order to avoid problems typical of "fat" clients and to make it readily available to non-Go clients. +* Commands that generate resources, such as `run` or `expose`, should obey specific conventions, see [generators](#generators). 
+* A command group (e.g., `kubectl config`) may be used to group related non-standard commands, such as custom generators, mutations, and computations. ## Flag conventions @@ -136,6 +134,124 @@ Updated: 8/27/2015 * Use "TYPE" for the particular flavor of resource type accepted by kubectl, rather than "RESOURCE" or "KIND" * Use "NAME" for resource names +## Command implementation conventions + +For every command there should be a `NewCmd` function that creates the command and returns a pointer to a `cobra.Command`, which can later be added to other parent commands to compose the structure tree. There should also be a `Config` struct with a variable to every flag and argument declared by the command (and any other variable required for the command to run). This makes tests and mocking easier. The struct ideally exposes three methods: + +* `Complete`: Completes the struct fields with values that may or may not be directly provided by the user, for example, by flags pointers, by the `args` slice, by using the Factory, etc. +* `Validate`: performs validation on the struct fields and returns appropriate errors. +* `Run`: runs the actual logic of the command, taking as assumption that the struct is complete with all required values to run, and they are valid. + +Sample command skeleton: + +```go +// MineRecommendedName is the recommended command name for kubectl mine. +const MineRecommendedName = "mine" + +// MineConfig contains all the options for running the mine cli command. +type MineConfig struct { + mineLatest bool +} + +const ( + mineLong = `Some long description +for my command.` + + mineExample = ` # Run my command's first action + $ %[1]s first + + # Run my command's second action on latest stuff + $ %[1]s second --latest` +) + +// NewCmdMine implements the kubectl mine command. 
+func NewCmdMine(parent, name string, f *cmdutil.Factory, out io.Writer) *cobra.Command {
+	opts := &MineConfig{}
+
+	cmd := &cobra.Command{
+		Use:     fmt.Sprintf("%s [--latest]", name),
+		Short:   "Run my command",
+		Long:    mineLong,
+		Example: fmt.Sprintf(mineExample, parent+" "+name),
+		Run: func(cmd *cobra.Command, args []string) {
+			if err := opts.Complete(f, cmd, args, out); err != nil {
+				cmdutil.CheckErr(err)
+			}
+			if err := opts.Validate(); err != nil {
+				cmdutil.CheckErr(cmdutil.UsageError(cmd, err.Error()))
+			}
+			if err := opts.RunMine(); err != nil {
+				cmdutil.CheckErr(err)
+			}
+		},
+	}
+
+	cmd.Flags().BoolVar(&opts.mineLatest, "latest", false, "Use latest stuff")
+	return cmd
+}
+
+// Complete completes all the required options for mine.
+func (o *MineConfig) Complete(f *cmdutil.Factory, cmd *cobra.Command, args []string, out io.Writer) error {
+	return nil
+}
+
+// Validate validates all the required options for mine.
+func (o MineConfig) Validate() error {
+	return nil
+}
+
+// RunMine implements all the necessary functionality for mine.
+func (o MineConfig) RunMine() error {
+	return nil
+}
+```
+
+The `Run` method should contain the business logic of the command and, as noted in [command conventions](#command-conventions), ideally that logic should exist server-side so any client could take advantage of it. Notice that this is not a mandatory structure and not every command is implemented this way, but this is a nice convention so try to be compliant with it. As an example, have a look at how [kubectl logs](../../pkg/kubectl/cmd/logs.go) is implemented.
+
+## Generators
+
+Generators are kubectl commands that generate resources based on a set of inputs (other resources, flags, or a combination of both).
+
+The point of generators is:
+* to enable users who use kubectl in a scripted fashion to pin to a particular behavior which may change in the future. Explicit use of a generator will always guarantee that the expected behavior stays the same.
+* to enable potential expansion of the generated resources for scenarios other than just creation, similar to how -f is supported for most general-purpose commands.
+
+Generator commands should obey the following conventions:
+* A `--generator` flag should be defined. Users can then choose between different generators, if the command supports them (for example, `kubectl run` currently supports generators for pods, jobs, replication controllers, and deployments), or between different versions of a generator so that users depending on a specific behavior may pin to that version (for example, `kubectl expose` currently supports two different versions of a service generator).
+* Generation should be decoupled from creation. A generator should implement the `kubectl.StructuredGenerator` interface and have no dependencies on cobra or the Factory. See, for example, how the first version of the namespace generator is defined:
+
+```go
+// NamespaceGeneratorV1 supports stable generation of a namespace
+type NamespaceGeneratorV1 struct {
+	// Name of namespace
+	Name string
+}
+
+// Ensure it supports the generator pattern that uses parameters specified during construction
+var _ StructuredGenerator = &NamespaceGeneratorV1{}
+
+// StructuredGenerate outputs a namespace object using the configured fields
+func (g *NamespaceGeneratorV1) StructuredGenerate() (runtime.Object, error) {
+	if err := g.validate(); err != nil {
+		return nil, err
+	}
+	namespace := &api.Namespace{}
+	namespace.Name = g.Name
+	return namespace, nil
+}
+
+// validate validates required fields are set to support structured generation
+func (g *NamespaceGeneratorV1) validate() error {
+	if len(g.Name) == 0 {
+		return fmt.Errorf("name must be specified")
+	}
+	return nil
+}
+```
+
+The generator struct (`NamespaceGeneratorV1`) holds the necessary fields for namespace generation.
It also satisfies the `kubectl.StructuredGenerator` interface by implementing the `StructuredGenerate() (runtime.Object, error)` method which configures the generated namespace that callers of the generator (`kubectl create namespace` in our case) need to create. +* `--dry-run` should output the resource that would be created, without creating it. + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/kubectl-conventions.md?pixel)]() -- cgit v1.2.3 From dfd29cb1235b2d694d860c7f26a5de9888d1041b Mon Sep 17 00:00:00 2001 From: Isaac Hollander McCreery Date: Mon, 1 Feb 2016 16:01:32 -0800 Subject: Update docs on flaky issues --- e2e-tests.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/e2e-tests.md b/e2e-tests.md index 388e25f0..8d736f25 100644 --- a/e2e-tests.md +++ b/e2e-tests.md @@ -110,7 +110,7 @@ We are working on implementing clearer partitioning of our e2e tests to make run - `[Slow]`: If a test takes more than five minutes to run (by itself or in parallel with many other tests), it is labeled `[Slow]`. This partition allows us to run almost all of our tests quickly in parallel, without waiting for the stragglers to finish. - `[Serial]`: If a test cannot be run in parallel with other tests (e.g. it takes too many resources or restarts nodes), it is labeled `[Serial]`, and should be run in serial as part of a separate suite. - `[Disruptive]`: If a test restarts components that might cause other tests to fail or break the cluster completely, it is labeled `[Disruptive]`. Any `[Disruptive]` test is also assumed to qualify for the `[Serial]` label, but need not be labeled as both. These tests are not run against soak clusters to avoid restarting components. -- `[Flaky]`: If a test is found to be flaky, it receives the `[Flaky]` label until it is fixed. 
A `[Flaky]` label should be accompanied with a reference to the issue for de-flaking the test, because while a test remains labeled `[Flaky]`, it is not monitored closely in CI. `[Flaky]` tests are by default not run, unless a `focus` or `skip` argument is explicitly given. +- `[Flaky]`: If a test is found to be flaky and we have decided that it's too hard to fix in the short term (e.g. it's going to take a full engineer-week), it receives the `[Flaky]` label until it is fixed. The `[Flaky]` label should be used very sparingly, and should be accompanied with a reference to the issue for de-flaking the test, because while a test remains labeled `[Flaky]`, it is not monitored closely in CI. `[Flaky]` tests are by default not run, unless a `focus` or `skip` argument is explicitly given. - `[Skipped]`: `[Skipped]` is a legacy label that we're phasing out. If a test is marked `[Skipped]`, there should be an issue open to label it properly. `[Skipped]` tests are by default not run, unless a `focus` or `skip` argument is explicitly given. - `[Feature:...]`: If a test has non-default requirements to run or targets some non-core functionality, and thus should not be run as part of the standard suite, it receives a `[Feature:...]` label, e.g. `[Feature:Performance]` or `[Feature:Ingress]`. `[Feature:...]` tests are not run in our core suites, instead running in custom suites. There are a few use-cases for `[Feature:...]` tests: - If a feature is experimental or alpha and is not enabled by default due to being incomplete or potentially subject to breaking changes, it should *not* block the merge-queue, and thus should run in some separate test suites owned by the feature owner(s). 
-- cgit v1.2.3 From 7f738639b25b9bc7583fc93fab7dbb3bc50d8be4 Mon Sep 17 00:00:00 2001 From: Isaac Hollander McCreery Date: Mon, 1 Feb 2016 16:24:41 -0800 Subject: Updates based on comments --- e2e-tests.md | 17 ++++++++--------- 1 file changed, 8 insertions(+), 9 deletions(-) diff --git a/e2e-tests.md b/e2e-tests.md index 12915543..54c7caef 100644 --- a/e2e-tests.md +++ b/e2e-tests.md @@ -112,8 +112,7 @@ We are working on implementing clearer partitioning of our e2e tests to make run - `[Disruptive]`: If a test restarts components that might cause other tests to fail or break the cluster completely, it is labeled `[Disruptive]`. Any `[Disruptive]` test is also assumed to qualify for the `[Serial]` label, but need not be labeled as both. These tests are not run against soak clusters to avoid restarting components. - `[Flaky]`: If a test is found to be flaky, it receives the `[Flaky]` label until it is fixed. A `[Flaky]` label should be accompanied with a reference to the issue for de-flaking the test, because while a test remains labeled `[Flaky]`, it is not monitored closely in CI. `[Flaky]` tests are by default not run, unless a `focus` or `skip` argument is explicitly given. - `[Skipped]`: `[Skipped]` is a legacy label that we're phasing out. If a test is marked `[Skipped]`, there should be an issue open to label it properly. `[Skipped]` tests are by default not run, unless a `focus` or `skip` argument is explicitly given. -- `[Feature:.+]`: If a test has non-default requirements to run or targets some non-core functionality, and thus should not be run as part of the standard suite, it receives a `[Feature:.+]` label, e.g. `[Feature:Performance]` or `[Feature:Ingress]`. `[Feature:.+]` tests are not run in our core suites, instead running in custom suites. 
There are a few use-cases for `[Feature:.+]` tests: - - If a feature is experimental or alpha and is not enabled by default due to being incomplete or potentially subject to breaking changes, it does *not* block the merge-queue, and thus should run in some separate test suites owned by the feature owner(s) (see #continuous_integration below). +- `[Feature:.+]`: If a test has non-default requirements to run or targets some non-core functionality, and thus should not be run as part of the standard suite, it receives a `[Feature:.+]` label, e.g. `[Feature:Performance]` or `[Feature:Ingress]`. `[Feature:.+]` tests are not run in our core suites, instead running in custom suites. If a feature is experimental or alpha and is not enabled by default due to being incomplete or potentially subject to breaking changes, it does *not* block the merge-queue, and thus should run in some separate test suites owned by the feature owner(s) (see #continuous_integration below). Finally, `[Conformance]` tests are tests we expect to pass on **any** Kubernetes cluster. The `[Conformance]` label does not supersede any other labels. `[Conformance]` test policies are a work-in-progress; see #18162. @@ -125,15 +124,15 @@ A quick overview of how we run e2e CI on Kubernetes. We run a battery of `e2e` tests against `HEAD` of the master branch on a continuous basis, and block merges via the [submit queue](http://submit-queue.k8s.io/) on a subset of those tests if they fail (the subset is defined in the [munger config](https://github.com/kubernetes/contrib/blob/master/mungegithub/mungers/submit-queue.go) via the `jenkins-jobs` flag; note we also block on `kubernetes-build` and `kubernetes-test-go` jobs for build and unit and integration tests). -CI results can be found at [ci-test.k8s.io](ci-test.k8s.io), e.g. [ci-test.k8s.io/kubernetes-e2e-gce/10594](ci-test.k8s.io/kubernetes-e2e-gce/10594). +CI results can be found at [ci-test.k8s.io](http://ci-test.k8s.io), e.g. 
[ci-test.k8s.io/kubernetes-e2e-gce/10594](http://ci-test.k8s.io/kubernetes-e2e-gce/10594). ### What runs in CI? We run all default tests (those that aren't marked `[Flaky]` or `[Feature:.+]`) against GCE and GKE. To minimize the time from regression-to-green-run, we partition tests across different jobs: -- `kubernetes-` runs all non-`[Slow]`, non-`[Serial]`, non-`[Disruptive]`, non-`[Flaky]`, non-`[Feature:.+]` tests in parallel. -- `kubernetes--slow` runs all `[Slow]`, non-`[Serial]`, non-`[Disruptive]`, non-`[Flaky]`, non-`[Feature:.+]` tests in parallel. -- `kubernetes--serial` runs all `[Serial]` and `[Disruptive]`, non-`[Flaky]`, non-`[Feature:.+]` tests in serial. +- `kubernetes-e2e-` runs all non-`[Slow]`, non-`[Serial]`, non-`[Disruptive]`, non-`[Flaky]`, non-`[Feature:.+]` tests in parallel. +- `kubernetes-e2e--slow` runs all `[Slow]`, non-`[Serial]`, non-`[Disruptive]`, non-`[Flaky]`, non-`[Feature:.+]` tests in parallel. +- `kubernetes-e2e--serial` runs all `[Serial]` and `[Disruptive]`, non-`[Flaky]`, non-`[Feature:.+]` tests in serial. We also run non-default tests if the tests exercise general-availability ("GA") features that require a special environment to run in, e.g. `kubernetes-e2e-gce-scalability` and `kubernetes-kubemark-gce`, which test for Kubernetes performance. @@ -143,7 +142,7 @@ Many `[Feature:.+]` tests we don't run in CI. These tests are for features that ### The PR-builder -We also run a battery of tests against every PR before we merge it. These tests are equivalent to `kubernetes-gce`: it runs all non-`[Slow]`, non-`[Serial]`, non-`[Disruptive]`, non-`[Flaky]`, non-`[Feature:.+]` tests in parallel. These tests are considered "smoke tests" to give a decent signal that the PR doesn't break most functionality. Results for you PR can be found at [pr-test.k8s.io](pr-test.k8s.io), e.g. [pr-test.k8s.io/20354](pr-test.k8s.io/20354) for #20354. +We also run a battery of tests against every PR before we merge it. 
These tests are equivalent to `kubernetes-gce`: it runs all non-`[Slow]`, non-`[Serial]`, non-`[Disruptive]`, non-`[Flaky]`, non-`[Feature:.+]` tests in parallel. These tests are considered "smoke tests" to give a decent signal that the PR doesn't break most functionality. Results for your PR can be found at [pr-test.k8s.io](http://pr-test.k8s.io), e.g. [pr-test.k8s.io/20354](http://pr-test.k8s.io/20354) for #20354. ### Adding a test to CI @@ -155,7 +154,7 @@ TODO(#20357): Create a self-documented example which has been disabled, but can When writing a test, consult #kinds_of_tests above to determine how your test should be marked, (e.g. `[Slow]`, `[Serial]`; remember, by default we assume a test can run in parallel with other tests!). -When first adding a test it should *not* go straight into CI, because failures block ordinary development. A test should only be added to CI after is has been running in some non-CI suite long enough to establish a track record showing that the test does not fail when run against *working* software. +When first adding a test it should *not* go straight into CI, because failures block ordinary development. A test should only be added to CI after it has been running in some non-CI suite long enough to establish a track record showing that the test does not fail when run against *working* software. Note also that tests running in CI are generally running on a well-loaded cluster, so must contend for resources; see above about [kinds of tests](#kinds_of_tests). Generally, a feature starts as `experimental`, and will be run in some suite owned by the team developing the feature. If a feature is in beta or GA, it *should* block the merge-queue. In moving from experimental to beta or GA, tests that are expected to pass by default should simply remove the `[Feature:.+]` label, and will be incorporated into our core suites. If tests are not expected to pass by default, (e.g.
they require a special environment such as added quota), they should remain with the `[Feature:.+]` label, and the suites that run them should be incorporated into the [munger config](https://github.com/kubernetes/contrib/blob/master/mungegithub/mungers/submit-queue.go) via the `jenkins-jobs` flag. @@ -163,7 +162,7 @@ Occasionally, we'll want to add tests to better exercise features that are alrea ### Moving a test out of CI -TODO(ihmccreery) do we want to keep the `[Flaky]` label at all? +If we have determined that a test is known-flaky and cannot be fixed in the short term, we may move it out of CI indefinitely. This move should be used sparingly, as it effectively means that we have no coverage of that test. When a test is demoted, it should be marked `[Flaky]`, with a comment accompanying the label that references an issue opened to fix the test. ## Performance Evaluation -- cgit v1.2.3 From c657c0c84c4051e5e2e64b3f0b4bd1fa51080a7d Mon Sep 17 00:00:00 2001 From: Filip Grzadkowski Date: Tue, 2 Feb 2016 13:58:47 +0100 Subject: Fix link in releasing.md Remove additional space which broke http link. --- releasing.md | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/releasing.md b/releasing.md index d43a20cd..7c5ad9d9 100644 --- a/releasing.md +++ b/releasing.md @@ -208,8 +208,7 @@ release](https://github.com/kubernetes/kubernetes/releases/new): 1. fill in the release title from the draft; 1. re-run the appropriate release notes tool(s) to pick up any changes people have made; -1. find the appropriate `kubernetes.tar.gz` in [GCS bucket](https:// -console.developers.google.com/storage/browser/kubernetes-release/release/), +1. find the appropriate `kubernetes.tar.gz` in [GCS bucket](https://console.developers.google.com/storage/browser/kubernetes-release/release/), download it, double check the hash (compare to what you had in the release notes draft), and attach it to the release; and 1. publish!
-- cgit v1.2.3 From ac22d0e274b2ee2fcd2263ad9117548bf5ccca82 Mon Sep 17 00:00:00 2001 From: Solly Ross Date: Thu, 14 Jan 2016 15:45:08 -0500 Subject: Scheduler predicate for capping node volume count For certain volume types (e.g. AWS EBS or GCE PD), a limited number of such volumes can be attached to a given node. This commit introduces a predicate which allows cluster admins to cap the maximum number of volumes matching a particular type attached to a given node. The volume type is configurable by passing a pair of filter functions, and the maximum number of such volumes is configurable to allow node admins to reserve a certain number of volumes for system use. By default, the predicate is exposed as MaxEBSVolumeCount and MaxGCEPDVolumeCount (for AWS ElasticBlockStore and GCE PersistentDisk volumes, respectively), each of which can be configured using the `KUBE_MAX_PD_VOLS` environment variable. Fixes #7835 --- scheduler_algorithm.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/scheduler_algorithm.md b/scheduler_algorithm.md index 00a812a5..786666ca 100755 --- a/scheduler_algorithm.md +++ b/scheduler_algorithm.md @@ -47,6 +47,8 @@ The purpose of filtering the nodes is to filter out the nodes that do not meet c - `PodFitsHost`: Filter out all nodes except the one specified in the PodSpec's NodeName field. - `PodSelectorMatches`: Check if the labels of the node match the labels specified in the Pod's `nodeSelector` field ([Here](../user-guide/node-selection/) is an example of how to use `nodeSelector` field). - `CheckNodeLabelPresence`: Check if all the specified labels exist on a node or not, regardless of the value.
+- `MaxEBSVolumeCount`: Ensure that the number of attached ElasticBlockStore volumes does not exceed a maximum value (by default, 39, since Amazon recommends a maximum of 40 with one of those 40 reserved for the root volume -- see [Amazon's documentation](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/volume_limits.html#linux-specific-volume-limits)). The maximum value can be controlled by setting the `KUBE_MAX_PD_VOLS` environment variable. +- `MaxGCEPDVolumeCount`: Ensure that the number of attached GCE PersistentDisk volumes does not exceed a maximum value (by default, 16, which is the maximum GCE allows -- see [GCE's documentation](https://cloud.google.com/compute/docs/disks/persistent-disks#limits_for_predefined_machine_types)). The maximum value can be controlled by setting the `KUBE_MAX_PD_VOLS` environment variable. The details of the above predicates can be found in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/predicates/predicates.go). All predicates mentioned above can be used in combination to perform a sophisticated filtering policy. Kubernetes uses some, but not all, of these predicates by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). 
-- cgit v1.2.3 From 84dd3cfed344c22d95ff426a3221c5377fde0f75 Mon Sep 17 00:00:00 2001 From: harry Date: Thu, 21 Jan 2016 17:05:37 +0800 Subject: Calculate priorities based on image locality Add test for image score Update generated docs --- scheduler_algorithm.md | 1 + 1 file changed, 1 insertion(+) diff --git a/scheduler_algorithm.md b/scheduler_algorithm.md index 00a812a5..e886b0df 100755 --- a/scheduler_algorithm.md +++ b/scheduler_algorithm.md @@ -65,6 +65,7 @@ Currently, Kubernetes scheduler provides some practical priority functions, incl - `BalancedResourceAllocation`: This priority function tries to put the Pod on a node such that the CPU and Memory utilization rate is balanced after the Pod is deployed. - `CalculateSpreadPriority`: Spread Pods by minimizing the number of Pods belonging to the same service on the same node. If zone information is present on the nodes, the priority will be adjusted so that pods are spread across zones and nodes. - `CalculateAntiAffinityPriority`: Spread Pods by minimizing the number of Pods belonging to the same service on nodes with the same value for a particular label. +- `ImageLocalityPriority`: Nodes are prioritized based on the locality of the images requested by a pod. Nodes that already have a larger total size of the images required by the pod are preferred over nodes that have few or none of those images present. The details of the above priority functions can be found in [plugin/pkg/scheduler/algorithm/priorities](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/priorities/). Kubernetes uses some, but not all, of these priority functions by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go).
Similar to predicates, you can combine the above priority functions and assign weight factors (positive number) to them as you want (check [scheduler.md](scheduler.md) for how to customize). -- cgit v1.2.3 From dfa00a5742b5eab0df2e98fcc70539cb97e1f876 Mon Sep 17 00:00:00 2001 From: Eric Tune Date: Fri, 15 Jan 2016 14:34:46 -0800 Subject: Document Unions, conventions for adding to Unions. --- api-conventions.md | 10 +++++ api_changes.md | 117 +++++++++++++++++++++++++++++++++++++++++++++++++++-- 2 files changed, 123 insertions(+), 4 deletions(-) diff --git a/api-conventions.md b/api-conventions.md index c5fda4bb..af02d3db 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -53,6 +53,7 @@ using resources with kubectl can be found in [Working with resources](../user-gu - [Lists of named subobjects preferred over maps](#lists-of-named-subobjects-preferred-over-maps) - [Primitive types](#primitive-types) - [Constants](#constants) + - [Unions](#unions) - [Lists and Simple kinds](#lists-and-simple-kinds) - [Differing Representations](#differing-representations) - [Verbs on Resources](#verbs-on-resources) @@ -263,6 +264,15 @@ This rule maintains the invariant that all JSON/YAML keys are fields in API obje Some fields will have a list of allowed values (enumerations). These values will be strings, and they will be in CamelCase, with an initial uppercase letter. Examples: "ClusterFirst", "Pending", "ClientIP". +#### Unions + +Sometimes, at most one of a set of fields can be set. For example, the [volumes] field of a PodSpec has 17 different volume type-specific +fields, such as `nfs` and `iscsi`. All fields in the set should be [Optional](#optional-vs-required). + +Sometimes, when a new type is created, the API designer may anticipate that a union will be needed in the future, even if only one field is +allowed initially. In this case, be sure to make the field [Optional](#optional-vs-required).
In the validation, you may +still return an error if the sole field is unset. Do not set a default value for that field. + ### Lists and Simple kinds Every list or simple kind SHOULD have the following metadata in a nested object field called "metadata": diff --git a/api_changes.md b/api_changes.md index 2fe8a5af..0c039aab 100644 --- a/api_changes.md +++ b/api_changes.md @@ -32,6 +32,38 @@ Documentation for other releases can be found at +*This document is oriented at developers who want to change existing APIs. +A set of API conventions, which applies to new APIs and to changes, can be +found at [API Conventions](api-conventions.md). + +**Table of Contents** + + +- [So you want to change the API?](#so-you-want-to-change-the-api) + - [Operational overview](#operational-overview) + - [On compatibility](#on-compatibility) + - [Incompatible API changes](#incompatible-api-changes) + - [Changing versioned APIs](#changing-versioned-apis) + - [Edit types.go](#edit-typesgo) + - [Edit defaults.go](#edit-defaultsgo) + - [Edit conversion.go](#edit-conversiongo) + - [Changing the internal structures](#changing-the-internal-structures) + - [Edit types.go](#edit-typesgo) + - [Edit validation.go](#edit-validationgo) + - [Edit version conversions](#edit-version-conversions) + - [Edit deep copy files](#edit-deep-copy-files) + - [Edit json (un)marshaling code](#edit-json-unmarshaling-code) + - [Making a new API Group](#making-a-new-api-group) + - [Update the fuzzer](#update-the-fuzzer) + - [Update the semantic comparisons](#update-the-semantic-comparisons) + - [Implement your change](#implement-your-change) + - [Write end-to-end tests](#write-end-to-end-tests) + - [Examples and docs](#examples-and-docs) + - [Alpha, Beta, and Stable Versions](#alpha-beta-and-stable-versions) + - [Adding Unstable Features to Stable Versions](#adding-unstable-features-to-stable-versions) + + + # So you want to change the API? 
Before attempting a change to the API, you should familiarize yourself @@ -273,6 +305,11 @@ enumerated set *can* be a compatible change, if handled properly (treat the removed value as deprecated but allowed). This is actually a special case of a new representation, discussed above. +For [Unions](api-conventions.md), sets of fields where at most one should be set, +it is acceptable to add a new option to the union if the [appropriate conventions] +were followed in the original object. Removing an option requires following +the deprecation process. + ## Incompatible API changes There are times when this might be OK, but mostly we want changes that @@ -549,10 +586,6 @@ hack/update-swagger-spec.sh The API spec changes should be in a commit separate from your other changes. -## Adding new REST objects - -TODO(smarterclayton): write this. - ## Alpha, Beta, and Stable Versions New feature development proceeds through a series of stages of increasing maturity: @@ -617,6 +650,82 @@ New feature development proceeds through a series of stages of increasing maturi - Support: API version will continue to be present for many subsequent software releases; - Recommended Use Cases: any +### Adding Unstable Features to Stable Versions + +When adding a feature to an object which is already Stable, the new fields and new behaviors +need to meet the Stable level requirements. If these cannot be met, then the new +field cannot be added to the object. + +For example, consider the following object: +

```go
// API v6.
type Frobber struct {
	Height int `json:"height"`
	Param string `json:"param"`
}
```

+A developer is considering adding a new `Width` parameter, like this: +

```go
// API v6.
type Frobber struct {
	Height int `json:"height"`
	Width int `json:"width"`
	Param string `json:"param"`
}
```

+However, the new feature is not stable enough to be used in a stable version (`v6`). +Some reasons for this might include: + +- the final representation is undecided (e.g.
should it be called `Width` or `Breadth`?) +- the implementation is not stable enough for general use (e.g. the `Area()` routine sometimes overflows.) + +The developer cannot add the new field until stability is met. However, sometimes stability +cannot be met until some users try the new feature, and some users are only able or willing +to accept a released version of Kubernetes. In that case, the developer has two options, +both of which require staging work over several releases. + +A preferred option is to first make a release where the new value (`Width` in this example) +is specified via an annotation, like this: +

```yaml
kind: frobber
version: v6
metadata:
  name: myfrobber
  annotations:
    frobbing.alpha.kubernetes.io/width: 2
height: 4
param: "green and blue"
```

+This format allows users to specify the new field, but makes it clear +that they are using an Alpha feature when they do, since the word `alpha` +is in the annotation key. + +Another option is to introduce a new type with a new `alpha` or `beta` version +designator, like this: +

```go
// API v6alpha2
type Frobber struct {
	Height int `json:"height"`
	Width int `json:"width"`
	Param string `json:"param"`
}
```

+The latter requires all objects in the same API group as `Frobber` to be replicated in +the new version, `v6alpha2`. This also requires users to use a new client which uses the +other version. Therefore, this is not a preferred option. + +A related issue is how a cluster manager can roll back from a new version +with a new feature that is already being used by users. See https://github.com/kubernetes/kubernetes/issues/4855.
+ [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/api_changes.md?pixel)]() -- cgit v1.2.3 From f3bc3b0fddaf4c8377d4fcc9436efe6ea50d1468 Mon Sep 17 00:00:00 2001 From: Jeff Lowdermilk Date: Fri, 15 Jan 2016 10:55:24 -0800 Subject: Add workflow diagram to pull request doc --- pr_workflow.dia | Bin 0 -> 1987 bytes pr_workflow.png | Bin 0 -> 27835 bytes pull-requests.md | 7 +++++-- 3 files changed, 5 insertions(+), 2 deletions(-) create mode 100644 pr_workflow.dia create mode 100644 pr_workflow.png diff --git a/pr_workflow.dia b/pr_workflow.dia new file mode 100644 index 00000000..d520c21d Binary files /dev/null and b/pr_workflow.dia differ diff --git a/pr_workflow.png b/pr_workflow.png new file mode 100644 index 00000000..dc95a366 Binary files /dev/null and b/pr_workflow.png differ diff --git a/pull-requests.md b/pull-requests.md index eaffce23..b0fb7385 100644 --- a/pull-requests.md +++ b/pull-requests.md @@ -34,8 +34,7 @@ Documentation for other releases can be found at Pull Request Process ==================== -An overview of how we will manage old or out-of-date pull requests. - +An overview of how we will manage old or out-of-date pull requests.k Process ------- @@ -51,6 +50,10 @@ We want to limit the total number of PRs in flight to: Life of a Pull Request ---------------------- +### Visual overview + +![PR workflow](pr_workflow.png) + Unless in the last few weeks of a milestone when we need to reduce churn and stabilize, we aim to be always accepting pull requests. Either the [on call](https://github.com/kubernetes/kubernetes/wiki/Kubernetes-on-call-rotations) manually or the [github "munger"](https://github.com/kubernetes/contrib/tree/master/mungegithub) submit-queue plugin automatically will manage merging PRs. 
-- cgit v1.2.3 From 9913665341c6cfdc7cc5af5d03775787ea8adc66 Mon Sep 17 00:00:00 2001 From: Jeff Lowdermilk Date: Fri, 15 Jan 2016 16:08:39 -0800 Subject: review changes --- pr_workflow.dia | Bin 1987 -> 3189 bytes pr_workflow.png | Bin 27835 -> 80793 bytes pull-requests.md | 51 ++++++++++++++++++++++++++++++++++----------------- 3 files changed, 34 insertions(+), 17 deletions(-) diff --git a/pr_workflow.dia b/pr_workflow.dia index d520c21d..753a284b 100644 Binary files a/pr_workflow.dia and b/pr_workflow.dia differ diff --git a/pr_workflow.png b/pr_workflow.png index dc95a366..0e2bd5d6 100644 Binary files a/pr_workflow.png and b/pr_workflow.png differ diff --git a/pull-requests.md b/pull-requests.md index b0fb7385..4e0f668f 100644 --- a/pull-requests.md +++ b/pull-requests.md @@ -34,26 +34,13 @@ Documentation for other releases can be found at Pull Request Process ==================== -An overview of how we will manage old or out-of-date pull requests.k -Process -------- - -We will close any pull requests older than two weeks. - -Exceptions can be made for PRs that have active review comments, or that are awaiting other dependent PRs. Closed pull requests are easy to recreate, and little work is lost by closing a pull request that subsequently needs to be reopened. - -We want to limit the total number of PRs in flight to: -* Maintain a clean project -* Remove old PRs that would be difficult to rebase as the underlying code has changed over time -* Encourage code velocity +An overview of how pull requests are managed for kubernetes. This document +assumes the reader has already followed the [development guide](development.md) +to set up their environment. Life of a Pull Request ---------------------- -### Visual overview - -![PR workflow](pr_workflow.png) - Unless in the last few weeks of a milestone when we need to reduce churn and stabilize, we aim to be always accepting pull requests. 
Either the [on call](https://github.com/kubernetes/kubernetes/wiki/Kubernetes-on-call-rotations) manually or the [github "munger"](https://github.com/kubernetes/contrib/tree/master/mungegithub) submit-queue plugin automatically will manage merging PRs. @@ -61,10 +48,40 @@ Either the [on call](https://github.com/kubernetes/kubernetes/wiki/Kubernetes-on There are several requirements for the submit-queue to work: * Author must have signed CLA ("cla: yes" label added to PR) * No changes can be made since last lgtm label was applied -* k8s-bot must have reported the GCE E2E build and test steps passed (Travis, Shippable and Jenkins build) +* k8s-bot must have reported the GCE E2E build and test steps passed (Travis, Jenkins unit/integration, Jenkins e2e) Additionally, for infrequent or new contributors, we require the on call to apply the "ok-to-merge" label manually. This is gated by the [whitelist](https://github.com/kubernetes/contrib/blob/master/mungegithub/whitelist.txt). +### Before sending a pull request + +The following will save time for both you and your reviewer: + +* Enable [pre-commit hooks](development.md#committing-changes-to-your-fork) and verify they pass. +* Verify `hack/verify-generated-docs.sh` passes. +* Verify `hack/test-go.sh` passes. + +### Visual overview + +![PR workflow](pr_workflow.png) + +Other notes +----------- + +Pull requests that are purely support questions will be closed and +redirected to [stackoverflow](http://stackoverflow.com/questions/tagged/kubernetes). +We do this to consolidate help/support questions into a single channel, +improve efficiency in responding to requests and make FAQs easier +to find. + +Pull requests older than 2 weeks will be closed. Exceptions can be made +for PRs that have active review comments, or that are awaiting other dependent PRs. +Closed pull requests are easy to recreate, and little work is lost by closing a pull +request that subsequently needs to be reopened. 
We want to limit the total number of PRs in flight to: +* Maintain a clean project +* Remove old PRs that would be difficult to rebase as the underlying code has changed over time +* Encourage code velocity + + Automation ---------- -- cgit v1.2.3 From 5eef9a7df826954fdbaf7c179fe0a05652d42206 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Chris=20K=C3=BChl?= Date: Tue, 9 Feb 2016 13:52:25 +0100 Subject: docs: replace Rocket with rkt --- mesos-style.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/mesos-style.md b/mesos-style.md index c8d096be..c0510264 100644 --- a/mesos-style.md +++ b/mesos-style.md @@ -129,7 +129,7 @@ In contrast, in Mesos, API operations go to a particular framework. However, the in the API server rather than in the controller. Of course you can choose to make these operations be no-ops for your application-specific collection abstractions, and handle them in your controller. * On the node level, Mesos allows application-specific executors, whereas Kubernetes only has -executors for Docker and Rocket containers. +executors for Docker and rkt containers. The end-to-end flow is -- cgit v1.2.3 From d48796f5aaa04ae06ad9cfe8f924f54f9e2d1b02 Mon Sep 17 00:00:00 2001 From: Isaac Hollander McCreery Date: Fri, 5 Feb 2016 15:43:18 -0800 Subject: Reconcile testing docs, fixes #18606 --- development.md | 104 +-------------------------------------------- e2e-tests.md | 130 ++++++++++++++++++++++++++++++++++++++++++++++----------- 2 files changed, 106 insertions(+), 128 deletions(-) diff --git a/development.md b/development.md index 608482f7..bdef3213 100644 --- a/development.md +++ b/development.md @@ -294,109 +294,7 @@ hack/test-integration.sh ## End-to-End tests -You can run an end-to-end test which will bring up a master and two nodes, perform some tests, and then tear everything down. 
Make sure you have followed the getting started steps for your chosen cloud platform (which might involve changing the `KUBERNETES_PROVIDER` environment variable to something other than "gce". - -```sh -cd kubernetes -hack/e2e-test.sh -``` - -Pressing control-C should result in an orderly shutdown but if something goes wrong and you still have some VMs running you can force a cleanup with this command: - -```sh -go run hack/e2e.go --down -``` - -### Flag options - -See the flag definitions in `hack/e2e.go` for more options, such as reusing an existing cluster, here is an overview: - -```sh -# Build binaries for testing -go run hack/e2e.go --build - -# Create a fresh cluster. Deletes a cluster first, if it exists -go run hack/e2e.go --up - -# Create a fresh cluster at a specific release version. -go run hack/e2e.go --up --version=0.7.0 - -# Test if a cluster is up. -go run hack/e2e.go --isup - -# Push code to an existing cluster -go run hack/e2e.go --push - -# Push to an existing cluster, or bring up a cluster if it's down. -go run hack/e2e.go --pushup - -# Run all tests -go run hack/e2e.go --test - -# Run tests matching the regex "Pods.*env" -go run hack/e2e.go -v -test --test_args="--ginkgo.focus=Pods.*env" - -# Alternately, if you have the e2e cluster up and no desire to see the event stream, you can run ginkgo-e2e.sh directly: -hack/ginkgo-e2e.sh --ginkgo.focus=Pods.*env -``` - -### Combining flags - -```sh -# Flags can be combined, and their actions will take place in this order: -# -build, -push|-up|-pushup, -test|-tests=..., -down -# e.g.: -go run hack/e2e.go -build -pushup -test -down - -# -v (verbose) can be added if you want streaming output instead of only -# seeing the output of failed commands. - -# -ctl can be used to quickly call kubectl against your e2e cluster. Useful for -# cleaning up after a failed test or viewing logs. Use -v to avoid suppressing -# kubectl output. 
-go run hack/e2e.go -v -ctl='get events' -go run hack/e2e.go -v -ctl='delete pod foobar' -``` - -## Local clusters - -It can be much faster to iterate on a local cluster instead of a cloud-based one. To start a local cluster, you can run: - -```sh -# The PATH construction is needed because PATH is one of the special-cased -# environment variables not passed by sudo -E -sudo PATH=$PATH hack/local-up-cluster.sh -``` - -This will start a single-node Kubernetes cluster than runs pods using the local docker daemon. Press Control-C to stop the cluster. - -### E2E tests against local clusters - -In order to run an E2E test against a locally running cluster, use the `e2e.test` binary built by `hack/build-go.sh` -directly: - -```sh -export KUBECONFIG=/path/to/kubeconfig -e2e.test --host=http://127.0.0.1:8080 -``` - -To control the tests that are run: - -```sh -e2e.test --host=http://127.0.0.1:8080 --ginkgo.focus="Secrets" -``` - -## Conformance testing - -End-to-end testing, as described above, is for [development -distributions](writing-a-getting-started-guide.md). A conformance test is used on -a [versioned distro](writing-a-getting-started-guide.md). - -The conformance test runs a subset of the e2e-tests against a manually-created cluster. It does not -require support for up/push/down and other operations. To run a conformance test, you need to know the -IP of the master for your cluster and the authorization arguments to use. The conformance test is -intended to run against a cluster at a specific binary release of Kubernetes. -See [conformance-test.sh](http://releases.k8s.io/HEAD/hack/conformance-test.sh). +See [End-to-End Testing in Kubernetes](e2e-tests.md). 
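The getting-started steps referenced above select a cloud platform through the `KUBERNETES_PROVIDER` environment variable, defaulting to "gce". As a rough sketch of that convention (the accepted provider list below is illustrative, not the authoritative set):

```shell
# Resolve the provider the way the cluster scripts treat it: take
# KUBERNETES_PROVIDER from the environment, falling back to "gce".
pick_provider() {
  p="${KUBERNETES_PROVIDER:-gce}"
  case "$p" in
    gce|gke|aws|vagrant|local) echo "$p" ;;
    *) echo "unrecognized provider: $p" >&2; return 1 ;;
  esac
}

pick_provider                          # nothing set: falls back to the default
KUBERNETES_PROVIDER=aws pick_provider  # explicit override
```

The real logic lives in the cluster scripts; this only illustrates the default-and-override behavior.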
## Testing out flaky tests diff --git a/e2e-tests.md b/e2e-tests.md index f07f2b59..50163815 100644 --- a/e2e-tests.md +++ b/e2e-tests.md @@ -32,33 +32,91 @@ Documentation for other releases can be found at -# End-2-End Testing in Kubernetes +# End-to-End Testing in Kubernetes ## Overview -The end-2-end tests for kubernetes provide a mechanism to test behavior of the system, and to ensure end user operations match developer specifications. In distributed systems it is not uncommon that a minor change may pass all unit tests, but cause unforseen changes at the system level. Thus, the primary objectives of the end-2-end tests are to ensure a consistent and reliable behavior of the kubernetes code base, and to catch bugs early. +End-to-end (e2e) tests for Kubernetes provide a mechanism to test end-to-end behavior of the system, and is the last signal to ensure end user operations match developer specifications. Although unit and integration tests should ideally provide a good signal, the reality is in a distributed system like Kubernetes it is not uncommon that a minor change may pass all unit and integration tests, but cause unforseen changes at the system level. e2e testing is very costly, both in time to run tests and difficulty debugging, though: it takes a long time to build, deploy, and exercise a cluster. Thus, the primary objectives of the e2e tests are to ensure a consistent and reliable behavior of the kubernetes code base, and to catch hard-to-test bugs before users do, when unit and integration tests are insufficient. -The end-2-end tests in kubernetes are built atop of [ginkgo] (http://onsi.github.io/ginkgo/) and [gomega] (http://onsi.github.io/gomega/). There are a host of features that this BDD testing framework provides, and it is recommended that the developer read the documentation prior to diving into the tests. +The e2e tests in kubernetes are built atop of [Ginkgo](http://onsi.github.io/ginkgo/) and [Gomega](http://onsi.github.io/gomega/). 
There are a host of features that this BDD testing framework provides, and it is recommended that the developer read the documentation prior to diving into the tests. -The purpose of *this* document is to serve as a primer for developers who are looking to execute, or add tests, using a local development environment. +The purpose of *this* document is to serve as a primer for developers who are looking to execute or add tests using a local development environment. ## Building and Running the Tests -**NOTE:** The tests have an array of options. For simplicity, the examples will focus on leveraging the tests on a local cluster using `sudo ./hack/local-up-cluster.sh` +There are a variety of ways to run e2e tests, but we aim to decrease the number of ways to run e2e tests to a canonical way: `hack/e2e.go`. -### Building the Tests +You can run an end-to-end test which will bring up a master and nodes, perform some tests, and then tear everything down. Make sure you have followed the getting started steps for your chosen cloud platform (which might involve changing the `KUBERNETES_PROVIDER` environment variable to something other than "gce"). -The tests are built into a single binary which can be run against any deployed kubernetes system. To build the tests, navigate to your source directory and execute: +To build Kubernetes, up a cluster, run tests, and tear everything down, use: -`$ make all` +```sh +go run hack/e2e.go -v --build --up --test --down +``` + +If you'd like to just perform one of these steps, here are some examples: + +```sh +# Build binaries for testing +go run hack/e2e.go -v --build + +# Create a fresh cluster. Deletes a cluster first, if it exists +go run hack/e2e.go -v --up + +# Create a fresh cluster at a specific release version. +go run hack/e2e.go -v --up --version=0.7.0 + +# Test if a cluster is up. 
+go run hack/e2e.go -v --isup + +# Push code to an existing cluster +go run hack/e2e.go -v --push + +# Push to an existing cluster, or bring up a cluster if it's down. +go run hack/e2e.go -v --pushup + +# Run all tests +go run hack/e2e.go -v --test -The output for the end-2-end tests will be a single binary called `e2e.test` under the default output directory, which is typically `_output/local/bin/linux/amd64/`. Within the repository there are scripts that are provided under the `./hack` directory that are helpful for automation, but may not apply for a local development purposes. Instead, we recommend familiarizing yourself with the executable options. To obtain the full list of options, run the following: +# Run tests matching the regex "\[Conformance\]" (the conformance tests) +go run hack/e2e.go -v -test --test_args="--ginkgo.focus=\[Conformance\]" -`$ ./e2e.test --help` +# Conversely, exclude tests that match the regex "Pods.*env" +go run hack/e2e.go -v -test --test_args="--ginkgo.focus=Pods.*env" -### Running the Tests +# Flags can be combined, and their actions will take place in this order: +# --build, --push|--up|--pushup, --test|--tests=..., --down +# +# You can also specify an alternative provider, such as 'aws' +# +# e.g.: +KUBERNETES_PROVIDER=aws go run hack/e2e.go -v --build --pushup --test --down -For the purposes of brevity, we will look at a subset of the options, which are listed below: +# -ctl can be used to quickly call kubectl against your e2e cluster. Useful for +# cleaning up after a failed test or viewing logs. Use -v to avoid suppressing +# kubectl output. 
+go run hack/e2e.go -v -ctl='get events' +go run hack/e2e.go -v -ctl='delete pod foobar' + +# Alternately, if you have the e2e cluster up and no desire to see the event stream, you can run ginkgo-e2e.sh directly: +hack/ginkgo-e2e.sh --ginkgo.focus=\[Conformance\] +``` + +The tests are built into a single binary which can be used to deploy a Kubernetes system or run tests against an already-deployed Kubernetes system. See `go run hack/e2e.go --help` (or the flag definitions in `hack/e2e.go`) for more options, such as reusing an existing cluster. + +### Cleaning up + +During a run, pressing `control-C` should result in an orderly shutdown, but if something goes wrong and you still have some VMs running you can force a cleanup with this command: + +```sh +go run hack/e2e.go -v --down +``` + +## Advanced testing + +### Bringing up a cluster for testing + +If you want, you may bring up a cluster in some other manner and run tests against it. To do so, or to do other non-standard test things, you can pass arguments into Ginkgo using `--test_args` (e.g. see above). For the purposes of brevity, we will look at a subset of the options, which are listed below: ``` -ginkgo.dryRun=false: If set, ginkgo will walk the test hierarchy without actually running anything. Best paired with -v. @@ -75,7 +133,7 @@ For the purposes of brevity, we will look at a subset of the options, which are -repo-root="../../": Root directory of kubernetes repository, for finding test files. ``` -Prior to running the tests, it is recommended that you first create a simple auth file in your home directory, e.g. `$HOME/.kube/config` , with the following: +Prior to running the tests, you may want to first create a simple auth file in your home directory, e.g. `$HOME/.kube/config` , with the following: ``` { @@ -84,23 +142,39 @@ Prior to running the tests, it is recommended that you first create a simple aut } ``` -Next, you will need a cluster that you can test against.
As mentioned earlier, you will want to execute `sudo ./hack/local-up-cluster.sh`. To get a sense of what tests exist, you may want to run: +As mentioned earlier there are a host of other options that are available, but they are left to the developer. -`e2e.test --host="127.0.0.1:8080" --provider="local" --ginkgo.v=true -ginkgo.dryRun=true --kubeconfig="$HOME/.kube/config" --repo-root="$KUBERNETES_SRC_PATH"` +**NOTE:** If you are running tests on a local cluster repeatedly, you may need to periodically perform some manual cleanup. -If you wish to execute a specific set of tests you can use the `-ginkgo.focus=` regex, e.g.: +- `rm -rf /var/run/kubernetes`, clear kube generated credentials, sometimes stale permissions can cause problems. +- `sudo iptables -F`, clear ip tables rules left by the kube-proxy. -`e2e.test ... --ginkgo.focus="DNS|(?i)nodeport(?-i)|kubectl guestbook"` +### Local clusters -Conversely, if you wish to exclude a set of tests, you can run: +It can be much faster to iterate on a local cluster instead of a cloud-based one. To start a local cluster, you can run: -`e2e.test ... --ginkgo.skip="Density|Scale"` +```sh +# The PATH construction is needed because PATH is one of the special-cased +# environment variables not passed by sudo -E +sudo PATH=$PATH hack/local-up-cluster.sh +``` -As mentioned earlier there are a host of other options that are available, but are left to the developer +This will start a single-node Kubernetes cluster that runs pods using the local docker daemon. Press Control-C to stop the cluster. -**NOTE:** If you are running tests on a local cluster repeatedly, you may need to periodically perform some manual cleanup. -- `rm -rf /var/run/kubernetes`, clear kube generated credentials, sometimes stale permissions can cause problems. -- `sudo iptables -F`, clear ip tables rules left by the kube-proxy.
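The `-ginkgo.focus=` and `--ginkgo.skip=` flags shown above select tests by matching a regular expression against each test's description string. A self-contained sketch of those selection semantics (the description strings below are invented for illustration; real ones come from the suite):

```shell
# A few made-up e2e test descriptions standing in for the real suite.
cat > /tmp/e2e-descriptions.txt <<'EOF'
DNS should provide DNS for services
Density should allow starting 30 pods per node
Pods should contain environment variables for services
kubectl guestbook should create and stop a working application
EOF

# focus: run only the descriptions matching the regex.
grep -E 'Pods.*env' /tmp/e2e-descriptions.txt

# skip: run everything except the descriptions matching the regex.
grep -Ev 'Density|Scale' /tmp/e2e-descriptions.txt
```

Ginkgo applies the same kind of regex matching to the full spec description, which is why quoting the pattern carefully in the examples above matters.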
+#### Testing against local clusters + +In order to run an E2E test against a locally running cluster, point the tests at a custom host directly: + +```sh +export KUBECONFIG=/path/to/kubeconfig +go run hack/e2e.go -v --test_args="--host=http://127.0.0.1:8080" +``` + +To control the tests that are run: + +```sh +go run hack/e2e.go -v --test_args="--host=http://127.0.0.1:8080" --ginkgo.focus="Secrets" +``` ## Kinds of tests @@ -114,7 +188,13 @@ We are working on implementing clearer partitioning of our e2e tests to make run - `[Skipped]`: `[Skipped]` is a legacy label that we're phasing out. If a test is marked `[Skipped]`, there should be an issue open to label it properly. `[Skipped]` tests are by default not run, unless a `focus` or `skip` argument is explicitly given. - `[Feature:.+]`: If a test has non-default requirements to run or targets some non-core functionality, and thus should not be run as part of the standard suite, it receives a `[Feature:.+]` label, e.g. `[Feature:Performance]` or `[Feature:Ingress]`. `[Feature:.+]` tests are not run in our core suites, instead running in custom suites. If a feature is experimental or alpha and is not enabled by default due to being incomplete or potentially subject to breaking changes, it does *not* block the merge-queue, and thus should run in some separate test suites owned by the feature owner(s) (see #continuous_integration below). -Finally, `[Conformance]` tests are tests we expect to pass on **any** Kubernetes cluster. The `[Conformance]` label does not supersede any other labels. `[Conformance]` test policies are a work-in-progress; see #18162. +### Conformance tests + +Finally, `[Conformance]` tests are tests we expect to pass on **any** Kubernetes cluster. The `[Conformance]` label does not supersede any other labels. `[Conformance]` test policies are a work-in-progress (see #18162). + +End-to-end testing, as described above, is for [development distributions](writing-a-getting-started-guide.md). 
A conformance test is used on a [versioned distro](writing-a-getting-started-guide.md). (Links WIP) + +The conformance test runs a subset of the e2e-tests against a manually-created cluster. It does not require support for up/push/down and other operations. To run a conformance test, you need to know the IP of the master for your cluster and the authorization arguments to use. The conformance test is intended to run against a cluster at a specific binary release of Kubernetes. See [conformance-test.sh](http://releases.k8s.io/HEAD/hack/conformance-test.sh). ## Continuous Integration @@ -166,7 +246,7 @@ If we have determined that a test is known-flaky and cannot be fixed in the shor ## Performance Evaluation -Another benefit of the end-2-end tests is the ability to create reproducible loads on the system, which can then be used to determine the responsiveness, or analyze other characteristics of the system. For example, the density tests load the system to 30,50,100 pods per/node and measures the different characteristics of the system, such as throughput, api-latency, etc. +Another benefit of the e2e tests is the ability to create reproducible loads on the system, which can then be used to determine the responsiveness, or analyze other characteristics of the system. For example, the density tests load the system to 30,50,100 pods per/node and measures the different characteristics of the system, such as throughput, api-latency, etc. 
For a good overview of how we analyze performance data, please read the following [post](http://blog.kubernetes.io/2015/09/kubernetes-performance-measurements-and.html) -- cgit v1.2.3 From 9ac4c6edbbee5f6dd5d0004adb1b86ee324cc285 Mon Sep 17 00:00:00 2001 From: Brian Rosner Date: Fri, 12 Feb 2016 08:15:26 -0700 Subject: Added pykube client library --- client-libraries.md | 1 + 1 file changed, 1 insertion(+) diff --git a/client-libraries.md b/client-libraries.md index 69661ff4..aeba3610 100644 --- a/client-libraries.md +++ b/client-libraries.md @@ -49,6 +49,7 @@ Documentation for other releases can be found at * [Perl](https://metacpan.org/pod/Net::Kubernetes) * [PHP](https://github.com/devstub/kubernetes-api-php-client) * [PHP](https://github.com/maclof/kubernetes-client) + * [Python](https://github.com/eldarion-gondor/pykube) * [Ruby](https://github.com/Ch00k/kuber) * [Ruby](https://github.com/abonas/kubeclient) * [Scala](https://github.com/doriordan/skuber) -- cgit v1.2.3 From bb3ad6388bb90dd7c795b2a1d34b9f8e9f7e82b1 Mon Sep 17 00:00:00 2001 From: goltermann Date: Fri, 12 Feb 2016 09:06:29 -0800 Subject: Add milestone tag clarifications Add Milestone details to our issues page that defines Priority --- issues.md | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-) diff --git a/issues.md b/issues.md index 483747a1..ac0304ae 100644 --- a/issues.md +++ b/issues.md @@ -43,13 +43,23 @@ We use GitHub issue labels for prioritization. The absence of a priority label means the bug has not been reviewed and prioritized yet. -We try to apply these priority labels consistently across the entire project, but if you notice an issue that you believe to be misprioritized, please do let us know and we will evaluate your counter-proposal.\ +We try to apply these priority labels consistently across the entire project, but if you notice an issue that you believe to be misprioritized, please do let us know and we will evaluate your counter-proposal. 
- **priority/P0**: Must be actively worked on as someone's top priority right now. Stuff is burning. If it's not being actively worked on, someone is expected to drop what they're doing immediately to work on it. TL's of teams are responsible for making sure that all P0's in their area are being actively worked on. Examples include user-visible bugs in core features, broken builds or tests and critical security issues. - **priority/P1**: Must be staffed and worked on either currently, or very soon, ideally in time for the next release. - **priority/P2**: There appears to be general agreement that this would be good to have, but we don't have anyone available to work on it right now or in the immediate future. Community contributions would be most welcome in the mean time (although it might take a while to get them reviewed if reviewers are fully occupied with higher priority issues, for example immediately before a release). - **priority/P3**: Possibly useful, but not yet enough support to actually get it done. These are mostly place-holders for potentially good ideas, so that they don't get completely forgotten, and can be referenced/deduped every time they come up. +Milestones +---------- + +We additionally use milestones, based on minor version, for determining if a bug should be fixed for the next release. These milestones will be especially scrutinized as we get to the weeks just before a release. We can release a new version of Kubernetes once they are empty. We will have two milestones per minor release. + +- **vX.Y**: The list of bugs that will be merged for that milestone once ready. +- **vX.Y-candidate**: The list of bugs that we might merge for that milestone. A bug shouldn't be in this milestone for more than a day or two towards the end of a milestone. It should be triaged either into vX.Y, or moved out of the release milestones.
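Both milestone names are derived mechanically from the minor version; a small sketch of the naming scheme (the shell parsing is ours, purely for illustration):

```shell
VERSION="1.2.3"                  # an example release version X.Y.Z
MINOR="${VERSION%.*}"            # drop the patch component -> X.Y
MILESTONE="v${MINOR}"            # bugs to merge once ready
CANDIDATE="v${MINOR}-candidate"  # bugs we might merge
echo "$MILESTONE $CANDIDATE"
```

With `VERSION="1.2.3"` this prints `v1.2 v1.2-candidate`.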
+ +The above priority scheme still applies, so P0 and P1 bugs are work we feel must get done before release, while P2 and P3 represent work we would merge into the release if it gets done, but we wouldn't block the release on it. A few days before release, we will probably move all P2 and P3 bugs out of that milestone tag in bulk. + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/issues.md?pixel)]() -- cgit v1.2.3 From f4d646df6f4df7b4c648468fcc8bd980e74ca2be Mon Sep 17 00:00:00 2001 From: Isaac Hollander McCreery Date: Tue, 9 Feb 2016 16:06:46 -0800 Subject: Remove [Skipped] as a label for tests. --- e2e-tests.md | 1 - 1 file changed, 1 deletion(-) diff --git a/e2e-tests.md b/e2e-tests.md index 50163815..a9c440c7 100644 --- a/e2e-tests.md +++ b/e2e-tests.md @@ -185,7 +185,6 @@ We are working on implementing clearer partitioning of our e2e tests to make run - `[Serial]`: If a test cannot be run in parallel with other tests (e.g. it takes too many resources or restarts nodes), it is labeled `[Serial]`, and should be run in serial as part of a separate suite. - `[Disruptive]`: If a test restarts components that might cause other tests to fail or break the cluster completely, it is labeled `[Disruptive]`. Any `[Disruptive]` test is also assumed to qualify for the `[Serial]` label, but need not be labeled as both. These tests are not run against soak clusters to avoid restarting components. - `[Flaky]`: If a test is found to be flaky and we have decided that it's too hard to fix in the short term (e.g. it's going to take a full engineer-week), it receives the `[Flaky]` label until it is fixed. The `[Flaky]` label should be used very sparingly, and should be accompanied with a reference to the issue for de-flaking the test, because while a test remains labeled `[Flaky]`, it is not monitored closely in CI. `[Flaky]` tests are by default not run, unless a `focus` or `skip` argument is explicitly given. 
-- `[Skipped]`: `[Skipped]` is a legacy label that we're phasing out. If a test is marked `[Skipped]`, there should be an issue open to label it properly. `[Skipped]` tests are by default not run, unless a `focus` or `skip` argument is explicitly given. - `[Feature:.+]`: If a test has non-default requirements to run or targets some non-core functionality, and thus should not be run as part of the standard suite, it receives a `[Feature:.+]` label, e.g. `[Feature:Performance]` or `[Feature:Ingress]`. `[Feature:.+]` tests are not run in our core suites, instead running in custom suites. If a feature is experimental or alpha and is not enabled by default due to being incomplete or potentially subject to breaking changes, it does *not* block the merge-queue, and thus should run in some separate test suites owned by the feature owner(s) (see #continuous_integration below). ### Conformance tests -- cgit v1.2.3 From 515f62c1464533cd857ac6d477f1ec0520be64b1 Mon Sep 17 00:00:00 2001 From: Isaac Hollander McCreery Date: Tue, 9 Feb 2016 17:08:27 -0800 Subject: Add instructions and tooling for munging test infra for a new release series --- releasing.md | 32 ++++++++++++++++++++++++-------- 1 file changed, 24 insertions(+), 8 deletions(-) diff --git a/releasing.md b/releasing.md index 7c5ad9d9..643827d7 100644 --- a/releasing.md +++ b/releasing.md @@ -217,14 +217,30 @@ Finally, from a clone of upstream/master, *make sure* you still have `RELEASE_VERSION` set correctly, and run `./build/mark-stable-release.sh ${RELEASE_VERSION}`. -### Updating the master branch - -If you are cutting a new release series, please also update the master branch: -change the `latestReleaseBranch` in `cmd/mungedocs/mungedocs.go` to the new -release branch (`release-X.Y`), run `hack/update-generated-docs.sh`. This will -let the unversioned warning in docs point to the latest release series. Please -send the changes as a PR titled "Update the latestReleaseBranch to release-X.Y -in the munger". 
+### Manual tasks for new release series + +*TODO(#20946) Burn this list down.* + +If you are cutting a new release series, there are a few tasks that haven't yet +been automated that need to happen after the branch has been cut: + +1. Update the master branch constant for doc generation: change the + `latestReleaseBranch` in `cmd/mungedocs/mungedocs.go` to the new release + branch (`release-X.Y`), run `hack/update-generated-docs.sh`. This will let + the unversioned warning in docs point to the latest release series. Please + send the changes as a PR titled "Update the latestReleaseBranch to + release-X.Y in the munger". +1. Add test jobs for the new branch. See [End-2-End Testing in + Kubernetes](e2e-tests.md) for the test jobs that run in CI, which are under + version control in `hack/jenkins/e2e.sh` (on the release branch) and + `hack/jenkins/job-configs/kubernetes-e2e.yaml` (in `master`). You'll want + to duplicate/munge these for the release branch so that, as we cherry-pick + fixes onto the branch, we know that it builds, etc. +1. Make sure all features that are supposed to be GA are covered by tests. You + can use `hack/list-feature-tests.sh` to see a list of tests labeled as + `[Feature:.+]`; make sure that these are all either covered in CI jobs or + are experimental features. (The answer should already be 'yes', but this is + a good time to reconcile.) ## Injecting Version into Binaries -- cgit v1.2.3 From 4457026c6fb93f74d3c36ae88a5a460602e24339 Mon Sep 17 00:00:00 2001 From: Isaac Hollander McCreery Date: Wed, 10 Feb 2016 10:30:10 -0800 Subject: Add docs for branching e2e jobs --- releasing.md | 28 +++++++++++++++++----------- 1 file changed, 17 insertions(+), 11 deletions(-) diff --git a/releasing.md b/releasing.md index 643827d7..27b0e906 100644 --- a/releasing.md +++ b/releasing.md @@ -230,17 +230,23 @@ been automated that need to happen after the branch has been cut: the unversioned warning in docs point to the latest release series. 
Please send the changes as a PR titled "Update the latestReleaseBranch to release-X.Y in the munger". -1. Add test jobs for the new branch. See [End-2-End Testing in - Kubernetes](e2e-tests.md) for the test jobs that run in CI, which are under - version control in `hack/jenkins/e2e.sh` (on the release branch) and - `hack/jenkins/job-configs/kubernetes-e2e.yaml` (in `master`). You'll want - to duplicate/munge these for the release branch so that, as we cherry-pick - fixes onto the branch, we know that it builds, etc. -1. Make sure all features that are supposed to be GA are covered by tests. You - can use `hack/list-feature-tests.sh` to see a list of tests labeled as - `[Feature:.+]`; make sure that these are all either covered in CI jobs or - are experimental features. (The answer should already be 'yes', but this is - a good time to reconcile.) +1. Add test jobs for the new branch. + 1. See [End-2-End Testing in Kubernetes](e2e-tests.md) for the test jobs + that should be running in CI, which are under version control in + `hack/jenkins/e2e.sh` (on the release branch) and + `hack/jenkins/job-configs/kubernetes-e2e.yaml` (in `master`). You'll + want to munge these for the release branch so that, as we cherry-pick + fixes onto the branch, we know that it builds, etc. (Talk with + @ihmccreery for more details.) + 1. Make sure all features that are supposed to be GA are covered by tests, + but remove feature tests on the release branch for features that aren't + GA. You can use `hack/list-feature-tests.sh` to see a list of tests + labeled as `[Feature:.+]`; make sure that these are all either covered in + CI jobs on the release branch or are experimental features. (The answer + should already be 'yes', but this is a good time to reconcile.) + 1. Make a dashboard in Jenkins that contains all of the jobs for this + release cycle, and also add them to Critical Builds. (Don't add them to + the merge-bot blockers; see kubernetes/contrib#156.) 
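Several of the steps above map `RELEASE_VERSION` onto the `release-X.Y` branch name; a sketch of that mapping with a basic sanity check (the glob validation is our own addition, looser than a real version regex):

```shell
RELEASE_VERSION="1.2.0"   # example value; normally already set at this point

# Refuse to derive names from something that doesn't look like X.Y.Z.
case "$RELEASE_VERSION" in
  [0-9]*.[0-9]*.[0-9]*) ;;
  *) echo "RELEASE_VERSION should look like X.Y.Z" >&2; exit 1 ;;
esac

RELEASE_BRANCH="release-${RELEASE_VERSION%.*}"
echo "$RELEASE_BRANCH"
```

With `RELEASE_VERSION="1.2.0"` this prints `release-1.2`, the branch name the munger constant and the branched Jenkins jobs refer to.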
## Injecting Version into Binaries -- cgit v1.2.3 From 89a42157bcc823e247865435ce65409e2fbc4e2b Mon Sep 17 00:00:00 2001 From: Joe Finney Date: Tue, 16 Feb 2016 14:54:50 -0800 Subject: Remove hack/e2e-test.sh in favor of hack/e2e.go. --- developer-guides/vagrant.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md index ebb12ab1..6ab4d670 100644 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -302,7 +302,7 @@ Congratulations! The following will run all of the end-to-end testing scenarios assuming you set your environment in `cluster/kube-env.sh`: ```sh -NUM_NODES=3 hack/e2e-test.sh +NUM_NODES=3 go run hack/e2e.go -v --build --up --test --down ``` ### Troubleshooting -- cgit v1.2.3 From e8117bd017a5d22ed264eebc80727b6a2b8606a4 Mon Sep 17 00:00:00 2001 From: Alex Robinson Date: Tue, 16 Feb 2016 15:00:14 -0800 Subject: Fix build cop issues link to filter out issues with the team/gke label. --- on-call-build-cop.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/on-call-build-cop.md b/on-call-build-cop.md index 7530963e..f5f12417 100644 --- a/on-call-build-cop.md +++ b/on-call-build-cop.md @@ -54,7 +54,7 @@ Traffic sources and responsibilities * When in doubt, choose a TL or team maintainer of the most relevant team; they can delegate * Keep in mind that you can @ mention people in an issue/PR to bring it to their attention without assigning it to them. You can also @ mention github teams, such as @kubernetes/goog-ux or @kubernetes/kubectl * If you need help triaging an issue or PR, consult with (or assign it to) @brendandburns, @thockin, @bgrant0607, @quinton-hoole, @davidopp, @dchen1107, @lavalamp (all U.S. Pacific Time) or @fgrzadkowski (Central European Time). - * At the beginning of your shift, please add team/* labels to any issues that have fallen through the cracks and don't have one. 
Likewise, be fair to the next person in rotation: try to ensure that every issue that gets filed while you are on duty is handled. The Github query to find issues with no team/* label is: [here](https://github.com/kubernetes/kubernetes/issues?utf8=%E2%9C%93&q=is%3Aopen+is%3Aissue+-label%3Ateam%2Fcontrol-plane+-label%3Ateam%2Fmesosphere+-label%3Ateam%2Fredhat+-label%3Ateam%2Frelease-infra+-label%3Ateam%2Fnone+-label%3Ateam%2Fnode+-label%3Ateam%2Fcluster+-label%3Ateam%2Fux+-label%3Ateam%2Fcsi+-label%3Ateam%2Fapi+-label%3Ateam%2Ftest-infra+). + * At the beginning of your shift, please add team/* labels to any issues that have fallen through the cracks and don't have one. Likewise, be fair to the next person in rotation: try to ensure that every issue that gets filed while you are on duty is handled. The Github query to find issues with no team/* label is: [here](https://github.com/kubernetes/kubernetes/issues?utf8=%E2%9C%93&q=is%3Aopen+is%3Aissue+-label%3Ateam%2Fcontrol-plane+-label%3Ateam%2Fmesosphere+-label%3Ateam%2Fredhat+-label%3Ateam%2Frelease-infra+-label%3Ateam%2Fnone+-label%3Ateam%2Fnode+-label%3Ateam%2Fcluster+-label%3Ateam%2Fux+-label%3Ateam%2Fcsi+-label%3Ateam%2Fapi+-label%3Ateam%2Ftest-infra+-label%3Ateam%2Fgke). 
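The long GitHub query linked above is just `is:open is:issue` plus one `-label:team/...` exclusion per known team; a sketch that rebuilds it from a list (URL-encoding omitted, team list copied from the query):

```shell
TEAMS="control-plane mesosphere redhat release-infra none node cluster ux csi api test-infra gke"

QUERY="is:open is:issue"
for team in $TEAMS; do
  QUERY="$QUERY -label:team/$team"
done

echo "$QUERY"
```

Pasting the result into the GitHub issue search box gives the same filter as the URL, and makes it easy to extend when a new team/* label is added.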
Example response for support issues: -- cgit v1.2.3 From 375d2ed2e2866a698c5957818581cee7a7dc69fd Mon Sep 17 00:00:00 2001 From: Jonathan Yu Date: Wed, 17 Feb 2016 16:01:22 +0000 Subject: Indicate that OpenSSL is required to run Kubernetes Signed-off-by: Jonathan Yu --- running-locally.md | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/running-locally.md b/running-locally.md index 257b2522..e5b1b40d 100644 --- a/running-locally.md +++ b/running-locally.md @@ -36,6 +36,7 @@ Getting started locally - [Docker](#docker) - [etcd](#etcd) - [go](#go) + - [OpenSSL](#openssl) - [Clone the repository](#clone-the-repository) - [Starting the cluster](#starting-the-cluster) - [Running a container](#running-a-container) @@ -68,6 +69,14 @@ You need an [etcd](https://github.com/coreos/etcd/releases) in your path, please You need [go](https://golang.org/doc/install) in your path (see [here](development.md#go-versions) for supported versions), please make sure it is installed and in your ``$PATH``. +#### OpenSSL + +You need [OpenSSL](https://www.openssl.org/) installed. If you do not have the `openssl` command available, you may see the following error in `/tmp/kube-apiserver.log`: + +``` +server.go:333] Invalid Authentication Config: open /tmp/kube-serviceaccount.key: no such file or directory +``` + ### Clone the repository In order to run kubernetes you must have the kubernetes code on the local machine. Cloning this repository is sufficient. -- cgit v1.2.3 From 024b847eb1f07f450008f6997cdff1f9233c7b01 Mon Sep 17 00:00:00 2001 From: laushinka Date: Sat, 13 Feb 2016 02:33:32 +0700 Subject: Spelling fixes inspired by github.com/client9/misspell --- README.md | 2 +- api_changes.md | 2 +- e2e-tests.md | 2 +- kubemark-guide.md | 2 +- update-release-docs.md | 2 +- 5 files changed, 5 insertions(+), 5 deletions(-) diff --git a/README.md b/README.md index 4128e001..727fbfa6 100644 --- a/README.md +++ b/README.md @@ -79,7 +79,7 @@ Guide](../admin/README.md). 
Document style advice for contributors. * **Running a cluster locally** ([running-locally.md](running-locally.md)): - A fast and lightweight local cluster deployment for developement. + A fast and lightweight local cluster deployment for development. ## Developing against the Kubernetes API diff --git a/api_changes.md b/api_changes.md index 0c039aab..585f015d 100644 --- a/api_changes.md +++ b/api_changes.md @@ -306,7 +306,7 @@ removed value as deprecated but allowed). This is actually a special case of a new representation, discussed above. For [Unions](api-conventions.md), sets of fields where at most one should be set, -it is acceptible to add a new option to the union if the [appropriate conventions] +it is acceptable to add a new option to the union if the [appropriate conventions] were followed in the original object. Removing an option requires following the deprecation process. diff --git a/e2e-tests.md b/e2e-tests.md index a9c440c7..0c75af70 100644 --- a/e2e-tests.md +++ b/e2e-tests.md @@ -36,7 +36,7 @@ Documentation for other releases can be found at ## Overview -End-to-end (e2e) tests for Kubernetes provide a mechanism to test end-to-end behavior of the system, and is the last signal to ensure end user operations match developer specifications. Although unit and integration tests should ideally provide a good signal, the reality is in a distributed system like Kubernetes it is not uncommon that a minor change may pass all unit and integration tests, but cause unforseen changes at the system level. e2e testing is very costly, both in time to run tests and difficulty debugging, though: it takes a long time to build, deploy, and exercise a cluster. Thus, the primary objectives of the e2e tests are to ensure a consistent and reliable behavior of the kubernetes code base, and to catch hard-to-test bugs before users do, when unit and integration tests are insufficient. 
+End-to-end (e2e) tests for Kubernetes provide a mechanism to test end-to-end behavior of the system, and is the last signal to ensure end user operations match developer specifications. Although unit and integration tests should ideally provide a good signal, the reality is in a distributed system like Kubernetes it is not uncommon that a minor change may pass all unit and integration tests, but cause unforeseen changes at the system level. e2e testing is very costly, both in time to run tests and difficulty debugging, though: it takes a long time to build, deploy, and exercise a cluster. Thus, the primary objectives of the e2e tests are to ensure a consistent and reliable behavior of the kubernetes code base, and to catch hard-to-test bugs before users do, when unit and integration tests are insufficient. The e2e tests in kubernetes are built atop of [Ginkgo](http://onsi.github.io/ginkgo/) and [Gomega](http://onsi.github.io/gomega/). There are a host of features that this BDD testing framework provides, and it is recommended that the developer read the documentation prior to diving into the tests. diff --git a/kubemark-guide.md b/kubemark-guide.md index c2addc8f..8edc4b0a 100644 --- a/kubemark-guide.md +++ b/kubemark-guide.md @@ -89,7 +89,7 @@ to update docker image address if you’re not using GCR and default image name* - Waits until all HollowNodes are in the Running phase (*will work exactly the same everywhere*) \* Port 443 is a secured port on the master machine which is used for all external communication with the API server. In the last sentence *external* means all traffic -comming from other machines, including all the Nodes, not only from outside of the cluster. Currently local components, i.e. ControllerManager and Scheduler talk with API server using insecure port 8080. +coming from other machines, including all the Nodes, not only from outside of the cluster. Currently local components, i.e. 
ControllerManager and Scheduler talk with API server using insecure port 8080. ### Running e2e tests on Kubemark cluster diff --git a/update-release-docs.md b/update-release-docs.md index e94c5442..e0c04047 100644 --- a/update-release-docs.md +++ b/update-release-docs.md @@ -104,7 +104,7 @@ The high level steps to update docs in an existing collection are: ## Updating docs on HEAD [Development guide](development.md) provides general instructions on how to contribute to kubernetes github repo. -[Docs how to guide](how-to-doc.md) provides conventions to follow while writting docs. +[Docs how to guide](how-to-doc.md) provides conventions to follow while writing docs. ## Updating docs in release branch -- cgit v1.2.3 From e6099d5fbfb3e954c646393b26e02e551ea49740 Mon Sep 17 00:00:00 2001 From: David Oppenheimer Date: Sun, 14 Feb 2016 16:56:40 -0800 Subject: Update user guide and scheduler documentation to describe node affinity. Register image priority locality function, which the original PR that introduced it forgot to do. Change zone and region labels to beta. --- scheduler_algorithm.md | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/scheduler_algorithm.md b/scheduler_algorithm.md index a3897ffb..0f52ca27 100755 --- a/scheduler_algorithm.md +++ b/scheduler_algorithm.md @@ -44,9 +44,8 @@ The purpose of filtering the nodes is to filter out the nodes that do not meet c - `NoVolumeZoneConflict`: Evaluate if the volumes a pod requests are available on the node, given the Zone restrictions. - `PodFitsResources`: Check if the free resource (CPU and Memory) meets the requirement of the Pod. The free resource is measured by the capacity minus the sum of requests of all Pods on the node. To learn more about the resource QoS in Kubernetes, please check [QoS proposal](../proposals/resource-qos.md). - `PodFitsHostPorts`: Check if any HostPort required by the Pod is already occupied on the node. 
-- `PodFitsHost`: Filter out all nodes except the one specified in the PodSpec's NodeName field. -- `PodSelectorMatches`: Check if the labels of the node match the labels specified in the Pod's `nodeSelector` field ([Here](../user-guide/node-selection/) is an example of how to use `nodeSelector` field). -- `CheckNodeLabelPresence`: Check if all the specified labels exist on a node or not, regardless of the value. +- `HostName`: Filter out all nodes except the one specified in the PodSpec's NodeName field. +- `MatchNodeSelector`: Check if the labels of the node match the labels specified in the Pod's `nodeSelector` field and, as of Kubernetes v1.2, also match the `scheduler.alpha.kubernetes.io/affinity` pod annotation if present. See [here](../user-guide/node-selection/) for more details on both. - `MaxEBSVolumeCount`: Ensure that the number of attached ElasticBlockStore volumes does not exceed a maximum value (by default, 39, since Amazon recommends a maximum of 40 with one of those 40 reserved for the root volume -- see [Amazon's documentation](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/volume_limits.html#linux-specific-volume-limits)). The maximum value can be controlled by setting the `KUBE_MAX_PD_VOLS` environment variable. - `MaxGCEPDVolumeCount`: Ensure that the number of attached GCE PersistentDisk volumes does not exceed a maximum value (by default, 16, which is the maximum GCE allows -- see [GCE's documentation](https://cloud.google.com/compute/docs/disks/persistent-disks#limits_for_predefined_machine_types)). The maximum value can be controlled by setting the `KUBE_MAX_PD_VOLS` environment variable. @@ -63,11 +62,11 @@ After the scores of all nodes are calculated, the node with highest score is cho Currently, Kubernetes scheduler provides some practical priority functions, including: - `LeastRequestedPriority`: The node is prioritized based on the fraction of the node that would be free if the new Pod were scheduled onto the node. 
(In other words, (capacity - sum of requests of all Pods already on the node - request of Pod that is being scheduled) / capacity). CPU and memory are equally weighted. The node with the highest free fraction is the most preferred. Note that this priority function has the effect of spreading Pods across the nodes with respect to resource consumption. -- `CalculateNodeLabelPriority`: Prefer nodes that have the specified label. - `BalancedResourceAllocation`: This priority function tries to put the Pod on a node such that the CPU and Memory utilization rate is balanced after the Pod is deployed. -- `CalculateSpreadPriority`: Spread Pods by minimizing the number of Pods belonging to the same service on the same node. If zone information is present on the nodes, the priority will be adjusted so that pods are spread across zones and nodes. +- `SelectorSpreadPriority`: Spread Pods by minimizing the number of Pods belonging to the same service, replication controller, or replica set on the same node. If zone information is present on the nodes, the priority will be adjusted so that pods are spread across zones and nodes. - `CalculateAntiAffinityPriority`: Spread Pods by minimizing the number of Pods belonging to the same service on nodes with the same value for a particular label. - `ImageLocalityPriority`: Nodes are prioritized based on locality of images requested by a pod. Nodes with larger size of already-installed packages required by the pod will be preferred over nodes with no already-installed packages required by the pod or a small total size of already-installed packages required by the pod. +- `NodeAffinityPriority`: (Kubernetes v1.2) Implements `preferredDuringSchedulingIgnoredDuringExecution` node affinity; see [here](../user-guide/node-selection/) for more details. The details of the above priority functions can be found in [plugin/pkg/scheduler/algorithm/priorities](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/priorities/). 
Kubernetes uses some, but not all, of these priority functions by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). Similar as predicates, you can combine the above priority functions and assign weight factors (positive number) to them as you want (check [scheduler.md](scheduler.md) for how to customize). -- cgit v1.2.3 From ecaea688620a75a0e48c518dd1cfdd08d9659c8b Mon Sep 17 00:00:00 2001 From: Chao Xu Date: Wed, 17 Feb 2016 15:14:43 -0800 Subject: add client-gen readme --- generating-clientset.md | 64 +++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 64 insertions(+) create mode 100644 generating-clientset.md diff --git a/generating-clientset.md b/generating-clientset.md new file mode 100644 index 00000000..e9f238f5 --- /dev/null +++ b/generating-clientset.md @@ -0,0 +1,64 @@ + + + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

PLEASE NOTE: This document applies to the HEAD of the source tree

+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + + + + +# Generation and release cycle of clientset + +Client-gen is an automatic tool that generates [clientset](../../docs/proposals/client-package-structure.md#high-level-client-sets) based on API types. This doc introduces the use of client-gen, and the release cycle of the generated clientsets. + +## Using client-gen + +The workflow includes four steps: +- Marking API types with tags: in `pkg/apis/${GROUP}/${VERSION}/types.go`, mark the types (e.g., Pods) that you want to generate clients for with the `// +genclient=true` tag. If the resource associated with the type is not namespace scoped (e.g., PersistentVolume), you need to append the `nonNamespaced=true` tag as well. +- Running the client-gen tool: you need to use the command line argument `--input` to specify the groups and versions of the APIs you want to generate clients for; client-gen will then look into `pkg/apis/${GROUP}/${VERSION}/types.go` and generate clients for the types you have marked with the `genclient` tags. For example, running + +``` +$ client-gen --input="api/v1,extensions/v1beta1" --clientset-name="my_release" +``` + +will generate a clientset named "my_release" which includes clients for api/v1 objects and extensions/v1beta1 objects. You can run `$ client-gen --help` to see other command line arguments. +- Adding expansion methods: client-gen only generates the common methods, such as `Create()` and `Delete()`. You can manually add additional methods through the expansion interface. For example, this [file](../../pkg/client/typed/generated/core/v1/pod_expansion.go) adds additional methods to Pod's client. As a convention, we put the expansion interface and its methods in file ${TYPE}_expansion.go.
+- Generating Fake clients for testing purposes: client-gen will generate a fake clientset if the command line argument `--fake-clientset` is set. The fake clientset provides the default implementation; you only need to fake out the methods you care about when writing test cases. + +The output of client-gen includes: +- Individual typed clients and a client for each group: they will be generated at `pkg/client/typed/generated/${GROUP}/${VERSION}/` +- clientset: the top-level clientset will be generated at `pkg/client/clientset_generated` by default, and you can change the path via the `--clientset-path` command line argument. + +## Released clientsets + +At the 1.2 release, we have two released clientsets in the repo: internalclientset and release_1_2. +- internalclientset: because most components in our repo still deal with the internal objects, the internalclientset talks in internal objects to ease the adoption of clientset. We will keep updating it as our API evolves. Eventually it will be replaced by a versioned clientset. +- release_1_2: release_1_2 clientset is a versioned clientset; it includes clients for the core v1 objects, extensions/v1beta1, autoscaling/v1, and batch/v1 objects. We will NOT update it after we cut the 1.2 release. After the 1.2 release, we will create release_1_3 clientset and keep it updated until we cut release 1.3. + + + + + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/generating-clientset.md?pixel)]() + -- cgit v1.2.3 From 3a26a3eaffa596997f0f9742f6b1801936154f37 Mon Sep 17 00:00:00 2001 From: Thibault Serot Date: Wed, 30 Dec 2015 23:48:00 +0100 Subject: Enabling DNS support in local-up-cluster.sh --- running-locally.md | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-) diff --git a/running-locally.md b/running-locally.md index e5b1b40d..b30a7de6 100644 --- a/running-locally.md +++ b/running-locally.md @@ -175,7 +175,16 @@ One or more of the Kubernetes daemons might've crashed.
Tail the logs of each in #### The pods fail to connect to the services by host names -The local-up-cluster.sh script doesn't start a DNS service. Similar situation can be found [here](http://issue.k8s.io/6667). You can start a manually. Related documents can be found [here](../../cluster/addons/dns/#how-do-i-configure-it) +To start the DNS service, you need to set the following variables: + +```sh +KUBE_ENABLE_CLUSTER_DNS=true +KUBE_DNS_SERVER_IP="10.0.0.10" +KUBE_DNS_DOMAIN="cluster.local" +KUBE_DNS_REPLICAS=1 +``` + +To learn more about the DNS service, see [here](http://issue.k8s.io/6667). Related documents can be found [here](../../cluster/addons/dns/#how-do-i-configure-it) -- cgit v1.2.3 From 1e6b7d53a305b04a28e03e36fb329b4ebad21380 Mon Sep 17 00:00:00 2001 From: Joe Finney Date: Wed, 24 Feb 2016 13:08:22 -0800 Subject: Remove PrepareVersion from hack/e2e.go. --- e2e-tests.md | 3 --- 1 file changed, 3 deletions(-) diff --git a/e2e-tests.md b/e2e-tests.md index 0c75af70..bc1e7184 100644 --- a/e2e-tests.md +++ b/e2e-tests.md @@ -63,9 +63,6 @@ go run hack/e2e.go -v --build # Create a fresh cluster. Deletes a cluster first, if it exists go run hack/e2e.go -v --up -# Create a fresh cluster at a specific release version. -go run hack/e2e.go -v --up --version=0.7.0 - # Test if a cluster is up.
go run hack/e2e.go -v --isup -- cgit v1.2.3 From b2bc099aeda9a1de3930157022526d19ba488234 Mon Sep 17 00:00:00 2001 From: derekwaynecarr Date: Thu, 25 Feb 2016 14:46:16 -0500 Subject: Update vagrant developer guide for where logs are located --- developer-guides/vagrant.md | 53 ++++++++++++++++++++++++++++++--------------- 1 file changed, 36 insertions(+), 17 deletions(-) diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md index 6ab4d670..5e20d507 100644 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -48,7 +48,14 @@ Running kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/deve ### Setup -By default, the Vagrant setup will create a single master VM (called kubernetes-master) and one node (called kubernetes-node-1). Each VM will take 1 GB, so make sure you have at least 2GB to 4GB of free memory (plus appropriate free disk space). To start your local cluster, open a shell and run: +Setting up a cluster is as simple as running: + +```sh +export KUBERNETES_PROVIDER=vagrant +curl -sS https://get.k8s.io | bash +``` + +Alternatively, you can download [Kubernetes release](https://github.com/kubernetes/kubernetes/releases) and extract the archive. To start your local cluster, open a shell and run: ```sh cd kubernetes @@ -59,6 +66,10 @@ export KUBERNETES_PROVIDER=vagrant The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster management scripts which variant to use. If you forget to set this, the assumption is you are running on Google Compute Engine. +By default, the Vagrant setup will create a single master VM (called kubernetes-master) and one node (called kubernetes-node-1). Each VM will take 1 GB, so make sure you have at least 2GB to 4GB of free memory (plus appropriate free disk space). + +Vagrant will provision each machine in the cluster with all the necessary components to run Kubernetes. The initial setup can take a few minutes to complete on each machine. 
+ If you installed more than one Vagrant provider, Kubernetes will usually pick the appropriate one. However, you can override which one Kubernetes will use by setting the [`VAGRANT_DEFAULT_PROVIDER`](https://docs.vagrantup.com/v2/providers/default.html) environment variable: ```sh @@ -67,9 +78,7 @@ export KUBERNETES_PROVIDER=vagrant ./cluster/kube-up.sh ``` -Vagrant will provision each machine in the cluster with all the necessary components to run Kubernetes. The initial setup can take a few minutes to complete on each machine. - -By default, each VM in the cluster is running Fedora, and all of the Kubernetes services are installed into systemd. +By default, each VM in the cluster is running Fedora. To access the master or any node: @@ -78,35 +87,45 @@ vagrant ssh master vagrant ssh node-1 ``` -If you are running more than one nodes, you can access the others by: +If you are running more than one node, you can access the others by: ```sh vagrant ssh node-2 vagrant ssh node-3 ``` +Each node in the cluster installs the docker daemon and the kubelet. + +The master node instantiates the Kubernetes master components as pods on the machine. 
+ To view the service status and/or logs on the kubernetes-master: ```console -$ vagrant ssh master -[vagrant@kubernetes-master ~] $ sudo systemctl status kube-apiserver -[vagrant@kubernetes-master ~] $ sudo journalctl -r -u kube-apiserver +[vagrant@kubernetes-master ~] $ vagrant ssh master +[vagrant@kubernetes-master ~] $ sudo su -[vagrant@kubernetes-master ~] $ sudo systemctl status kube-controller-manager -[vagrant@kubernetes-master ~] $ sudo journalctl -r -u kube-controller-manager +[root@kubernetes-master ~] $ systemctl status kubelet +[root@kubernetes-master ~] $ journalctl -ru kubelet -[vagrant@kubernetes-master ~] $ sudo systemctl status etcd -[vagrant@kubernetes-master ~] $ sudo systemctl status nginx +[root@kubernetes-master ~] $ systemctl status docker +[root@kubernetes-master ~] $ journalctl -ru docker + +[root@kubernetes-master ~] $ tail -f /var/log/kube-apiserver.log +[root@kubernetes-master ~] $ tail -f /var/log/kube-controller-manager.log +[root@kubernetes-master ~] $ tail -f /var/log/kube-scheduler.log ``` To view the services on any of the nodes: ```console -$ vagrant ssh node-1 -[vagrant@kubernetes-node-1] $ sudo systemctl status docker -[vagrant@kubernetes-node-1] $ sudo journalctl -r -u docker -[vagrant@kubernetes-node-1] $ sudo systemctl status kubelet -[vagrant@kubernetes-node-1] $ sudo journalctl -r -u kubelet +[vagrant@kubernetes-master ~] $ vagrant ssh node-1 +[vagrant@kubernetes-master ~] $ sudo su + +[root@kubernetes-master ~] $ systemctl status kubelet +[root@kubernetes-master ~] $ journalctl -ru kubelet + +[root@kubernetes-master ~] $ systemctl status docker +[root@kubernetes-master ~] $ journalctl -ru docker ``` ### Interacting with your Kubernetes cluster with Vagrant. 
-- cgit v1.2.3 From 27b8a757323eda68363195f6b6f46a5faa5eee65 Mon Sep 17 00:00:00 2001 From: Phillip Wittrock Date: Thu, 25 Feb 2016 08:38:06 -0800 Subject: Node e2e documentations and minor features - Add README.md for node e2e tests - Add support for --cleanup=false to leave test files on remote hosts and temporary instances for debugging - Add ubuntu trusty instances for docker 1.8 and docker 1.9 to jenkins pr builder - Disable coreos-beta for jenkins ci since it is failing - need to investigate --- e2e-node-tests.md | 141 ++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 141 insertions(+) create mode 100644 e2e-node-tests.md diff --git a/e2e-node-tests.md b/e2e-node-tests.md new file mode 100644 index 00000000..9df2d1db --- /dev/null +++ b/e2e-node-tests.md @@ -0,0 +1,141 @@ + + + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

PLEASE NOTE: This document applies to the HEAD of the source tree

+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + + + + +# Node End-To-End tests + +Node e2e tests start kubelet and minimal supporting infrastructure to validate the kubelet on a host. +Tests can be run either locally, against a remote host, or against a GCE image. + +*Note: Linux only. Mac and Windows unsupported.* + +## Running tests locally + +etcd must be installed and on the PATH to run the node e2e tests. To verify etcd is installed: `which etcd`. +You can find instructions for installing etcd [on the etcd releases page](https://github.com/coreos/etcd/releases). + +Run the tests locally: `make test_e2e_node` + +Running the node e2e tests locally will build the kubernetes go source files and then start the +kubelet, kube-apiserver, and etcd binaries on localhost before executing the ginkgo tests under +test/e2e_node against the local kubelet instance. + +## Running tests against a remote host + +The node e2e tests can be run against one or more remote hosts using one of +* [e2e-node-jenkins.sh](../../test/e2e_node/jenkins/e2e-node-jenkins.sh) (gce only) +* [run_e2e.go](../../test/e2e_node/runner/run_e2e.go) (requires passwordless ssh and remote passwordless sudo access over ssh) +* using [run_e2e.go](../../test/e2e_node/runner/run_e2e.go) to build a tar.gz and execute it on the host (requires host access w/ remote sudo) + +### Configuring a new remote host for testing + +The host must contain an environment capable of supporting a mini-kubernetes cluster. This includes: +* install etcd +* install docker +* install lxc and update grub commandline +* enable tty-less sudo access + +See [setup_host.sh](../../test/e2e_node/environment/setup_host.sh) + +### Running the tests + +1.
If running against a host on gce + * Copy [template.properties](../../test/e2e_node/jenkins/template.properties) + * Fill in `GCE_HOSTS` + * Set `INSTALL_GODEP=true` to install `godep`, `gomega`, `ginkgo` + * Make sure host names are resolvable to ssh `ssh `. + * If needed, you can run `gcloud compute config-ssh` to add gce hostnames to your .ssh/config so they are resolvable by ssh. + * Run `test/e2e_node/jenkins/e2e-node-jenkins.sh ` + * **Must be run from kubernetes root** + +2. If running against a host anywhere else + * **Requires password-less ssh and sudo access** + * Make sure this works - e.g. `ssh -- sudo echo "ok"` + * If ssh flags are required (e.g. `-i`), they can be used and passed to the tests with `--ssh-options` + * `godep go run test/e2e_node/runner/run_e2e.go --logtostderr --hosts ` + * **Must be run from kubernetes root** + * requires (go get): `github.com/tools/godep`, `github.com/onsi/gomega`, `github.com/onsi/ginkgo/ginkgo` + +3. Alternatively, manually build and copy `e2e_node_test.tar.gz` to a remote host + * Build the tar.gz `godep go run test/e2e_node/runner/run_e2e.go --logtostderr --build-only` + * requires (go get): `github.com/tools/godep`, `github.com/onsi/gomega`, `github.com/onsi/ginkgo/ginkgo` + * Copy `e2e_node_test.tar.gz` to the remote host + * Extract the archive on the remote host `tar -xzvf e2e_node_test.tar.gz` + * Run the tests `./e2e_node.test --logtostderr --vmodule=*=2 --build-services=false --node-name=` + * Note: This must be run from the directory containing the kubelet and kube-apiserver binaries. 
+ +## Running tests against a gce image + +* Build a gce image from a prepared gce host + * Create the host from a base image and configure it (see above) + * Run tests against this remote host to ensure that it is set up correctly before doing anything else + * Create a gce *snapshot* of the instance + * Create a gce *disk* from the snapshot + * Create a gce *image* from the disk +* Test that the necessary gcloud credentials are set up for the project + * `gcloud compute --project --zone images list` + * Verify that your image appears in the list +* Copy [template.properties](../../test/e2e_node/jenkins/template.properties) + * Fill in `GCE_PROJECT`, `GCE_ZONE`, `GCE_IMAGES` +* Run `test/e2e_node/jenkins/e2e-node-jenkins.sh ` + * **Must be run from kubernetes root** + +## Kubernetes Jenkins CI and PR builder + +Node e2e tests are run against a static list of host environments continuously, or when manually triggered on github.com +pull requests using the trigger phrase `@k8s-bot test node e2e experimental` - *results not yet published, pending +evaluation of test stability*. + + +### CI Host environments + +TBD + +### PR builder host environments + +| linux distro | distro version | docker version | etcd version | cloud provider |
|-----------------|----------------|----------------|--------------|----------------|
| containervm | | 1.8 | | gce |
| rhel | 7 | 1.10 | | gce |
| centos | 7 | 1.10 | | gce |
| coreos | stable | 1.8 | | gce |
| debian | jessie | 1.10 | | gce |
| ubuntu | trusty | 1.8 | | gce |
| ubuntu | trusty | 1.9 | | gce |
| ubuntu | trusty | 1.10 | | gce |
| ubuntu | wily | 1.10 | | gce |
+ + + + + + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/e2e-node-tests.md?pixel)]() + -- cgit v1.2.3 From 903eb395174c5a4698aed5600725f2b86bd4f9f6 Mon Sep 17 00:00:00 2001 From: Quinton Hoole Date: Fri, 22 Jan 2016 13:52:53 -0800 Subject: Add document on writing good e2e tests.
--- e2e-tests.md | 2 + writing-good-e2e-tests.md | 264 ++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 266 insertions(+) create mode 100644 writing-good-e2e-tests.md diff --git a/e2e-tests.md b/e2e-tests.md index 0c75af70..5e44f07d 100644 --- a/e2e-tests.md +++ b/e2e-tests.md @@ -42,6 +42,8 @@ The e2e tests in kubernetes are built atop of [Ginkgo](http://onsi.github.io/gin The purpose of *this* document is to serve as a primer for developers who are looking to execute or add tests using a local development environment. +Before writing new tests or making substantive changes to existing tests, you should also read [Writing Good e2e Tests](writing-good-e2e-tests.md) + ## Building and Running the Tests There are a variety of ways to run e2e tests, but we aim to decrease the number of ways to run e2e tests to a canonical way: `hack/e2e.go`. diff --git a/writing-good-e2e-tests.md b/writing-good-e2e-tests.md new file mode 100644 index 00000000..f00b55dc --- /dev/null +++ b/writing-good-e2e-tests.md @@ -0,0 +1,264 @@ + + + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

PLEASE NOTE: This document applies to the HEAD of the source tree

+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + + + + +# Writing good e2e tests for Kubernetes # + +## Patterns and Anti-Patterns ## + +### Goals of e2e tests ### + +Beyond the obvious goal of providing end-to-end system test coverage, +there are a few less obvious goals that you should bear in mind when +designing, writing and debugging your end-to-end tests. In +particular, "flaky" tests, which pass most of the time but fail +intermittently for difficult-to-diagnose reasons, are extremely costly +in terms of blurring our regression signals and slowing down our +automated merge queue. Up-front time and effort designing your test +to be reliable is very well spent. Bear in mind that we have hundreds +of tests, each running in dozens of different environments, and if any +test in any test environment fails, we have to assume that we +potentially have some sort of regression. So if a significant number +of tests fail even just 1% of the time, basic statistics dictates that +we will almost never have a "green" regression indicator. Stated +another way, writing a test that is only 99% reliable is just about +useless in the harsh reality of a CI environment. In fact it's worse +than useless, because not only does it not provide a reliable +regression indicator, but it also costs a lot of subsequent debugging +time and delays merges. + +#### Debuggability #### + +If your test fails, it should provide as detailed as possible reasons +for the failure in its output. "Timeout" is not a useful error +message. "Timed out after 60 seconds waiting for pod xxx to enter +running state, still in pending state" is much more useful to someone +trying to figure out why your test failed and what to do about it.
+Specifically, +[assertion](https://onsi.github.io/gomega/#making-assertions) code +like the following generates rather useless errors: + +``` +Expect(err).NotTo(HaveOccurred()) +``` + +Rather, +[annotate](https://onsi.github.io/gomega/#annotating-assertions) your assertion with something like this: + +``` +Expect(err).NotTo(HaveOccurred(), "Failed to create %d foobars, only created %d", foobarsReqd, foobarsCreated) +``` + +On the other hand, overly verbose logging, particularly of non-error conditions, can make +it unnecessarily difficult to figure out whether a test failed and, if +so, why. So don't log lots of irrelevant stuff either. + +#### Ability to run in non-dedicated test clusters #### + +To reduce end-to-end delay and improve resource utilization when +running e2e tests, we try, where possible, to run large numbers of +tests in parallel against the same test cluster. This means that: + +1. You should avoid making any assumption (implicit or explicit) that +your test is the only thing running against the cluster. For example, +making the assumption that your test can run a pod on every node in a +cluster is not a safe assumption, as some other tests, running at the +same time as yours, might have saturated one or more nodes in the +cluster. Similarly, running a pod in the system namespace, and +assuming that that will increase the count of pods in the system +namespace by one is not safe, as some other test might be creating or +deleting pods in the system namespace at the same time as your test. +If you do legitimately need to write a test like that, make sure to +label it ["\[Serial\]"](e2e-tests.md#kinds_of_tests) so that it's easy +to identify, and not run in parallel with any other tests. +1. You should avoid doing things to the cluster that make it difficult +for other tests to reliably do what they're trying to do, at the same +time.
For example, rebooting nodes, disconnecting network interfaces, +or upgrading cluster software as part of your test is likely to +violate the assumptions that other tests might have made about a +reasonably stable cluster environment. If you need to write such +tests, please label them as +["\[Disruptive\]"](e2e-tests.md#kinds_of_tests) so that it's easy to +identify them, and not run them in parallel with other tests. +1. You should avoid making assumptions about the Kubernetes API that +are not part of the API specification, as your tests will break as +soon as these assumptions become invalid. For example, relying on +specific Events, Event reasons or Event messages will make your tests +very brittle. + +#### Speed of execution #### + +We have hundreds of e2e tests, some of which we run in serial, one +after the other. If each test takes just a few minutes +to run, that very quickly adds up to many, many hours of total +execution time. We try to keep such total execution time down to a +few tens of minutes at most. Therefore, try (very hard) to keep the +execution time of your individual tests below 2 minutes, ideally +shorter than that. Concretely, adding inappropriately long 'sleep' +statements or other gratuitous waits to tests is a killer. If under +normal circumstances your pod enters the running state within 10 +seconds, and 99.9% of the time within 30 seconds, it would be +gratuitous to wait 5 minutes for this to happen. Rather, just fail +after 30 seconds, with a clear error message as to why your test +failed (e.g. "Pod x failed to become ready after 30 seconds, it +usually takes 10 seconds").
If you do have a truly legitimate reason +for waiting longer than that, or writing a test which takes longer +than 2 minutes to run, comment very clearly in the code why this is +necessary, and label the test as +["\[Slow\]"](e2e-tests.md#kinds_of_tests), so that it's easy to +identify and avoid in test runs that are required to complete +timeously (for example those that are run against every code +submission before it is allowed to be merged). +Note that completing within, say, 2 minutes only when the test +passes is not generally good enough. Your test should also fail in a +reasonable time. We have seen tests that, for example, wait up to 10 +minutes for each of several pods to become ready. Under good +conditions these tests might pass within a few seconds, but if the +pods never become ready (e.g. due to a system regression) they take a +very long time to fail and typically cause the entire test run to time +out, so that no results are produced. Again, this is a lot less +useful than a test that fails reliably within a minute or two when the +system is not working correctly. + +#### Resilience to relatively rare, temporary infrastructure glitches or delays #### + +Remember that your test will be run many thousands of +times, at different times of day and night, probably on different +cloud providers, under different load conditions. And often the +underlying state of these systems is stored in eventually consistent +data stores. So, for example, if a resource creation request is +theoretically asynchronous, even if you observe it to be practically +synchronous most of the time, write your test to assume that it's +asynchronous (e.g. make the "create" call, and poll or watch the +resource until it's in the correct state before proceeding). +Similarly, don't assume that API endpoints are 100% available. +They're not. Under high load conditions, API calls might temporarily +fail or time-out. 
In such cases it's appropriate to back off and retry +a few times before failing your test completely (in which case make +the error message very clear about what happened, e.g. "Retried +http://... 3 times - all failed with xxx"). Use the standard +retry mechanisms provided in the libraries detailed below. + +### Some concrete tools at your disposal ### + +Obviously most of the above goals apply to many tests, not just yours. +So we've developed a set of reusable test infrastructure, libraries +and best practices to help you to do the right thing, or at least do +the same thing as other tests, so that if that turns out to be the +wrong thing, it can be fixed in one place, not hundreds, to be the +right thing. + +Here are a few pointers: + ++ [E2e Framework](../../test/e2e/framework.go): + Familiarise yourself with this test framework and how to use it. + Amongst others, it automatically creates uniquely named namespaces + within which your tests can run to avoid name clashes, and reliably + automates cleaning up the mess after your test has completed (it + just deletes everything in the namespace). This helps to ensure + that tests do not leak resources. Note that deleting a namespace + (and by implication everything in it) is currently an expensive + operation. So the fewer resources you create, the less cleaning up + the framework needs to do, and the faster your test (and other + tests running concurrently with yours) will complete. Your tests + should always use this framework. Trying other home-grown + approaches to avoiding name clashes and resource leaks has proven + to be a very bad idea. ++ [E2e utils library](../../test/e2e/util.go): + This handy library provides tons of reusable code for a host of + commonly needed test functionality, including waiting for resources + to enter specified states, safely and consistently retrying failed + operations, usefully reporting errors, and much more.
Make sure + that you're familiar with what's available there, and use it. + Likewise, if you come across a generally useful mechanism that's + not yet implemented there, add it so that others can benefit from + your brilliance. In particular pay attention to the variety of + timeout and retry related constants at the top of that file. Always + try to reuse these constants rather than dream up your own + values. Even if the values there are not precisely what you would + like to use (timeout periods, retry counts etc), the benefit of + having them be consistent and centrally configurable across our + entire test suite typically outweighs your personal preferences. ++ **Follow the examples of stable, well-written tests:** Some of our + existing end-to-end tests are better written and more reliable than + others. A few examples of well-written tests include: + [Replication Controllers](../../test/e2e/rc.go), + [Services](../../test/e2e/service.go), + [Reboot](../../test/e2e/reboot.go). ++ [Ginkgo Test Framework](https://github.com/onsi/ginkgo): This is the + test library and runner upon which our e2e tests are built. Before + you write or refactor a test, read the docs and make sure that you + understand how it works. In particular be aware that every test is + uniquely identified and described (e.g. in test reports) by the + concatenation of its `Describe` clause and nested `It` clauses. + So for example `Describe("Pods", ...)... It("should be scheduled + with cpu and memory limits")` produces a sane test identifier and + descriptor `Pods should be scheduled with cpu and memory limits`, + which makes it clear what's being tested, and hence what's not + working if it fails.
Other good examples include: + +``` + CAdvisor should be healthy on every node +``` + +and + +``` + Daemon set should run and stop complex daemon +``` + + By contrast +(these are real examples), the following are weaker test +descriptors: + +``` + KubeProxy should test kube-proxy +``` + +and + +``` +Nodes [Disruptive] Network when a node becomes unreachable +[replication controller] recreates pods scheduled on the +unreachable node AND allows scheduling of pods on a node after +it rejoins the cluster +``` + +An improvement might be + +``` +Unreachable nodes are evacuated and then repopulated upon rejoining [Disruptive] +``` + +Note that opening issues for specific better tooling is welcome, and +code implementing that tooling is even more welcome :-). + + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/writing-good-e2e-tests.md?pixel)]() + -- cgit v1.2.3 From 3b6592d4ba2c1d0f5258b7ce3a3dae70f4e4be5f Mon Sep 17 00:00:00 2001 From: Chao Xu Date: Wed, 24 Feb 2016 10:41:16 -0800 Subject: fix links --- on-call-build-cop.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/on-call-build-cop.md b/on-call-build-cop.md index f5f12417..8795935a 100644 --- a/on-call-build-cop.md +++ b/on-call-build-cop.md @@ -77,7 +77,7 @@ Example response for support issues: Build-copping ------------- -* The [merge-bot submit queue](http://submit-queue.k8s.io/) ([source](https://github.com/kubernetes/contrib/tree/master/submit-queue)) should auto-merge all eligible PRs for you once they've passed all the relevant checks mentioned below and all [critical e2e tests] (https://goto.google.com/k8s-test/view/Critical%20Builds/) are passing. If the merge-bot been disabled for some reason, or tests are failing, you might need to do some manual merging to get things back on track.
+* The [merge-bot submit queue](http://submit-queue.k8s.io/) ([source](https://github.com/kubernetes/contrib/tree/master/mungegithub/mungers/submit-queue.go)) should auto-merge all eligible PRs for you once they've passed all the relevant checks mentioned below and all [critical e2e tests](https://goto.google.com/k8s-test/view/Critical%20Builds/) are passing. If the merge-bot has been disabled for some reason, or tests are failing, you might need to do some manual merging to get things back on track. * Once a day or so, look at the [flaky test builds](https://goto.google.com/k8s-test/view/Flaky/); if they are timing out, clusters are failing to start, or tests are consistently failing (instead of just flaking), file an issue to get things back on track. * Jobs that are not in [critical e2e tests](https://goto.google.com/k8s-test/view/Critical%20Builds/) or [flaky test builds](https://goto.google.com/k8s-test/view/Flaky/) are not your responsibility to monitor. The `Test owner:` in the job description will be automatically emailed if the job is failing. * If you are a weekday oncall, ensure that PRs conforming to the following pre-requisites are being merged at a reasonable rate: @@ -92,7 +92,7 @@ Build-copping * If a large number of tests fail, or tests that normally pass fail, that is an indication that one or more of the PR(s) in that build might be problematic (and should be reverted). * Use the Test Results Analyzer to see individual test history over time. * Flake mitigation - * Tests that flake (fail a small percentage of the time) need an issue filed against them. Please read [this](https://github.com/kubernetes/kubernetes/blob/doc-flaky-test/docs/devel/flaky-tests.md#filing-issues-for-flaky-tests); the build cop is expected to file issues for any flaky tests they encounter. + * Tests that flake (fail a small percentage of the time) need an issue filed against them.
Please read [this](flaky-tests.md#filing-issues-for-flaky-tests); the build cop is expected to file issues for any flaky tests they encounter. * It's reasonable to manually merge PRs that fix a flake or otherwise mitigate it. Contact information -- cgit v1.2.3 From 1380489076b037a59bfde9d190ae2e38b536006c Mon Sep 17 00:00:00 2001 From: David McMahon Date: Fri, 26 Feb 2016 18:33:24 -0800 Subject: New Godeps LICENSE generation tool. Includes initial Godeps/LICENSES and Godeps/.license_file_state file to ensure fast local generation. --- development.md | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/development.md b/development.md index bdef3213..e1a01015 100644 --- a/development.md +++ b/development.md @@ -228,9 +228,8 @@ It is sometimes expedient to manually fix the /Godeps/godeps.json file to minimi Please send dependency updates in separate commits within your PR, for easier reviewing. -6) If you updated the Godeps, please also update `Godeps/LICENSES.md` by running `hack/update-godep-licenses.sh`. +6) If you updated the Godeps, please also update `Godeps/LICENSES` by running `hack/update-godep-licenses.sh`. -_If Godep does not automatically vendor the proper license file for a new dependency, be sure to add an exception entry to `hack/update-godep-licenses.sh`._ ## Unit tests -- cgit v1.2.3 From 6962746589411c0c2dced1dc46975d5bfde10e0f Mon Sep 17 00:00:00 2001 From: Jay Vyas Date: Wed, 2 Mar 2016 14:23:58 -0500 Subject: Update Conformance definition section in e2e doc. --- e2e-tests.md | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/e2e-tests.md b/e2e-tests.md index 5e44f07d..529ab56a 100644 --- a/e2e-tests.md +++ b/e2e-tests.md @@ -197,6 +197,17 @@ End-to-end testing, as described above, is for [development distributions](writi The conformance test runs a subset of the e2e-tests against a manually-created cluster. It does not require support for up/push/down and other operations. 
To run a conformance test, you need to know the IP of the master for your cluster and the authorization arguments to use. The conformance test is intended to run against a cluster at a specific binary release of Kubernetes. See [conformance-test.sh](http://releases.k8s.io/HEAD/hack/conformance-test.sh). +### Defining what Conformance means + +It is impossible to define the entire space of Conformance tests without knowing the future, so instead, we define the complement of conformance tests, below. + +Please update this with companion PRs as necessary. + + - A conformance test cannot test cloud provider specific features (e.g. GCE monitoring, S3 Bucketing, ...) + - A conformance test cannot rely on any particular non-standard file system permissions granted to containers or users (e.g. sharing writable host /tmp with a container) + - A conformance test cannot rely on any binaries that are not required for the Linux kernel or for a kubelet to run (e.g. git) + - A conformance test cannot test a feature which obviously cannot be supported on a broad range of platforms (e.g. testing of multiple disk mounts, GPUs, high density) + ## Continuous Integration A quick overview of how we run e2e CI on Kubernetes. -- cgit v1.2.3 From 1ef7b2c44a87683e1ca769a94030204e9a9d60a3 Mon Sep 17 00:00:00 2001 From: David McMahon Date: Tue, 1 Mar 2016 17:49:00 -0800 Subject: Pass latest or stable to build/push-official-release.sh. --- releasing.md | 4 ---- 1 file changed, 4 deletions(-) diff --git a/releasing.md b/releasing.md index 27b0e906..475c9785 100644 --- a/releasing.md +++ b/releasing.md @@ -213,10 +213,6 @@ release](https://github.com/kubernetes/kubernetes/releases/new): notes draft), and attach it to the release; and 1. publish! -Finally, from a clone of upstream/master, *make sure* you still have -`RELEASE_VERSION` set correctly, and run `./build/mark-stable-release.sh -
- ### Manual tasks for new release series *TODO(#20946) Burn this list down.* -- cgit v1.2.3 From e31ecd66fef340c7a00d0b0f2e1663fa52e18de7 Mon Sep 17 00:00:00 2001 From: Jeff Grafton Date: Thu, 3 Mar 2016 15:21:14 -0800 Subject: Remove log collection code in cluster/gce/util.sh. Also update some docs to mention cluster/log-dump.sh. --- e2e-tests.md | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+) diff --git a/e2e-tests.md b/e2e-tests.md index 1978af18..d88be47c 100644 --- a/e2e-tests.md +++ b/e2e-tests.md @@ -148,6 +148,26 @@ As mentioned earlier there are a host of other options that are available, but t - `rm -rf /var/run/kubernetes`, clear kube generated credentials, sometimes stale permissions can cause problems. - `sudo iptables -F`, clear ip tables rules left by the kube-proxy. +### Debugging clusters + +If a cluster fails to initialize, or you'd like to better understand cluster +state to debug a failed e2e test, you can use the `cluster/log-dump.sh` script +to gather logs. + +This script requires that the cluster provider supports ssh. Assuming it does, +running + +``` +cluster/log-dump.sh +```` + +will ssh to the master and all nodes +and download a variety of useful logs to the provided directory (which should +already exist). + +The Google-run Jenkins builds automatically collected these logs for every +build, saving them in the `artifacts` directory uploaded to GCS. + ### Local clusters It can be much faster to iterate on a local cluster instead of a cloud-based one. To start a local cluster, you can run: -- cgit v1.2.3 From 8a122bd8cef246a3f5173e7b2d4a2b9da6e0b774 Mon Sep 17 00:00:00 2001 From: Joshua Piccari Date: Fri, 4 Mar 2016 14:36:37 -0800 Subject: Fix typo in developer guide README.md --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 727fbfa6..29fd9e52 100644 --- a/README.md +++ b/README.md @@ -64,7 +64,7 @@ Guide](../admin/README.md). 
* **Hunting flaky tests** ([flaky-tests.md](flaky-tests.md)): We have a goal of 99.9% flake free tests. Here's how to run your tests many times. -* **Logging Conventions** ([logging.md](logging.md)]: Glog levels. +* **Logging Conventions** ([logging.md](logging.md)): Glog levels. * **Profiling Kubernetes** ([profiling.md](profiling.md)): How to plug in go pprof profiler to Kubernetes. -- cgit v1.2.3 From 369415d0d692817a91438db53ff73ff180f78533 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Lucas=20K=C3=A4ldstr=C3=B6m?= Date: Sun, 6 Mar 2016 23:07:51 +0200 Subject: Add some info about binary downloads --- getting-builds.md | 12 ++++++++++-- 1 file changed, 10 insertions(+), 2 deletions(-) diff --git a/getting-builds.md b/getting-builds.md index 0caacb34..46c65709 100644 --- a/getting-builds.md +++ b/getting-builds.md @@ -38,10 +38,10 @@ You can use [hack/get-build.sh](http://releases.k8s.io/HEAD/hack/get-build.sh) t Run `./hack/get-build.sh -h` for its usage. -For example, to get a build at a specific version (v1.0.2): +For example, to get a build at a specific version (v1.1.1): ```console -./hack/get-build.sh v1.0.2 +./hack/get-build.sh v1.1.1 ``` Alternatively, to get the latest stable release: @@ -65,6 +65,14 @@ gsutil ls gs://kubernetes-release/ci/v0.20.0-29-g29a55cc/ # list the contents of gsutil ls gs://kubernetes-release/release # list all official releases and rcs ``` +## Install `gsutil` + +Example installation: + +```console +$ curl -sSL https://storage.googleapis.com/pub/gsutil.tar.gz | sudo tar -xz -C /usr/local/src +$ sudo ln -s /usr/local/src/gsutil/gsutil /usr/bin/gsutil +``` [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/getting-builds.md?pixel)]() -- cgit v1.2.3 From 124be6935272684a29207613dea7b99c478a0161 Mon Sep 17 00:00:00 2001 From: Erick Fejta Date: Sun, 6 Mar 2016 19:07:34 -0800 Subject: Add simplified testing instructions and etcd installation check. 
--- development.md | 79 ++--------------------- testing.md | 198 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 205 insertions(+), 72 deletions(-) create mode 100644 testing.md diff --git a/development.md b/development.md index e1a01015..5b8dbdad 100644 --- a/development.md +++ b/development.md @@ -168,7 +168,7 @@ export PATH=$PATH:$GOPATH/bin ### Using godep -Here's a quick walkthrough of one way to use godeps to add or update a Kubernetes dependency into Godeps/_workspace. For more details, please see the instructions in [godep's documentation](https://github.com/tools/godep). +Here's a quick walkthrough of one way to use godep to add or update a Kubernetes dependency into Godeps/\_workspace. For more details, please see the instructions in [godep's documentation](https://github.com/tools/godep). 1) Devote a directory to this endeavor: @@ -230,83 +230,18 @@ Please send dependency updates in separate commits within your PR, for easier re 6) If you updated the Godeps, please also update `Godeps/LICENSES` by running `hack/update-godep-licenses.sh`. +## Testing -## Unit tests +Three basic commands let you run unit, integration, and/or e2e tests: ```sh cd kubernetes -hack/test-go.sh +hack/test-go.sh # Run unit tests +hack/test-integration.sh # Run integration tests, requires etcd +go run hack/e2e.go -v --build --up --test --down # Run e2e tests ``` -Alternatively, you could also run: - -```sh -cd kubernetes -godep go test ./... -``` - -If you only want to run unit tests in one package, you could run ``godep go test`` under the package directory. For example, the following commands will run all unit tests in package kubelet: - -```console -$ cd kubernetes # step into the kubernetes directory. -$ cd pkg/kubelet -$ godep go test -# some output from unit tests -PASS -ok k8s.io/kubernetes/pkg/kubelet 0.317s -``` - -## Coverage - -Currently, collecting coverage is only supported for the Go unit tests.
- -To run all unit tests and generate an HTML coverage report, run the following: - -```sh -cd kubernetes -KUBE_COVER=y hack/test-go.sh -``` - -At the end of the run, an the HTML report will be generated with the path printed to stdout. - -To run tests and collect coverage in only one package, pass its relative path under the `kubernetes` directory as an argument, for example: - -```sh -cd kubernetes -KUBE_COVER=y hack/test-go.sh pkg/kubectl -``` - -Multiple arguments can be passed, in which case the coverage results will be combined for all tests run. - -Coverage results for the project can also be viewed on [Coveralls](https://coveralls.io/r/kubernetes/kubernetes), and are continuously updated as commits are merged. Additionally, all pull requests which spawn a Travis build will report unit test coverage results to Coveralls. Coverage reports from before the Kubernetes Github organization was created can be found [here](https://coveralls.io/r/GoogleCloudPlatform/kubernetes). - -## Integration tests - -You need an [etcd](https://github.com/coreos/etcd/releases) in your path. To download a copy of the latest version used by Kubernetes, either - * run `hack/install-etcd.sh`, which will download etcd to `third_party/etcd`, and then set your `PATH` to include `third_party/etcd`. - * inspect `cluster/saltbase/salt/etcd/etcd.manifest` for the correct version, and then manually download and install it to some place in your `PATH`. - -```sh -cd kubernetes -hack/test-integration.sh -``` - -## End-to-End tests - -See [End-to-End Testing in Kubernetes](e2e-tests.md). - -## Testing out flaky tests - -[Instructions here](flaky-tests.md) - -## Benchmarking - -To run benchmark tests, you'll typically use something like: - - $ godep go test ./pkg/apiserver -benchmem -run=XXX -bench=BenchmarkWatch - -The `-run=XXX` prevents normal unit tests for running, while `-bench` is a regexp for selecting which benchmarks to run. 
-See `go test -h` for more instructions on generating profiles from benchmarks. +See the [testing guide](testing.md) for additional information and scenarios. ## Regenerating the CLI documentation diff --git a/testing.md b/testing.md new file mode 100644 index 00000000..03eddeb4 --- /dev/null +++ b/testing.md @@ -0,0 +1,198 @@ + + + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

PLEASE NOTE: This document applies to the HEAD of the source tree

+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + + + + +# Testing guide + +This assumes you already read the [development guide](development.md) to +install Go, godep, and configure your git client. + +In order to send pull requests you need to make sure your changes pass +unit and integration tests. + +Kubernetes only merges pull requests when e2e tests are passing, so it is often +a good idea to make sure these work as well. + +## Unit tests + +Unit tests should be fully hermetic and +access no resources outside the test binary. + +### Run all unit tests + +```sh +cd kubernetes +hack/test-go.sh # Run all unit tests. +``` + +### Run some unit tests + +```sh +cd kubernetes + +# Run all tests under pkg (requires client to be in $GOPATH/src/k8s.io) +godep go test ./pkg/... + +# Run all tests in the pkg/api (but not subpackages) +godep go test ./pkg/api +``` + +### Stress running unit tests + +Running the same tests repeatedly is one way to root out flakes. +You can do this efficiently. + + +```sh +cd kubernetes + +# Have 2 workers run all tests 5 times each (10 total iterations). +hack/test-go.sh -p 2 -i 5 +``` + +For more advanced ideas please see [flaky-tests.md](flaky-tests.md). + +### Unit test coverage + +Currently, collecting coverage is only supported for the Go unit tests. + +To run all unit tests and generate an HTML coverage report, run the following: + +```sh +cd kubernetes +KUBE_COVER=y hack/test-go.sh +``` +
+ +To run tests and collect coverage in only one package, pass its relative path under the `kubernetes` directory as an argument, for example: + +```sh +cd kubernetes +KUBE_COVER=y hack/test-go.sh pkg/kubectl +``` + +Multiple arguments can be passed, in which case the coverage results will be combined for all tests run. + +Coverage results for the project can also be viewed on [Coveralls](https://coveralls.io/r/kubernetes/kubernetes), and are continuously updated as commits are merged. Additionally, all pull requests which spawn a Travis build will report unit test coverage results to Coveralls. Coverage reports from before the Kubernetes Github organization was created can be found [here](https://coveralls.io/r/GoogleCloudPlatform/kubernetes). + +### Benchmark unit tests + +To run benchmark tests, you'll typically use something like: + +```sh +cd kubernetes +godep go test ./pkg/apiserver -benchmem -run=XXX -bench=BenchmarkWatch +``` + +This will do the following: + +1. `-run=XXX` will turn off regular unit tests + * Technically it will run test methods with XXX in the name. +2. `-bench=BenchmarkWatch` will run test methods with BenchmarkWatch in the name + * See `grep -nr BenchmarkWatch .` for examples +3. `-benchmem` enables memory allocation stats + +See `go help test` and `go help testflag` for additional info. + + +## Integration tests + +Integration tests should only access other resources on the local machine, +most commonly etcd or a kubernetes/docker binary. + +### Install etcd dependency + +Kubernetes integration tests require your PATH to include an [etcd](https://github.com/coreos/etcd/releases) installation. +Kubernetes includes a script to help install etcd on your machine. 
+ +```sh +# Install etcd and add to PATH + +# Option a) install inside kubernetes root +cd kubernetes +hack/install-etcd.sh # Installs in ./third_party/etcd +echo export PATH="$PATH:$(pwd)/third_party/etcd" >> .profile # Add to PATH + +# Option b) install manually +cd kubernetes +grep -E "image.*etcd" cluster/saltbase/salt/etcd/etcd.manifest # Find version +# Install that version using yum/apt-get/etc +echo export PATH="$PATH:" >> .profile # Add to PATH +``` + +### Run integration tests + +```sh +cd kubernetes +hack/test-integration.sh # Run all integration tests. +``` + + +## End-to-End tests + +### e2e test philosophy + +In general passing unit and integration tests should provide sufficient +confidence +to allow code to merge. If that is not the case, please *invest more time adding +unit and integration test coverage*. These tests run faster and have a smaller failure domain. + +However, end-to-end (e2e) tests provide maximum confidence that +the system is working in exchange for reduced performance and a +higher debugging cost. + +e2e tests deploy a real kubernetes cluster of real nodes on a concrete provider such as GCE. The tests then manipulate the cluster in certain ways and assert the expected results. + +For a more in depth discussion please read [End-to-End Testing in Kubernetes](e2e-tests.md). + +### Running e2e tests + +```sh +cd kubernetes +go run hack/e2e.go -v --build --up --test --down + +# Change code, run unit and integration tests +# Push to an existing cluster, or bring up a cluster if it's down. +go run hack/e2e.go -v --pushup + +# Run all tests on an already up cluster +go run hack/e2e.go -v --test + +# Run only conformance tests +go run hack/e2e.go -v -test --test_args="--ginkgo.focus=\[Conformance\]" + +# Run tests on a specific provider +KUBERNETES_PROVIDER=aws go run hack/e2e.go --build --pushup --test --down +``` +
+ + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/testing.md?pixel)]() + -- cgit v1.2.3 From 92c0bf5f3bc9662ff0212441c13f36909b969d83 Mon Sep 17 00:00:00 2001 From: Erick Fejta Date: Sun, 6 Mar 2016 19:50:31 -0800 Subject: Add conventions --- coding-conventions.md | 1 + testing.md | 37 +++++++++++++++++++++++++++++++++---- 2 files changed, 34 insertions(+), 4 deletions(-) diff --git a/coding-conventions.md b/coding-conventions.md index 6af7c40e..69f4fa12 100644 --- a/coding-conventions.md +++ b/coding-conventions.md @@ -64,6 +64,7 @@ Testing conventions - Significant features should come with integration (test/integration) and/or [end-to-end (test/e2e) tests](e2e-tests.md) - Including new kubectl commands and major features of existing commands - Unit tests must pass on OS X and Windows platforms - if you use Linux specific features, your test case must either be skipped on windows or compiled out (skipped is better when running Linux specific commands, compiled out is required when your code does not compile on Windows). + - See the [testing guide](testing.md) for additional testing advice. Directory and file conventions - Avoid package sprawl. Find an appropriate subdirectory for new packages. (See [#4851](http://issues.k8s.io/4851) for discussion.) diff --git a/testing.md b/testing.md index 03eddeb4..0e02ebad 100644 --- a/testing.md +++ b/testing.md @@ -40,8 +40,17 @@ a good idea to make sure these work as well. ## Unit tests -Unit tests should be fully hermetic and -access no resources outside the test binary. +* Unit tests should be fully hermetic + - Only access resources in the test binary. +* All packages and any significant files require unit tests. 
+* The preferred method of testing multiple scenarios or inputs +is [table driven testing](https://github.com/golang/go/wiki/TableDrivenTests) + - Example: [TestNamespaceAuthorization](../../test/integration/auth_test.go) +* Unit tests must pass on OS X and Windows platforms. + - Tests using linux-specific features must be skipped or compiled out. + - Skipped is better, compiled out is required when it won't compile. +* Concurrent unit test runs must pass. +* See [coding conventions](coding-conventions.md). ### Run all unit tests @@ -123,8 +132,17 @@ See `go help test` and `go help testflag` for additional info. ## Integration tests -Integration tests should only access other resources on the local machine, -most commonly etcd or a kubernetes/docker binary. +* Integration tests should only access other resources on the local machine + - Most commonly etcd or a service listening on localhost. +* All significant features require integration tests. + - This includes kubectl commands. +* The preferred method of testing multiple scenarios or inputs +is [table driven testing](https://github.com/golang/go/wiki/TableDrivenTests) + - Example: [TestNamespaceAuthorization](../../test/integration/auth_test.go) +* Integration tests must run in parallel + - Each test should create its own master, httpserver and config. + - Example: [TestPodUpdateActiveDeadlineSeconds](../../test/integration/pods.go) +* See [coding conventions](coding-conventions.md). ### Install etcd dependency @@ -156,6 +174,17 @@ hack/test-integration.sh # Run all integration tests. ## End-to-End tests +* e2e tests build kubernetes and deploy a cluster of nodes. + - Generally on a specific cloud provider. +* Access gcr.io images +* Access a specific, non-latest image tag (unless testing pulling). +* Tests may not flake due to intermittent issues. +* Use Ginkgo to describe steps.
- See [should run a job to completion when tasks succeed](../../test/e2e/job.go) +* Use [NewDefaultFramework](../../test/e2e/framework.go) + - Contains clients, namespace and auto resource cleanup +* See [coding conventions](coding-conventions.md). + ### e2e test philosophy In general passing unit and integration tests should provide sufficient -- cgit v1.2.3 From 1a1db74ae8d2d49d9b08aca9f08791672133162d Mon Sep 17 00:00:00 2001 From: Maciej Szulik Date: Mon, 7 Mar 2016 13:31:19 +0100 Subject: Updated kubectl convetions with information about describing empty fields --- kubectl-conventions.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/kubectl-conventions.md b/kubectl-conventions.md index ba72d6fb..a6d0f850 100644 --- a/kubectl-conventions.md +++ b/kubectl-conventions.md @@ -119,6 +119,7 @@ Updated: 8/27/2015 * json, yaml, Go template, and jsonpath template formats should be supported and encouraged for subsequent processing * Users should use --api-version or --output-version to ensure the output uses the version they expect * `describe` commands may output on multiple lines and may include information from related resources, such as events. Describe should add additional information from related resources that a normal user may need to know - if a user would always run "describe resource1" and then immediately want to run a "get type2" or "describe resource2", consider including that info. Examples: persistent volume claims for pods that reference claims, events for most resources, nodes and the pods scheduled on them. When fetching related resources, a targeted field selector should be used in favor of client side filtering of related resources. +* For fields that can be explicitly unset (booleans, integers, structs), the output should say `<unset>`. Likewise, for arrays `<none>` should be used. Lastly `<unknown>` should be used where an unrecognized field type was specified.
* Mutations should output TYPE/name verbed by default, where TYPE is singular; `-o name` may be used to just display TYPE/name, which may be used to specify resources in other commands ## Documentation conventions @@ -185,7 +186,7 @@ func NewCmdMine(parent, name string, f *cmdutil.Factory, out io.Writer) *cobra.C } }, } - + cmd.Flags().BoolVar(&options.mineLatest, "latest", false, "Use latest stuff") return cmd } -- cgit v1.2.3 From e0770531abf24ca8589928feb0c63a9f17d590d0 Mon Sep 17 00:00:00 2001 From: David McMahon Date: Tue, 8 Mar 2016 18:06:40 -0800 Subject: Update the latestReleaseBranch to release-1.2 in the munger. --- README.md | 2 +- adding-an-APIGroup.md | 5 +++++ api-conventions.md | 2 +- api_changes.md | 2 +- automation.md | 2 +- cherry-picks.md | 2 +- cli-roadmap.md | 2 +- client-libraries.md | 2 +- coding-conventions.md | 2 +- collab.md | 2 +- developer-guides/vagrant.md | 2 +- development.md | 2 +- e2e-node-tests.md | 5 +++++ e2e-tests.md | 2 +- faster_reviews.md | 2 +- flaky-tests.md | 2 +- generating-clientset.md | 5 +++++ getting-builds.md | 2 +- instrumentation.md | 2 +- issues.md | 2 +- kubectl-conventions.md | 2 +- kubemark-guide.md | 5 +++++ logging.md | 2 +- making-release-notes.md | 2 +- mesos-style.md | 5 +++++ node-performance-testing.md | 5 +++++ on-call-build-cop.md | 5 +++++ on-call-rotations.md | 5 +++++ on-call-user-support.md | 5 +++++ owners.md | 5 +++++ profiling.md | 2 +- pull-requests.md | 2 +- releasing.md | 2 +- running-locally.md | 5 +++++ scheduler.md | 2 +- scheduler_algorithm.md | 2 +- update-release-docs.md | 5 +++++ writing-a-getting-started-guide.md | 2 +- writing-good-e2e-tests.md | 5 +++++ 39 files changed, 91 insertions(+), 26 deletions(-) diff --git a/README.md b/README.md index 29fd9e52..188b395e 100644 --- a/README.md +++ b/README.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.1/docs/devel/README.md). 
+[here](http://releases.k8s.io/release-1.2/docs/devel/README.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/adding-an-APIGroup.md b/adding-an-APIGroup.md index 0541af61..0119c7b9 100644 --- a/adding-an-APIGroup.md +++ b/adding-an-APIGroup.md @@ -18,6 +18,11 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/adding-an-APIGroup.md). + Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/api-conventions.md b/api-conventions.md index af02d3db..343800af 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.1/docs/devel/api-conventions.md). +[here](http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/api_changes.md b/api_changes.md index 585f015d..e244096f 100644 --- a/api_changes.md +++ b/api_changes.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.1/docs/devel/api_changes.md). +[here](http://releases.k8s.io/release-1.2/docs/devel/api_changes.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/automation.md b/automation.md index 99688de1..918af37b 100644 --- a/automation.md +++ b/automation.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.1/docs/devel/automation.md). +[here](http://releases.k8s.io/release-1.2/docs/devel/automation.md). 
Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/cherry-picks.md b/cherry-picks.md index 6f5aa5a9..7fbcc93a 100644 --- a/cherry-picks.md +++ b/cherry-picks.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.1/docs/devel/cherry-picks.md). +[here](http://releases.k8s.io/release-1.2/docs/devel/cherry-picks.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/cli-roadmap.md b/cli-roadmap.md index b2ea1894..7a7791b8 100644 --- a/cli-roadmap.md +++ b/cli-roadmap.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.1/docs/devel/cli-roadmap.md). +[here](http://releases.k8s.io/release-1.2/docs/devel/cli-roadmap.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/client-libraries.md b/client-libraries.md index aeba3610..a195b383 100644 --- a/client-libraries.md +++ b/client-libraries.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.1/docs/devel/client-libraries.md). +[here](http://releases.k8s.io/release-1.2/docs/devel/client-libraries.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/coding-conventions.md b/coding-conventions.md index 6af7c40e..9ab6b623 100644 --- a/coding-conventions.md +++ b/coding-conventions.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.1/docs/devel/coding-conventions.md). +[here](http://releases.k8s.io/release-1.2/docs/devel/coding-conventions.md). 
Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/collab.md b/collab.md index 28de1035..ab2e3337 100644 --- a/collab.md +++ b/collab.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.1/docs/devel/collab.md). +[here](http://releases.k8s.io/release-1.2/docs/devel/collab.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md index 6ab4d670..8f0e268a 100644 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.1/docs/devel/developer-guides/vagrant.md). +[here](http://releases.k8s.io/release-1.2/docs/devel/developer-guides/vagrant.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/development.md b/development.md index e1a01015..5a7b52db 100644 --- a/development.md +++ b/development.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.1/docs/devel/development.md). +[here](http://releases.k8s.io/release-1.2/docs/devel/development.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/e2e-node-tests.md b/e2e-node-tests.md index 9df2d1db..09189457 100644 --- a/e2e-node-tests.md +++ b/e2e-node-tests.md @@ -18,6 +18,11 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/e2e-node-tests.md). 
+ Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/e2e-tests.md b/e2e-tests.md index d88be47c..913f60f3 100644 --- a/e2e-tests.md +++ b/e2e-tests.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.1/docs/devel/e2e-tests.md). +[here](http://releases.k8s.io/release-1.2/docs/devel/e2e-tests.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/faster_reviews.md b/faster_reviews.md index 18a01fe9..eb7416d6 100644 --- a/faster_reviews.md +++ b/faster_reviews.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.1/docs/devel/faster_reviews.md). +[here](http://releases.k8s.io/release-1.2/docs/devel/faster_reviews.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/flaky-tests.md b/flaky-tests.md index ce838915..cd27c200 100644 --- a/flaky-tests.md +++ b/flaky-tests.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.1/docs/devel/flaky-tests.md). +[here](http://releases.k8s.io/release-1.2/docs/devel/flaky-tests.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/generating-clientset.md b/generating-clientset.md index e9f238f5..f5a8ca76 100644 --- a/generating-clientset.md +++ b/generating-clientset.md @@ -18,6 +18,11 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/generating-clientset.md). 
+ Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/getting-builds.md b/getting-builds.md index 0caacb34..95f5fa66 100644 --- a/getting-builds.md +++ b/getting-builds.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.1/docs/devel/getting-builds.md). +[here](http://releases.k8s.io/release-1.2/docs/devel/getting-builds.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/instrumentation.md b/instrumentation.md index bfd74026..5e195f6b 100644 --- a/instrumentation.md +++ b/instrumentation.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.1/docs/devel/instrumentation.md). +[here](http://releases.k8s.io/release-1.2/docs/devel/instrumentation.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/issues.md b/issues.md index ac0304ae..ed541adc 100644 --- a/issues.md +++ b/issues.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.1/docs/devel/issues.md). +[here](http://releases.k8s.io/release-1.2/docs/devel/issues.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/kubectl-conventions.md b/kubectl-conventions.md index a6d0f850..cc69c78f 100644 --- a/kubectl-conventions.md +++ b/kubectl-conventions.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.1/docs/devel/kubectl-conventions.md). +[here](http://releases.k8s.io/release-1.2/docs/devel/kubectl-conventions.md). 
Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/kubemark-guide.md b/kubemark-guide.md index 8edc4b0a..e5c8fdc4 100644 --- a/kubemark-guide.md +++ b/kubemark-guide.md @@ -18,6 +18,11 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/kubemark-guide.md). + Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/logging.md b/logging.md index 8dca0a9f..e0869980 100644 --- a/logging.md +++ b/logging.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.1/docs/devel/logging.md). +[here](http://releases.k8s.io/release-1.2/docs/devel/logging.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/making-release-notes.md b/making-release-notes.md index 48c7d72f..3418258e 100644 --- a/making-release-notes.md +++ b/making-release-notes.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.1/docs/devel/making-release-notes.md). +[here](http://releases.k8s.io/release-1.2/docs/devel/making-release-notes.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/mesos-style.md b/mesos-style.md index c0510264..9616dc31 100644 --- a/mesos-style.md +++ b/mesos-style.md @@ -18,6 +18,11 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/mesos-style.md). + Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). 
diff --git a/node-performance-testing.md b/node-performance-testing.md index 8a14eedc..ae8789a7 100644 --- a/node-performance-testing.md +++ b/node-performance-testing.md @@ -18,6 +18,11 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/node-performance-testing.md). + Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/on-call-build-cop.md b/on-call-build-cop.md index 8795935a..18067799 100644 --- a/on-call-build-cop.md +++ b/on-call-build-cop.md @@ -18,6 +18,11 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/on-call-build-cop.md). + Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/on-call-rotations.md b/on-call-rotations.md index 9544db51..46d5b75f 100644 --- a/on-call-rotations.md +++ b/on-call-rotations.md @@ -18,6 +18,11 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/on-call-rotations.md). + Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/on-call-user-support.md b/on-call-user-support.md index ceea9c76..1be99f17 100644 --- a/on-call-user-support.md +++ b/on-call-user-support.md @@ -18,6 +18,11 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/on-call-user-support.md). 
+ Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/owners.md b/owners.md index 3b5a1aca..dcd14483 100644 --- a/owners.md +++ b/owners.md @@ -18,6 +18,11 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/owners.md). + Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/profiling.md b/profiling.md index 18c87f41..5e74d25f 100644 --- a/profiling.md +++ b/profiling.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.1/docs/devel/profiling.md). +[here](http://releases.k8s.io/release-1.2/docs/devel/profiling.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/pull-requests.md b/pull-requests.md index 817882db..f4e63f89 100644 --- a/pull-requests.md +++ b/pull-requests.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.1/docs/devel/pull-requests.md). +[here](http://releases.k8s.io/release-1.2/docs/devel/pull-requests.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/releasing.md b/releasing.md index 475c9785..311b613d 100644 --- a/releasing.md +++ b/releasing.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.1/docs/devel/releasing.md). +[here](http://releases.k8s.io/release-1.2/docs/devel/releasing.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). 
diff --git a/running-locally.md b/running-locally.md index a84bb08c..98df8cfc 100644 --- a/running-locally.md +++ b/running-locally.md @@ -18,6 +18,11 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/running-locally.md). + Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/scheduler.md b/scheduler.md index 5051bfed..778fd087 100755 --- a/scheduler.md +++ b/scheduler.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.1/docs/devel/scheduler.md). +[here](http://releases.k8s.io/release-1.2/docs/devel/scheduler.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/scheduler_algorithm.md b/scheduler_algorithm.md index 0f52ca27..63206c8b 100755 --- a/scheduler_algorithm.md +++ b/scheduler_algorithm.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.1/docs/devel/scheduler_algorithm.md). +[here](http://releases.k8s.io/release-1.2/docs/devel/scheduler_algorithm.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/update-release-docs.md b/update-release-docs.md index e0c04047..1dbb20a8 100644 --- a/update-release-docs.md +++ b/update-release-docs.md @@ -18,6 +18,11 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/update-release-docs.md). + Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). 
diff --git a/writing-a-getting-started-guide.md b/writing-a-getting-started-guide.md index f6b2a4b1..729977b0 100644 --- a/writing-a-getting-started-guide.md +++ b/writing-a-getting-started-guide.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.1/docs/devel/writing-a-getting-started-guide.md). +[here](http://releases.k8s.io/release-1.2/docs/devel/writing-a-getting-started-guide.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/writing-good-e2e-tests.md b/writing-good-e2e-tests.md index f00b55dc..54b70030 100644 --- a/writing-good-e2e-tests.md +++ b/writing-good-e2e-tests.md @@ -18,6 +18,11 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/writing-good-e2e-tests.md). + Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). -- cgit v1.2.3 From 7e8bc7623ccf95ffd0e354def20d30efbcef0135 Mon Sep 17 00:00:00 2001 From: derekwaynecarr Date: Thu, 10 Mar 2016 13:12:53 -0500 Subject: Comment that godep versions 54 or above do not play nice with Kubernetes --- development.md | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/development.md b/development.md index 5a7b52db..18d3abe5 100644 --- a/development.md +++ b/development.md @@ -166,6 +166,16 @@ export GOPATH=$HOME/go-tools export PATH=$PATH:$GOPATH/bin ``` +Note: +At this time, godep update in the Kubernetes project only works properly if your version of godep is < 54. + +To check your version of godep: + +```sh +$ godep version +godep v53 (linux/amd64/go1.5.3) +``` + ### Using godep Here's a quick walkthrough of one way to use godeps to add or update a Kubernetes dependency into Godeps/_workspace. 
For more details, please see the instructions in [godep's documentation](https://github.com/tools/godep). -- cgit v1.2.3 From 713ce4f132e034740a160501eb53a2843f14fe81 Mon Sep 17 00:00:00 2001 From: David McMahon Date: Tue, 8 Mar 2016 18:21:08 -0800 Subject: Update the section on jenkins changes for a new branch. This reflects the actual state of things at the moment. There is quite a bit of assumed knowledge here in a rapidly changing (test) environment. referencing #22672. --- releasing.md | 42 +++++++++++++++++++++++++----------------- 1 file changed, 25 insertions(+), 17 deletions(-) diff --git a/releasing.md b/releasing.md index 475c9785..5016714b 100644 --- a/releasing.md +++ b/releasing.md @@ -226,23 +226,31 @@ been automated that need to happen after the branch has been cut: the unversioned warning in docs point to the latest release series. Please send the changes as a PR titled "Update the latestReleaseBranch to release-X.Y in the munger". -1. Add test jobs for the new branch. - 1. See [End-2-End Testing in Kubernetes](e2e-tests.md) for the test jobs - that should be running in CI, which are under version control in - `hack/jenkins/e2e.sh` (on the release branch) and - `hack/jenkins/job-configs/kubernetes-e2e.yaml` (in `master`). You'll - want to munge these for the release branch so that, as we cherry-pick - fixes onto the branch, we know that it builds, etc. (Talk with - @ihmccreery for more details.) - 1. Make sure all features that are supposed to be GA are covered by tests, - but remove feature tests on the release branch for features that aren't - GA. You can use `hack/list-feature-tests.sh` to see a list of tests - labeled as `[Feature:.+]`; make sure that these are all either covered in - CI jobs on the release branch or are experimental features. (The answer - should already be 'yes', but this is a good time to reconcile.) - 1. Make a dashboard in Jenkins that contains all of the jobs for this - release cycle, and also add them to Critical Builds. 
(Don't add them to - the merge-bot blockers; see kubernetes/contrib#156.) +1. Send a note to the test team (@kubernetes/goog-testing) that a new branch + has been created. + 1. There is currently much work being done on our Jenkins infrastructure + and configs. Eventually we could have a relatively simple interface + to make this change or a way to automatically use the new branch. + See [recent Issue #22672](https://github.com/kubernetes/kubernetes/issues/22672). + 1. You can provide this guidance in the email to aid in the setup: + 1. See [End-2-End Testing in Kubernetes](e2e-tests.md) for the test jobs + that should be running in CI, which are under version control in + `hack/jenkins/e2e.sh` (on the release branch) and + `hack/jenkins/job-configs/kubernetes-jenkins/kubernetes-e2e.yaml` + (in `master`). You'll want to munge these for the release + branch so that, as we cherry-pick fixes onto the branch, we know that + it builds, etc. (Talk with @ihmccreery for more details.) + 1. Make sure all features that are supposed to be GA are covered by tests, + but remove feature tests on the release branch for features that aren't + GA. You can use `hack/list-feature-tests.sh` to see a list of tests + labeled as `[Feature:.+]`; make sure that these are all either + covered in CI jobs on the release branch or are experimental + features. (The answer should already be 'yes', but this is a + good time to reconcile.) + 1. Make a dashboard in Jenkins that contains all of the jobs for this + release cycle, and also add them to Critical Builds. (Don't add + them to the merge-bot blockers; see kubernetes/contrib#156.) 
+ ## Injecting Version into Binaries -- cgit v1.2.3 From ae94c4fca489bdfd0f28f543cfeb3c9bea183334 Mon Sep 17 00:00:00 2001 From: Erick Fejta Date: Fri, 11 Mar 2016 02:06:05 -0800 Subject: Address thockin nits --- development.md | 2 +- testing.md | 14 ++++++++------ 2 files changed, 9 insertions(+), 7 deletions(-) diff --git a/development.md b/development.md index 5b8dbdad..191e1e33 100644 --- a/development.md +++ b/development.md @@ -232,7 +232,7 @@ Please send dependency updates in separate commits within your PR, for easier re ## Testing -Three basic commands let you run unit, integration and/or unit tests: +Three basic commands let you run unit, integration and/or e2e tests: ```sh cd kubernetes diff --git a/testing.md b/testing.md index 0e02ebad..b918f044 100644 --- a/testing.md +++ b/testing.md @@ -30,10 +30,10 @@ Documentation for other releases can be found at # Testing guide This assumes you already read the [development guide](development.md) to -install go, godeps and configure your git client. +install go, godeps, and configure your git client. In order to send pull requests you need to make sure you changes pass -unit, integration tests. +unit and integration tests. Kubernetes only merges pull requests when e2e tests are passing, so it is often a good idea to make sure these work as well. @@ -188,15 +188,17 @@ hack/test-integration.sh # Run all integration tests. ### e2e test philosophy In general passing unit and integration tests should provide sufficient -confidence -to allow code to merge. If that is not the case, please *invest more time adding -unit and integration test coverage*. These tests run faster and a smaller failure domain. +confidence to allow code to merge. If that is not the case, +please *invest more time adding unit and integration test coverage*. +These tests run faster and have a smaller failure domain. 
However, end-to-end (e2e) tests provide maximum confidence that the system is working in exchange for reduced performance and a higher debugging cost. -e2e tests deploy a real kubernetes cluster of real nodes on a concrete provider such as GCE. The tests then manipulate the cluster in certain ways and assert the expected results. +e2e tests deploy a real kubernetes cluster of real nodes on a concrete provider +such as GCE. The tests then manipulate the cluster in certain ways and +assert the expected results. For a more in depth discussion please read [End-to-End Testing in Kubernetes](e2e-tests.md). -- cgit v1.2.3 From dbc40f5a38a538c2a256c94f1fe14b73fd604ed9 Mon Sep 17 00:00:00 2001 From: nikhiljindal Date: Wed, 16 Mar 2016 15:17:03 -0700 Subject: Add a README for api-reference docs and link to it instead of linking to swagger-ui --- README.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/README.md b/README.md index 188b395e..b1af07df 100644 --- a/README.md +++ b/README.md @@ -83,7 +83,8 @@ Guide](../admin/README.md). ## Developing against the Kubernetes API -* API objects are explained at [http://kubernetes.io/third_party/swagger-ui/](http://kubernetes.io/third_party/swagger-ui/). +* The [REST API documentation](../api-reference/README.md) explains the REST + API exposed by apiserver. * **Annotations** ([docs/user-guide/annotations.md](../user-guide/annotations.md)): are for attaching arbitrary non-identifying metadata to objects. Programs that automate Kubernetes objects may use annotations to store small amounts of their state. -- cgit v1.2.3 From afb05a2b3d15b98fcbc420171af6f2c71076e321 Mon Sep 17 00:00:00 2001 From: David McMahon Date: Fri, 18 Mar 2016 15:20:45 -0700 Subject: Update with release-1.2. 
--- cherry-picks.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/cherry-picks.md b/cherry-picks.md index 7fbcc93a..fa261be4 100644 --- a/cherry-picks.md +++ b/cherry-picks.md @@ -55,14 +55,14 @@ If you are cherrypicking a change which adds a doc, then you also need to run Ideally, just running `hack/cherry_pick_pull.sh` should be enough, but we are not there yet: [#18861](https://github.com/kubernetes/kubernetes/issues/18861) -To cherrypick PR 123456 to release-1.1, run the following commands after running `hack/cherry_pick_pull.sh` and before merging the PR: +To cherrypick PR 123456 to release-1.2, run the following commands after running `hack/cherry_pick_pull.sh` and before merging the PR: ``` -$ git checkout -b automated-cherry-pick-of-#123456-upstream-release-1.1 - origin/automated-cherry-pick-of-#123456-upstream-release-1.1 -$ ./build/versionize-docs.sh release-1.1 +$ git checkout -b automated-cherry-pick-of-#123456-upstream-release-1.2 + origin/automated-cherry-pick-of-#123456-upstream-release-1.2 +$ ./build/versionize-docs.sh release-1.2 $ git commit -a -m "Running versionize docs" -$ git push origin automated-cherry-pick-of-#123456-upstream-release-1.1 +$ git push origin automated-cherry-pick-of-#123456-upstream-release-1.2 ``` ## Cherry Pick Review -- cgit v1.2.3 From 8846094b71eb4cf662fe1ff1c97386c064886a7d Mon Sep 17 00:00:00 2001 From: Aaron Crickenberger Date: Tue, 22 Mar 2016 13:09:31 -0700 Subject: Update conformance test policy Mostly doc updates and cruft removal - describe conformance test policy and howto in e2e-tests.md - rm e2e test info from testing.md in the name of DRY - rm cluster/test-conformance.sh; unusable in release tar, not e2e.go - update e2e test link in write-a-getting-started-guide.md --- e2e-tests.md | 36 ++++++++++++++++++++------- testing.md | 50 +------------------------------------- writing-a-getting-started-guide.md | 2 +- 3 files changed, 29 insertions(+), 59 deletions(-) diff --git 
a/e2e-tests.md b/e2e-tests.md index 913f60f3..afa1337d 100644 --- a/e2e-tests.md +++ b/e2e-tests.md @@ -77,12 +77,15 @@ go run hack/e2e.go -v --pushup # Run all tests go run hack/e2e.go -v --test -# Run tests matching the regex "\[Conformance\]" (the conformance tests) -go run hack/e2e.go -v -test --test_args="--ginkgo.focus=\[Conformance\]" +# Run tests matching the regex "\[Feature:Performance\]" +go run hack/e2e.go -v -test --test_args="--ginkgo.focus=\[Feature:Performance\]" # Conversely, exclude tests that match the regex "Pods.*env" go run hack/e2e.go -v -test --test_args="--ginkgo.focus=Pods.*env" +# Run tests in parallel, skip any that must be run serially +GINKGO_PARALLEL=y go run hack/e2e.go --v --test --test_args="--ginkgo.skip=\[Serial\]" + # Flags can be combined, and their actions will take place in this order: # --build, --push|--up|--pushup, --test|--tests=..., --down # @@ -96,9 +99,6 @@ KUBERNETES_PROVIDER=aws go run hack/e2e.go -v --build --pushup --test --down # kubectl output. go run hack/e2e.go -v -ctl='get events' go run hack/e2e.go -v -ctl='delete pod foobar' - -# Alternately, if you have the e2e cluster up and no desire to see the event stream, you can run ginkgo-e2e.sh directly: -hack/ginkgo-e2e.sh --ginkgo.focus=\[Conformance\] ``` The tests are built into a single binary which can be run used to deploy a Kubernetes system or run tests against an already-deployed Kubernetes system. See `go run hack/e2e.go --help` (or the flag definitions in `hack/e2e.go`) for more options, such as reusing an existing cluster. @@ -208,13 +208,31 @@ We are working on implementing clearer partitioning of our e2e tests to make run ### Conformance tests -Finally, `[Conformance]` tests are tests we expect to pass on **any** Kubernetes cluster. The `[Conformance]` label does not supersede any other labels. `[Conformance]` test policies are a work-in-progress (see #18162). 
+Finally, `[Conformance]` tests represent a subset of the e2e-tests we expect to pass on **any** Kubernetes cluster. The `[Conformance]` label does not supersede any other labels.
+
+As each new release of Kubernetes provides new functionality, the subset of tests necessary to demonstrate conformance grows with each release. Conformance is thus considered versioned with an eye towards backwards compatibility. Conformance tests for a given version should be run off of the release branch that corresponds to that version. Thus `v1.2` conformance tests would be run from the head of the `release-1.2` branch. e.g.:
+
+ - A v1.3 development cluster should pass v1.0, v1.1, v1.2 conformance tests
+ - A v1.2 cluster should pass v1.0, v1.1, v1.2 conformance tests
+ - A v1.1 cluster should pass v1.0, v1.1 conformance tests, and fail v1.2 conformance tests
+
+Conformance tests are designed to be run with no cloud provider configured. Conformance tests can be run against clusters that have not been created with `hack/e2e.go`, just provide a kubeconfig with the appropriate endpoint and credentials.
-End-to-end testing, as described above, is for [development distributions](writing-a-getting-started-guide.md). A conformance test is used on a [versioned distro](writing-a-getting-started-guide.md). (Links WIP)
+```sh
+# setup for conformance tests
+export KUBECONFIG=/path/to/kubeconfig
+export KUBE_CONFORMANCE_TEST=y
+# run all conformance tests +go run hack/e2e.go -v --test_args="--ginkgo.focus=\[Conformance\]" + +# run all parallel-safe conformance tests in parallel +GINKGO_PARALLEL=y go run hack/e2e.go --v --test --test_args="--ginkgo.focus=\[Conformance\] --ginkgo.skip=\[Serial\]" +# ... and finish up with remaining tests in serial +go run hack/e2e.go --v --test --test_args="--ginkgo.focus=\[Serial\].*\[Conformance\]" +``` -### Defining what Conformance means +### Defining Conformance Subset It is impossible to define the entire space of Conformance tests without knowing the future, so instead, we define the complement of conformance tests, below. diff --git a/testing.md b/testing.md index b918f044..3bc1c141 100644 --- a/testing.md +++ b/testing.md @@ -174,55 +174,7 @@ hack/test-integration.sh # Run all integration tests. ## End-to-End tests -* e2e tests build kubernetes and deploy a cluster of nodes. - - Generally on a specific cloud provider. -* Access gcr.io images -* Access a specific, non-latest image tag (unless testing pulling). -* Tests may not flake due to intermittent issues. -* Use ginko to desribe steps. - - See [should run a job to completion when tasks succeed](../../test/e2e/job.go) -* Use [NewDefaultFramework](../../test/e2e/framework.go) - - Contains clients, namespace and auto resource cleanup -* See [coding conventions](coding-conventions.md). - -### e2e test philosophy - -In general passing unit and integration tests should provide sufficient -confidence to allow code to merge. If that is not the case, -please *invest more time adding unit and integration test coverage*. -These tests run faster and have a smaller failure domain. - -However, end-to-end (e2e) tests provide maximum confidence that -the system is working in exchange for reduced performance and a -higher debugging cost. - -e2e tests deploy a real kubernetes cluster of real nodes on a concrete provider -such as GCE. 
The tests then manipulate the cluster in certain ways and -assert the expected results. - -For a more in depth discussion please read [End-to-End Testing in Kubernetes](e2e-tests.md). - -### Running e2e tests - -```sh -cd kubernetes -go run hack/e2e.go -v --build --up --test --down - -# Change code, run unit and integration tests -# Push to an existing cluster, or bring up a cluster if it's down. -go run hack/e2e.go -v --pushup - -# Run all tests on an already up cluster -go run hack/e2e.go -v --test - -# Run only conformance tests -go run hack/e2e.go -v -test --test_args="--ginkgo.focus=\[Conformance\]" - -# Run tests on a specific provider -KUBERNETES_PROVIDER=aws go run hack/e2e.go --build --pushup --test --down -``` - -For a more in depth discussion please read [End-to-End Testing in Kubernetes](e2e-tests.md). +Please refer to [End-to-End Testing in Kubernetes](e2e-tests.md). [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/testing.md?pixel)]() diff --git a/writing-a-getting-started-guide.md b/writing-a-getting-started-guide.md index 729977b0..fbe5aa1b 100644 --- a/writing-a-getting-started-guide.md +++ b/writing-a-getting-started-guide.md @@ -68,7 +68,7 @@ These guidelines say *what* to do. See the Rationale section for *why*. own repo. - Add or update a row in [The Matrix](../../docs/getting-started-guides/README.md). - State the binary version of Kubernetes that you tested clearly in your Guide doc. - - Setup a cluster and run the [conformance test](development.md#conformance-testing) against it, and report the + - Setup a cluster and run the [conformance tests](e2e-tests.md#conformance-tests) against it, and report the results in your PR. - Versioned distros should typically not modify or add code in `cluster/`. That is just scripts for developer distros. 
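The `--ginkgo.focus` and `--ginkgo.skip` arguments used throughout the commands above are ordinary regular expressions matched against full test names. As an illustrative sketch of how the bracketed labels select tests (the test names below are invented for this example):

```sh
# hypothetical test names, one per line
names='[Conformance] Pods should be restarted with a docker exec liveness probe
[Serial] [Conformance] Nodes should recover from reboot
[Feature:Performance] Density should allow starting 30 pods per node'

# --ginkgo.focus=\[Conformance\] selects only conformance tests
printf '%s\n' "$names" | grep -E '\[Conformance\]'

# --ginkgo.skip=\[Serial\] drops tests that must run serially
printf '%s\n' "$names" | grep -Ev '\[Serial\]'
```

Because the labels are matched as plain regexes, the square brackets must be escaped on the command line, exactly as in the examples above.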
-- cgit v1.2.3 From d5948ba42bfc564cb1571a086e5605606e268635 Mon Sep 17 00:00:00 2001 From: Aaron Crickenberger Date: Tue, 22 Mar 2016 13:48:31 -0700 Subject: Run hack/update-generated-docs.sh --- e2e-tests.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/e2e-tests.md b/e2e-tests.md index afa1337d..23d73e67 100644 --- a/e2e-tests.md +++ b/e2e-tests.md @@ -211,7 +211,7 @@ We are working on implementing clearer partitioning of our e2e tests to make run Finally, `[Conformance]` tests represent a subset of the e2e-tests we expect to pass on **any** Kubernetes cluster. The `[Conformance]` label does not supersede any other labels. As each new release of Kubernetes provides new functionality, the subset of tests necessary to demonstrate conformance grows with each release. Conformance is thus considered versioned with an eye towards backwards compatibility. Conformance tests for a given version should be run off of the release branch that corresponds to that version. Thus `v1.2` conformance tests would be run from the head of the `release-1.2` branch. e.g.: - + - A v1.3 development cluster should pass v1.0, v1.1, v1.2 conformance tests - A v1.2 cluster should pass v1.0, v1.1, v1.2 conformance tests - A v1.1 cluster should pass v1.0, v1.1 conformance tests, and fail v1.2 conformance tests -- cgit v1.2.3 From a9d113c4b25c0283d3836e59cc7df4a3634ca316 Mon Sep 17 00:00:00 2001 From: Aaron Crickenberger Date: Tue, 22 Mar 2016 15:36:27 -0700 Subject: Update versioning per supported releases policy --- e2e-tests.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/e2e-tests.md b/e2e-tests.md index 23d73e67..5d23bc9a 100644 --- a/e2e-tests.md +++ b/e2e-tests.md @@ -210,10 +210,10 @@ We are working on implementing clearer partitioning of our e2e tests to make run Finally, `[Conformance]` tests represent a subset of the e2e-tests we expect to pass on **any** Kubernetes cluster. The `[Conformance]` label does not supersede any other labels. 
-As each new release of Kubernetes providers new functionality, the subset of tests necessary to demonstrate conformance grows with each release. Conformance is thus considered versioned with an eye towards backwards compatibility. Conformance tests for a given version should be run off of the release branch that corresponds to that version. Thus `v1.2` conformance tests would be run from the head of the `release-1.2` branch. eg: +As each new release of Kubernetes provides new functionality, the subset of tests necessary to demonstrate conformance grows with each release. Conformance is thus considered versioned, with the same backwards compatibility guarantees as laid out in [our versioning policy](../design/versioning.md#supported-releases). Conformance tests for a given version should be run off of the release branch that corresponds to that version. Thus `v1.2` conformance tests would be run from the head of the `release-1.2` branch. e.g.: - - A v1.3 development cluster should pass v1.0, v1.1, v1.2 conformance tests - - A v1.2 cluster should pass v1.0, v1.1, v1.2 conformance tests + - A v1.3 development cluster should pass v1.1, v1.2 conformance tests + - A v1.2 cluster should pass v1.1, v1.2 conformance tests - A v1.1 cluster should pass v1.0, v1.1 conformance tests, and fail v1.2 conformance tests Conformance tests are designed to be run with no cloud provider configured. Conformance tests can be run against clusters that have not been created with `hack/e2e.go`; just provide a kubeconfig with the appropriate endpoint and credentials. 
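The versioned-conformance flow above can be sketched as follows (the version and branch names are illustrative, and the conformance run itself needs a real cluster, so it is shown in comments only):

```sh
# conformance tests for a given version run from that version's release branch
version="v1.2"
branch="release-${version#v}"  # strip the leading "v" -> "release-1.2"
echo "$branch"

# from a checkout of that branch, the run would then look like:
#   git checkout "$branch"
#   export KUBECONFIG=/path/to/kubeconfig
#   go run hack/e2e.go -v --test --test_args="--ginkgo.focus=\[Conformance\]"
```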
-- cgit v1.2.3 From 91a6c5e96daf908c3a6be4d0147061014001b2ea Mon Sep 17 00:00:00 2001 From: Aaron Crickenberger Date: Wed, 23 Mar 2016 12:14:54 -0700 Subject: Fix typo --- e2e-tests.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/e2e-tests.md b/e2e-tests.md index 5d23bc9a..175c323b 100644 --- a/e2e-tests.md +++ b/e2e-tests.md @@ -221,7 +221,7 @@ Conformance tests are designed to be run with no cloud provider configured. Con ```sh # setup for conformance tests export KUBECONFIG=/path/to/kubeconfig -export KUBE_CONFORMANCE_TEST=y +export KUBERNETES_CONFORMANCE_TEST=y # run all conformance tests go run hack/e2e.go -v --test_args="--ginkgo.focus=\[Conformance\]" -- cgit v1.2.3 From 3d914cf027607f93f3c425e2136f979d6fb80eab Mon Sep 17 00:00:00 2001 From: David McMahon Date: Tue, 22 Mar 2016 11:35:27 -0700 Subject: Update the cherry-pick guide to guide based on new batching method. --- cherry-picks.md | 33 ++++++++++++++++++++++++--------- 1 file changed, 24 insertions(+), 9 deletions(-) diff --git a/cherry-picks.md b/cherry-picks.md index fa261be4..c01fd76d 100644 --- a/cherry-picks.md +++ b/cherry-picks.md @@ -35,11 +35,27 @@ Documentation for other releases can be found at # Overview This document explains cherry picks are managed on release branches within the -Kubernetes projects. +Kubernetes projects. Patches are either applied in batches or individually +depending on the point in the release cycle. ## Propose a Cherry Pick -Any contributor can propose a cherry pick of any pull request, like so: +### BATCHING: After branching during code slush up to X.X.0 + +Starting with the release-1.2 branch, we shifted to a new cherrypick model +where the branch 'OWNERS' cherry pick batches of patches into the branch +to control the order and also vet what is or is not cherrypicked to a branch. 
+ +Contributors that want to include a cherrypick for a branch should label +their PR with the `cherrypick-candidate` label **AND** mark it +with the appropriate milestone (or the bot will unlabel it). + +These cherrypick-candidates will be triaged, batched up and submitted +to the release branch by the branch OWNERS. + +There is an [issue](https://github.com/kubernetes/kubernetes/issues/23347) open to automate this new procedure. + +### INDIVIDUAL CHERRYPICKS: Post X.X.0 ```sh hack/cherry_pick_pull.sh upstream/release-3.14 98765 @@ -48,7 +64,7 @@ hack/cherry_pick_pull.sh upstream/release-3.14 98765 ``` This will walk you through the steps to propose an automated cherry pick of pull #98765 for remote branch `upstream/release-3.14`. -### Cherrypicking a doc change +#### Cherrypicking a doc change If you are cherrypicking a change which adds a doc, then you also need to run `build/versionize-docs.sh` in the release branch to versionize that doc. @@ -72,16 +88,15 @@ particular, they may be self-merged by the release branch owner without fanfare, in the case the release branch owner knows the cherry pick was already requested - this should not be the norm, but it may happen. +## Searching for Cherry Picks + +See the [cherrypick queue dashboard](http://cherrypick.k8s.io/#/queue) for +status of PRs labeled as `cherrypick-candidate`. + [Contributor License Agreements](http://releases.k8s.io/HEAD/CONTRIBUTING.md) is considered implicit for all code within cherry-pick pull requests, ***unless there is a large conflict***. 
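For reference, the branch that `hack/cherry_pick_pull.sh` creates follows a simple naming convention; a sketch of constructing it from the PR number and target branch (values taken from the examples above):

```sh
# inputs: the master-branch PR number and the target release branch
pr=123456
target="release-3.14"

# the automated cherry-pick branch name the script conventionally uses
pick_branch="automated-cherry-pick-of-#${pr}-upstream-${target}"
echo "$pick_branch"   # automated-cherry-pick-of-#123456-upstream-release-3.14
```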
-## Searching for Cherry Picks - -Now that we've structured cherry picks as PRs, searching for all cherry-picks -against a release is a GitHub query: For example, -[this query is all of the v0.21.x cherry-picks](https://github.com/kubernetes/kubernetes/pulls?utf8=%E2%9C%93&q=is%3Apr+%22automated+cherry+pick%22+base%3Arelease-0.21) - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/cherry-picks.md?pixel)]() -- cgit v1.2.3 From 84183ee0df51f85861f69da7ef2c1912450738e9 Mon Sep 17 00:00:00 2001 From: David McMahon Date: Thu, 24 Mar 2016 16:22:32 -0700 Subject: Add a Release Notes section. Also fix up markdown headers and add a TOC. Ref: #23409 --- pull-requests.md | 42 ++++++++++++++++++++++++++++++++---------- 1 file changed, 32 insertions(+), 10 deletions(-) diff --git a/pull-requests.md b/pull-requests.md index f4e63f89..de2c6bd7 100644 --- a/pull-requests.md +++ b/pull-requests.md @@ -31,15 +31,26 @@ Documentation for other releases can be found at -Pull Request Process -==================== + + + +- [Pull Request Process](#pull-request-process) +- [Life of a Pull Request](#life-of-a-pull-request) + - [Before sending a pull request](#before-sending-a-pull-request) + - [Release Notes](#release-notes) + - [Visual overview](#visual-overview) +- [Other notes](#other-notes) +- [Automation](#automation) + + + +# Pull Request Process An overview of how pull requests are managed for kubernetes. This document assumes the reader has already followed the [development guide](development.md) to set up their environment. -Life of a Pull Request ----------------------- +# Life of a Pull Request Unless in the last few weeks of a milestone when we need to reduce churn and stabilize, we aim to be always accepting pull requests. @@ -52,7 +63,7 @@ There are several requirements for the submit-queue to work: Additionally, for infrequent or new contributors, we require the on call to apply the "ok-to-merge" label manually. 
This is gated by the [whitelist](https://github.com/kubernetes/contrib/blob/master/mungegithub/whitelist.txt). -### Before sending a pull request +## Before sending a pull request The following will save time for both you and your reviewer: @@ -60,12 +71,24 @@ The following will save time for both you and your reviewer: * Verify `hack/verify-generated-docs.sh` passes. * Verify `hack/test-go.sh` passes. -### Visual overview +## Release Notes + +All pull requests are initiated with a `needs-release-note` label. +There are many `release-note-*` label options, including `release-note-none`. +If your PR does not require any visibility at release time, you may use a +`release-note-none` label. Otherwise, please choose a `release-note-*` label +that fits your PR. + +Additionally, `release-note-none` is not allowed on PRs on release branches. + +Finally, ensure your PR title is the release note you want published at release +time. + +## Visual overview ![PR workflow](pr_workflow.png) -Other notes ----------- -# Other notes Pull requests that are purely support questions will be closed and redirected to [stackoverflow](http://stackoverflow.com/questions/tagged/kubernetes). @@ -82,8 +105,7 @@ request that subsequently needs to be reopened. We want to limit the total numbe * Encourage code velocity -Automation ---------- +# Automation We use a variety of automation to manage pull requests. This automation is described in detail [elsewhere.](automation.md) -- cgit v1.2.3 From fcf9b3c9433d96a423b7d5aaa6f573e86869702c Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Lucas=20K=C3=A4ldstr=C3=B6m?= Date: Sun, 27 Mar 2016 17:17:04 +0300 Subject: Up to golang 1.6 --- development.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/development.md b/development.md index bdef3213..b664fc52 100644 --- a/development.md +++ b/development.md @@ -54,7 +54,7 @@ Kubernetes is written in the [Go](http://golang.org) programming language. 
If yo ### Go versions -Requires Go version 1.4.x or 1.5.x +Requires Go version 1.4.x, 1.5.x or 1.6.x ## Git setup -- cgit v1.2.3 From 75e63949ff758a0e494761b46cfe3afbc9d09b06 Mon Sep 17 00:00:00 2001 From: Alex Robinson Date: Wed, 30 Mar 2016 18:46:37 -0700 Subject: Fix the link for unlabeled issues in the build cop guide. --- on-call-build-cop.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/on-call-build-cop.md b/on-call-build-cop.md index 18067799..bb4b513c 100644 --- a/on-call-build-cop.md +++ b/on-call-build-cop.md @@ -59,7 +59,7 @@ Traffic sources and responsibilities * When in doubt, choose a TL or team maintainer of the most relevant team; they can delegate * Keep in mind that you can @ mention people in an issue/PR to bring it to their attention without assigning it to them. You can also @ mention github teams, such as @kubernetes/goog-ux or @kubernetes/kubectl * If you need help triaging an issue or PR, consult with (or assign it to) @brendandburns, @thockin, @bgrant0607, @quinton-hoole, @davidopp, @dchen1107, @lavalamp (all U.S. Pacific Time) or @fgrzadkowski (Central European Time). - * At the beginning of your shift, please add team/* labels to any issues that have fallen through the cracks and don't have one. Likewise, be fair to the next person in rotation: try to ensure that every issue that gets filed while you are on duty is handled. The Github query to find issues with no team/* label is: [here](https://github.com/kubernetes/kubernetes/issues?utf8=%E2%9C%93&q=is%3Aopen+is%3Aissue+-label%3Ateam%2Fcontrol-plane+-label%3Ateam%2Fmesosphere+-label%3Ateam%2Fredhat+-label%3Ateam%2Frelease-infra+-label%3Ateam%2Fnone+-label%3Ateam%2Fnode+-label%3Ateam%2Fcluster+-label%3Ateam%2Fux+-label%3Ateam%2Fcsi+-label%3Ateam%2Fapi+-label%3Ateam%2Ftest-infra+-label%3Ateam%2Fgke). + * At the beginning of your shift, please add team/* labels to any issues that have fallen through the cracks and don't have one. 
Likewise, be fair to the next person in rotation: try to ensure that every issue that gets filed while you are on duty is handled. The Github query to find issues with no team/* label is: [here](https://github.com/kubernetes/kubernetes/issues?utf8=%E2%9C%93&q=is%3Aopen+is%3Aissue+-label%3Ateam%2Fcontrol-plane+-label%3Ateam%2Fmesosphere+-label%3Ateam%2Fredhat+-label%3Ateam%2Frelease-infra+-label%3Ateam%2Fnone+-label%3Ateam%2Fnode+-label%3Ateam%2Fcluster+-label%3Ateam%2Fux+-label%3Ateam%2Fapi+-label%3Ateam%2Ftest-infra+-label%3Ateam%2Fgke+-label%3A"team%2FCSI-API+Machinery+SIG"+-label%3Ateam%2Fhuawei+-label%3Ateam%2Fsig-aws). Example response for support issues: -- cgit v1.2.3 From 6e11ba72c43dee28ad33a4992528b7f0ba852a29 Mon Sep 17 00:00:00 2001 From: Chao Xu Date: Tue, 29 Mar 2016 14:52:43 -0700 Subject: Generate the typed clients under the clientset folder --- generating-clientset.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/generating-clientset.md b/generating-clientset.md index f5a8ca76..1788627e 100644 --- a/generating-clientset.md +++ b/generating-clientset.md @@ -47,12 +47,12 @@ $ client-gen --input="api/v1,extensions/v1beta1" --clientset-name="my_release" ``` will generate a clientset named "my_release" which includes clients for api/v1 objects and extensions/v1beta1 objects. You can run `$ client-gen --help` to see other command line arguments. -- Adding expansion methods: client-gen only generates the common methods, such as `Create()` and `Delete()`. You can manually add additional methods through the expansion interface. For example, this [file](../../pkg/client/typed/generated/core/v1/pod_expansion.go) adds additional methods to Pod's client. As a convention, we put the expansion interface and its methods in file ${TYPE}_expansion.go. +- Adding expansion methods: client-gen only generates the common methods, such as `Create()` and `Delete()`. You can manually add additional methods through the expansion interface. 
For example, this [file](../../pkg/client/clientset_generated/release_1_2/typed/core/v1/pod_expansion.go) adds additional methods to Pod's client. As a convention, we put the expansion interface and its methods in file ${TYPE}_expansion.go. - Generating Fake clients for testing purposes: client-gen will generate a fake clientset if the command line argument `--fake-clientset` is set. The fake clientset provides the default implementation; you only need to fake out the methods you care about when writing test cases. The output of client-gen includes: -- Individual typed clients and client for group: They will be generated at `pkg/client/typed/generated/${GROUP}/${VERSION}/` -- clientset: the top-level clientset will be generated at `pkg/client/clientset_generated` by default, and you can change the path via the `--clientset-path` command line argument. +- clientset: the clientset will be generated at `pkg/client/clientset_generated/` by default, and you can change the path via the `--clientset-path` command line argument. +- Individual typed clients and client for group: They will be generated at `pkg/client/clientset_generated/${clientset_name}/typed/generated/${GROUP}/${VERSION}/` ## Released clientsets -- cgit v1.2.3 From d197504b424bd06bfd66d6d2d0e88981313a1675 Mon Sep 17 00:00:00 2001 From: David McMahon Date: Mon, 28 Mar 2016 14:07:24 -0700 Subject: Clarify labeling states on proposed cherrypicks. Sync the examples with the scripts usage so we don't need to update this doc with every new branch. Supporting updates to docs/devel/pull-requests.md#release-notes. --- cherry-picks.md | 47 +++++++++++++++++++++++++---------------------- pull-requests.md | 16 ++++++---------- 2 files changed, 31 insertions(+), 32 deletions(-) diff --git a/cherry-picks.md b/cherry-picks.md index c01fd76d..02b5cc1e 100644 --- a/cherry-picks.md +++ b/cherry-picks.md @@ -40,29 +40,32 @@ depending on the point in the release cycle. 
## Propose a Cherry Pick -### BATCHING: After branching during code slush up to X.X.0 +1. Cherrypicks are [managed with labels and milestones](pull-requests.md#release-notes) -Starting with the release-1.2 branch, we shifted to a new cherrypick model -where the branch 'OWNERS' cherry pick batches of patches into the branch -to control the order and also vet what is or is not cherrypicked to a branch. +1. All label/milestone accounting happens on PRs on master. There's nothing to do on PRs targeted to the release branches. +1. When you want a PR to be merged to the release branch, make the following label changes to the **master** branch PR: -Contributors that want to include a cherrypick for a branch should label -their PR with the `cherrypick-candidate` label **AND** mark it -with the appropriate milestone (or the bot will unlabel it). + * Remove release-note-label-needed + * Add an appropriate release-note-(!label-needed) label + * Add an appropriate milestone + * Add the `cherrypick-candidate` label -These cherrypick-candidate's will be triaged, batched up and submitted -to the release branch by the branch OWNERS. +### How do cherrypick-candidates make it to the release branch? -There is an [issue](https://github.com/kubernetes/kubernetes/issues/23347) open to automate this new procedure. +1. **BATCHING:** After a branch is first created and before the X.Y.0 release + * Branch owners review the list of `cherrypick-candidate` labeled PRs. + * PRs batched up and merged to the release branch get a `cherrypick-approved` label and lose the `cherrypick-candidate` label. + * PRs that won't be merged to the release branch, lose the `cherrypick-candidate` label. -### INDIVIDUAL CHERRYPICKS: Post X.X.0 +1. **INDIVIDUAL CHERRYPICKS:** After the first X.Y.0 on a branch + * Run the cherry pick script. 
This example applies a master branch PR #98765 to the remote branch `upstream/release-3.14`: + `hack/cherry_pick_pull.sh upstream/release-3.14 98765` + * Your cherrypick PR (targeted to the branch) will immediately get the + `do-not-merge` label. The branch owner will triage PRs targeted to + the branch and label the ones to be merged by applying the `lgtm` + label. -```sh -hack/cherry_pick_pull.sh upstream/release-3.14 98765 -``` - -This will walk you through the steps to propose an automated cherry pick of pull - #98765 for remote branch `upstream/release-3.14`. +There is an [issue](https://github.com/kubernetes/kubernetes/issues/23347) open tracking the tool to automate the batching procedure. #### Cherrypicking a doc change @@ -71,14 +74,14 @@ If you are cherrypicking a change which adds a doc, then you also need to run Ideally, just running `hack/cherry_pick_pull.sh` should be enough, but we are not there yet: [#18861](https://github.com/kubernetes/kubernetes/issues/18861) -To cherrypick PR 123456 to release-1.2, run the following commands after running `hack/cherry_pick_pull.sh` and before merging the PR: +To cherrypick PR 123456 to release-3.14, run the following commands after running `hack/cherry_pick_pull.sh` and before merging the PR: ``` -$ git checkout -b automated-cherry-pick-of-#123456-upstream-release-1.2 - origin/automated-cherry-pick-of-#123456-upstream-release-1.2 -$ ./build/versionize-docs.sh release-1.2 +$ git checkout -b automated-cherry-pick-of-#123456-upstream-release-3.14 + origin/automated-cherry-pick-of-#123456-upstream-release-3.14 +$ ./build/versionize-docs.sh release-3.14 $ git commit -a -m "Running versionize docs" -$ git push origin automated-cherry-pick-of-#123456-upstream-release-1.2 +$ git push origin automated-cherry-pick-of-#123456-upstream-release-3.14 ``` ## Cherry Pick Review diff --git a/pull-requests.md b/pull-requests.md index de2c6bd7..6b68f716 100644 --- a/pull-requests.md +++ b/pull-requests.md @@ -73,16 +73,12 @@ The 
following will save time for both you and your reviewer: ## Release Notes -All pull requests are initiated with a `needs-release-note` label. -There are many `release-note-*` label options, including `release-note-none`. -If your PR does not require any visibility at release time, you may use a -`release-note-none` label. Otherwise, please choose a `release-note-*` label -that fits your PR. - -Additionally, `release-note-none` is not allowed on PRs on release branches. - -Finally, ensure your PR title is the release note you want published at relase -time. +1. Your PR title is the **release note** you want published at release time. +1. Release note labels are only needed on master branch PRs. +1. All pull requests are initiated with a `release-note-label-needed` label. +1. For a PR to be ready to merge, the `release-note-label-needed` label must be removed and one of the other `release-note-*` labels must be added. +1. `release-note-none` is a valid option if the PR does not need to be mentioned + at release time. ## Visual overview -- cgit v1.2.3 From 9a8e0fbebeb128ec4a73f6fca8e6c57fd20174a7 Mon Sep 17 00:00:00 2001 From: mikebrow Date: Fri, 1 Apr 2016 16:48:14 -0500 Subject: minor edits to testing guide Signed-off-by: mikebrow --- testing.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/testing.md b/testing.md index 3bc1c141..dc6a8bd7 100644 --- a/testing.md +++ b/testing.md @@ -32,11 +32,11 @@ Documentation for other releases can be found at This assumes you already read the [development guide](development.md) to install go, godeps, and configure your git client. -In order to send pull requests you need to make sure you changes pass -unit and integration tests. +Before sending pull requests you should at least make sure your changes have +passed both unit and integration tests. -Kubernetes only merges pull requests when e2e tests are passing, so it is often -a good idea to make sure these work as well. 
+Kubernetes only merges pull requests when unit, integration, and e2e tests are +passing, so it is often a good idea to make sure the e2e tests work as well. ## Unit tests @@ -155,13 +155,13 @@ Kubernetes includes a script to help install etcd on your machine. # Option a) install inside kubernetes root cd kubernetes hack/install-etcd.sh # Installs in ./third_party/etcd -echo export PATH="$PATH:$(pwd)/third_party/etcd" >> .profile # Add to PATH +echo export PATH="$PATH:$(pwd)/third_party/etcd" >> ~/.profile # Add to PATH # Option b) install manually cd kubernetes grep -E "image.*etcd" cluster/saltbase/etcd/etcd.manifest # Find version # Install that version using yum/apt-get/etc -echo export PATH="$PATH:" >> .profile # Add to PATH +echo export PATH="$PATH:" >> ~/.profile # Add to PATH ``` ### Run integration tests -- cgit v1.2.3 From 4d57a65504a4bc8b03fe6ae7a47d1f73ced78251 Mon Sep 17 00:00:00 2001 From: Thien-Thi Nguyen Date: Sat, 2 Apr 2016 12:46:57 +0200 Subject: Fix typo --- generating-clientset.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/generating-clientset.md b/generating-clientset.md index f5a8ca76..3bf5fd15 100644 --- a/generating-clientset.md +++ b/generating-clientset.md @@ -40,7 +40,7 @@ Client-gen is an automatic tool that generates [clientset](../../docs/proposals/ The workflow includes four steps: - Marking API types with tags: in `pkg/apis/${GROUP}/${VERSION}/types.go`, mark the types (e.g., Pods) that you want to generate clients for with the `// +genclient=true` tag. If the resource associated with the type is not namespace scoped (e.g., PersistentVolume), you need to append the `nonNamespaced=true` tag as well. -- Running the client-gen tool: you need to use the command line argument `--input` to specify the groups and versions of the APIs you want to generate clients for, client-gen will then look into `pkg/apis/${GROUP}/${VERSION}/types.go` and generate clients for the types you have marked with the `genlcient` tags. 
For example, run +- Running the client-gen tool: you need to use the command line argument `--input` to specify the groups and versions of the APIs you want to generate clients for, client-gen will then look into `pkg/apis/${GROUP}/${VERSION}/types.go` and generate clients for the types you have marked with the `genclient` tags. For example, run ``` $ client-gen --input="api/v1,extensions/v1beta1" --clientset-name="my_release" -- cgit v1.2.3 From 6116cfae8d8e7ae31134bd5bb641eda278bd5b46 Mon Sep 17 00:00:00 2001 From: Wojciech Tyczynski Date: Tue, 5 Apr 2016 01:31:01 +0200 Subject: Remove Shippable --- automation.md | 6 +++--- on-call-build-cop.md | 4 ++-- 2 files changed, 5 insertions(+), 5 deletions(-) diff --git a/automation.md b/automation.md index 918af37b..0d25fe3c 100644 --- a/automation.md +++ b/automation.md @@ -67,7 +67,7 @@ The status of the submit-queue is [online.](http://submit-queue.k8s.io/) A PR is considered "ready for merging" if it matches the following: * it has the `lgtm` label, and that `lgtm` is newer than the latest commit * it has passed the cla pre-submit and has the `cla:yes` label - * it has passed the travis and shippable pre-submit tests + * it has passed the travis pre-submit tests * one (or all) of * its author is in kubernetes/contrib/submit-queue/whitelist.txt * its author is in contributors.txt via the github API. @@ -103,7 +103,7 @@ Currently this runs: * needs-rebase - Adds `needs-rebase` to PRs that aren't currently mergeable, and removes it from those that are. * size - Adds `size/xs` - `size/xxl` labels to PRs * ok-to-test - Adds the `ok-to-test` message to PRs that have an `lgtm` but the e2e-builder would otherwise not test due to whitelist - * ping-ci - Attempts to ping the ci systems (Travis/Shippable) if they are missing from a PR. + * ping-ci - Attempts to ping the ci systems (Travis) if they are missing from a PR. 
* lgtm-after-commit - Removes the `lgtm` label from PRs where there are commits that are newer than the `lgtm` label In the works: @@ -130,7 +130,7 @@ PR builder to re-run the tests. To do this, reply to the PR with a message that Right now you have to ask a contributor (this may be you!) to re-run the test with "@k8s-bot test this" -### How can I kick Shippable to re-test on a failure? +### How can I kick Travis to re-test on a failure? Right now the easiest way is to close and then immediately re-open the PR. diff --git a/on-call-build-cop.md b/on-call-build-cop.md index bb4b513c..32660b2b 100644 --- a/on-call-build-cop.md +++ b/on-call-build-cop.md @@ -87,9 +87,9 @@ Build-copping * Jobs that are not in [critical e2e tests] (https://goto.google.com/k8s-test/view/Critical%20Builds/) or [flaky test builds](https://goto.google.com/k8s-test/view/Flaky/) are not your responsibility to monitor. The `Test owner:` in the job description will be automatically emailed if the job is failing. * If you are a weekday oncall, ensure that PRs conforming to the following pre-requisites are being merged at a reasonable rate: * [Have been LGTMd](https://github.com/kubernetes/kubernetes/labels/lgtm) - * Pass Travis and Shippable. + * Pass Travis and Jenkins per-PR tests. * Author has signed CLA if applicable. -* If you are a weekend oncall, [never merge PRs manually](collab.md), instead add the label "lgtm" to the PRs once they have been LGTMd and passed Travis and Shippable; this will cause merge-bot to merge them automatically (or make them easy to find by the next oncall, who will merge them). +* If you are a weekend oncall, [never merge PRs manually](collab.md), instead add the label "lgtm" to the PRs once they have been LGTMd and passed Travis; this will cause merge-bot to merge them automatically (or make them easy to find by the next oncall, who will merge them). 
* When the build is broken, roll back the PRs responsible ASAP * When E2E tests are unstable, a "merge freeze" may be instituted. During a merge freeze: * Oncall should slowly merge LGTMd changes throughout the day while monitoring E2E to ensure stability. -- cgit v1.2.3 From 643e3cdf29449fcd0cb0150f7eb9e7a74803ec03 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Lucas=20K=C3=A4ldstr=C3=B6m?= Date: Wed, 6 Apr 2016 20:08:45 +0300 Subject: Add a note about supported go version --- development.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/development.md b/development.md index 41179f57..9e5b0867 100644 --- a/development.md +++ b/development.md @@ -54,7 +54,9 @@ Kubernetes is written in the [Go](http://golang.org) programming language. If yo ### Go versions -Requires Go version 1.4.x, 1.5.x or 1.6.x +Kubernetes 1.0 - 1.2 only supports Go 1.4.2 + +Kubernetes 1.3 and higher supports Go 1.6.0 ## Git setup -- cgit v1.2.3 From dc6c0f0150ddae33413b8b0983138cdabd861fc0 Mon Sep 17 00:00:00 2001 From: mikebrow Date: Wed, 6 Apr 2016 17:03:11 -0500 Subject: minor edits to development.md to make the first steps easier and more obvious for newcommers Signed-off-by: mikebrow --- development.md | 50 ++++++++++++++++++++++++++++++-------------------- 1 file changed, 30 insertions(+), 20 deletions(-) diff --git a/development.md b/development.md index 9e5b0867..53706cad 100644 --- a/development.md +++ b/development.md @@ -35,8 +35,8 @@ Documentation for other releases can be found at # Development Guide This document is intended to be the canonical source of truth for things like -supported toolchain versions for building Kubernetes. If you find a -requirement that this doc does not capture, please file a bug. If you find +supported toolchain versions for building Kubernetes. If you find a +requirement that this doc does not capture, please file a bug. If you find other docs with references to requirements that are not simply links to this doc, please file a bug. 
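Since the supported Go version depends on the Kubernetes branch (1.4.2 for releases 1.0 - 1.2, 1.6.0 for 1.3 and later), a build script could compare dotted version strings numerically before starting a local build. A minimal self-contained sketch of such a comparison — the `atLeast` helper is illustrative, not part of any hack/ script:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// atLeast reports whether dotted version `have` is >= `need`,
// comparing each numeric component in turn (e.g. "1.6.2" >= "1.6.0").
func atLeast(have, need string) bool {
	h, n := strings.Split(have, "."), strings.Split(need, ".")
	for i := 0; i < len(h) || i < len(n); i++ {
		hv, nv := 0, 0
		if i < len(h) {
			hv, _ = strconv.Atoi(h[i])
		}
		if i < len(n) {
			nv, _ = strconv.Atoi(n[i])
		}
		if hv != nv {
			return hv > nv
		}
	}
	return true
}

func main() {
	// Kubernetes 1.3+ requires Go 1.6.0; releases 1.0 - 1.2 require Go 1.4.2.
	fmt.Println(atLeast("1.6.2", "1.6.0")) // true
	fmt.Println(atLeast("1.4.2", "1.6.0")) // false
}
```

In practice the `have` string would come from `go version` output; the hard-coded examples above just show the comparison.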
@@ -44,23 +44,32 @@ This document is intended to be relative to the branch in which it is found. It is guaranteed that requirements will change over time for the development branch, but release branches of Kubernetes should not change. -## Releases and Official Builds +## Building Kubernetes -Official releases are built in Docker containers. Details are [here](http://releases.k8s.io/HEAD/build/README.md). You can do simple builds and development with just a local Docker installation. If you want to build go code locally outside of docker, please continue below. +Official releases are built using Docker containers. To build Kubernetes using +Docker please follow [these instructions](http://releases.k8s.io/HEAD/build/README.md). -## Go development environment +### Go development environment -Kubernetes is written in the [Go](http://golang.org) programming language. If you haven't set up a Go development environment, please follow [these instructions](http://golang.org/doc/code.html) to install the go tools and set up a GOPATH. +Kubernetes is written in the [Go](http://golang.org) programming language. +To build Kubernetes without using Docker containers, you'll need a Go +development environment. Builds for Kubernetes 1.0 - 1.2 require Go version +1.4.2. Builds for Kubernetes 1.3 and higher require Go version 1.6.0. If you +haven't set up a Go development environment, please follow [these instructions](http://golang.org/doc/code.html) +to install the go tools and set up a GOPATH. -### Go versions +To build Kubernetes using your local Go development environment (generate linux +binaries): -Kubernetes 1.0 - 1.2 only supports Go 1.4.2 + hack/build-go.sh +You may pass build options and packages to the script as necessary. To build binaries for all platforms: -Kubernetes 1.3 and higher supports Go 1.6.0 + hack/build-cross.sh -## Git setup +## Workflow -Below, we outline one of the more common git workflows that core developers use. Other git workflows are also valid. 
+Below, we outline one of the more common git workflows that core developers use. +Other git workflows are also valid. ### Visual overview @@ -73,7 +82,8 @@ Below, we outline one of the more common git workflows that core developers use. ### Clone your fork -The commands below require that you have $GOPATH set ([$GOPATH docs](https://golang.org/doc/code.html#GOPATH)). We highly recommend you put Kubernetes' code into your GOPATH. Note: the commands below will not work if there is more than one directory in your `$GOPATH`. +The commands below require that you have $GOPATH set ([$GOPATH docs](https://golang.org/doc/code.html#GOPATH)). We highly recommend you put Kubernetes' code into your GOPATH. Note: the commands below will not work if +there is more than one directory in your `$GOPATH`. ```sh mkdir -p $GOPATH/src/k8s.io @@ -107,7 +117,7 @@ git remote set-url --push upstream no_push ### Committing changes to your fork Before committing any changes, please link/copy these pre-commit hooks into your .git -directory. This will keep you from accidentally committing non-gofmt'd go code. +directory. This will keep you from accidentally committing non-gofmt'd Go code. ```sh cd kubernetes/.git/hooks/ @@ -133,10 +143,10 @@ Upon merge, all git commits should represent meaningful milestones or units of work. Use commits to add clarity to the development and review process. Before merging a PR, squash any "fix review feedback", "typo", and "rebased" -sorts of commits. It is not imperative that every commit in a PR compile and -pass tests independently, but it is worth striving for. For mass automated +sorts of commits. It is not imperative that every commit in a PR compile and +pass tests independently, but it is worth striving for. For mass automated fixups (e.g. automated doc formatting), use one or more commits for the -changes to tooling and a final commit to apply the fixup en masse. This makes +changes to tooling and a final commit to apply the fixup en masse. 
This makes reviews much easier. See [Faster Reviews](faster_reviews.md) for more details. @@ -147,10 +157,10 @@ Kubernetes uses [godep](https://github.com/tools/godep) to manage dependencies. ### Installing godep -There are many ways to build and host go binaries. Here is an easy way to get utilities like `godep` installed: +There are many ways to build and host Go binaries. Here is an easy way to get utilities like `godep` installed: 1) Ensure that [mercurial](http://mercurial.selenic.com/wiki/Download) is installed on your system. (some of godep's dependencies use the mercurial -source control system). Use `apt-get install mercurial` or `yum install mercurial` on Linux, or [brew.sh](http://brew.sh) on OS X, or download +source control system). Use `apt-get install mercurial` or `yum install mercurial` on Linux, or [brew.sh](http://brew.sh) on OS X, or download directly from mercurial. 2) Create a new GOPATH for your tools and install godep: @@ -228,13 +238,13 @@ godep update path/to/dependency/... ``` _If `go get -u path/to/dependency` fails with compilation errors, instead try `go get -d -u path/to/dependency` -to fetch the dependencies without compiling them. This can happen when updating the cadvisor dependency._ +to fetch the dependencies without compiling them. This can happen when updating the cadvisor dependency._ 5) Before sending your PR, it's a good idea to sanity check that your Godeps.json file is ok by running `hack/verify-godeps.sh` _If hack/verify-godeps.sh fails after a `godep update`, it is possible that a transitive dependency was added or removed but not -updated by godeps. It then may be necessary to perform a `godep save ./...` to pick up the transitive dependency changes._ +updated by godeps. It then may be necessary to perform a `godep save ./...` to pick up the transitive dependency changes._ It is sometimes expedient to manually fix the /Godeps/godeps.json file to minimize the changes. 
-- cgit v1.2.3 From a700866f4a8b33e3d339b086f066dc7fd51b574c Mon Sep 17 00:00:00 2001 From: David McMahon Date: Wed, 6 Apr 2016 13:20:03 -0700 Subject: Sync up all release note related docs with the latest process/procedures. --- cherry-picks.md | 5 +++-- pull-requests.md | 27 +++++++++++++++++++++++++-- 2 files changed, 28 insertions(+), 4 deletions(-) diff --git a/cherry-picks.md b/cherry-picks.md index 02b5cc1e..3bc2a3ff 100644 --- a/cherry-picks.md +++ b/cherry-picks.md @@ -41,14 +41,15 @@ depending on the point in the release cycle. ## Propose a Cherry Pick 1. Cherrypicks are [managed with labels and milestones](pull-requests.md#release-notes) - 1. All label/milestone accounting happens on PRs on master. There's nothing to do on PRs targeted to the release branches. 1. When you want a PR to be merged to the release branch, make the following label changes to the **master** branch PR: - * Remove release-note-label-needed * Add an appropriate release-note-(!label-needed) label * Add an appropriate milestone * Add the `cherrypick-candidate` label + * The PR title is the **release note** you want published at release time and + note that PR titles are mutable and should reflect a release note + friendly message for any `release-note-*` labeled PRs. ### How do cherrypick-candidates make it to the release branch? diff --git a/pull-requests.md b/pull-requests.md index 6b68f716..5085651e 100644 --- a/pull-requests.md +++ b/pull-requests.md @@ -38,6 +38,7 @@ Documentation for other releases can be found at - [Life of a Pull Request](#life-of-a-pull-request) - [Before sending a pull request](#before-sending-a-pull-request) - [Release Notes](#release-notes) + - [Reviewing pre-release notes](#reviewing-pre-release-notes) - [Visual overview](#visual-overview) - [Other notes](#other-notes) - [Automation](#automation) @@ -73,12 +74,34 @@ The following will save time for both you and your reviewer: ## Release Notes -1. 
Your PR title is the **release note** you want published at release time. -1. Release note labels are only needed on master branch PRs. +This section applies only to pull requests on the master branch. + 1. All pull requests are initiated with a `release-note-label-needed` label. 1. For a PR to be ready to merge, the `release-note-label-needed` label must be removed and one of the other `release-note-*` labels must be added. 1. `release-note-none` is a valid option if the PR does not need to be mentioned at release time. +1. The PR title is the **release note** you want published at release time. + * NOTE: PR titles are mutable and should reflect a release note friendly + message for any `release-note-*` labeled PRs. + +The only exception to these rules is when a PR is not a cherry-pick and is +targeted directly to the non-master branch. In this case, a `release-note-*` +label is optional (and not enforced). + +### Reviewing pre-release notes + +**NOTE: THIS TOOLING IS NOT YET AVAILABLE, BUT COMING SOON!** + +At any time, you can see what the release notes will look like on any branch. + +``` +$ git pull https://github.com/kubernetes/release +$ RELNOTES=$PWD/release/relnotes +$ cd /to/your/kubernetes/repo +$ $RELNOTES -man # for details on how to use the tool +# Show release notes from the last release on a branch to HEAD +$ $RELNOTES --raw --branch=master +``` ## Visual overview -- cgit v1.2.3 From 56731aeca9fe83fbc3bb9ecf658c1b96536e42cc Mon Sep 17 00:00:00 2001 From: David McMahon Date: Thu, 7 Apr 2016 17:14:13 -0700 Subject: Clarify release-note label requirement for non-master PRs. --- pull-requests.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pull-requests.md b/pull-requests.md index 5085651e..a5aeac76 100644 --- a/pull-requests.md +++ b/pull-requests.md @@ -86,7 +86,7 @@ This section applies only to pull requests on the master branch. 
The only exception to these rules is when a PR is not a cherry-pick and is targeted directly to the non-master branch. In this case, a `release-note-*` -label is optional (and not enforced). +label is required for that non-master PR. ### Reviewing pre-release notes -- cgit v1.2.3 From ff66c016355b4bd5ef92161e927426276913f025 Mon Sep 17 00:00:00 2001 From: "Tim St. Clair" Date: Thu, 7 Apr 2016 10:21:31 -0700 Subject: Update test/e2e for test/e2e/framework refactoring --- writing-good-e2e-tests.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/writing-good-e2e-tests.md b/writing-good-e2e-tests.md index 54b70030..2cb0fe47 100644 --- a/writing-good-e2e-tests.md +++ b/writing-good-e2e-tests.md @@ -180,7 +180,7 @@ right thing. Here are a few pointers: -+ [E2e Framework](../../test/e2e/framework.go): ++ [E2e Framework](../../test/e2e/framework/framework.go): Familiarise yourself with this test framework and how to use it. Amongst others, it automatically creates uniquely named namespaces within which your tests can run to avoid name clashes, and reliably @@ -194,7 +194,7 @@ Here are a few pointers: should always use this framework. Trying other home-grown approaches to avoiding name clashes and resource leaks has proven to be a very bad idea. -+ [E2e utils library](../../test/e2e/util.go): ++ [E2e utils library](../../test/e2e/framework/util.go): This handy library provides tons of reusable code for a host of commonly needed test functionality, including waiting for resources to enter specified states, safely and consistently retrying failed -- cgit v1.2.3 From 974c57d60ef47ada4083c002070614a04b95ac5f Mon Sep 17 00:00:00 2001 From: Prashanth Balasubramanian Date: Fri, 15 Apr 2016 19:25:32 -0700 Subject: Clarify api-group docs by a tiny bit. 
--- adding-an-APIGroup.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/adding-an-APIGroup.md b/adding-an-APIGroup.md index 0119c7b9..dec5d3f0 100644 --- a/adding-an-APIGroup.md +++ b/adding-an-APIGroup.md @@ -56,7 +56,8 @@ Step 2 and 3 are mechanical, we plan on autogenerate these using the cmd/libs/go 1. Generate conversions and deep-copies: 1. Add your "group/" or "group/version" into hack/after-build/{update-generated-conversions.sh, update-generated-deep-copies.sh, verify-generated-conversions.sh, verify-generated-deep-copies.sh}; - 2. Run hack/update-generated-conversions.sh, hack/update-generated-deep-copies.sh. + 2. Make sure your pkg/apis/``/`` directory has a doc.go file with the comment `// +genconversion=true`, to catch the attention of our gen-conversion script. + 3. Run hack/update-all.sh. 2. Generate files for Ugorji codec: @@ -79,6 +80,7 @@ We are overhauling pkg/client, so this section might be outdated; see [#15730](h 2. Add your "group/version" to `KUBE_API_VERSIONS` and `KUBE_TEST_API_VERSIONS` in hack/test-go.sh. +TODO: Add a troubleshooting section. -- cgit v1.2.3 From c8dd7c28d53923d56d8559d05e3afd4fa8101a4b Mon Sep 17 00:00:00 2001 From: Deyuan Deng Date: Thu, 24 Mar 2016 16:52:57 +0800 Subject: Update API change for internal types --- api_changes.md | 25 ++++++------------------- 1 file changed, 6 insertions(+), 19 deletions(-) diff --git a/api_changes.md b/api_changes.md index e244096f..987d5576 100644 --- a/api_changes.md +++ b/api_changes.md @@ -51,7 +51,6 @@ found at [API Conventions](api-conventions.md). - [Edit types.go](#edit-typesgo) - [Edit validation.go](#edit-validationgo) - [Edit version conversions](#edit-version-conversions) - - [Edit deep copy files](#edit-deep-copy-files) - [Edit json (un)marshaling code](#edit-json-unmarshaling-code) - [Making a new API Group](#making-a-new-api-group) - [Update the fuzzer](#update-the-fuzzer) @@ -456,9 +455,14 @@ regenerate auto-generated ones. 
To regenerate them: - run ```sh -hack/update-generated-conversions.sh +hack/update-codegen.sh ``` +update-codegen will also generate code to handle deep copy of your versioned +api objects. The deep copy code resides with each versioned API: + - `pkg/api//deep_copy_generated.go` containing auto-generated copy functions + - `pkg/apis/extensions//deep_copy_generated.go` containing auto-generated copy functions + If running the above script is impossible due to compile errors, the easiest workaround is to comment out the code causing errors and let the script regenerate it. If the auto-generated conversion methods are not used by the
- -The deep copy code resides with each versioned API: - - `pkg/api//deep_copy_generated.go` containing auto-generated copy functions - - `pkg/apis/extensions//deep_copy_generated.go` containing auto-generated copy functions - -To regenerate them: - - run - -```sh -hack/update-generated-deep-copies.sh -``` - ## Edit json (un)marshaling code We are auto-generating code for marshaling and unmarshaling json representation -- cgit v1.2.3 From 16d43fd660a9127b98ad221b7c92279272dc6f9e Mon Sep 17 00:00:00 2001 From: mikebrow Date: Tue, 19 Apr 2016 09:41:38 -0500 Subject: updates to devel/*.md files Signed-off-by: mikebrow --- adding-an-APIGroup.md | 65 ++- api-conventions.md | 1116 ++++++++++++++++++++++++++++++++++++------------- api_changes.md | 502 +++++++++++----------- automation.md | 77 ++-- cherry-picks.md | 48 ++- client-libraries.md | 3 +- 6 files changed, 1225 insertions(+), 586 deletions(-) diff --git a/adding-an-APIGroup.md b/adding-an-APIGroup.md index dec5d3f0..1e57c0ab 100644 --- a/adding-an-APIGroup.md +++ b/adding-an-APIGroup.md @@ -35,30 +35,55 @@ Documentation for other releases can be found at Adding an API Group =============== -This document includes the steps to add an API group. You may also want to take a look at PR [#16621](https://github.com/kubernetes/kubernetes/pull/16621) and PR [#13146](https://github.com/kubernetes/kubernetes/pull/13146), which add API groups. +This document includes the steps to add an API group. You may also want to take +a look at PR [#16621](https://github.com/kubernetes/kubernetes/pull/16621) and +PR [#13146](https://github.com/kubernetes/kubernetes/pull/13146), which add API +groups. -Please also read about [API conventions](api-conventions.md) and [API changes](api_changes.md) before adding an API group. +Please also read about [API conventions](api-conventions.md) and +[API changes](api_changes.md) before adding an API group. 
### Your core group package: -We plan on improving the way the types are factored in the future; see [#16062](https://github.com/kubernetes/kubernetes/pull/16062) for the directions in which this might evolve. +We plan on improving the way the types are factored in the future; see +[#16062](https://github.com/kubernetes/kubernetes/pull/16062) for the directions +in which this might evolve. -1. Create a folder in pkg/apis to hold you group. Create types.go in pkg/apis/``/ and pkg/apis/``/``/ to define API objects in your group; +1. Create a folder in pkg/apis to hold your group. Create types.go in + pkg/apis/``/ and pkg/apis/``/``/ to define API objects + in your group; -2. Create pkg/apis/``/{register.go, ``/register.go} to register this group's API objects to the encoding/decoding scheme (e.g., [pkg/apis/extensions/register.go](../../pkg/apis/extensions/register.go) and [pkg/apis/extensions/v1beta1/register.go](../../pkg/apis/extensions/v1beta1/register.go); +2. Create pkg/apis/``/{register.go, ``/register.go} to register +this group's API objects to the encoding/decoding scheme (e.g., +[pkg/apis/extensions/register.go](../../pkg/apis/extensions/register.go) and +[pkg/apis/extensions/v1beta1/register.go](../../pkg/apis/extensions/v1beta1/register.go); -3. Add a pkg/apis/``/install/install.go, which is responsible for adding the group to the `latest` package, so that other packages can access the group's meta through `latest.Group`. You probably only need to change the name of group and version in the [example](../../pkg/apis/extensions/install/install.go)). You need to import this `install` package in {pkg/master, pkg/client/unversioned}/import_known_versions.go, if you want to make your group accessible to other packages in the kube-apiserver binary, binaries that uses the client package. +3.
Add a pkg/apis/``/install/install.go, which is responsible for adding +the group to the `latest` package, so that other packages can access the group's +meta through `latest.Group`. You probably only need to change the name of group +and version in the [example](../../pkg/apis/extensions/install/install.go). You +need to import this `install` package in {pkg/master, +pkg/client/unversioned}/import_known_versions.go, if you want to make your group +accessible to other packages in the kube-apiserver binary, binaries that use +the client package. -Step 2 and 3 are mechanical, we plan on autogenerate these using the cmd/libs/go2idl/ tool. +Steps 2 and 3 are mechanical; we plan to autogenerate them using the +cmd/libs/go2idl/ tool. ### Scripts changes and auto-generated code: 1. Generate conversions and deep-copies: - 1. Add your "group/" or "group/version" into hack/after-build/{update-generated-conversions.sh, update-generated-deep-copies.sh, verify-generated-conversions.sh, verify-generated-deep-copies.sh}; - 2. Make sure your pkg/apis/``/`` directory has a doc.go file with the comment `// +genconversion=true`, to catch the attention of our gen-conversion script. + 1. Add your "group/" or "group/version" into +hack/after-build/{update-generated-conversions.sh, +update-generated-deep-copies.sh, verify-generated-conversions.sh, +verify-generated-deep-copies.sh}; + 2. Make sure your pkg/apis/``/`` directory has a doc.go file +with the comment `// +genconversion=true`, to catch the attention of our +gen-conversion script. 3. Run hack/update-all.sh. + 2. Generate files for Ugorji codec: 1. Touch types.generated.go in pkg/apis/``{/, ``}; @@ -66,19 +91,29 @@ Step 2 and 3 are mechanical, we plan on autogenerate these using the cmd/libs/go ### Client (optional): -We are overhauling pkg/client, so this section might be outdated; see [#15730](https://github.com/kubernetes/kubernetes/pull/15730) for how the client package might evolve.
Currently, to add your group to the client package, you need to +We are overhauling pkg/client, so this section might be outdated; see +[#15730](https://github.com/kubernetes/kubernetes/pull/15730) for how the client +package might evolve. Currently, to add your group to the client package, you +need to: -1. Create pkg/client/unversioned/``.go, define a group client interface and implement the client. You can take pkg/client/unversioned/extensions.go as a reference. +1. Create pkg/client/unversioned/``.go, define a group client interface +and implement the client. You can take pkg/client/unversioned/extensions.go as a +reference. -2. Add the group client interface to the `Interface` in pkg/client/unversioned/client.go and add method to fetch the interface. Again, you can take how we add the Extensions group there as an example. +2. Add the group client interface to the `Interface` in +pkg/client/unversioned/client.go and add method to fetch the interface. Again, +you can take how we add the Extensions group there as an example. -3. If you need to support the group in kubectl, you'll also need to modify pkg/kubectl/cmd/util/factory.go. +3. If you need to support the group in kubectl, you'll also need to modify +pkg/kubectl/cmd/util/factory.go. ### Make the group/version selectable in unit tests (optional): -1. Add your group in pkg/api/testapi/testapi.go, then you can access the group in tests through testapi.``; +1. Add your group in pkg/api/testapi/testapi.go, then you can access the group +in tests through testapi.``; -2. Add your "group/version" to `KUBE_API_VERSIONS` and `KUBE_TEST_API_VERSIONS` in hack/test-go.sh. +2. Add your "group/version" to `KUBE_API_VERSIONS` and `KUBE_TEST_API_VERSIONS` +in hack/test-go.sh. TODO: Add a troubleshooting section. 
diff --git a/api-conventions.md b/api-conventions.md index 343800af..de715c63 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -36,9 +36,10 @@ API Conventions Updated: 10/8/2015 -*This document is oriented at users who want a deeper understanding of the Kubernetes -API structure, and developers wanting to extend the Kubernetes API. An introduction to -using resources with kubectl can be found in [Working with resources](../user-guide/working-with-resources.md).* +*This document is oriented at users who want a deeper understanding of the +Kubernetes API structure, and developers wanting to extend the Kubernetes API. +An introduction to using resources with kubectl can be found in [Working with +resources](../user-guide/working-with-resources.md).* **Table of Contents** @@ -82,20 +83,38 @@ using resources with kubectl can be found in [Working with resources](../user-gu -The conventions of the [Kubernetes API](../api.md) (and related APIs in the ecosystem) are intended to ease client development and ensure that configuration mechanisms can be implemented that work across a diverse set of use cases consistently. +The conventions of the [Kubernetes API](../api.md) (and related APIs in the +ecosystem) are intended to ease client development and ensure that configuration +mechanisms can be implemented that work across a diverse set of use cases +consistently. -The general style of the Kubernetes API is RESTful - clients create, update, delete, or retrieve a description of an object via the standard HTTP verbs (POST, PUT, DELETE, and GET) - and those APIs preferentially accept and return JSON. Kubernetes also exposes additional endpoints for non-standard verbs and allows alternative content types. All of the JSON accepted and returned by the server has a schema, identified by the "kind" and "apiVersion" fields. 
Where relevant HTTP header fields exist, they should mirror the content of JSON fields, but the information should not be represented only in the HTTP header. +The general style of the Kubernetes API is RESTful - clients create, update, +delete, or retrieve a description of an object via the standard HTTP verbs +(POST, PUT, DELETE, and GET) - and those APIs preferentially accept and return +JSON. Kubernetes also exposes additional endpoints for non-standard verbs and +allows alternative content types. All of the JSON accepted and returned by the +server has a schema, identified by the "kind" and "apiVersion" fields. Where +relevant HTTP header fields exist, they should mirror the content of JSON +fields, but the information should not be represented only in the HTTP header. The following terms are defined: -* **Kind** the name of a particular object schema (e.g. the "Cat" and "Dog" kinds would have different attributes and properties) -* **Resource** a representation of a system entity, sent or retrieved as JSON via HTTP to the server. Resources are exposed via: +* **Kind** the name of a particular object schema (e.g. the "Cat" and "Dog" +kinds would have different attributes and properties) +* **Resource** a representation of a system entity, sent or retrieved as JSON +via HTTP to the server. Resources are exposed via: * Collections - a list of resources of the same type, which may be queryable * Elements - an individual resource, addressable via a URL -Each resource typically accepts and returns data of a single kind. A kind may be accepted or returned by multiple resources that reflect specific use cases. For instance, the kind "Pod" is exposed as a "pods" resource that allows end users to create, update, and delete pods, while a separate "pod status" resource (that acts on "Pod" kind) allows automated processes to update a subset of the fields in that resource. +Each resource typically accepts and returns data of a single kind. 
A kind may be +accepted or returned by multiple resources that reflect specific use cases. For +instance, the kind "Pod" is exposed as a "pods" resource that allows end users +to create, update, and delete pods, while a separate "pod status" resource (that +acts on "Pod" kind) allows automated processes to update a subset of the fields +in that resource. -Resource collections should be all lowercase and plural, whereas kinds are CamelCase and singular. +Resource collections should be all lowercase and plural, whereas kinds are +CamelCase and singular. ## Types (Kinds) @@ -104,134 +123,293 @@ Kinds are grouped into three categories: 1. **Objects** represent a persistent entity in the system. - Creating an API object is a record of intent - once created, the system will work to ensure that resource exists. All API objects have common metadata. + Creating an API object is a record of intent - once created, the system will +work to ensure that resource exists. All API objects have common metadata. - An object may have multiple resources that clients can use to perform specific actions that create, update, delete, or get. + An object may have multiple resources that clients can use to perform +specific actions that create, update, delete, or get. Examples: `Pod`, `ReplicationController`, `Service`, `Namespace`, `Node`. -2. **Lists** are collections of **resources** of one (usually) or more (occasionally) kinds. +2. **Lists** are collections of **resources** of one (usually) or more +(occasionally) kinds. - Lists have a limited set of common metadata. All lists use the "items" field to contain the array of objects they return. + Lists have a limited set of common metadata. All lists use the "items" field +to contain the array of objects they return. - Most objects defined in the system should have an endpoint that returns the full set of resources, as well as zero or more endpoints that return subsets of the full list. 
Some objects may be singletons (the current user, the system defaults) and may not have lists. + Most objects defined in the system should have an endpoint that returns the +full set of resources, as well as zero or more endpoints that return subsets of +the full list. Some objects may be singletons (the current user, the system +defaults) and may not have lists. - In addition, all lists that return objects with labels should support label filtering (see [docs/user-guide/labels.md](../user-guide/labels.md), and most lists should support filtering by fields. + In addition, all lists that return objects with labels should support label +filtering (see [docs/user-guide/labels.md](../user-guide/labels.md), and most +lists should support filtering by fields. Examples: PodLists, ServiceLists, NodeLists TODO: Describe field filtering below or in a separate doc. -3. **Simple** kinds are used for specific actions on objects and for non-persistent entities. - - Given their limited scope, they have the same set of limited common metadata as lists. - - For instance, the "Status" kind is returned when errors occur and is not persisted in the system. - - Many simple resources are "subresources", which are rooted at API paths of specific resources. When resources wish to expose alternative actions or views that are closely coupled to a single resource, they should do so using new sub-resources. Common subresources include: - - * `/binding`: Used to bind a resource representing a user request (e.g., Pod, PersistentVolumeClaim) to a cluster infrastructure resource (e.g., Node, PersistentVolume). - * `/status`: Used to write just the status portion of a resource. For example, the `/pods` endpoint only allows updates to `metadata` and `spec`, since those reflect end-user intent. 
An automated process should be able to modify status for users to see by sending an updated Pod kind to the server to the "/pods/<name>/status" endpoint - the alternate endpoint allows different rules to be applied to the update, and access to be appropriately restricted. - * `/scale`: Used to read and write the count of a resource in a manner that is independent of the specific resource schema. - - Two additional subresources, `proxy` and `portforward`, provide access to cluster resources as described in [docs/user-guide/accessing-the-cluster.md](../user-guide/accessing-the-cluster.md). - -The standard REST verbs (defined below) MUST return singular JSON objects. Some API endpoints may deviate from the strict REST pattern and return resources that are not singular JSON objects, such as streams of JSON objects or unstructured text log data. - -The term "kind" is reserved for these "top-level" API types. The term "type" should be used for distinguishing sub-categories within objects or subobjects. +3. **Simple** kinds are used for specific actions on objects and for +non-persistent entities. + + Given their limited scope, they have the same set of limited common metadata +as lists. + + For instance, the "Status" kind is returned when errors occur and is not +persisted in the system. + + Many simple resources are "subresources", which are rooted at API paths of +specific resources. When resources wish to expose alternative actions or views +that are closely coupled to a single resource, they should do so using new +sub-resources. Common subresources include: + + * `/binding`: Used to bind a resource representing a user request (e.g., Pod, +PersistentVolumeClaim) to a cluster infrastructure resource (e.g., Node, +PersistentVolume). + * `/status`: Used to write just the status portion of a resource. For +example, the `/pods` endpoint only allows updates to `metadata` and `spec`, +since those reflect end-user intent. 
An automated process should be able to
+modify status for users to see by sending an updated Pod kind to the
+"/pods/<name>/status" endpoint on the server - the alternate endpoint
+allows different rules to be applied to the update, and access to be
+appropriately restricted.
+ * `/scale`: Used to read and write the count of a resource in a manner that
+is independent of the specific resource schema.
+
+ Two additional subresources, `proxy` and `portforward`, provide access to
+cluster resources as described in
+[docs/user-guide/accessing-the-cluster.md](../user-guide/accessing-the-cluster.md).
+
+The standard REST verbs (defined below) MUST return singular JSON objects. Some
+API endpoints may deviate from the strict REST pattern and return resources that
+are not singular JSON objects, such as streams of JSON objects or unstructured
+text log data.
+
+The term "kind" is reserved for these "top-level" API types. The term "type"
+should be used for distinguishing sub-categories within objects or subobjects.

 ### Resources

 All JSON objects returned by an API MUST have the following fields:

 * kind: a string that identifies the schema this object should have
-* apiVersion: a string that identifies the version of the schema the object should have
+* apiVersion: a string that identifies the version of the schema the object
+should have

-These fields are required for proper decoding of the object. They may be populated by the server by default from the specified URL path, but the client likely needs to know the values in order to construct the URL path.
+These fields are required for proper decoding of the object. They may be
+populated by the server by default from the specified URL path, but the client
+likely needs to know the values in order to construct the URL path.
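For illustration, a minimal object returned by the API carries both identifying fields alongside its metadata (the names and values in this sketch are hypothetical):

```json
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "example-pod",
    "namespace": "default"
  }
}
```

Given `kind` and `apiVersion`, a client can construct the URL path for the object, e.g. something of the form `/api/v1/namespaces/default/pods/example-pod`.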
### Objects #### Metadata -Every object kind MUST have the following metadata in a nested object field called "metadata": - -* namespace: a namespace is a DNS compatible label that objects are subdivided into. The default namespace is 'default'. See [docs/user-guide/namespaces.md](../user-guide/namespaces.md) for more. -* name: a string that uniquely identifies this object within the current namespace (see [docs/user-guide/identifiers.md](../user-guide/identifiers.md)). This value is used in the path when retrieving an individual object. -* uid: a unique in time and space value (typically an RFC 4122 generated identifier, see [docs/user-guide/identifiers.md](../user-guide/identifiers.md)) used to distinguish between objects with the same name that have been deleted and recreated - -Every object SHOULD have the following metadata in a nested object field called "metadata": - -* resourceVersion: a string that identifies the internal version of this object that can be used by clients to determine when objects have changed. This value MUST be treated as opaque by clients and passed unmodified back to the server. Clients should not assume that the resource version has meaning across namespaces, different kinds of resources, or different servers. (see [concurrency control](#concurrency-control-and-consistency), below, for more details) -* generation: a sequence number representing a specific generation of the desired state. Set by the system and monotonically increasing, per-resource. May be compared, such as for RAW and WAW consistency. -* creationTimestamp: a string representing an RFC 3339 date of the date and time an object was created -* deletionTimestamp: a string representing an RFC 3339 date of the date and time after which this resource will be deleted. This field is set by the server when a graceful deletion is requested by the user, and is not directly settable by a client. 
The resource will be deleted (no longer visible from resource lists, and not reachable by name) after the time in this field. Once set, this value may not be unset or be set further into the future, although it may be shortened or the resource may be deleted prior to this time. -* labels: a map of string keys and values that can be used to organize and categorize objects (see [docs/user-guide/labels.md](../user-guide/labels.md)) -* annotations: a map of string keys and values that can be used by external tooling to store and retrieve arbitrary metadata about this object (see [docs/user-guide/annotations.md](../user-guide/annotations.md)) - -Labels are intended for organizational purposes by end users (select the pods that match this label query). Annotations enable third-party automation and tooling to decorate objects with additional metadata for their own use. +Every object kind MUST have the following metadata in a nested object field +called "metadata": + +* namespace: a namespace is a DNS compatible label that objects are subdivided +into. The default namespace is 'default'. See +[docs/user-guide/namespaces.md](../user-guide/namespaces.md) for more. +* name: a string that uniquely identifies this object within the current +namespace (see [docs/user-guide/identifiers.md](../user-guide/identifiers.md)). +This value is used in the path when retrieving an individual object. +* uid: a unique in time and space value (typically an RFC 4122 generated +identifier, see [docs/user-guide/identifiers.md](../user-guide/identifiers.md)) +used to distinguish between objects with the same name that have been deleted +and recreated + +Every object SHOULD have the following metadata in a nested object field called +"metadata": + +* resourceVersion: a string that identifies the internal version of this object +that can be used by clients to determine when objects have changed. This value +MUST be treated as opaque by clients and passed unmodified back to the server. 
+Clients should not assume that the resource version has meaning across +namespaces, different kinds of resources, or different servers. (See +[concurrency control](#concurrency-control-and-consistency), below, for more +details.) +* generation: a sequence number representing a specific generation of the +desired state. Set by the system and monotonically increasing, per-resource. May +be compared, such as for RAW and WAW consistency. +* creationTimestamp: a string representing an RFC 3339 date of the date and time +an object was created +* deletionTimestamp: a string representing an RFC 3339 date of the date and time +after which this resource will be deleted. This field is set by the server when +a graceful deletion is requested by the user, and is not directly settable by a +client. The resource will be deleted (no longer visible from resource lists, and +not reachable by name) after the time in this field. Once set, this value may +not be unset or be set further into the future, although it may be shortened or +the resource may be deleted prior to this time. +* labels: a map of string keys and values that can be used to organize and +categorize objects (see [docs/user-guide/labels.md](../user-guide/labels.md)) +* annotations: a map of string keys and values that can be used by external +tooling to store and retrieve arbitrary metadata about this object (see +[docs/user-guide/annotations.md](../user-guide/annotations.md)) + +Labels are intended for organizational purposes by end users (select the pods +that match this label query). Annotations enable third-party automation and +tooling to decorate objects with additional metadata for their own use. #### Spec and Status -By convention, the Kubernetes API makes a distinction between the specification of the desired state of an object (a nested object field called "spec") and the status of the object at the current time (a nested object field called "status"). 
The specification is a complete description of the desired state, including configuration settings provided by the user, [default values](#defaulting) expanded by the system, and properties initialized or otherwise changed after creation by other ecosystem components (e.g., schedulers, auto-scalers), and is persisted in stable storage with the API object. If the specification is deleted, the object will be purged from the system. The status summarizes the current state of the object in the system, and is usually persisted with the object by an automated processes but may be generated on the fly. At some cost and perhaps some temporary degradation in behavior, the status could be reconstructed by observation if it were lost. - -When a new version of an object is POSTed or PUT, the "spec" is updated and available immediately. Over time the system will work to bring the "status" into line with the "spec". The system will drive toward the most recent "spec" regardless of previous versions of that stanza. In other words, if a value is changed from 2 to 5 in one PUT and then back down to 3 in another PUT the system is not required to 'touch base' at 5 before changing the "status" to 3. In other words, the system's behavior is *level-based* rather than *edge-based*. This enables robust behavior in the presence of missed intermediate state changes. - -The Kubernetes API also serves as the foundation for the declarative configuration schema for the system. In order to facilitate level-based operation and expression of declarative configuration, fields in the specification should have declarative rather than imperative names and semantics -- they represent the desired state, not actions intended to yield the desired state. - -The PUT and POST verbs on objects MUST ignore the "status" values, to avoid accidentally overwriting the status in read-modify-write scenarios. 
A `/status` subresource MUST be provided to enable system components to update statuses of resources they manage. - -Otherwise, PUT expects the whole object to be specified. Therefore, if a field is omitted it is assumed that the client wants to clear that field's value. The PUT verb does not accept partial updates. Modification of just part of an object may be achieved by GETting the resource, modifying part of the spec, labels, or annotations, and then PUTting it back. See [concurrency control](#concurrency-control-and-consistency), below, regarding read-modify-write consistency when using this pattern. Some objects may expose alternative resource representations that allow mutation of the status, or performing custom actions on the object. - -All objects that represent a physical resource whose state may vary from the user's desired intent SHOULD have a "spec" and a "status". Objects whose state cannot vary from the user's desired intent MAY have only "spec", and MAY rename "spec" to a more appropriate name. - -Objects that contain both spec and status should not contain additional top-level fields other than the standard metadata fields. +By convention, the Kubernetes API makes a distinction between the specification +of the desired state of an object (a nested object field called "spec") and the +status of the object at the current time (a nested object field called +"status"). The specification is a complete description of the desired state, +including configuration settings provided by the user, +[default values](#defaulting) expanded by the system, and properties initialized +or otherwise changed after creation by other ecosystem components (e.g., +schedulers, auto-scalers), and is persisted in stable storage with the API +object. If the specification is deleted, the object will be purged from the +system. 
The status summarizes the current state of the object in the system, and
+is usually persisted with the object by an automated process but may be
+generated on the fly. At some cost and perhaps some temporary degradation in
+behavior, the status could be reconstructed by observation if it were lost.
+
+When a new version of an object is POSTed or PUT, the "spec" is updated and
+available immediately. Over time the system will work to bring the "status" into
+line with the "spec". The system will drive toward the most recent "spec"
+regardless of previous versions of that stanza. For example, if a value is
+changed from 2 to 5 in one PUT and then back down to 3 in another PUT the system
+is not required to 'touch base' at 5 before changing the "status" to 3. In other
+words, the system's behavior is *level-based* rather than *edge-based*. This
+enables robust behavior in the presence of missed intermediate state changes.
+
+The Kubernetes API also serves as the foundation for the declarative
+configuration schema for the system. In order to facilitate level-based
+operation and expression of declarative configuration, fields in the
+specification should have declarative rather than imperative names and
+semantics -- they represent the desired state, not actions intended to yield the
+desired state.
+
+The PUT and POST verbs on objects MUST ignore the "status" values, to avoid
+accidentally overwriting the status in read-modify-write scenarios. A `/status`
+subresource MUST be provided to enable system components to update statuses of
+resources they manage.
+
+Otherwise, PUT expects the whole object to be specified. Therefore, if a field
+is omitted it is assumed that the client wants to clear that field's value. The
+PUT verb does not accept partial updates. Modification of just part of an object
+may be achieved by GETting the resource, modifying part of the spec, labels, or
+annotations, and then PUTting it back.
See +[concurrency control](#concurrency-control-and-consistency), below, regarding +read-modify-write consistency when using this pattern. Some objects may expose +alternative resource representations that allow mutation of the status, or +performing custom actions on the object. + +All objects that represent a physical resource whose state may vary from the +user's desired intent SHOULD have a "spec" and a "status". Objects whose state +cannot vary from the user's desired intent MAY have only "spec", and MAY rename +"spec" to a more appropriate name. + +Objects that contain both spec and status should not contain additional +top-level fields other than the standard metadata fields. ##### Typical status properties -**Conditions** represent the latest available observations of an object's current state. Objects may report multiple conditions, and new types of conditions may be added in the future. Therefore, conditions are represented using a list/slice, where all have similar structure. +**Conditions** represent the latest available observations of an object's +current state. Objects may report multiple conditions, and new types of +conditions may be added in the future. Therefore, conditions are represented +using a list/slice, where all have similar structure. 
-The `FooCondition` type for some resource type `Foo` may include a subset of the following fields, but must contain at least `type` and `status` fields: +The `FooCondition` type for some resource type `Foo` may include a subset of the +following fields, but must contain at least `type` and `status` fields: ```go - Type FooConditionType `json:"type" description:"type of Foo condition"` - Status ConditionStatus `json:"status" description:"status of the condition, one of True, False, Unknown"` - LastHeartbeatTime unversioned.Time `json:"lastHeartbeatTime,omitempty" description:"last time we got an update on a given condition"` - LastTransitionTime unversioned.Time `json:"lastTransitionTime,omitempty" description:"last time the condition transit from one status to another"` - Reason string `json:"reason,omitempty" description:"one-word CamelCase reason for the condition's last transition"` - Message string `json:"message,omitempty" description:"human-readable message indicating details about last transition"` + Type FooConditionType `json:"type" description:"type of Foo condition"` + Status ConditionStatus `json:"status" description:"status of the condition, one of True, False, Unknown"` + LastHeartbeatTime unversioned.Time `json:"lastHeartbeatTime,omitempty" description:"last time we got an update on a given condition"` + LastTransitionTime unversioned.Time `json:"lastTransitionTime,omitempty" description:"last time the condition transit from one status to another"` + Reason string `json:"reason,omitempty" description:"one-word CamelCase reason for the condition's last transition"` + Message string `json:"message,omitempty" description:"human-readable message indicating details about last transition"` ``` Additional fields may be added in the future. -Conditions should be added to explicitly convey properties that users and components care about rather than requiring those properties to be inferred from other observations. 
- -Condition status values may be `True`, `False`, or `Unknown`. The absence of a condition should be interpreted the same as `Unknown`. - -In general, condition values may change back and forth, but some condition transitions may be monotonic, depending on the resource and condition type. However, conditions are observations and not, themselves, state machines, nor do we define comprehensive state machines for objects, nor behaviors associated with state transitions. The system is level-based rather than edge-triggered, and should assume an Open World. - -A typical oscillating condition type is `Ready`, which indicates the object was believed to be fully operational at the time it was last probed. A possible monotonic condition could be `Succeeded`. A `False` status for `Succeeded` would imply failure. An object that was still active would not have a `Succeeded` condition, or its status would be `Unknown`. - -Some resources in the v1 API contain fields called **`phase`**, and associated `message`, `reason`, and other status fields. The pattern of using `phase` is deprecated. Newer API types should use conditions instead. Phase was essentially a state-machine enumeration field, that contradicted [system-design principles](../design/principles.md#control-logic) and hampered evolution, since [adding new enum values breaks backward compatibility](api_changes.md). Rather than encouraging clients to infer implicit properties from phases, we intend to explicitly expose the conditions that clients need to monitor. Conditions also have the benefit that it is possible to create some conditions with uniform meaning across all resource types, while still exposing others that are unique to specific resource types. See [#7856](http://issues.k8s.io/7856) for more details and discussion. 
- -In condition types, and everywhere else they appear in the API, **`Reason`** is intended to be a one-word, CamelCase representation of the category of cause of the current status, and **`Message`** is intended to be a human-readable phrase or sentence, which may contain specific details of the individual occurrence. `Reason` is intended to be used in concise output, such as one-line `kubectl get` output, and in summarizing occurrences of causes, whereas `Message` is intended to be presented to users in detailed status explanations, such as `kubectl describe` output. - -Historical information status (e.g., last transition time, failure counts) is only provided with reasonable effort, and is not guaranteed to not be lost. - -Status information that may be large (especially proportional in size to collections of other resources, such as lists of references to other objects -- see below) and/or rapidly changing, such as [resource usage](../design/resources.md#usage-data), should be put into separate objects, with possibly a reference from the original object. This helps to ensure that GETs and watch remain reasonably efficient for the majority of clients, which may not need that data. - -Some resources report the `observedGeneration`, which is the `generation` most recently observed by the component responsible for acting upon changes to the desired state of the resource. This can be used, for instance, to ensure that the reported status reflects the most recent desired status. +Conditions should be added to explicitly convey properties that users and +components care about rather than requiring those properties to be inferred from +other observations. + +Condition status values may be `True`, `False`, or `Unknown`. The absence of a +condition should be interpreted the same as `Unknown`. + +In general, condition values may change back and forth, but some condition +transitions may be monotonic, depending on the resource and condition type. 
+However, conditions are observations and not, themselves, state machines, nor do
+we define comprehensive state machines for objects, nor behaviors associated
+with state transitions. The system is level-based rather than edge-triggered,
+and should assume an Open World.
+
+A typical oscillating condition type is `Ready`, which indicates the object was
+believed to be fully operational at the time it was last probed. A possible
+monotonic condition could be `Succeeded`. A `False` status for `Succeeded` would
+imply failure. An object that was still active would not have a `Succeeded`
+condition, or its status would be `Unknown`.
+
+Some resources in the v1 API contain fields called **`phase`**, and associated
+`message`, `reason`, and other status fields. The pattern of using `phase` is
+deprecated. Newer API types should use conditions instead. Phase was essentially
+a state-machine enumeration field that contradicted
+[system-design principles](../design/principles.md#control-logic) and hampered
+evolution, since [adding new enum values breaks backward
+compatibility](api_changes.md). Rather than encouraging clients to infer
+implicit properties from phases, we intend to explicitly expose the conditions
+that clients need to monitor. Conditions also have the benefit that it is
+possible to create some conditions with uniform meaning across all resource
+types, while still exposing others that are unique to specific resource types.
+See [#7856](http://issues.k8s.io/7856) for more details and discussion.
+
+In condition types, and everywhere else they appear in the API, **`Reason`** is
+intended to be a one-word, CamelCase representation of the category of cause of
+the current status, and **`Message`** is intended to be a human-readable phrase
+or sentence, which may contain specific details of the individual occurrence.
+`Reason` is intended to be used in concise output, such as one-line +`kubectl get` output, and in summarizing occurrences of causes, whereas +`Message` is intended to be presented to users in detailed status explanations, +such as `kubectl describe` output. + +Historical information status (e.g., last transition time, failure counts) is +only provided with reasonable effort, and is not guaranteed to not be lost. + +Status information that may be large (especially proportional in size to +collections of other resources, such as lists of references to other objects -- +see below) and/or rapidly changing, such as +[resource usage](../design/resources.md#usage-data), should be put into separate +objects, with possibly a reference from the original object. This helps to +ensure that GETs and watch remain reasonably efficient for the majority of +clients, which may not need that data. + +Some resources report the `observedGeneration`, which is the `generation` most +recently observed by the component responsible for acting upon changes to the +desired state of the resource. This can be used, for instance, to ensure that +the reported status reflects the most recent desired status. #### References to related objects -References to loosely coupled sets of objects, such as [pods](../user-guide/pods.md) overseen by a [replication controller](../user-guide/replication-controller.md), are usually best referred to using a [label selector](../user-guide/labels.md). In order to ensure that GETs of individual objects remain bounded in time and space, these sets may be queried via separate API queries, but will not be expanded in the referring object's status. +References to loosely coupled sets of objects, such as +[pods](../user-guide/pods.md) overseen by a +[replication controller](../user-guide/replication-controller.md), are usually +best referred to using a [label selector](../user-guide/labels.md). 
In order to +ensure that GETs of individual objects remain bounded in time and space, these +sets may be queried via separate API queries, but will not be expanded in the +referring object's status. -References to specific objects, especially specific resource versions and/or specific fields of those objects, are specified using the `ObjectReference` type (or other types representing strict subsets of it). Unlike partial URLs, the ObjectReference type facilitates flexible defaulting of fields from the referring object or other contextual information. +References to specific objects, especially specific resource versions and/or +specific fields of those objects, are specified using the `ObjectReference` type +(or other types representing strict subsets of it). Unlike partial URLs, the +ObjectReference type facilitates flexible defaulting of fields from the +referring object or other contextual information. -References in the status of the referee to the referrer may be permitted, when the references are one-to-one and do not need to be frequently updated, particularly in an edge-based manner. +References in the status of the referee to the referrer may be permitted, when +the references are one-to-one and do not need to be frequently updated, +particularly in an edge-based manner. #### Lists of named subobjects preferred over maps -Discussed in [#2004](http://issue.k8s.io/2004) and elsewhere. There are no maps of subobjects in any API objects. Instead, the convention is to use a list of subobjects containing name fields. +Discussed in [#2004](http://issue.k8s.io/2004) and elsewhere. There are no maps +of subobjects in any API objects. Instead, the convention is to use a list of +subobjects containing name fields. For example: @@ -249,76 +427,137 @@ ports: containerPort: 80 ``` -This rule maintains the invariant that all JSON/YAML keys are fields in API objects. 
The only exceptions are pure maps in the API (currently, labels, selectors, annotations, data), as opposed to sets of subobjects.
+This rule maintains the invariant that all JSON/YAML keys are fields in API
+objects. The only exceptions are pure maps in the API (currently, labels,
+selectors, annotations, data), as opposed to sets of subobjects.

 #### Primitive types

-* Avoid floating-point values as much as possible, and never use them in spec. Floating-point values cannot be reliably round-tripped (encoded and re-decoded) without changing, and have varying precision and representations across languages and architectures.
-* All numbers (e.g., uint32, int64) are converted to float64 by Javascript and some other languages, so any field which is expected to exceed that either in magnitude or in precision (specifically integer values > 53 bits) should be serialized and accepted as strings.
-* Do not use unsigned integers, due to inconsistent support across languages and libraries. Just validate that the integer is non-negative if that's the case.
+* Avoid floating-point values as much as possible, and never use them in spec.
+Floating-point values cannot be reliably round-tripped (encoded and re-decoded)
+without changing, and have varying precision and representations across
+languages and architectures.
+* All numbers (e.g., uint32, int64) are converted to float64 by JavaScript and
+some other languages, so any field which is expected to exceed that either in
+magnitude or in precision (specifically integer values > 53 bits) should be
+serialized and accepted as strings.
+* Do not use unsigned integers, due to inconsistent support across languages and
+libraries. Just validate that the integer is non-negative if that's the case.
 * Do not use enums. Use aliases for string instead (e.g., `NodeConditionType`).
-* Look at similar fields in the API (e.g., ports, durations) and follow the conventions of existing fields.
-* All public integer fields MUST use the Go `(u)int32` or Go `(u)int64` types, not `(u)int` (which is ambiguous depending on target platform). Internal types may use `(u)int`.
+* Look at similar fields in the API (e.g., ports, durations) and follow the
+conventions of existing fields.
+* All public integer fields MUST use the Go `(u)int32` or Go `(u)int64` types,
+not `(u)int` (which is ambiguous depending on target platform). Internal types
+may use `(u)int`.

 #### Constants

-Some fields will have a list of allowed values (enumerations). These values will be strings, and they will be in CamelCase, with an initial uppercase letter. Examples: "ClusterFirst", "Pending", "ClientIP".
+Some fields will have a list of allowed values (enumerations). These values will
+be strings, and they will be in CamelCase, with an initial uppercase letter.
+Examples: "ClusterFirst", "Pending", "ClientIP".

 #### Unions

-Sometimes, at most one of a set of fields can be set. For example, the [volumes] field of a PodSpec has 17 different volume type-specific
-fields, such as `nfs` and `iscsi`. All fields in the set should be [Optional](#optional-vs-required).
+Sometimes, at most one of a set of fields can be set. For example, the
+[volumes] field of a PodSpec has 17 different volume type-specific fields, such
+as `nfs` and `iscsi`. All fields in the set should be
+[Optional](#optional-vs-required).

-Sometimes, when a new type is created, the api designer may anticipate that a union will be needed in the future, even if only one field is
-allowed initially. In this case, be sure to make the field [Optional](#optional-vs-required) optional. In the validation, you may
-still return an error if the sole field is unset. Do not set a default value for that field.
+Sometimes, when a new type is created, the API designer may anticipate that a
+union will be needed in the future, even if only one field is allowed initially.
+In this case, be sure to make the field [Optional](#optional-vs-required)
+optional. In the validation, you may still return an error if the sole field is
+unset. Do not set a default value for that field.

 ### Lists and Simple kinds

-Every list or simple kind SHOULD have the following metadata in a nested object field called "metadata":
+Every list or simple kind SHOULD have the following metadata in a nested object
+field called "metadata":

-* resourceVersion: a string that identifies the common version of the objects returned by in a list. This value MUST be treated as opaque by clients and passed unmodified back to the server. A resource version is only valid within a single namespace on a single kind of resource.
+* resourceVersion: a string that identifies the common version of the objects
+returned in a list. This value MUST be treated as opaque by clients and
+passed unmodified back to the server. A resource version is only valid within a
+single namespace on a single kind of resource.

-Every simple kind returned by the server, and any simple kind sent to the server that must support idempotency or optimistic concurrency should return this value.Since simple resources are often used as input alternate actions that modify objects, the resource version of the simple resource should correspond to the resource version of the object.
+Every simple kind returned by the server, and any simple kind sent to the server
+that must support idempotency or optimistic concurrency should return this
+value. Since simple resources are often used as input to alternate actions that
+modify objects, the resource version of the simple resource should correspond to
+the resource version of the object.

 ## Differing Representations

-An API may represent a single entity in different ways for different clients, or transform an object after certain transitions in the system occur.
In these cases, one request object may have two representations available as different resources, or different kinds. +An API may represent a single entity in different ways for different clients, or +transform an object after certain transitions in the system occur. In these +cases, one request object may have two representations available as different +resources, or different kinds. -An example is a Service, which represents the intent of the user to group a set of pods with common behavior on common ports. When Kubernetes detects a pod matches the service selector, the IP address and port of the pod are added to an Endpoints resource for that Service. The Endpoints resource exists only if the Service exists, but exposes only the IPs and ports of the selected pods. The full service is represented by two distinct resources - under the original Service resource the user created, as well as in the Endpoints resource. +An example is a Service, which represents the intent of the user to group a set +of pods with common behavior on common ports. When Kubernetes detects a pod +matches the service selector, the IP address and port of the pod are added to an +Endpoints resource for that Service. The Endpoints resource exists only if the +Service exists, but exposes only the IPs and ports of the selected pods. The +full service is represented by two distinct resources - under the original +Service resource the user created, as well as in the Endpoints resource. -As another example, a "pod status" resource may accept a PUT with the "pod" kind, with different rules about what fields may be changed. +As another example, a "pod status" resource may accept a PUT with the "pod" +kind, with different rules about what fields may be changed. -Future versions of Kubernetes may allow alternative encodings of objects beyond JSON. +Future versions of Kubernetes may allow alternative encodings of objects beyond +JSON. 
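The Service/Endpoints pairing described above can be sketched as two resources that represent one logical service (names and addresses in this sketch are hypothetical):

```yaml
# The resource the user created:
kind: Service
apiVersion: v1
metadata:
  name: example-service
spec:
  selector:
    app: example
  ports:
    - port: 80
---
# Maintained by the system for the same logical service;
# the shared name couples it to the Service above:
kind: Endpoints
apiVersion: v1
metadata:
  name: example-service
subsets:
  - addresses:
      - ip: 10.0.0.5
    ports:
      - port: 80
```

When a pod matching `app: example` is detected, the system adds its IP and port to the Endpoints subsets; the Service object itself is unchanged.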
## Verbs on Resources API resources should use the traditional REST pattern: -* GET /<resourceNamePlural> - Retrieve a list of type <resourceName>, e.g. GET /pods returns a list of Pods. -* POST /<resourceNamePlural> - Create a new resource from the JSON object provided by the client. -* GET /<resourceNamePlural>/<name> - Retrieves a single resource with the given name, e.g. GET /pods/first returns a Pod named 'first'. Should be constant time, and the resource should be bounded in size. -* DELETE /<resourceNamePlural>/<name> - Delete the single resource with the given name. DeleteOptions may specify gracePeriodSeconds, the optional duration in seconds before the object should be deleted. Individual kinds may declare fields which provide a default grace period, and different kinds may have differing kind-wide default grace periods. A user provided grace period overrides a default grace period, including the zero grace period ("now"). -* PUT /<resourceNamePlural>/<name> - Update or create the resource with the given name with the JSON object provided by the client. -* PATCH /<resourceNamePlural>/<name> - Selectively modify the specified fields of the resource. See more information [below](#patch). -* GET /<resourceNamePlural>&watch=true - Receive a stream of JSON objects corresponding to changes made to any resource of the given kind over time. +* GET /<resourceNamePlural> - Retrieve a list of type +<resourceName>, e.g. GET /pods returns a list of Pods. +* POST /<resourceNamePlural> - Create a new resource from the JSON object +provided by the client. +* GET /<resourceNamePlural>/<name> - Retrieves a single resource +with the given name, e.g. GET /pods/first returns a Pod named 'first'. Should be +constant time, and the resource should be bounded in size. +* DELETE /<resourceNamePlural>/<name> - Delete the single resource +with the given name. DeleteOptions may specify gracePeriodSeconds, the optional +duration in seconds before the object should be deleted. 
Individual kinds may
+declare fields which provide a default grace period, and different kinds may
+have differing kind-wide default grace periods. A user-provided grace period
+overrides a default grace period, including the zero grace period ("now").
+* PUT /<resourceNamePlural>/<name> - Update or create the resource
+with the given name with the JSON object provided by the client.
+* PATCH /<resourceNamePlural>/<name> - Selectively modify the
+specified fields of the resource. See more information [below](#patch).
+* GET /<resourceNamePlural>?watch=true - Receive a stream of JSON
+objects corresponding to changes made to any resource of the given kind over
+time.

### PATCH operations

-The API supports three different PATCH operations, determined by their corresponding Content-Type header:
+The API supports three different PATCH operations, determined by their
+corresponding Content-Type header:

* JSON Patch, `Content-Type: application/json-patch+json`
- * As defined in [RFC6902](https://tools.ietf.org/html/rfc6902), a JSON Patch is a sequence of operations that are executed on the resource, e.g. `{"op": "add", "path": "/a/b/c", "value": [ "foo", "bar" ]}`. For more details on how to use JSON Patch, see the RFC.
+ * As defined in [RFC6902](https://tools.ietf.org/html/rfc6902), a JSON Patch is
+a sequence of operations that are executed on the resource, e.g. `{"op": "add",
+"path": "/a/b/c", "value": [ "foo", "bar" ]}`. For more details on how to use
+JSON Patch, see the RFC.
* Merge Patch, `Content-Type: application/merge-patch+json`
- * As defined in [RFC7386](https://tools.ietf.org/html/rfc7386), a Merge Patch is essentially a partial representation of the resource. The submitted JSON is "merged" with the current resource to create a new one, then the new one is saved. For more details on how to use Merge Patch, see the RFC.
+ * As defined in [RFC7386](https://tools.ietf.org/html/rfc7386), a Merge Patch
+is essentially a partial representation of the resource. 
The submitted JSON is +"merged" with the current resource to create a new one, then the new one is +saved. For more details on how to use Merge Patch, see the RFC. * Strategic Merge Patch, `Content-Type: application/strategic-merge-patch+json` - * Strategic Merge Patch is a custom implementation of Merge Patch. For a detailed explanation of how it works and why it needed to be introduced, see below. + * Strategic Merge Patch is a custom implementation of Merge Patch. For a +detailed explanation of how it works and why it needed to be introduced, see +below. #### Strategic Merge Patch -In the standard JSON merge patch, JSON objects are always merged but lists are always replaced. Often that isn't what we want. Let's say we start with the following Pod: +In the standard JSON merge patch, JSON objects are always merged but lists are +always replaced. Often that isn't what we want. Let's say we start with the +following Pod: ```yaml spec: @@ -327,7 +566,8 @@ spec: image: nginx-1.0 ``` -...and we POST that to the server (as JSON). Then let's say we want to *add* a container to this Pod. +...and we POST that to the server (as JSON). Then let's say we want to *add* a +container to this Pod. ```yaml PATCH /api/v1/namespaces/default/pods/pod-name @@ -337,17 +577,26 @@ spec: image: log-tailer-1.0 ``` -If we were to use standard Merge Patch, the entire container list would be replaced with the single log-tailer container. However, our intent is for the container lists to merge together based on the `name` field. +If we were to use standard Merge Patch, the entire container list would be +replaced with the single log-tailer container. However, our intent is for the +container lists to merge together based on the `name` field. -To solve this problem, Strategic Merge Patch uses metadata attached to the API objects to determine what lists should be merged and which ones should not. 
Currently the metadata is available as struct tags on the API objects themselves, but will become available to clients as Swagger annotations in the future. In the above example, the `patchStrategy` metadata for the `containers` field would be `merge` and the `patchMergeKey` would be `name`. +To solve this problem, Strategic Merge Patch uses metadata attached to the API +objects to determine what lists should be merged and which ones should not. +Currently the metadata is available as struct tags on the API objects +themselves, but will become available to clients as Swagger annotations in the +future. In the above example, the `patchStrategy` metadata for the `containers` +field would be `merge` and the `patchMergeKey` would be `name`. -Note: If the patch results in merging two lists of scalars, the scalars are first deduplicated and then merged. +Note: If the patch results in merging two lists of scalars, the scalars are +first deduplicated and then merged. Strategic Merge Patch also supports special operations as listed below. ### List Operations -To override the container list to be strictly replaced, regardless of the default: +To override the container list to be strictly replaced, regardless of the +default: ```yaml containers: @@ -389,9 +638,24 @@ labels: ## Idempotency -All compatible Kubernetes APIs MUST support "name idempotency" and respond with an HTTP status code 409 when a request is made to POST an object that has the same name as an existing object in the system. See [docs/user-guide/identifiers.md](../user-guide/identifiers.md) for details. - -Names generated by the system may be requested using `metadata.generateName`. GenerateName indicates that the name should be made unique by the server prior to persisting it. A non-empty value for the field indicates the name will be made unique (and the name returned to the client will be different than the name passed). 
The value of this field will be combined with a unique suffix on the server if the Name field has not been provided. The provided value must be valid within the rules for Name, and may be truncated by the length of the suffix required to make the value unique on the server. If this field is specified, and Name is not present, the server will NOT return a 409 if the generated name exists - instead, it will either return 201 Created or 504 with Reason `ServerTimeout` indicating a unique name could not be found in the time allotted, and the client should retry (optionally after the time indicated in the Retry-After header). +All compatible Kubernetes APIs MUST support "name idempotency" and respond with +an HTTP status code 409 when a request is made to POST an object that has the +same name as an existing object in the system. See +[docs/user-guide/identifiers.md](../user-guide/identifiers.md) for details. + +Names generated by the system may be requested using `metadata.generateName`. +GenerateName indicates that the name should be made unique by the server prior +to persisting it. A non-empty value for the field indicates the name will be +made unique (and the name returned to the client will be different than the name +passed). The value of this field will be combined with a unique suffix on the +server if the Name field has not been provided. The provided value must be valid +within the rules for Name, and may be truncated by the length of the suffix +required to make the value unique on the server. If this field is specified, and +Name is not present, the server will NOT return a 409 if the generated name +exists - instead, it will either return 201 Created or 504 with Reason +`ServerTimeout` indicating a unique name could not be found in the time +allotted, and the client should retry (optionally after the time indicated in +the Retry-After header). ## Optional vs Required @@ -400,31 +664,35 @@ Fields must be either optional or required. 
Optional fields have the following properties:

- They have the `omitempty` struct tag in Go.
-- They are a pointer type in the Go definition (e.g. `bool *awesomeFlag`) or have a built-in `nil`
- value (e.g. maps and slices).
-- The API server should allow POSTing and PUTing a resource with this field unset.
+- They are a pointer type in the Go definition (e.g. `awesomeFlag *bool`) or
+have a built-in `nil` value (e.g. maps and slices).
+- The API server should allow POSTing and PUTing a resource with this field
+unset.

Required fields have the opposite properties, namely:

- They do not have an `omitempty` struct tag.
- They are not a pointer type in the Go definition (e.g. `otherFlag bool`).
-- The API server should not allow POSTing or PUTing a resource with this field unset.
+- The API server should not allow POSTing or PUTing a resource with this field
+unset.

-Using the `omitempty` tag causes swagger documentation to reflect that the field is optional.
+Using the `omitempty` tag causes swagger documentation to reflect that the field
+is optional.

Using a pointer allows distinguishing unset from the zero value for that type.
-There are some cases where, in principle, a pointer is not needed for an optional field
-since the zero value is forbidden, and thus implies unset. There are examples of this in the
-codebase. However:
+There are some cases where, in principle, a pointer is not needed for an
+optional field since the zero value is forbidden, and thus implies unset. There
+are examples of this in the codebase. 
However: -- it can be difficult for implementors to anticipate all cases where an empty value might need to be - distinguished from a zero value -- structs are not omitted from encoder output even where omitempty is specified, which is messy; -- having a pointer consistently imply optional is clearer for users of the Go language client, and any - other clients that use corresponding types +- it can be difficult for implementors to anticipate all cases where an empty +value might need to be distinguished from a zero value +- structs are not omitted from encoder output even where omitempty is specified, +which is messy; +- having a pointer consistently imply optional is clearer for users of the Go +language client, and any other clients that use corresponding types -Therefore, we ask that pointers always be used with optional fields that do not have a built-in -`nil` value. +Therefore, we ask that pointers always be used with optional fields that do not +have a built-in `nil` value. ## Defaulting @@ -445,37 +713,66 @@ API version-specific default values are set by the API server. Late initialization is when resource fields are set by a system controller after an object is created/updated. -For example, the scheduler sets the `pod.spec.nodeName` field after the pod is created. +For example, the scheduler sets the `pod.spec.nodeName` field after the pod is +created. Late-initializers should only make the following types of modifications: - Setting previously unset fields - Adding keys to maps - - Adding values to arrays which have mergeable semantics (`patchStrategy:"merge"` attribute in - the type definition). + - Adding values to arrays which have mergeable semantics +(`patchStrategy:"merge"` attribute in the type definition). These conventions: - 1. allow a user (with sufficient privilege) to override any system-default behaviors by setting - the fields that would otherwise have been defaulted. - 1. 
enables updates from users to be merged with changes made during late initialization, using
-    strategic merge patch, as opposed to clobbering the change.
- 1. allow the component which does the late-initialization to use strategic merge patch, which
-    facilitates composition and concurrency of such components.
+ 1. allow a user (with sufficient privilege) to override any system-default
+ behaviors by setting the fields that would otherwise have been defaulted.
+ 1. enable updates from users to be merged with changes made during late
+initialization, using strategic merge patch, as opposed to clobbering the
+change.
+ 1. allow the component which does the late-initialization to use strategic
+merge patch, which facilitates composition and concurrency of such components.

Although the apiserver Admission Control stage acts prior to object creation,
Admission Control plugins should follow the Late Initialization conventions
-too, to allow their implementation to be later moved to a 'controller', or to client libraries.
+too, to allow their implementation to be later moved to a 'controller', or to
+client libraries.

## Concurrency Control and Consistency

-Kubernetes leverages the concept of *resource versions* to achieve optimistic concurrency. All Kubernetes resources have a "resourceVersion" field as part of their metadata. This resourceVersion is a string that identifies the internal version of an object that can be used by clients to determine when objects have changed. When a record is about to be updated, it's version is checked against a pre-saved value, and if it doesn't match, the update fails with a StatusConflict (HTTP status code 409).
-
-The resourceVersion is changed by the server every time an object is modified. 
If resourceVersion is included with the PUT operation the system will verify that there have not been other successful mutations to the resource during a read/modify/write cycle, by verifying that the current value of resourceVersion matches the specified value. - -The resourceVersion is currently backed by [etcd's modifiedIndex](https://coreos.com/docs/distributed-configuration/etcd-api/). However, it's important to note that the application should *not* rely on the implementation details of the versioning system maintained by Kubernetes. We may change the implementation of resourceVersion in the future, such as to change it to a timestamp or per-object counter. - -The only way for a client to know the expected value of resourceVersion is to have received it from the server in response to a prior operation, typically a GET. This value MUST be treated as opaque by clients and passed unmodified back to the server. Clients should not assume that the resource version has meaning across namespaces, different kinds of resources, or different servers. Currently, the value of resourceVersion is set to match etcd's sequencer. You could think of it as a logical clock the API server can use to order requests. However, we expect the implementation of resourceVersion to change in the future, such as in the case we shard the state by kind and/or namespace, or port to another storage system. - -In the case of a conflict, the correct client action at this point is to GET the resource again, apply the changes afresh, and try submitting again. This mechanism can be used to prevent races like the following: +Kubernetes leverages the concept of *resource versions* to achieve optimistic +concurrency. All Kubernetes resources have a "resourceVersion" field as part of +their metadata. This resourceVersion is a string that identifies the internal +version of an object that can be used by clients to determine when objects have +changed. 
When a record is about to be updated, its version is checked against a
+pre-saved value, and if it doesn't match, the update fails with a StatusConflict
+(HTTP status code 409).
+
+The resourceVersion is changed by the server every time an object is modified.
+If resourceVersion is included with the PUT operation the system will verify
+that there have not been other successful mutations to the resource during a
+read/modify/write cycle, by verifying that the current value of resourceVersion
+matches the specified value.
+
+The resourceVersion is currently backed by [etcd's
+modifiedIndex](https://coreos.com/docs/distributed-configuration/etcd-api/).
+However, it's important to note that the application should *not* rely on the
+implementation details of the versioning system maintained by Kubernetes. We may
+change the implementation of resourceVersion in the future, such as to change it
+to a timestamp or per-object counter.
+
+The only way for a client to know the expected value of resourceVersion is to
+have received it from the server in response to a prior operation, typically a
+GET. This value MUST be treated as opaque by clients and passed unmodified back
+to the server. Clients should not assume that the resource version has meaning
+across namespaces, different kinds of resources, or different servers.
+Currently, the value of resourceVersion is set to match etcd's sequencer. You
+could think of it as a logical clock the API server can use to order requests.
+However, we expect the implementation of resourceVersion to change in the
+future, such as in the case we shard the state by kind and/or namespace, or port
+to another storage system.
+
+In the case of a conflict, the correct client action at this point is to GET the
+resource again, apply the changes afresh, and try submitting again. 
This +mechanism can be used to prevent races like the following: ``` Client #1 Client #2 @@ -484,37 +781,59 @@ Set Foo.Bar = "one" Set Foo.Baz = "two" PUT Foo PUT Foo ``` -When these sequences occur in parallel, either the change to Foo.Bar or the change to Foo.Baz can be lost. +When these sequences occur in parallel, either the change to Foo.Bar or the +change to Foo.Baz can be lost. -On the other hand, when specifying the resourceVersion, one of the PUTs will fail, since whichever write succeeds changes the resourceVersion for Foo. +On the other hand, when specifying the resourceVersion, one of the PUTs will +fail, since whichever write succeeds changes the resourceVersion for Foo. -resourceVersion may be used as a precondition for other operations (e.g., GET, DELETE) in the future, such as for read-after-write consistency in the presence of caching. +resourceVersion may be used as a precondition for other operations (e.g., GET, +DELETE) in the future, such as for read-after-write consistency in the presence +of caching. -"Watch" operations specify resourceVersion using a query parameter. It is used to specify the point at which to begin watching the specified resources. This may be used to ensure that no mutations are missed between a GET of a resource (or list of resources) and a subsequent Watch, even if the current version of the resource is more recent. This is currently the main reason that list operations (GET on a collection) return resourceVersion. +"Watch" operations specify resourceVersion using a query parameter. It is used +to specify the point at which to begin watching the specified resources. This +may be used to ensure that no mutations are missed between a GET of a resource +(or list of resources) and a subsequent Watch, even if the current version of +the resource is more recent. This is currently the main reason that list +operations (GET on a collection) return resourceVersion. 
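The read/modify/write cycle above can be sketched as a toy in-memory store. This is not the real apiserver: resourceVersion is modeled here as a simple counter for illustration, whereas real clients must treat it as an opaque string, and `ToyStore`/`Conflict` are hypothetical names.

```python
# Toy sketch of the optimistic-concurrency scheme described above.
# resourceVersion is modeled as a counter only for illustration.

class Conflict(Exception):
    """Stands in for an HTTP 409 StatusConflict response."""

class ToyStore:
    def __init__(self):
        self._objects = {}  # name -> (version, data)
        self._clock = 0     # logical clock backing resourceVersion

    def get(self, name):
        version, data = self._objects[name]
        return {"metadata": {"resourceVersion": str(version)}, **data}

    def put(self, name, obj):
        sent = obj["metadata"].get("resourceVersion")
        current = self._objects.get(name)
        if current and sent is not None and sent != str(current[0]):
            raise Conflict(name)  # another write won the read/modify/write race
        self._clock += 1
        data = {k: v for k, v in obj.items() if k != "metadata"}
        self._objects[name] = (self._clock, data)

store = ToyStore()
store.put("foo", {"metadata": {}, "bar": "", "baz": ""})

a = store.get("foo"); a["bar"] = "one"   # client #1: GET, modify
b = store.get("foo"); b["baz"] = "two"   # client #2: GET, modify
store.put("foo", a)                      # client #1's PUT succeeds
try:
    store.put("foo", b)                  # client #2's PUT: stale version, 409
except Conflict:
    b = store.get("foo")                 # correct recovery: GET afresh,
    b["baz"] = "two"                     # reapply the change,
    store.put("foo", b)                  # and retry the PUT
```

Without the resourceVersion check, client #2's PUT would silently clobber `Foo.Bar`; with it, the stale write is rejected and both changes survive after the retry.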
## Serialization Format -APIs may return alternative representations of any resource in response to an Accept header or under alternative endpoints, but the default serialization for input and output of API responses MUST be JSON. +APIs may return alternative representations of any resource in response to an +Accept header or under alternative endpoints, but the default serialization for +input and output of API responses MUST be JSON. All dates should be serialized as RFC3339 strings. ## Units -Units must either be explicit in the field name (e.g., `timeoutSeconds`), or must be specified as part of the value (e.g., `resource.Quantity`). Which approach is preferred is TBD, though currently we use the `fooSeconds` convention for durations. +Units must either be explicit in the field name (e.g., `timeoutSeconds`), or +must be specified as part of the value (e.g., `resource.Quantity`). Which +approach is preferred is TBD, though currently we use the `fooSeconds` +convention for durations. ## Selecting Fields -Some APIs may need to identify which field in a JSON object is invalid, or to reference a value to extract from a separate resource. The current recommendation is to use standard JavaScript syntax for accessing that field, assuming the JSON object was transformed into a JavaScript object, without the leading dot, such as `metadata.name`. +Some APIs may need to identify which field in a JSON object is invalid, or to +reference a value to extract from a separate resource. The current +recommendation is to use standard JavaScript syntax for accessing that field, +assuming the JSON object was transformed into a JavaScript object, without the +leading dot, such as `metadata.name`. 
Examples: -* Find the field "current" in the object "state" in the second item in the array "fields": `fields[1].state.current` +* Find the field "current" in the object "state" in the second item in the array +"fields": `fields[1].state.current` ## Object references -Object references should either be called `fooName` if referring to an object of kind `Foo` by just the name (within the current namespace, if a namespaced resource), or should be called `fooRef`, and should contain a subset of the fields of the `ObjectReference` type. +Object references should either be called `fooName` if referring to an object of +kind `Foo` by just the name (within the current namespace, if a namespaced +resource), or should be called `fooRef`, and should contain a subset of the +fields of the `ObjectReference` type. TODO: Plugins, extensions, nested kinds, headers @@ -522,7 +841,8 @@ TODO: Plugins, extensions, nested kinds, headers ## HTTP Status codes -The server will respond with HTTP status codes that match the HTTP spec. See the section below for a breakdown of the types of status codes the server will send. +The server will respond with HTTP status codes that match the HTTP spec. See the +section below for a breakdown of the types of status codes the server will send. The following HTTP status codes may be returned by the API. @@ -533,79 +853,135 @@ The following HTTP status codes may be returned by the API. * `201 StatusCreated` * Indicates that the request to create kind completed successfully. * `204 StatusNoContent` - * Indicates that the request completed successfully, and the response contains no body. + * Indicates that the request completed successfully, and the response contains +no body. * Returned in response to HTTP OPTIONS requests. #### Error codes * `307 StatusTemporaryRedirect` * Indicates that the address for the requested resource has changed. - * Suggested client recovery behavior + * Suggested client recovery behavior: * Follow the redirect. 
+
+
+
 * `400 StatusBadRequest`
 * Indicates the request is invalid.
 * Suggested client recovery behavior:
 * Do not retry. Fix the request.
+
+
 * `401 StatusUnauthorized`
- * Indicates that the server can be reached and understood the request, but refuses to take any further action, because the client must provide authorization. If the client has provided authorization, the server is indicating the provided authorization is unsuitable or invalid.
- * Suggested client recovery behavior
- * If the user has not supplied authorization information, prompt them for the appropriate credentials
- * If the user has supplied authorization information, inform them their credentials were rejected and optionally prompt them again.
+ * Indicates that the server can be reached and understood the request, but
+refuses to take any further action, because the client must provide
+authorization. If the client has provided authorization, the server is
+indicating the provided authorization is unsuitable or invalid.
+ * Suggested client recovery behavior:
+ * If the user has not supplied authorization information, prompt them for
+the appropriate credentials.
+ * If the user has supplied authorization information, inform them their
+credentials were rejected and optionally prompt them again.
+
+
 * `403 StatusForbidden`
- * Indicates that the server can be reached and understood the request, but refuses to take any further action, because it is configured to deny access for some reason to the requested resource by the client.
- * Suggested client recovery behavior
+ * Indicates that the server can be reached and understood the request, but
+refuses to take any further action, because it is configured to deny access for
+some reason to the requested resource by the client.
+ * Suggested client recovery behavior:
 * Do not retry. Fix the request.
+
+
 * `404 StatusNotFound`
 * Indicates that the requested resource does not exist. 
- * Suggested client recovery behavior
+ * Suggested client recovery behavior:
 * Do not retry. Fix the request.
+
+
 * `405 StatusMethodNotAllowed`
- * Indicates that the action the client attempted to perform on the resource was not supported by the code.
- * Suggested client recovery behavior
+ * Indicates that the action the client attempted to perform on the resource
+was not supported by the code.
+ * Suggested client recovery behavior:
 * Do not retry. Fix the request.
+
+
 * `409 StatusConflict`
- * Indicates that either the resource the client attempted to create already exists or the requested update operation cannot be completed due to a conflict.
- * Suggested client recovery behavior
- * * If creating a new resource
- * * Either change the identifier and try again, or GET and compare the fields in the pre-existing object and issue a PUT/update to modify the existing object.
+ * Indicates that either the resource the client attempted to create already
+exists or the requested update operation cannot be completed due to a conflict.
+ * Suggested client recovery behavior:
+ * If creating a new resource:
+   * Either change the identifier and try again, or GET and compare the
+fields in the pre-existing object and issue a PUT/update to modify the existing
+object.
 * If updating an existing resource:
- * See `Conflict` from the `status` response section below on how to retrieve more information about the nature of the conflict.
- * GET and compare the fields in the pre-existing object, merge changes (if still valid according to preconditions), and retry with the updated request (including `ResourceVersion`). 
+
+
 * `410 StatusGone`
- * Indicates that the item is no longer available at the server and no forwarding address is known.
- * Suggested client recovery behavior
+ * Indicates that the item is no longer available at the server and no
+forwarding address is known.
+ * Suggested client recovery behavior:
 * Do not retry. Fix the request.
+
+
 * `422 StatusUnprocessableEntity`
- * Indicates that the requested create or update operation cannot be completed due to invalid data provided as part of the request.
- * Suggested client recovery behavior
+ * Indicates that the requested create or update operation cannot be completed
+due to invalid data provided as part of the request.
+ * Suggested client recovery behavior:
 * Do not retry. Fix the request.
+
+
 * `429 StatusTooManyRequests`
- * Indicates that the either the client rate limit has been exceeded or the server has received more requests then it can process.
+ * Indicates that either the client rate limit has been exceeded or the
+server has received more requests than it can process.
 * Suggested client recovery behavior:
- * Read the `Retry-After` HTTP header from the response, and wait at least that long before retrying.
+ * Read the `Retry-After` HTTP header from the response, and wait at least
+that long before retrying.
+
+
 * `500 StatusInternalServerError`
- * Indicates that the server can be reached and understood the request, but either an unexpected internal error occurred and the outcome of the call is unknown, or the server cannot complete the action in a reasonable time (this maybe due to temporary server load or a transient communication issue with another server).
+ * Indicates that the server can be reached and understood the request, but
+either an unexpected internal error occurred and the outcome of the call is
+unknown, or the server cannot complete the action in a reasonable time (this may
+be due to temporary server load or a transient communication issue with another
+server). 
* Suggested client recovery behavior:
 * Retry with exponential backoff.
+
+
 * `503 StatusServiceUnavailable`
 * Indicates that the required service is unavailable.
 * Suggested client recovery behavior:
 * Retry with exponential backoff.
+
+
 * `504 StatusServerTimeout`
- * Indicates that the request could not be completed within the given time. Clients can get this response ONLY when they specified a timeout param in the request.
+ * Indicates that the request could not be completed within the given time.
+Clients can get this response ONLY when they specified a timeout param in the
+request.
 * Suggested client recovery behavior:
- * Increase the value of the timeout param and retry with exponential backoff
+ * Increase the value of the timeout param and retry with exponential
+backoff.

## Response Status Kind

-Kubernetes will always return the `Status` kind from any API endpoint when an error occurs.
-Clients SHOULD handle these types of objects when appropriate.
+Kubernetes will always return the `Status` kind from any API endpoint when an
+error occurs. Clients SHOULD handle these types of objects when appropriate.

A `Status` kind will be returned by the API in two cases:

- * When an operation is not successful (i.e. when the server would return a non 2xx HTTP status code).
+ * When an operation is not successful (i.e. when the server would return a non
+2xx HTTP status code).
 * When an HTTP `DELETE` call is successful.

-The status object is encoded as JSON and provided as the body of the response. The status object contains fields for humans and machine consumers of the API to get more detailed information for the cause of the failure. The information in the status object supplements, but does not override, the HTTP status code's meaning. When fields in the status object have the same meaning as generally defined HTTP headers and that header is returned with the response, the header should be considered as having higher priority. 
+The status object is encoded as JSON and provided as the body of the response. +The status object contains fields for humans and machine consumers of the API to +get more detailed information for the cause of the failure. The information in +the status object supplements, but does not override, the HTTP status code's +meaning. When fields in the status object have the same meaning as generally +defined HTTP headers and that header is returned with the response, the header +should be considered as having higher priority. **Example:** @@ -645,40 +1021,64 @@ $ curl -v -k -H "Authorization: Bearer WhCDvq4VPpYhrcfmF6ei7V9qlbqTubUc" https:/ `message` may contain human-readable description of the error -`reason` may contain a machine-readable, one-word, CamelCase description of why this operation is in the `Failure` status. If this value is empty there is no information available. The `reason` clarifies an HTTP status code but does not override it. +`reason` may contain a machine-readable, one-word, CamelCase description of why +this operation is in the `Failure` status. If this value is empty there is no +information available. The `reason` clarifies an HTTP status code but does not +override it. -`details` may contain extended data associated with the reason. Each reason may define its own extended details. This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type. +`details` may contain extended data associated with the reason. Each reason may +define its own extended details. This field is optional and the data returned is +not guaranteed to conform to any schema except that defined by the reason type. Possible values for the `reason` and `details` fields: * `BadRequest` - * Indicates that the request itself was invalid, because the request doesn't make any sense, for example deleting a read-only object. 
- * This is different than `status reason` `Invalid` above which indicates that the API call could possibly succeed, but the data was invalid. + * Indicates that the request itself was invalid, because the request doesn't +make any sense, for example deleting a read-only object. + * This is different than `status reason` `Invalid` above which indicates that +the API call could possibly succeed, but the data was invalid. * API calls that return BadRequest can never succeed. * Http status code: `400 StatusBadRequest` + + * `Unauthorized` - * Indicates that the server can be reached and understood the request, but refuses to take any further action without the client providing appropriate authorization. If the client has provided authorization, this error indicates the provided credentials are insufficient or invalid. + * Indicates that the server can be reached and understood the request, but +refuses to take any further action without the client providing appropriate +authorization. If the client has provided authorization, this error indicates +the provided credentials are insufficient or invalid. * Details (optional): * `kind string` - * The kind attribute of the unauthorized resource (on some operations may differ from the requested resource). + * The kind attribute of the unauthorized resource (on some operations may +differ from the requested resource). * `name string` * The identifier of the unauthorized resource. * HTTP status code: `401 StatusUnauthorized` + + * `Forbidden` - * Indicates that the server can be reached and understood the request, but refuses to take any further action, because it is configured to deny access for some reason to the requested resource by the client. + * Indicates that the server can be reached and understood the request, but +refuses to take any further action, because it is configured to deny access for +some reason to the requested resource by the client. 
* Details (optional): * `kind string` - * The kind attribute of the forbidden resource (on some operations may differ from the requested resource). + * The kind attribute of the forbidden resource (on some operations may +differ from the requested resource). * `name string` * The identifier of the forbidden resource. - * HTTP status code: `403 StatusForbidden` + * HTTP status code: `403 StatusForbidden` + + * `NotFound` - * Indicates that one or more resources required for this operation could not be found. + * Indicates that one or more resources required for this operation could not +be found. * Details (optional): * `kind string` - * The kind attribute of the missing resource (on some operations may differ from the requested resource). + * The kind attribute of the missing resource (on some operations may +differ from the requested resource). * `name string` * The identifier of the missing resource. * HTTP status code: `404 StatusNotFound` + + * `AlreadyExists` * Indicates that the resource you are creating already exists. * Details (optional): @@ -687,146 +1087,292 @@ Possible values for the `reason` and `details` fields: * `name string` * The identifier of the conflicting resource. * HTTP status code: `409 StatusConflict` + * `Conflict` - * Indicates that the requested update operation cannot be completed due to a conflict. The client may need to alter the request. Each resource may define custom details that indicate the nature of the conflict. + * Indicates that the requested update operation cannot be completed due to a +conflict. The client may need to alter the request. Each resource may define +custom details that indicate the nature of the conflict. * HTTP status code: `409 StatusConflict` + + * `Invalid` - * Indicates that the requested create or update operation cannot be completed due to invalid data provided as part of the request. 
+ * Indicates that the requested create or update operation cannot be completed +due to invalid data provided as part of the request. * Details (optional): * `kind string` * the kind attribute of the invalid resource * `name string` * the identifier of the invalid resource * `causes` - * One or more `StatusCause` entries indicating the data in the provided resource that was invalid. The `reason`, `message`, and `field` attributes will be set. + * One or more `StatusCause` entries indicating the data in the provided +resource that was invalid. The `reason`, `message`, and `field` attributes will +be set. * HTTP status code: `422 StatusUnprocessableEntity` + + * `Timeout` - * Indicates that the request could not be completed within the given time. Clients may receive this response if the server has decided to rate limit the client, or if the server is overloaded and cannot process the request at this time. + * Indicates that the request could not be completed within the given time. +Clients may receive this response if the server has decided to rate limit the +client, or if the server is overloaded and cannot process the request at this +time. * Http status code: `429 TooManyRequests` - * The server should set the `Retry-After` HTTP header and return `retryAfterSeconds` in the details field of the object. A value of `0` is the default. + * The server should set the `Retry-After` HTTP header and return +`retryAfterSeconds` in the details field of the object. A value of `0` is the +default. + + * `ServerTimeout` - * Indicates that the server can be reached and understood the request, but cannot complete the action in a reasonable time. This maybe due to temporary server load or a transient communication issue with another server. + * Indicates that the server can be reached and understood the request, but +cannot complete the action in a reasonable time. This may be due to temporary +server load or a transient communication issue with another server.
* Details (optional): * `kind string` * The kind attribute of the resource being acted on. * `name string` * The operation that is being attempted. - * The server should set the `Retry-After` HTTP header and return `retryAfterSeconds` in the details field of the object. A value of `0` is the default. + * The server should set the `Retry-After` HTTP header and return +`retryAfterSeconds` in the details field of the object. A value of `0` is the +default. * Http status code: `504 StatusServerTimeout` + + * `MethodNotAllowed` - * Indicates that the action the client attempted to perform on the resource was not supported by the code. + * Indicates that the action the client attempted to perform on the resource +was not supported by the code. * For instance, attempting to delete a resource that can only be created. * API calls that return MethodNotAllowed can never succeed. * Http status code: `405 StatusMethodNotAllowed` + + * `InternalError` - * Indicates that an internal error occurred, it is unexpected and the outcome of the call is unknown. + * Indicates that an internal error occurred; it is unexpected and the outcome +of the call is unknown. * Details (optional): * `causes` * The original error. - * Http status code: `500 StatusInternalServerError` - -`code` may contain the suggested HTTP return code for this status. + * Http status code: `500 StatusInternalServerError` + +`code` may contain the suggested HTTP return code for this status. ## Events -Events are complementary to status information, since they can provide some historical information about status and occurrences in addition to current or previous status. Generate events for situations users or administrators should be alerted about. +Events are complementary to status information, since they can provide some +historical information about status and occurrences in addition to current or +previous status. Generate events for situations users or administrators should +be alerted about.
-Choose a unique, specific, short, CamelCase reason for each event category. For example, `FreeDiskSpaceInvalid` is a good event reason because it is likely to refer to just one situation, but `Started` is not a good reason because it doesn't sufficiently indicate what started, even when combined with other event fields. +Choose a unique, specific, short, CamelCase reason for each event category. For +example, `FreeDiskSpaceInvalid` is a good event reason because it is likely to +refer to just one situation, but `Started` is not a good reason because it +doesn't sufficiently indicate what started, even when combined with other event +fields. -`Error creating foo` or `Error creating foo %s` would be appropriate for an event message, with the latter being preferable, since it is more informational. +`Error creating foo` or `Error creating foo %s` would be appropriate for an +event message, with the latter being preferable, since it is more informational. -Accumulate repeated events in the client, especially for frequent events, to reduce data volume, load on the system, and noise exposed to users. +Accumulate repeated events in the client, especially for frequent events, to +reduce data volume, load on the system, and noise exposed to users. ## Naming conventions -* Go field names must be CamelCase. JSON field names must be camelCase. Other than capitalization of the initial letter, the two should almost always match. No underscores nor dashes in either. -* Field and resource names should be declarative, not imperative (DoSomething, SomethingDoer, DoneBy, DoneAt). -* `Minion` has been deprecated in favor of `Node`. Use `Node` where referring to the node resource in the context of the cluster. Use `Host` where referring to properties of the individual physical/virtual system, such as `hostname`, `hostPath`, `hostNetwork`, etc. -* `FooController` is a deprecated kind naming convention. 
Name the kind after the thing being controlled instead (e.g., `Job` rather than `JobController`). -* The name of a field that specifies the time at which `something` occurs should be called `somethingTime`. Do not use `stamp` (e.g., `creationTimestamp`). -* We use the `fooSeconds` convention for durations, as discussed in the [units subsection](#units). - * `fooPeriodSeconds` is preferred for periodic intervals and other waiting periods (e.g., over `fooIntervalSeconds`). +* Go field names must be CamelCase. JSON field names must be camelCase. Other +than capitalization of the initial letter, the two should almost always match. +No underscores nor dashes in either. +* Field and resource names should be declarative, not imperative (DoSomething, +SomethingDoer, DoneBy, DoneAt). +* `Minion` has been deprecated in favor of `Node`. Use `Node` where referring to +the node resource in the context of the cluster. Use `Host` where referring to +properties of the individual physical/virtual system, such as `hostname`, +`hostPath`, `hostNetwork`, etc. +* `FooController` is a deprecated kind naming convention. Name the kind after +the thing being controlled instead (e.g., `Job` rather than `JobController`). +* The name of a field that specifies the time at which `something` occurs should +be called `somethingTime`. Do not use `stamp` (e.g., `creationTimestamp`). +* We use the `fooSeconds` convention for durations, as discussed in the [units +subsection](#units). + * `fooPeriodSeconds` is preferred for periodic intervals and other waiting +periods (e.g., over `fooIntervalSeconds`). * `fooTimeoutSeconds` is preferred for inactivity/unresponsiveness deadlines. * `fooDeadlineSeconds` is preferred for activity completion deadlines. -* Do not use abbreviations in the API, except where they are extremely commonly used, such as "id", "args", or "stdin". -* Acronyms should similarly only be used when extremely commonly known. 
All letters in the acronym should have the same case, using the appropriate case for the situation. For example, at the beginning of a field name, the acronym should be all lowercase, such as "httpGet". Where used as a constant, all letters should be uppercase, such as "TCP" or "UDP". -* The name of a field referring to another resource of kind `Foo` by name should be called `fooName`. The name of a field referring to another resource of kind `Foo` by ObjectReference (or subset thereof) should be called `fooRef`. -* More generally, include the units and/or type in the field name if they could be ambiguous and they are not specified by the value or value type. +* Do not use abbreviations in the API, except where they are extremely commonly +used, such as "id", "args", or "stdin". +* Acronyms should similarly only be used when extremely commonly known. All +letters in the acronym should have the same case, using the appropriate case for +the situation. For example, at the beginning of a field name, the acronym should +be all lowercase, such as "httpGet". Where used as a constant, all letters +should be uppercase, such as "TCP" or "UDP". +* The name of a field referring to another resource of kind `Foo` by name should +be called `fooName`. The name of a field referring to another resource of kind +`Foo` by ObjectReference (or subset thereof) should be called `fooRef`. +* More generally, include the units and/or type in the field name if they could +be ambiguous and they are not specified by the value or value type. ## Label, selector, and annotation conventions -Labels are the domain of users. They are intended to facilitate organization and management of API resources using attributes that are meaningful to users, as opposed to meaningful to the system. Think of them as user-created mp3 or email inbox labels, as opposed to the directory structure used by a program to store its data. 
The former enables the user to apply an arbitrary ontology, whereas the latter is implementation-centric and inflexible. Users will use labels to select resources to operate on, display label values in CLI/UI columns, etc. Users should always retain full power and flexibility over the label schemas they apply to labels in their namespaces. - -However, we should support conveniences for common cases by default. For example, what we now do in ReplicationController is automatically set the RC's selector and labels to the labels in the pod template by default, if they are not already set. That ensures that the selector will match the template, and that the RC can be managed using the same labels as the pods it creates. Note that once we generalize selectors, it won't necessarily be possible to unambiguously generate labels that match an arbitrary selector. - -If the user wants to apply additional labels to the pods that it doesn't select upon, such as to facilitate adoption of pods or in the expectation that some label values will change, they can set the selector to a subset of the pod labels. Similarly, the RC's labels could be initialized to a subset of the pod template's labels, or could include additional/different labels. - -For disciplined users managing resources within their own namespaces, it's not that hard to consistently apply schemas that ensure uniqueness. One just needs to ensure that at least one value of some label key in common differs compared to all other comparable resources. We could/should provide a verification tool to check that. However, development of conventions similar to the examples in [Labels](../user-guide/labels.md) make uniqueness straightforward. Furthermore, relatively narrowly used namespaces (e.g., per environment, per application) can be used to reduce the set of resources that could potentially cause overlap. - -In cases where users could be running misc. 
examples with inconsistent schemas, or where tooling or components need to programmatically generate new objects to be selected, there needs to be a straightforward way to generate unique label sets. A simple way to ensure uniqueness of the set is to ensure uniqueness of a single label value, such as by using a resource name, uid, resource hash, or generation number. - -Problems with uids and hashes, however, include that they have no semantic meaning to the user, are not memorable nor readily recognizable, and are not predictable. Lack of predictability obstructs use cases such as creation of a replication controller from a pod, such as people want to do when exploring the system, bootstrapping a self-hosted cluster, or deletion and re-creation of a new RC that adopts the pods of the previous one, such as to rename it. Generation numbers are more predictable and much clearer, assuming there is a logical sequence. Fortunately, for deployments that's the case. For jobs, use of creation timestamps is common internally. Users should always be able to turn off auto-generation, in order to permit some of the scenarios described above. Note that auto-generated labels will also become one more field that needs to be stripped out when cloning a resource, within a namespace, in a new namespace, in a new cluster, etc., and will need to be ignored around when updating a resource via patch or read-modify-write sequence. - -Inclusion of a system prefix in a label key is fairly hostile to UX. A prefix is only necessary in the case that the user cannot choose the label key, in order to avoid collisions with user-defined labels. However, I firmly believe that the user should always be allowed to select the label keys to use on their resources, so it should always be possible to override default label keys. 
- -Therefore, resources supporting auto-generation of unique labels should have a `uniqueLabelKey` field, so that the user could specify the key if they wanted to, but if unspecified, it could be set by default, such as to the resource type, like job, deployment, or replicationController. The value would need to be at least spatially unique, and perhaps temporally unique in the case of job. - -Annotations have very different intended usage from labels. We expect them to be primarily generated and consumed by tooling and system extensions. I'm inclined to generalize annotations to permit them to directly store arbitrary json. Rigid names and name prefixes make sense, since they are analogous to API fields. - -In fact, in-development API fields, including those used to represent fields of newer alpha/beta API versions in the older stable storage version, may be represented as annotations with the form `something.alpha.kubernetes.io/name` or `something.beta.kubernetes.io/name` (depending on our confidence in it). For example `net.alpha.kubernetes.io/policy` might represent an experimental network policy field. The "name" portion of the annotation should follow the below conventions for annotations. When an annotation gets promoted to a field, the name transformation should then be mechanical: `foo-bar` becomes `fooBar`. - -Other advice regarding use of labels, annotations, and other generic map keys by Kubernetes components and tools: - - Key names should be all lowercase, with words separated by dashes, such as `desired-replicas` - - Prefix the key with `kubernetes.io/` or `foo.kubernetes.io/`, preferably the latter if the label/annotation is specific to `foo` - - For instance, prefer `service-account.kubernetes.io/name` over `kubernetes.io/service-account.name` - - Use annotations to store API extensions that the controller responsible for the resource doesn't need to know about, experimental fields that aren't intended to be generally used API fields, etc. 
Beware that annotations aren't automatically handled by the API conversion machinery. +Labels are the domain of users. They are intended to facilitate organization and +management of API resources using attributes that are meaningful to users, as +opposed to meaningful to the system. Think of them as user-created mp3 or email +inbox labels, as opposed to the directory structure used by a program to store +its data. The former enables the user to apply an arbitrary ontology, whereas +the latter is implementation-centric and inflexible. Users will use labels to +select resources to operate on, display label values in CLI/UI columns, etc. +Users should always retain full power and flexibility over the label schemas +they apply to labels in their namespaces. + +However, we should support conveniences for common cases by default. For +example, what we now do in ReplicationController is automatically set the RC's +selector and labels to the labels in the pod template by default, if they are +not already set. That ensures that the selector will match the template, and +that the RC can be managed using the same labels as the pods it creates. Note +that once we generalize selectors, it won't necessarily be possible to +unambiguously generate labels that match an arbitrary selector. + +If the user wants to apply additional labels to the pods that it doesn't select +upon, such as to facilitate adoption of pods or in the expectation that some +label values will change, they can set the selector to a subset of the pod +labels. Similarly, the RC's labels could be initialized to a subset of the pod +template's labels, or could include additional/different labels. + +For disciplined users managing resources within their own namespaces, it's not +that hard to consistently apply schemas that ensure uniqueness. One just needs +to ensure that at least one value of some label key in common differs compared +to all other comparable resources. 
We could/should provide a verification tool +to check that. However, development of conventions similar to the examples in +[Labels](../user-guide/labels.md) makes uniqueness straightforward. Furthermore, +relatively narrowly used namespaces (e.g., per environment, per application) can +be used to reduce the set of resources that could potentially cause overlap. + +In cases where users could be running misc. examples with inconsistent schemas, +or where tooling or components need to programmatically generate new objects to +be selected, there needs to be a straightforward way to generate unique label +sets. A simple way to ensure uniqueness of the set is to ensure uniqueness of a +single label value, such as by using a resource name, uid, resource hash, or +generation number. + +Problems with uids and hashes, however, include that they have no semantic +meaning to the user, are not memorable nor readily recognizable, and are not +predictable. Lack of predictability obstructs use cases such as creation of a +replication controller from a pod, such as people want to do when exploring the +system, bootstrapping a self-hosted cluster, or deletion and re-creation of a +new RC that adopts the pods of the previous one, such as to rename it. +Generation numbers are more predictable and much clearer, assuming there is a +logical sequence. Fortunately, for deployments that's the case. For jobs, use of +creation timestamps is common internally. Users should always be able to turn +off auto-generation, in order to permit some of the scenarios described above. +Note that auto-generated labels will also become one more field that needs to be +stripped out when cloning a resource, within a namespace, in a new namespace, in +a new cluster, etc., and will need to be ignored when updating a resource +via patch or read-modify-write sequence. + +Inclusion of a system prefix in a label key is fairly hostile to UX.
A prefix is +only necessary in the case that the user cannot choose the label key, in order +to avoid collisions with user-defined labels. However, I firmly believe that the +user should always be allowed to select the label keys to use on their +resources, so it should always be possible to override default label keys. + +Therefore, resources supporting auto-generation of unique labels should have a +`uniqueLabelKey` field, so that the user could specify the key if they wanted +to, but if unspecified, it could be set by default, such as to the resource +type, like job, deployment, or replicationController. The value would need to be +at least spatially unique, and perhaps temporally unique in the case of job. + +Annotations have very different intended usage from labels. We expect them to be +primarily generated and consumed by tooling and system extensions. I'm inclined +to generalize annotations to permit them to directly store arbitrary json. Rigid +names and name prefixes make sense, since they are analogous to API fields. + +In fact, in-development API fields, including those used to represent fields of +newer alpha/beta API versions in the older stable storage version, may be +represented as annotations with the form `something.alpha.kubernetes.io/name` or +`something.beta.kubernetes.io/name` (depending on our confidence in it). For +example `net.alpha.kubernetes.io/policy` might represent an experimental network +policy field. The "name" portion of the annotation should follow the below +conventions for annotations. When an annotation gets promoted to a field, the +name transformation should then be mechanical: `foo-bar` becomes `fooBar`. 
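The key conventions described here (lowercase, dash-separated names, with a `kubernetes.io/` or `foo.kubernetes.io/` prefix for component-owned keys) can be checked mechanically. The helper and regex below are the editor's own formalization of the stated convention, not an official validator:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// keyNameRE matches the "name" portion of a label/annotation key:
// all lowercase, with words separated by dashes.
var keyNameRE = regexp.MustCompile(`^[a-z0-9]+(-[a-z0-9]+)*$`)

// wellFormedKey reports whether a key's name portion (after any
// kubernetes.io/-style prefix) follows the conventions above.
func wellFormedKey(key string) bool {
	name := key
	if i := strings.LastIndex(key, "/"); i >= 0 {
		name = key[i+1:]
	}
	return keyNameRE.MatchString(name)
}

func main() {
	// Preferred: component-specific prefix plus dashed name.
	fmt.Println(wellFormedKey("service-account.kubernetes.io/name")) // true
	fmt.Println(wellFormedKey("kubernetes.io/desired-replicas"))     // true
	// camelCase names violate the convention.
	fmt.Println(wellFormedKey("kubernetes.io/desiredReplicas")) // false
}
```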
+ +Other advice regarding use of labels, annotations, and other generic map keys by +Kubernetes components and tools: + - Key names should be all lowercase, with words separated by dashes, such as +`desired-replicas` + - Prefix the key with `kubernetes.io/` or `foo.kubernetes.io/`, preferably the +latter if the label/annotation is specific to `foo` + - For instance, prefer `service-account.kubernetes.io/name` over +`kubernetes.io/service-account.name` + - Use annotations to store API extensions that the controller responsible for +the resource doesn't need to know about, experimental fields that aren't +intended to be generally used API fields, etc. Beware that annotations aren't +automatically handled by the API conversion machinery. ## WebSockets and SPDY -Some of the API operations exposed by Kubernetes involve transfer of binary streams between the client and a container, including attach, exec, portforward, and logging. The API therefore exposes certain operations over upgradeable HTTP connections ([described in RFC 2817](https://tools.ietf.org/html/rfc2817)) via the WebSocket and SPDY protocols. These actions are exposed as subresources with their associated verbs (exec, log, attach, and portforward) and are requested via a GET (to support JavaScript in a browser) and POST (semantically accurate). +Some of the API operations exposed by Kubernetes involve transfer of binary +streams between the client and a container, including attach, exec, portforward, +and logging. The API therefore exposes certain operations over upgradeable HTTP +connections ([described in RFC 2817](https://tools.ietf.org/html/rfc2817)) via +the WebSocket and SPDY protocols. These actions are exposed as subresources with +their associated verbs (exec, log, attach, and portforward) and are requested +via a GET (to support JavaScript in a browser) and POST (semantically accurate). There are two primary protocols in use today: 1. 
Streamed channels - When dealing with multiple independent binary streams of data such as the remote execution of a shell command (writing to STDIN, reading from STDOUT and STDERR) or forwarding multiple ports the streams can be multiplexed onto a single TCP connection. Kubernetes supports a SPDY based framing protocol that leverages SPDY channels and a WebSocket framing protocol that multiplexes multiple channels onto the same stream by prefixing each binary chunk with a byte indicating its channel. The WebSocket protocol supports an optional subprotocol that handles base64-encoded bytes from the client and returns base64-encoded bytes from the server and character based channel prefixes ('0', '1', '2') for ease of use from JavaScript in a browser. + When dealing with multiple independent binary streams of data such as the +remote execution of a shell command (writing to STDIN, reading from STDOUT and +STDERR) or forwarding multiple ports, the streams can be multiplexed onto a +single TCP connection. Kubernetes supports a SPDY-based framing protocol that +leverages SPDY channels and a WebSocket framing protocol that multiplexes +multiple channels onto the same stream by prefixing each binary chunk with a +byte indicating its channel. The WebSocket protocol supports an optional +subprotocol that handles base64-encoded bytes from the client and returns +base64-encoded bytes from the server and character-based channel prefixes ('0', +'1', '2') for ease of use from JavaScript in a browser. 2. Streaming response - The default log output for a channel of streaming data is an HTTP Chunked Transfer-Encoding, which can return an arbitrary stream of binary data from the server. Browser-based JavaScript is limited in its ability to access the raw data from a chunked response, especially when very large amounts of logs are returned, and in future API calls it may be desirable to transfer large files.
The streaming API endpoints support an optional WebSocket upgrade that provides a unidirectional channel from the server to the client and chunks data as binary WebSocket frames. An optional WebSocket subprotocol is exposed that base64 encodes the stream before returning it to the client. + The default log output for a channel of streaming data is an HTTP Chunked +Transfer-Encoding, which can return an arbitrary stream of binary data from the +server. Browser-based JavaScript is limited in its ability to access the raw +data from a chunked response, especially when very large amounts of logs are +returned, and in future API calls it may be desirable to transfer large files. +The streaming API endpoints support an optional WebSocket upgrade that provides +a unidirectional channel from the server to the client and chunks data as binary +WebSocket frames. An optional WebSocket subprotocol is exposed that base64 +encodes the stream before returning it to the client. -Clients should use the SPDY protocols if their clients have native support, or WebSockets as a fallback. Note that WebSockets is susceptible to Head-of-Line blocking and so clients must read and process each message sequentionally. In the future, an HTTP/2 implementation will be exposed that deprecates SPDY. +Clients should use the SPDY protocols if they have native support, or +WebSockets as a fallback. Note that WebSockets is susceptible to Head-of-Line +blocking and so clients must read and process each message sequentially. In +the future, an HTTP/2 implementation will be exposed that deprecates SPDY. ## Validation -API objects are validated upon receipt by the apiserver. Validation errors are +API objects are validated upon receipt by the apiserver. Validation errors are flagged and returned to the caller in a `Failure` status with `reason` set to -`Invalid`.
In order to facilitate consistent error messages, we ask that validation logic adheres to the following guidelines whenever possible (though exceptional cases will exist). * Be as precise as possible. * Telling users what they CAN do is more useful than telling them what they - CANNOT do. +CANNOT do. * When asserting a requirement in the positive, use "must". Examples: "must be - greater than 0", "must match regex '[a-z]+'". Words like "should" imply that - the assertion is optional, and must be avoided. +greater than 0", "must match regex '[a-z]+'". Words like "should" imply that +the assertion is optional, and must be avoided. * When asserting a formatting requirement in the negative, use "must not". - Example: "must not contain '..'". Words like "should not" imply that the - assertion is optional, and must be avoided. +Example: "must not contain '..'". Words like "should not" imply that the +assertion is optional, and must be avoided. * When asserting a behavioral requirement in the negative, use "may not". - Examples: "may not be specified when otherField is empty", "only `name` may be - specified". +Examples: "may not be specified when otherField is empty", "only `name` may be +specified". * When referencing a literal string value, indicate the literal in - single-quotes. Example: "must not contain '..'". +single-quotes. Example: "must not contain '..'". * When referencing another field name, indicate the name in back-quotes. - Example: "must be greater than `request`". +Example: "must be greater than `request`". * When specifying inequalities, use words rather than symbols. Examples: "must - be less than 256", "must be greater than or equal to 0". Do not use words - like "larger than", "bigger than", "more than", "higher than", etc. +be less than 256", "must be greater than or equal to 0". Do not use words +like "larger than", "bigger than", "more than", "higher than", etc. * When specifying numeric ranges, use inclusive ranges when possible. 
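As a rough illustration of these guidelines, a validator's messages might be phrased like the sketch below (the field, regex, and limit are hypothetical, not code from the Kubernetes tree): positive requirements use "must", literals are single-quoted, and inequalities are spelled out in words.

```go
package main

import (
	"fmt"
	"regexp"
)

// nameRegexp is an assumed format requirement for this sketch.
var nameRegexp = regexp.MustCompile(`^[a-z]+$`)

// validateName returns error messages worded per the guidelines above:
// "must match regex '[a-z]+'" rather than "should be lowercase", and
// "must be less than 256" rather than "can't be bigger than 256".
func validateName(name string) []string {
	var errs []string
	if !nameRegexp.MatchString(name) {
		errs = append(errs, "must match regex '[a-z]+'")
	}
	if len(name) >= 256 {
		errs = append(errs, "must be less than 256 characters")
	}
	return errs
}

func main() {
	fmt.Println(validateName("Bad-Name"))
}
```

The exact checks are stand-ins; the point is the consistent phrasing of each message.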
diff --git a/api_changes.md b/api_changes.md index 987d5576..8f6f8dda 100644 --- a/api_changes.md +++ b/api_changes.md @@ -65,15 +65,14 @@ found at [API Conventions](api-conventions.md). # So you want to change the API? -Before attempting a change to the API, you should familiarize yourself -with a number of existing API types and with the [API -conventions](api-conventions.md). If creating a new API -type/resource, we also recommend that you first send a PR containing -just a proposal for the new API types, and that you initially target +Before attempting a change to the API, you should familiarize yourself with a +number of existing API types and with the [API conventions](api-conventions.md). +If creating a new API type/resource, we also recommend that you first send a PR +containing just a proposal for the new API types, and that you initially target the extensions API (pkg/apis/extensions). The Kubernetes API has two major components - the internal structures and -the versioned APIs. The versioned APIs are intended to be stable, while the +the versioned APIs. The versioned APIs are intended to be stable, while the internal structures are implemented to best reflect the needs of the Kubernetes code itself. @@ -88,8 +87,8 @@ It is important to have a high level understanding of the API system used in Kubernetes in order to navigate the rest of this document. As mentioned above, the internal representation of an API object is decoupled -from any one API version. This provides a lot of freedom to evolve the code, -but it requires robust infrastructure to convert between representations. There +from any one API version. This provides a lot of freedom to evolve the code, +but it requires robust infrastructure to convert between representations. There are multiple steps in processing an API operation - even something as simple as a GET involves a great deal of machinery. 
@@ -97,7 +96,7 @@ The conversion process is logically a "star" with the internal form at the center. Every versioned API can be converted to the internal form (and vice-versa), but versioned APIs do not convert to other versioned APIs directly. This sounds like a heavy process, but in reality we do not intend to keep more -than a small number of versions alive at once. While all of the Kubernetes code +than a small number of versions alive at once. While all of the Kubernetes code operates on the internal structures, they are always converted to a versioned form before being written to storage (disk or etcd) or being sent over a wire. Clients should consume and operate on the versioned APIs exclusively. @@ -110,11 +109,11 @@ To demonstrate the general process, here is a (hypothetical) example: 4. The `v7beta1.Pod` is converted to an `api.Pod` structure 5. The `api.Pod` is validated, and any errors are returned to the user 6. The `api.Pod` is converted to a `v6.Pod` (because v6 is the latest stable - version) +version) 7. The `v6.Pod` is marshalled into JSON and written to etcd Now that we have the `Pod` object stored, a user can GET that object in any -supported api version. For example: +supported api version. For example: 1. A user GETs the `Pod` from `/api/v5/...` 2. The JSON is read from etcd and unmarshalled into a `v6.Pod` structure @@ -132,7 +131,7 @@ Before talking about how to make API changes, it is worthwhile to clarify what we mean by API compatibility. An API change is considered backward-compatible if it: * adds new functionality that is not required for correct behavior (e.g., - does not add a new required field) +does not add a new required field) * does not change existing semantics, including: * default values and behavior * interpretation of existing API types, fields, and values @@ -141,37 +140,37 @@ if it: Put another way: 1. Any API call (e.g. 
a structure POSTed to a REST endpoint) that worked before - your change must work the same after your change. +your change must work the same after your change. 2. Any API call that uses your change must not cause problems (e.g. crash or - degrade behavior) when issued against servers that do not include your change. +degrade behavior) when issued against servers that do not include your change. 3. It must be possible to round-trip your change (convert to different API - versions and back) with no loss of information. -4. Existing clients need not be aware of your change in order for them to continue - to function as they did previously, even when your change is utilized +versions and back) with no loss of information. +4. Existing clients need not be aware of your change in order for them to +continue to function as they did previously, even when your change is utilized. If your change does not meet these criteria, it is not considered strictly compatible. -Let's consider some examples. In a hypothetical API (assume we're at version +Let's consider some examples. In a hypothetical API (assume we're at version v6), the `Frobber` struct looks something like this: ```go // API v6. type Frobber struct { - Height int `json:"height"` - Param string `json:"param"` + Height int `json:"height"` + Param string `json:"param"` } ``` -You want to add a new `Width` field. It is generally safe to add new fields +You want to add a new `Width` field. It is generally safe to add new fields without changing the API version, so you can simply change it to: ```go // Still API v6. type Frobber struct { - Height int `json:"height"` - Width int `json:"width"` - Param string `json:"param"` + Height int `json:"height"` + Width int `json:"width"` + Param string `json:"param"` } ``` @@ -179,75 +178,76 @@ The onus is on you to define a sane default value for `Width` such that rule #1 above is true - API calls and stored objects that used to work must continue to work. 
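To make rule #1 concrete, a defaulting pass for the new `Width` field might look like the following sketch. The helper and the chosen default are hypothetical (real defaults live in the per-version defaulting code); the idea is that objects stored before `Width` existed deserialize with the zero value and must still behave sensibly.

```go
package main

import "fmt"

// Frobber mirrors the hypothetical v6 API object from the example above.
type Frobber struct {
	Height int    `json:"height"`
	Width  int    `json:"width"`
	Param  string `json:"param"`
}

// defaultFrobber is a sketch of a defaulting pass: if Width was never set
// (zero value after decoding an old object), pick a sane default so that
// API calls and stored objects that used to work continue to work.
func defaultFrobber(f *Frobber) {
	if f.Width == 0 {
		f.Width = f.Height // assumed default; a real API would document this choice
	}
}

func main() {
	old := Frobber{Height: 10, Param: "x"} // stored before Width existed
	defaultFrobber(&old)
	fmt.Println(old.Width)
}
```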
-For your next change you want to allow multiple `Param` values. You can not +For your next change you want to allow multiple `Param` values. You can not simply change `Param string` to `Params []string` (without creating a whole new -API version) - that fails rules #1 and #2. You can instead do something like: +API version) - that fails rules #1 and #2. You can instead do something like: ```go // Still API v6, but kind of clumsy. type Frobber struct { - Height int `json:"height"` - Width int `json:"width"` - Param string `json:"param"` // the first param - ExtraParams []string `json:"extraParams"` // additional params + Height int `json:"height"` + Width int `json:"width"` + Param string `json:"param"` // the first param + ExtraParams []string `json:"extraParams"` // additional params } ``` Now you can satisfy the rules: API calls that provide the old style `Param` will still work, while servers that don't understand `ExtraParams` can ignore -it. This is somewhat unsatisfying as an API, but it is strictly compatible. +it. This is somewhat unsatisfying as an API, but it is strictly compatible. Part of the reason for versioning APIs and for using internal structs that are -distinct from any one version is to handle growth like this. The internal +distinct from any one version is to handle growth like this. The internal representation can be implemented as: ```go // Internal, soon to be v7beta1. type Frobber struct { - Height int - Width int - Params []string + Height int + Width int + Params []string } ``` The code that converts to/from versioned APIs can decode this into the somewhat -uglier (but compatible!) structures. Eventually, a new API version, let's call +uglier (but compatible!) structures. Eventually, a new API version, let's call it v7beta1, will be forked and it can use the clean internal structure. -We've seen how to satisfy rules #1 and #2. Rule #3 means that you can not -extend one versioned API without also extending the others. 
For example, an +We've seen how to satisfy rules #1 and #2. Rule #3 means that you can not +extend one versioned API without also extending the others. For example, an API call might POST an object in API v7beta1 format, which uses the cleaner `Params` field, but the API server might store that object in trusty old v6 -form (since v7beta1 is "beta"). When the user reads the object back in the -v7beta1 API it would be unacceptable to have lost all but `Params[0]`. This +form (since v7beta1 is "beta"). When the user reads the object back in the +v7beta1 API it would be unacceptable to have lost all but `Params[0]`. This means that, even though it is ugly, a compatible change must be made to the v6 API. -However, this is very challenging to do correctly. It often requires -multiple representations of the same information in the same API resource, which -need to be kept in sync in the event that either is changed. For example, -let's say you decide to rename a field within the same API version. In this case, -you add units to `height` and `width`. You implement this by adding duplicate -fields: +However, this is very challenging to do correctly. It often requires multiple +representations of the same information in the same API resource, which need to +be kept in sync in the event that either is changed. For example, let's say you +decide to rename a field within the same API version. In this case, you add +units to `height` and `width`. 
You implement this by adding duplicate fields: ```go type Frobber struct { - Height *int `json:"height"` - Width *int `json:"width"` - HeightInInches *int `json:"heightInInches"` - WidthInInches *int `json:"widthInInches"` + Height *int `json:"height"` + Width *int `json:"width"` + HeightInInches *int `json:"heightInInches"` + WidthInInches *int `json:"widthInInches"` } ``` -You convert all of the fields to pointers in order to distinguish between unset and -set to 0, and then set each corresponding field from the other in the defaulting -pass (e.g., `heightInInches` from `height`, and vice versa), which runs just prior -to conversion. That works fine when the user creates a resource from a hand-written -configuration -- clients can write either field and read either field, but what about -creation or update from the output of GET, or update via PATCH (see +You convert all of the fields to pointers in order to distinguish between unset +and set to 0, and then set each corresponding field from the other in the +defaulting pass (e.g., `heightInInches` from `height`, and vice versa), which +runs just prior to conversion. That works fine when the user creates a resource +from a hand-written configuration -- clients can write either field and read +either field, but what about creation or update from the output of GET, or +update via PATCH (see [In-place updates](../user-guide/managing-deployments.md#in-place-updates-of-resources))? -In this case, the two fields will conflict, because only one field would be updated -in the case of an old client that was only aware of the old field (e.g., `height`). +In this case, the two fields will conflict, because only one field would be +updated in the case of an old client that was only aware of the old field (e.g., +`height`). Say the client creates: @@ -280,93 +280,101 @@ then PUTs back: } ``` -The update should not fail, because it would have worked before `heightInInches` was added. 
+The update should not fail, because it would have worked before `heightInInches` +was added. Therefore, when there are duplicate fields, the old field MUST take precedence over the new, and the new field should be set to match by the server upon write. -A new client would be aware of the old field as well as the new, and so can ensure -that the old field is either unset or is set consistently with the new field. However, -older clients would be unaware of the new field. Please avoid introducing duplicate -fields due to the complexity they incur in the API. - -A new representation, even in a new API version, that is more expressive than an old one -breaks backward compatibility, since clients that only understood the old representation -would not be aware of the new representation nor its semantics. Examples of -proposals that have run into this challenge include [generalized label -selectors](http://issues.k8s.io/341) and [pod-level security +A new client would be aware of the old field as well as the new, and so can +ensure that the old field is either unset or is set consistently with the new +field. However, older clients would be unaware of the new field. Please avoid +introducing duplicate fields due to the complexity they incur in the API. + +A new representation, even in a new API version, that is more expressive than an +old one breaks backward compatibility, since clients that only understood the +old representation would not be aware of the new representation nor its +semantics. Examples of proposals that have run into this challenge include +[generalized label selectors](http://issues.k8s.io/341) and [pod-level security context](http://prs.k8s.io/12823). As another interesting example, enumerated values cause similar challenges. -Adding a new value to an enumerated set is *not* a compatible change. Clients +Adding a new value to an enumerated set is *not* a compatible change. 
Clients
which assume they know how to handle all possible values of a given field will
-not be able to handle the new values. However, removing value from an
-enumerated set *can* be a compatible change, if handled properly (treat the
-removed value as deprecated but allowed). This is actually a special case of
-a new representation, discussed above.
+not be able to handle the new values. However, removing a value from an enumerated
+set *can* be a compatible change, if handled properly (treat the removed value
+as deprecated but allowed). This is actually a special case of a new
+representation, discussed above.

-For [Unions](api-conventions.md), sets of fields where at most one should be set,
-it is acceptable to add a new option to the union if the [appropriate conventions]
-were followed in the original object. Removing an option requires following
-the deprecation process.
+For [Unions](api-conventions.md#unions), sets of fields where at most one should
+be set, it is acceptable to add a new option to the union if the [appropriate
+conventions](api-conventions.md#objects) were followed in the original object.
+Removing an option requires following the deprecation process.

## Incompatible API changes

-There are times when this might be OK, but mostly we want changes that
-meet this definition. If you think you need to break compatibility,
-you should talk to the Kubernetes team first.
-
-Breaking compatibility of a beta or stable API version, such as v1, is unacceptable.
-Compatibility for experimental or alpha APIs is not strictly required, but
-breaking compatibility should not be done lightly, as it disrupts all users of the
-feature. Experimental APIs may be removed. Alpha and beta API versions may be deprecated
-and eventually removed wholesale, as described in the [versioning document](../design/versioning.md).
-Document incompatible changes across API versions under the [conversion tips](../api.md).
-
-If your change is going to be backward incompatible or might be a breaking change for API
-consumers, please send an announcement to `kubernetes-dev@googlegroups.com` before
-the change gets in. If you are unsure, ask. Also make sure that the change gets documented in
-the release notes for the next release by labeling the PR with the "release-note" github label.
+There are times when this might be OK, but mostly we want changes that meet this
+definition. If you think you need to break compatibility, you should talk to the
+Kubernetes team first.
+
+Breaking compatibility of a beta or stable API version, such as v1, is
+unacceptable. Compatibility for experimental or alpha APIs is not strictly
+required, but breaking compatibility should not be done lightly, as it disrupts
+all users of the feature. Experimental APIs may be removed. Alpha and beta API
+versions may be deprecated and eventually removed wholesale, as described in the
+[versioning document](../design/versioning.md). Document incompatible changes
+across API versions under the appropriate
+[v? conversion tips tag in the api.md doc](../api.md).
+
+If your change is going to be backward incompatible or might be a breaking
+change for API consumers, please send an announcement to
+`kubernetes-dev@googlegroups.com` before the change gets in. If you are unsure,
+ask. Also make sure that the change gets documented in the release notes for the
+next release by labeling the PR with the "release-note" github label.

If you found that your change accidentally broke clients, it should be reverted.

In short, the expected API evolution is as follows:
+
* `extensions/v1alpha1` ->
* `newapigroup/v1alpha1` -> ... -> `newapigroup/v1alphaN` ->
* `newapigroup/v1beta1` -> ... -> `newapigroup/v1betaN` ->
* `newapigroup/v1` ->
* `newapigroup/v2alpha1` -> ...

-While in extensions we have no obligation to move forward with the API at all and may delete or break it at any time.
+While in extensions we have no obligation to move forward with the API at all +and may delete or break it at any time. While in alpha we expect to move forward with it, but may break it. -Once in beta we will preserve forward compatibility, but may introduce new versions and delete old ones. +Once in beta we will preserve forward compatibility, but may introduce new +versions and delete old ones. v1 must be backward-compatible for an extended length of time. ## Changing versioned APIs For most changes, you will probably find it easiest to change the versioned -APIs first. This forces you to think about how to make your change in a -compatible way. Rather than doing each step in every version, it's usually +APIs first. This forces you to think about how to make your change in a +compatible way. Rather than doing each step in every version, it's usually easier to do each versioned API one at a time, or to do all of one version before starting "all the rest". ### Edit types.go -The struct definitions for each API are in `pkg/api//types.go`. Edit -those files to reflect the change you want to make. Note that all types and non-inline -fields in versioned APIs must be preceded by descriptive comments - these are used to generate -documentation. Comments for types should not contain the type name; API documentation is -generated from these comments and end-users should not be exposed to golang type names. +The struct definitions for each API are in `pkg/api//types.go`. Edit +those files to reflect the change you want to make. Note that all types and +non-inline fields in versioned APIs must be preceded by descriptive comments - +these are used to generate documentation. Comments for types should not contain +the type name; API documentation is generated from these comments and end-users +should not be exposed to golang type names. -Optional fields should have the `,omitempty` json tag; fields are interpreted as being -required otherwise. 
+Optional fields should have the `,omitempty` json tag; fields are interpreted as +being required otherwise. ### Edit defaults.go If your change includes new fields for which you will need default values, you -need to add cases to `pkg/api//defaults.go`. Of course, since you +need to add cases to `pkg/api//defaults.go`. Of course, since you have added code, you have to add a test: `pkg/api//defaults_test.go`. Do use pointers to scalars when you need to distinguish between an unset value @@ -380,19 +388,20 @@ Don't forget to run the tests! ### Edit conversion.go Given that you have not yet changed the internal structs, this might feel -premature, and that's because it is. You don't yet have anything to convert to -or from. We will revisit this in the "internal" section. If you're doing this +premature, and that's because it is. You don't yet have anything to convert to +or from. We will revisit this in the "internal" section. If you're doing this all in a different order (i.e. you started with the internal structs), then you -should jump to that topic below. In the very rare case that you are making an +should jump to that topic below. In the very rare case that you are making an incompatible change you might or might not want to do this now, but you will -have to do more later. The files you want are +have to do more later. The files you want are `pkg/api//conversion.go` and `pkg/api//conversion_test.go`. -Note that the conversion machinery doesn't generically handle conversion of values, -such as various kinds of field references and API constants. [The client +Note that the conversion machinery doesn't generically handle conversion of +values, such as various kinds of field references and API constants. [The client library](../../pkg/client/unversioned/request.go) has custom conversion code for -field references. You also need to add a call to api.Scheme.AddFieldLabelConversionFunc -with a mapping function that understands supported translations. 
+field references. You also need to add a call to +api.Scheme.AddFieldLabelConversionFunc with a mapping function that understands +supported translations. ## Changing the internal structures @@ -402,7 +411,7 @@ used. ### Edit types.go Similar to the versioned APIs, the definitions for the internal structs are in -`pkg/api/types.go`. Edit those files to reflect the change you want to make. +`pkg/api/types.go`. Edit those files to reflect the change you want to make. Keep in mind that the internal structs must be able to express *all* of the versioned APIs. @@ -410,10 +419,10 @@ versioned APIs. Most changes made to the internal structs need some form of input validation. Validation is currently done on internal objects in -`pkg/api/validation/validation.go`. This validation is the one of the first +`pkg/api/validation/validation.go`. This validation is the one of the first opportunities we have to make a great user experience - good error messages and thorough validation help ensure that users are giving you what you expect and, -when they don't, that they know why and how to fix it. Think hard about the +when they don't, that they know why and how to fix it. Think hard about the contents of `string` fields, the bounds of `int` fields and the requiredness/optionalness of fields. @@ -433,26 +442,26 @@ than the generic ones (which are based on reflections and thus are highly inefficient). The conversion code resides with each versioned API. 
There are two files: + - `pkg/api//conversion.go` containing manually written conversion - functions +functions - `pkg/api//conversion_generated.go` containing auto-generated - conversion functions +conversion functions - `pkg/apis/extensions//conversion.go` containing manually written - conversion functions +conversion functions - `pkg/apis/extensions//conversion_generated.go` containing - auto-generated conversion functions +auto-generated conversion functions Since auto-generated conversion functions are using manually written ones, -those manually written should be named with a defined convention, i.e. a function -converting type X in pkg a to type Y in pkg b, should be named: +those manually written should be named with a defined convention, i.e. a +function converting type X in pkg a to type Y in pkg b, should be named: `convert_a_X_To_b_Y`. Also note that you can (and for efficiency reasons should) use auto-generated conversion functions when writing your conversion functions. Once all the necessary manually written conversions are added, you need to -regenerate auto-generated ones. To regenerate them: - - run +regenerate auto-generated ones. To regenerate them run: ```sh hack/update-codegen.sh @@ -469,8 +478,9 @@ regenerate it. If the auto-generated conversion methods are not used by the manually-written ones, it's fine to just remove the whole file and let the generator to create it from scratch. -Unsurprisingly, adding manually written conversion also requires you to add tests to -`pkg/api//conversion_test.go`. +Unsurprisingly, adding manually written conversion also requires you to add +tests to `pkg/api//conversion_test.go`. + ## Edit json (un)marshaling code @@ -478,11 +488,11 @@ We are auto-generating code for marshaling and unmarshaling json representation of api objects - this is to improve the overall system performance. 
The auto-generated code resides with each versioned API: + - `pkg/api//types.generated.go` - `pkg/apis/extensions//types.generated.go` -To regenerate them: - - run +To regenerate them run: ```sh hack/update-codecgen.sh @@ -492,56 +502,56 @@ hack/update-codecgen.sh This section is under construction, as we make the tooling completely generic. -At the moment, you'll have to make a new directory under pkg/apis/; copy the -directory structure from pkg/apis/extensions. Add the new group/version to all -of the hack/{verify,update}-generated-{deep-copy,conversions,swagger}.sh files +At the moment, you'll have to make a new directory under `pkg/apis/`; copy the +directory structure from `pkg/apis/extensions`. Add the new group/version to all +of the `hack/{verify,update}-generated-{deep-copy,conversions,swagger}.sh` files in the appropriate places--it should just require adding your new group/version to a bash array. You will also need to make sure your new types are imported by -the generation commands (cmd/gendeepcopy/ & cmd/genconversion). These +the generation commands (`cmd/gendeepcopy/` & `cmd/genconversion`). These instructions may not be complete and will be updated as we gain experience. -Adding API groups outside of the pkg/apis/ directory is not currently supported, -but is clearly desirable. The deep copy & conversion generators need to work by -parsing go files instead of by reflection; then they will be easy to point at -arbitrary directories: see issue [#13775](http://issue.k8s.io/13775). +Adding API groups outside of the `pkg/apis/` directory is not currently +supported, but is clearly desirable. The deep copy & conversion generators need +to work by parsing go files instead of by reflection; then they will be easy to +point at arbitrary directories: see issue [#13775](http://issue.k8s.io/13775). 
## Update the fuzzer Part of our testing regimen for APIs is to "fuzz" (fill with random values) API -objects and then convert them to and from the different API versions. This is +objects and then convert them to and from the different API versions. This is a great way of exposing places where you lost information or made bad -assumptions. If you have added any fields which need very careful formatting +assumptions. If you have added any fields which need very careful formatting (the test does not run validation) or if you have made assumptions such as "this slice will always have at least 1 element", you may get an error or even -a panic from the `serialization_test`. If so, look at the diff it produces (or -the backtrace in case of a panic) and figure out what you forgot. Encode that -into the fuzzer's custom fuzz functions. Hint: if you added defaults for a field, -that field will need to have a custom fuzz function that ensures that the field is -fuzzed to a non-empty value. +a panic from the `serialization_test`. If so, look at the diff it produces (or +the backtrace in case of a panic) and figure out what you forgot. Encode that +into the fuzzer's custom fuzz functions. Hint: if you added defaults for a +field, that field will need to have a custom fuzz function that ensures that the +field is fuzzed to a non-empty value. The fuzzer can be found in `pkg/api/testing/fuzzer.go`. ## Update the semantic comparisons -VERY VERY rarely is this needed, but when it hits, it hurts. In some rare -cases we end up with objects (e.g. resource quantities) that have morally -equivalent values with different bitwise representations (e.g. value 10 with a -base-2 formatter is the same as value 0 with a base-10 formatter). The only way -Go knows how to do deep-equality is through field-by-field bitwise comparisons. +VERY VERY rarely is this needed, but when it hits, it hurts. In some rare cases +we end up with objects (e.g. 
resource quantities) that have morally equivalent +values with different bitwise representations (e.g. value 10 with a base-2 +formatter is the same as value 0 with a base-10 formatter). The only way Go +knows how to do deep-equality is through field-by-field bitwise comparisons. This is a problem for us. -The first thing you should do is try not to do that. If you really can't avoid -this, I'd like to introduce you to our semantic DeepEqual routine. It supports +The first thing you should do is try not to do that. If you really can't avoid +this, I'd like to introduce you to our `semantic DeepEqual` routine. It supports custom overrides for specific types - you can find that in `pkg/api/helpers.go`. -There's one other time when you might have to touch this: unexported fields. -You see, while Go's `reflect` package is allowed to touch unexported fields, us -mere mortals are not - this includes semantic DeepEqual. Fortunately, most of -our API objects are "dumb structs" all the way down - all fields are exported -(start with a capital letter) and there are no unexported fields. But sometimes +There's one other time when you might have to touch this: `unexported fields`. +You see, while Go's `reflect` package is allowed to touch `unexported fields`, +us mere mortals are not - this includes `semantic DeepEqual`. Fortunately, most +of our API objects are "dumb structs" all the way down - all fields are exported +(start with a capital letter) and there are no unexported fields. But sometimes you want to include an object in our API that does have unexported fields -somewhere in it (for example, `time.Time` has unexported fields). If this hits -you, you may have to touch the semantic DeepEqual customization functions. +somewhere in it (for example, `time.Time` has unexported fields). If this hits +you, you may have to touch the `semantic DeepEqual` customization functions. ## Implement your change @@ -550,17 +560,17 @@ doing! 
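The bitwise-vs-semantic distinction described above can be sketched with a toy type. The real override registry lives in `pkg/api/helpers.go`; this is not that code, just an illustration of comparing the meaningful value while ignoring representation details.

```go
package main

import "fmt"

// Quantity is a toy stand-in for a type whose distinct bitwise
// representations can denote the same value (like resource quantities).
type Quantity struct {
	Value  int
	Base10 bool // formatting detail that should not affect equality
}

// semanticEqual is a sketch of a per-type override: it compares only the
// meaningful value, which is what the semantic DeepEqual machinery lets
// you register for types like this.
func semanticEqual(a, b Quantity) bool {
	return a.Value == b.Value
}

func main() {
	a := Quantity{Value: 10, Base10: true}
	b := Quantity{Value: 10, Base10: false}
	// Field-by-field comparison differs; the semantic comparison does not.
	fmt.Println(a == b, semanticEqual(a, b))
}
```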
## Write end-to-end tests -Check out the [E2E docs](e2e-tests.md) for detailed information about how to write end-to-end -tests for your feature. +Check out the [E2E docs](e2e-tests.md) for detailed information about how to +write end-to-end tests for your feature. ## Examples and docs At last, your change is done, all unit tests pass, e2e passes, you're done, -right? Actually, no. You just changed the API. If you are touching an -existing facet of the API, you have to try *really* hard to make sure that -*all* the examples and docs are updated. There's no easy way to do this, due -in part to JSON and YAML silently dropping unknown fields. You're clever - -you'll figure it out. Put `grep` or `ack` to good use. +right? Actually, no. You just changed the API. If you are touching an existing +facet of the API, you have to try *really* hard to make sure that *all* the +examples and docs are updated. There's no easy way to do this, due in part to +JSON and YAML silently dropping unknown fields. You're clever - you'll figure it +out. Put `grep` or `ack` to good use. If you added functionality, you should consider documenting it and/or writing an example to illustrate your change. @@ -575,81 +585,95 @@ The API spec changes should be in a commit separate from your other changes. 
## Alpha, Beta, and Stable Versions -New feature development proceeds through a series of stages of increasing maturity: +New feature development proceeds through a series of stages of increasing +maturity: - Development level - Object Versioning: no convention - - Availability: not committed to main kubernetes repo, and thus not available in official releases - - Audience: other developers closely collaborating on a feature or proof-of-concept - - Upgradeability, Reliability, Completeness, and Support: no requirements or guarantees + - Availability: not committed to main kubernetes repo, and thus not available +in official releases + - Audience: other developers closely collaborating on a feature or +proof-of-concept + - Upgradeability, Reliability, Completeness, and Support: no requirements or +guarantees - Alpha level - Object Versioning: API version name contains `alpha` (e.g. `v1alpha1`) - - Availability: committed to main kubernetes repo; appears in an official release; feature is - disabled by default, but may be enabled by flag - - Audience: developers and expert users interested in giving early feedback on features - - Completeness: some API operations, CLI commands, or UI support may not be implemented; the API - need not have had an *API review* (an intensive and targeted review of the API, on top of a normal - code review) - - Upgradeability: the object schema and semantics may change in a later software release, without - any provision for preserving objects in an existing cluster; - removing the upgradability concern allows developers to make rapid progress; in particular, - API versions can increment faster than the minor release cadence and the developer need not - maintain multiple versions; developers should still increment the API version when object schema - or semantics change in an [incompatible way](#on-compatibility) - - Cluster Reliability: because the feature is relatively new, and may lack complete end-to-end - tests, enabling the feature 
via a flag might expose bugs with destabilize the cluster (e.g. a
- bug in a control loop might rapidly create excessive numbers of object, exhausting API storage).
- Support: there is *no commitment* from the project to complete the feature; the feature may be
- dropped entirely in a later software release
- Recommended Use Cases: only in short-lived testing clusters, due to complexity of upgradeability
- and lack of long-term support and lack of upgradability.
+ - Availability: committed to main kubernetes repo; appears in an official
+release; feature is disabled by default, but may be enabled by flag
+ - Audience: developers and expert users interested in giving early feedback on
+features
+ - Completeness: some API operations, CLI commands, or UI support may not be
+implemented; the API need not have had an *API review* (an intensive and
+targeted review of the API, on top of a normal code review)
+ - Upgradeability: the object schema and semantics may change in a later
+software release, without any provision for preserving objects in an existing
+cluster; removing the upgradability concern allows developers to make rapid
+progress; in particular, API versions can increment faster than the minor
+release cadence and the developer need not maintain multiple versions;
+developers should still increment the API version when object schema or
+semantics change in an [incompatible way](#on-compatibility)
+ - Cluster Reliability: because the feature is relatively new, and may lack
+complete end-to-end tests, enabling the feature via a flag might expose bugs
+which destabilize the cluster (e.g. a bug in a control loop might rapidly create
+excessive numbers of objects, exhausting API storage).
+ - Support: there is *no commitment* from the project to complete the feature;
+the feature may be dropped entirely in a later software release
+ - Recommended Use Cases: only in short-lived testing clusters, due to
+complexity of upgradeability and lack of long-term support.
- Beta level:
- Object Versioning: API version name contains `beta` (e.g. `v2beta3`)
- Availability: in official Kubernetes releases, and enabled by default
- Audience: users interested in providing feedback on features
- - Completeness: all API operations, CLI commands, and UI support should be implemented; end-to-end
- tests complete; the API has had a thorough API review and is thought to be complete, though use
- during beta may frequently turn up API issues not thought of during review
- - Upgradeability: the object schema and semantics may change in a later software release; when
- this happens, an upgrade path will be documented; in some cases, objects will be automatically
- converted to the new version; in other cases, a manual upgrade may be necessary; a manual
- upgrade may require downtime for anything relying on the new feature, and may require
- manual conversion of objects to the new version; when manual conversion is necessary, the
- project will provide documentation on the process (for an example, see [v1 conversion
- tips](../api.md))
- - Cluster Reliability: since the feature has e2e tests, enabling the feature via a flag should not
- create new bugs in unrelated features; because the feature is new, it may have minor bugs
- - Support: the project commits to complete the feature, in some form, in a subsequent Stable
- version; typically this will happen within 3 months, but sometimes longer; releases should
- simultaneously support two consecutive versions (e.g.
`v1beta1` and `v1beta2`; or `v1beta2` and - `v1`) for at least one minor release cycle (typically 3 months) so that users have enough time - to upgrade and migrate objects - - Recommended Use Cases: in short-lived testing clusters; in production clusters as part of a - short-lived evaluation of the feature in order to provide feedback + - Completeness: all API operations, CLI commands, and UI support should be +implemented; end-to-end tests complete; the API has had a thorough API review +and is thought to be complete, though use during beta may frequently turn up API +issues not thought of during review + - Upgradeability: the object schema and semantics may change in a later +software release; when this happens, an upgrade path will be documented; in some +cases, objects will be automatically converted to the new version; in other +cases, a manual upgrade may be necessary; a manual upgrade may require downtime +for anything relying on the new feature, and may require manual conversion of +objects to the new version; when manual conversion is necessary, the project +will provide documentation on the process (for an example, see [v1 conversion +tips](../api.md#v1-conversion-tips)) + - Cluster Reliability: since the feature has e2e tests, enabling the feature +via a flag should not create new bugs in unrelated features; because the feature +is new, it may have minor bugs + - Support: the project commits to complete the feature, in some form, in a +subsequent Stable version; typically this will happen within 3 months, but +sometimes longer; releases should simultaneously support two consecutive +versions (e.g. 
`v1beta1` and `v1beta2`; or `v1beta2` and `v1`) for at least one +minor release cycle (typically 3 months) so that users have enough time to +upgrade and migrate objects + - Recommended Use Cases: in short-lived testing clusters; in production +clusters as part of a short-lived evaluation of the feature in order to provide +feedback - Stable level: - Object Versioning: API version `vX` where `X` is an integer (e.g. `v1`) - Availability: in official Kubernetes releases, and enabled by default - Audience: all users - Completeness: same as beta - - Upgradeability: only [strictly compatible](#on-compatibility) changes allowed in subsequent - software releases + - Upgradeability: only [strictly compatible](#on-compatibility) changes +allowed in subsequent software releases - Cluster Reliability: high - - Support: API version will continue to be present for many subsequent software releases; + - Support: API version will continue to be present for many subsequent +software releases; - Recommended Use Cases: any ### Adding Unstable Features to Stable Versions -When adding a feature to an object which is already Stable, the new fields and new behaviors -need to meet the Stable level requirements. If these cannot be met, then the new -field cannot be added to the object. +When adding a feature to an object which is already Stable, the new fields and +new behaviors need to meet the Stable level requirements. If these cannot be +met, then the new field cannot be added to the object. For example, consider the following object: ```go // API v6. type Frobber struct { - Height int `json:"height"` - Param string `json:"param"` + Height int `json:"height"` + Param string `json:"param"` } ``` @@ -658,26 +682,29 @@ A developer is considering adding a new `Width` parameter, like this: ```go // API v6. 
type Frobber struct {
- Height int `json:"height"`
- Width int `json:"height"`
- Param string `json:"param"`
+ Height int `json:"height"`
+ Width int `json:"width"`
+ Param string `json:"param"`
}
```

-However, the new feature is not stable enough to be used in a stable version (`v6`).
-Some reasons for this might include:
+However, the new feature is not stable enough to be used in a stable version
+(`v6`). Some reasons for this might include:

-- the final representation is undecided (e.g. should it be called `Width` or `Breadth`?)
-- the implementation is not stable enough for general use (e.g. the `Area()` routine sometimes overflows.)
+- the final representation is undecided (e.g. should it be called `Width` or
+`Breadth`?)
+- the implementation is not stable enough for general use (e.g. the `Area()`
+routine sometimes overflows).

-The developer cannot add the new field until stability is met. However, sometimes stability
-cannot be met until some users try the new feature, and some users are only able or willing
-to accept a released version of Kubernetes. In that case, the developer has a few options,
-both of which require staging work over several releases.
+The developer cannot add the new field until stability is met. However,
+sometimes stability cannot be met until some users try the new feature, and some
+users are only able or willing to accept a released version of Kubernetes. In
+that case, the developer has a few options, both of which require staging work
+over several releases.
-A preferred option is to first make a release where the new value (`Width` in this example)
-is specified via an annotation, like this:
+A preferred option is to first make a release where the new value (`Width` in
+this example) is specified via an annotation, like this:

```yaml
kind: frobber
@@ -690,9 +717,9 @@ height: 4
param: "green and blue"
```

-This format allows users to specify the new field, but makes it clear
-that they are using a Alpha feature when they do, since the word `alpha`
-is in the annotation key.
+This format allows users to specify the new field, but makes it clear that they
+are using an Alpha feature when they do, since the word `alpha` is in the
+annotation key.

Another option is to introduce a new type with a new `alpha` or `beta` version
designator, like this:

@@ -700,18 +727,19 @@ designator, like this:

```
// API v6alpha2
type Frobber struct {
- Height int `json:"height"`
- Width int `json:"height"`
- Param string `json:"param"`
+ Height int `json:"height"`
+ Width int `json:"width"`
+ Param string `json:"param"`
}
```

-The latter requires that all objects in the same API group as `Frobber` to be replicated in
-the new version, `v6alpha2`. This also requires user to use a new client which uses the
-other version. Therefore, this is not a preferred option.
+The latter requires that all objects in the same API group as `Frobber` be
+replicated in the new version, `v6alpha2`. This also requires users to use a new
+client which uses the other version. Therefore, this is not a preferred option.

A related issue is how a cluster manager can roll back from a new version
-with a new feature, that is already being used by users. See https://github.com/kubernetes/kubernetes/issues/4855.
+with a new feature that is already being used by users. See
+https://github.com/kubernetes/kubernetes/issues/4855.
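As a rough illustration of the annotation-based staging above, a component can read the experimental value from an annotation when present and otherwise fall back to the stable default. The annotation key and helper below are hypothetical, not an actual Kubernetes API:

```go
package main

import (
	"fmt"
	"strconv"
)

// widthFromAnnotations reads a hypothetical alpha annotation for the Frobber
// Width value, falling back to the stable default when the annotation is
// missing or malformed.
func widthFromAnnotations(annotations map[string]string, defaultWidth int) int {
	if raw, ok := annotations["frobber.alpha.example.com/width"]; ok {
		if w, err := strconv.Atoi(raw); err == nil {
			return w
		}
	}
	return defaultWidth
}

func main() {
	withAlpha := map[string]string{"frobber.alpha.example.com/width": "2"}
	fmt.Println(widthFromAnnotations(withAlpha, 1)) // 2: alpha annotation wins
	fmt.Println(widthFromAnnotations(nil, 1))       // 1: fall back to the default
}
```

Once the field graduates into the object schema proper, the annotation path can be dropped, which is why this staging is easier to unwind than a parallel `v6alpha2` API version.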
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/api_changes.md?pixel)]()

diff --git a/automation.md b/automation.md
index 0d25fe3c..2b3f5437 100644
--- a/automation.md
+++ b/automation.md
@@ -36,8 +36,9 @@ Documentation for other releases can be found at

## Overview

-Kubernetes uses a variety of automated tools in an attempt to relieve developers of repetitive, low
-brain power work. This document attempts to describe these processes.
+Kubernetes uses a variety of automated tools in an attempt to relieve developers
+of repetitive, low brain power work. This document attempts to describe these
+processes.

## Submit Queue

@@ -47,8 +48,11 @@ In an effort to

* maintain e2e stability
* load test github's label feature

-We have added an automated [submit-queue](https://github.com/kubernetes/contrib/blob/master/mungegithub/mungers/submit-queue.go) to the
-[github "munger"](https://github.com/kubernetes/contrib/tree/master/mungegithub) for kubernetes.
+We have added an automated
+[submit-queue](https://github.com/kubernetes/contrib/blob/master/mungegithub/mungers/submit-queue.go)
+to the
+[github "munger"](https://github.com/kubernetes/contrib/tree/master/mungegithub)
+for kubernetes.

The submit-queue does the following:

@@ -76,59 +80,76 @@ A PR is considered "ready for merging" if it matches the following:

* it has passed the Jenkins e2e test
* it has the `e2e-not-required` label

-Note that the combined whitelist/committer list is available at [submit-queue.k8s.io](http://submit-queue.k8s.io)
+Note that the combined whitelist/committer list is available at
+[submit-queue.k8s.io](http://submit-queue.k8s.io)

### Merge process

-Merges _only_ occur when the `critical builds` (Jenkins e2e for gce, gke, scalability, upgrade) are passing.
-We're open to including more builds here, let us know...
+Merges _only_ occur when the `critical builds` (Jenkins e2e for gce, gke,
+scalability, upgrade) are passing.
We're open to including more builds here; let
+us know...

-Merges are serialized, so only a single PR is merged at a time, to ensure against races.
+Merges are serialized, so only a single PR is merged at a time, to ensure
+against races.

-If the PR has the `e2e-not-required` label, it is simply merged.
-If the PR does not have this label, e2e tests are re-run, if these new tests pass, the PR is merged.
+If the PR has the `e2e-not-required` label, it is simply merged. If the PR does
+not have this label, e2e tests are re-run; if these new tests pass, the PR is
+merged.

-If e2e flakes or is currently buggy, the PR will not be merged, but it will be re-run on the following
-pass.
+If e2e flakes or is currently buggy, the PR will not be merged, but it will be
+re-run on the following pass.

## Github Munger

-We also run a [github "munger"](https://github.com/kubernetes/contrib/tree/master/mungegithub)
+We also run a
+[github "munger"](https://github.com/kubernetes/contrib/tree/master/mungegithub).

-This runs repeatedly over github pulls and issues and runs modular "mungers" similar to "mungedocs"
+This runs repeatedly over github pulls and issues and runs modular "mungers"
+similar to "mungedocs."

Currently this runs:

- * blunderbuss - Tries to automatically find an owner for a PR without an owner, uses mapping file here:
+ * blunderbuss - Tries to automatically find an owner for a PR without an
+owner, using the mapping file here:
https://github.com/kubernetes/contrib/blob/master/mungegithub/blunderbuss.yml
- * needs-rebase - Adds `needs-rebase` to PRs that aren't currently mergeable, and removes it from those that are.
+ * needs-rebase - Adds `needs-rebase` to PRs that aren't currently mergeable,
+and removes it from those that are.
* size - Adds `size/xs` - `size/xxl` labels to PRs
- * ok-to-test - Adds the `ok-to-test` message to PRs that have an `lgtm` but the e2e-builder would otherwise not test due to whitelist
- * ping-ci - Attempts to ping the ci systems (Travis) if they are missing from a PR.
- * lgtm-after-commit - Removes the `lgtm` label from PRs where there are commits that are newer than the `lgtm` label
+ * ok-to-test - Adds the `ok-to-test` message to PRs that have an `lgtm` but
+the e2e-builder would otherwise not test due to the whitelist
+ * ping-ci - Attempts to ping the ci systems (Travis) if they are missing from
+a PR.
+ * lgtm-after-commit - Removes the `lgtm` label from PRs where there are
+commits that are newer than the `lgtm` label

In the works:

- * issue-detector - machine learning for determining if an issue that has been filed is a `support` issue, `bug` or `feature`
+ * issue-detector - machine learning for determining if an issue that has been
+filed is a `support` issue, `bug` or `feature`

-Please feel free to unleash your creativity on this tool, send us new mungers that you think will help support the Kubernetes development process.
+Please feel free to unleash your creativity on this tool, and send us new mungers
+that you think will help support the Kubernetes development process.

## PR builder

We also run a robotic PR builder that attempts to run e2e tests for each PR.

-Before a PR from an unknown user is run, the PR builder bot (`k8s-bot`) asks to a message from a
-contributor that a PR is "ok to test", the contributor replies with that message. Contributors can also
-add users to the whitelist by replying with the message "add to whitelist" ("please" is optional, but
-remember to treat your robots with kindness...)
+Before a PR from an unknown user is run, the PR builder bot (`k8s-bot`) asks for
+a message from a contributor that a PR is "ok to test"; the contributor replies
+with that message.
Contributors can also add users to the whitelist by replying
+with the message "add to whitelist" ("please" is optional, but remember to treat
+your robots with kindness...)

-If a PR is approved for testing, and tests either haven't run, or need to be re-run, you can ask the
-PR builder to re-run the tests. To do this, reply to the PR with a message that begins with `@k8s-bot test this`, this should trigger a re-build/re-test.
+If a PR is approved for testing, and tests either haven't run, or need to be
+re-run, you can ask the PR builder to re-run the tests. To do this, reply to the
+PR with a message that begins with `@k8s-bot test this`; this should trigger a
+re-build/re-test.

## FAQ:

#### How can I ask my PR to be tested again for Jenkins failures?

-Right now you have to ask a contributor (this may be you!) to re-run the test with "@k8s-bot test this"
+Right now you have to ask a contributor (this may be you!) to re-run the test
+with "@k8s-bot test this".

#### How can I kick Travis to re-test on a failure?

diff --git a/cherry-picks.md b/cherry-picks.md
index 3bc2a3ff..328ebe7c 100644
--- a/cherry-picks.md
+++ b/cherry-picks.md
@@ -40,46 +40,54 @@ depending on the point in the release cycle.

## Propose a Cherry Pick

-1. Cherrypicks are [managed with labels and milestones](pull-requests.md#release-notes)
-1. All label/milestone accounting happens on PRs on master. There's nothing to do on PRs targeted to the release branches.
-1. When you want a PR to be merged to the release branch, make the following label changes to the **master** branch PR:
+1. Cherrypicks are
+[managed with labels and milestones](pull-requests.md#release-notes)
+1. All label/milestone accounting happens on PRs on master. There's nothing to
+do on PRs targeted to the release branches.
+1.
When you want a PR to be merged to the release branch, make the following
+label changes to the **master** branch PR:
* Remove release-note-label-needed
* Add an appropriate release-note-(!label-needed) label
* Add an appropriate milestone
* Add the `cherrypick-candidate` label
* The PR title is the **release note** you want published at release time and
- note that PR titles are mutable and should reflect a release note
- friendly message for any `release-note-*` labeled PRs.
+note that PR titles are mutable and should reflect a release-note-friendly
+message for any `release-note-*` labeled PRs.

### How do cherrypick-candidates make it to the release branch?

1. **BATCHING:** After a branch is first created and before the X.Y.0 release
* Branch owners review the list of `cherrypick-candidate` labeled PRs.
- * PRs batched up and merged to the release branch get a `cherrypick-approved` label and lose the `cherrypick-candidate` label.
- * PRs that won't be merged to the release branch, lose the `cherrypick-candidate` label.
+ * PRs batched up and merged to the release branch get a `cherrypick-approved`
+label and lose the `cherrypick-candidate` label.
+ * PRs that won't be merged to the release branch lose the
+`cherrypick-candidate` label.
1. **INDIVIDUAL CHERRYPICKS:** After the first X.Y.0 on a branch
- * Run the cherry pick script. This example applies a master branch PR #98765 to the remote branch `upstream/release-3.14`:
- `hack/cherry_pick_pull.sh upstream/release-3.14 98765`
+ * Run the cherry pick script. This example applies a master branch PR #98765
+to the remote branch `upstream/release-3.14`:
+`hack/cherry_pick_pull.sh upstream/release-3.14 98765`
* Your cherrypick PR (targeted to the branch) will immediately get the
- `do-not-merge` label. The branch owner will triage PRs targeted to
- the branch and label the ones to be merged by applying the `lgtm`
- label.
+`do-not-merge` label.
The branch owner will triage PRs targeted to +the branch and label the ones to be merged by applying the `lgtm` +label. -There is an [issue](https://github.com/kubernetes/kubernetes/issues/23347) open tracking the tool to automate the batching procedure. +There is an [issue](https://github.com/kubernetes/kubernetes/issues/23347) open +tracking the tool to automate the batching procedure. #### Cherrypicking a doc change If you are cherrypicking a change which adds a doc, then you also need to run `build/versionize-docs.sh` in the release branch to versionize that doc. -Ideally, just running `hack/cherry_pick_pull.sh` should be enough, but we are not there -yet: [#18861](https://github.com/kubernetes/kubernetes/issues/18861) +Ideally, just running `hack/cherry_pick_pull.sh` should be enough, but we are +not there yet: [#18861](https://github.com/kubernetes/kubernetes/issues/18861) -To cherrypick PR 123456 to release-3.14, run the following commands after running `hack/cherry_pick_pull.sh` and before merging the PR: +To cherrypick PR 123456 to release-3.14, run the following commands after +running `hack/cherry_pick_pull.sh` and before merging the PR: ``` $ git checkout -b automated-cherry-pick-of-#123456-upstream-release-3.14 - origin/automated-cherry-pick-of-#123456-upstream-release-3.14 +origin/automated-cherry-pick-of-#123456-upstream-release-3.14 $ ./build/versionize-docs.sh release-3.14 $ git commit -a -m "Running versionize docs" $ git push origin automated-cherry-pick-of-#123456-upstream-release-3.14 @@ -97,9 +105,9 @@ requested - this should not be the norm, but it may happen. See the [cherrypick queue dashboard](http://cherrypick.k8s.io/#/queue) for status of PRs labeled as `cherrypick-candidate`. -[Contributor License Agreements](http://releases.k8s.io/HEAD/CONTRIBUTING.md) is considered implicit -for all code within cherry-pick pull requests, ***unless there is a large -conflict***. 
+[Contributor License Agreements](http://releases.k8s.io/HEAD/CONTRIBUTING.md) are
+considered implicit for all code within cherry-pick pull requests, ***unless
+there is a large conflict***.

diff --git a/client-libraries.md b/client-libraries.md
index a195b383..95a3dfeb 100644
--- a/client-libraries.md
+++ b/client-libraries.md
@@ -40,7 +40,8 @@ Documentation for other releases can be found at

### User Contributed

-*Note: Libraries provided by outside parties are supported by their authors, not the core Kubernetes team*
+*Note: Libraries provided by outside parties are supported by their authors, not
+the core Kubernetes team*

* [Clojure](https://github.com/yanatan16/clj-kubernetes-api)
* [Java (OSGi)](https://bitbucket.org/amdatulabs/amdatu-kubernetes)
-- 
cgit v1.2.3

From f3e75e1aa70561dd8272281a1c078ba6a479e7a4 Mon Sep 17 00:00:00 2001
From: mikebrow
Date: Tue, 19 Apr 2016 14:52:56 -0500
Subject: updates to vagrant.md

Signed-off-by: mikebrow

---
developer-guides/vagrant.md | 388 +++++++++++++++++++++++++-------------------
1 file changed, 219 insertions(+), 169 deletions(-)

diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md
index 43b59c81..64bfa13f 100644
--- a/developer-guides/vagrant.md
+++ b/developer-guides/vagrant.md
@@ -34,45 +34,67 @@ Documentation for other releases can be found at

## Getting started with Vagrant

-Running kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/develop on your local machine (Linux, Mac OS X).
+Running Kubernetes with Vagrant is an easy way to run/test/develop on your
+local machine in an environment using the same setup procedures as when running
+on GCE or AWS cloud providers. This provider is not tested on a per-PR basis, so
+if you experience bugs when testing from HEAD, please open an issue.

### Prerequisites

-1. Install latest version >= 1.6.2 of vagrant from http://www.vagrantup.com/downloads.html
-2. Install one of:
- 1.
The latest version of Virtual Box from https://www.virtualbox.org/wiki/Downloads
- 2. [VMWare Fusion](https://www.vmware.com/products/fusion/) version 5 or greater as well as the appropriate [Vagrant VMWare Fusion provider](https://www.vagrantup.com/vmware)
- 3. [VMWare Workstation](https://www.vmware.com/products/workstation/) version 9 or greater as well as the [Vagrant VMWare Workstation provider](https://www.vagrantup.com/vmware)
- 4. [Parallels Desktop](https://www.parallels.com/products/desktop/) version 9 or greater as well as the [Vagrant Parallels provider](https://parallels.github.io/vagrant-parallels/)
-3. Get or build a [binary release](../../../docs/getting-started-guides/binary_release.md)
+1. Install the latest version (>= 1.8.1) of Vagrant from
+http://www.vagrantup.com/downloads.html
+
+2. Install a virtual machine host. Examples:
+ 1. [Virtual Box](https://www.virtualbox.org/wiki/Downloads)
+ 2. [VMWare Fusion](https://www.vmware.com/products/fusion/) plus
+[Vagrant VMWare Fusion provider](https://www.vagrantup.com/vmware)
+ 3. [Parallels Desktop](https://www.parallels.com/products/desktop/)
+plus
+[Vagrant Parallels provider](https://parallels.github.io/vagrant-parallels/)
+
+3. Get or build a
+[binary release](../../../docs/getting-started-guides/binary_release.md)

### Setup

Setting up a cluster is as simple as running:

-```sh
+```shell
export KUBERNETES_PROVIDER=vagrant
curl -sS https://get.k8s.io | bash
```

-Alternatively, you can download [Kubernetes release](https://github.com/kubernetes/kubernetes/releases) and extract the archive. To start your local cluster, open a shell and run:
+Alternatively, you can download a
+[Kubernetes release](https://github.com/kubernetes/kubernetes/releases) and
+extract the archive.
To start your local cluster, open a shell and run: -```sh +```shell cd kubernetes export KUBERNETES_PROVIDER=vagrant ./cluster/kube-up.sh ``` -The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster management scripts which variant to use. If you forget to set this, the assumption is you are running on Google Compute Engine. +The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster +management scripts which variant to use. If you forget to set this, the +assumption is you are running on Google Compute Engine. -By default, the Vagrant setup will create a single master VM (called kubernetes-master) and one node (called kubernetes-node-1). Each VM will take 1 GB, so make sure you have at least 2GB to 4GB of free memory (plus appropriate free disk space). +By default, the Vagrant setup will create a single master VM (called +kubernetes-master) and one node (called kubernetes-node-1). Each VM will take 1 +GB, so make sure you have at least 2GB to 4GB of free memory (plus appropriate +free disk space). -Vagrant will provision each machine in the cluster with all the necessary components to run Kubernetes. The initial setup can take a few minutes to complete on each machine. +Vagrant will provision each machine in the cluster with all the necessary +components to run Kubernetes. The initial setup can take a few minutes to +complete on each machine. -If you installed more than one Vagrant provider, Kubernetes will usually pick the appropriate one. However, you can override which one Kubernetes will use by setting the [`VAGRANT_DEFAULT_PROVIDER`](https://docs.vagrantup.com/v2/providers/default.html) environment variable: +If you installed more than one Vagrant provider, Kubernetes will usually pick +the appropriate one. 
However, you can override which one Kubernetes will use by +setting the +[`VAGRANT_DEFAULT_PROVIDER`](https://docs.vagrantup.com/v2/providers/default.html) +environment variable: -```sh +```shell export VAGRANT_DEFAULT_PROVIDER=parallels export KUBERNETES_PROVIDER=vagrant ./cluster/kube-up.sh @@ -82,25 +104,26 @@ By default, each VM in the cluster is running Fedora. To access the master or any node: -```sh +```shell vagrant ssh master vagrant ssh node-1 ``` If you are running more than one node, you can access the others by: -```sh +```shell vagrant ssh node-2 vagrant ssh node-3 ``` Each node in the cluster installs the docker daemon and the kubelet. -The master node instantiates the Kubernetes master components as pods on the machine. +The master node instantiates the Kubernetes master components as pods on the +machine. To view the service status and/or logs on the kubernetes-master: -```console +```shell [vagrant@kubernetes-master ~] $ vagrant ssh master [vagrant@kubernetes-master ~] $ sudo su @@ -117,7 +140,7 @@ To view the service status and/or logs on the kubernetes-master: To view the services on any of the nodes: -```console +```shell [vagrant@kubernetes-master ~] $ vagrant ssh node-1 [vagrant@kubernetes-master ~] $ sudo su @@ -130,197 +153,134 @@ To view the services on any of the nodes: ### Interacting with your Kubernetes cluster with Vagrant. -With your Kubernetes cluster up, you can manage the nodes in your cluster with the regular Vagrant commands. +With your Kubernetes cluster up, you can manage the nodes in your cluster with +the regular Vagrant commands. To push updates to new Kubernetes code after making source changes: -```sh +```shell ./cluster/kube-push.sh ``` To stop and then restart the cluster: -```sh +```shell vagrant halt ./cluster/kube-up.sh ``` To destroy the cluster: -```sh +```shell vagrant destroy ``` -Once your Vagrant machines are up and provisioned, the first thing to do is to check that you can use the `kubectl.sh` script. 
+Once your Vagrant machines are up and provisioned, the first thing to do is to +check that you can use the `kubectl.sh` script. You may need to build the binaries first, you can do this with `make` -```console +```shell $ ./cluster/kubectl.sh get nodes - -NAME LABELS STATUS -kubernetes-node-0whl kubernetes.io/hostname=kubernetes-node-0whl Ready -kubernetes-node-4jdf kubernetes.io/hostname=kubernetes-node-4jdf Ready -kubernetes-node-epbe kubernetes.io/hostname=kubernetes-node-epbe Ready -``` - -### Interacting with your Kubernetes cluster with the `kube-*` scripts. - -Alternatively to using the vagrant commands, you can also use the `cluster/kube-*.sh` scripts to interact with the vagrant based provider just like any other hosting platform for kubernetes. - -All of these commands assume you have set `KUBERNETES_PROVIDER` appropriately: - -```sh -export KUBERNETES_PROVIDER=vagrant ``` -Bring up a vagrant cluster - -```sh -./cluster/kube-up.sh -``` - -Destroy the vagrant cluster - -```sh -./cluster/kube-down.sh -``` - -Update the vagrant cluster after you make changes (only works when building your own releases locally): - -```sh -./cluster/kube-push.sh -``` +### Authenticating with your master -Interact with the cluster +When using the vagrant provider in Kubernetes, the `cluster/kubectl.sh` script +will cache your credentials in a `~/.kubernetes_vagrant_auth` file so you will +not be prompted for them in the future. -```sh -./cluster/kubectl.sh +```shell +cat ~/.kubernetes_vagrant_auth ``` -### Authenticating with your master - -When using the vagrant provider in Kubernetes, the `cluster/kubectl.sh` script will cache your credentials in a `~/.kubernetes_vagrant_auth` file so you will not be prompted for them in the future. 
- -```console -$ cat ~/.kubernetes_vagrant_auth +```json { "User": "vagrant", - "Password": "vagrant" + "Password": "vagrant", "CAFile": "/home/k8s_user/.kubernetes.vagrant.ca.crt", "CertFile": "/home/k8s_user/.kubecfg.vagrant.crt", "KeyFile": "/home/k8s_user/.kubecfg.vagrant.key" } ``` -You should now be set to use the `cluster/kubectl.sh` script. For example try to list the nodes that you have started with: +You should now be set to use the `cluster/kubectl.sh` script. For example try to +list the nodes that you have started with: -```sh +```shell ./cluster/kubectl.sh get nodes ``` ### Running containers -Your cluster is running, you can list the nodes in your cluster: - -```console -$ ./cluster/kubectl.sh get nodes - -NAME LABELS STATUS -kubernetes-node-0whl kubernetes.io/hostname=kubernetes-node-0whl Ready -kubernetes-node-4jdf kubernetes.io/hostname=kubernetes-node-4jdf Ready -kubernetes-node-epbe kubernetes.io/hostname=kubernetes-node-epbe Ready -``` - -Now start running some containers! - -You can now use any of the cluster/kube-*.sh commands to interact with your VM machines. -Before starting a container there will be no pods, services and replication controllers. 
-
-```console
-$ cluster/kubectl.sh get pods
-NAME READY STATUS RESTARTS AGE
-
-$ cluster/kubectl.sh get services
-NAME LABELS SELECTOR IP(S) PORT(S)
+You can use `cluster/kube-*.sh` commands to interact with your VM machines:
-$ cluster/kubectl.sh get rc
-CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
-```
-
-Start a container running nginx with a replication controller and three replicas
-
-```console
-$ cluster/kubectl.sh run my-nginx --image=nginx --replicas=3 --port=80
-CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
-my-nginx my-nginx nginx run=my-nginx 3
-```
+```shell
+$ ./cluster/kubectl.sh get pods
+NAME READY STATUS RESTARTS AGE
-When listing the pods, you will see that three containers have been started and are in Waiting state:
+$ ./cluster/kubectl.sh get services
+NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
-```console
-$ cluster/kubectl.sh get pods
-NAME READY STATUS RESTARTS AGE
-my-nginx-389da 1/1 Waiting 0 33s
-my-nginx-kqdjk 1/1 Waiting 0 33s
-my-nginx-nyj3x 1/1 Waiting 0 33s
+$ ./cluster/kubectl.sh get deployments
+CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
 ```
-You need to wait for the provisioning to complete, you can monitor the nodes by doing:
+To start a container running nginx with a Deployment and three replicas:
-```console
-$ sudo salt '*node-1' cmd.run 'docker images'
-kubernetes-node-1:
- REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
- 96864a7d2df3 26 hours ago 204.4 MB
- kubernetes/pause latest 6c4579af347b 8 weeks ago 239.8 kB
+```shell
+$ ./cluster/kubectl.sh run my-nginx --image=nginx --replicas=3 --port=80
 ```
-Once the docker image for nginx has been downloaded, the container will start and you can list it:
+When listing the pods, you will see that three containers have been started and
+may still be in a waiting (`ContainerCreating`) state:
-```console
-$ sudo salt '*node-1' cmd.run 'docker ps'
-kubernetes-node-1:
- CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
- dbe79bf6e25b nginx:latest "nginx" 21 seconds ago Up 19 seconds
k8s--mynginx.8c5b8a3a--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--fcfa837f - fa0e29c94501 kubernetes/pause:latest "/pause" 8 minutes ago Up 8 minutes 0.0.0.0:8080->80/tcp k8s--net.a90e7ce4--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--baf5b21b +```shell +$ ./cluster/kubectl.sh get pods +NAME READY STATUS RESTARTS AGE +my-nginx-3800858182-4e6pe 0/1 ContainerCreating 0 3s +my-nginx-3800858182-8ko0s 1/1 Running 0 3s +my-nginx-3800858182-seu3u 0/1 ContainerCreating 0 3s ``` -Going back to listing the pods, services and replicationcontrollers, you now have: +When the provisioning is complete: -```console -$ cluster/kubectl.sh get pods -NAME READY STATUS RESTARTS AGE -my-nginx-389da 1/1 Running 0 33s -my-nginx-kqdjk 1/1 Running 0 33s -my-nginx-nyj3x 1/1 Running 0 33s +```shell +$ ./cluster/kubectl.sh get pods +NAME READY STATUS RESTARTS AGE +my-nginx-3800858182-4e6pe 1/1 Running 0 40s +my-nginx-3800858182-8ko0s 1/1 Running 0 40s +my-nginx-3800858182-seu3u 1/1 Running 0 40s -$ cluster/kubectl.sh get services -NAME LABELS SELECTOR IP(S) PORT(S) +$ ./cluster/kubectl.sh get services +NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE -$ cluster/kubectl.sh get rc -NAME IMAGE(S) SELECTOR REPLICAS -my-nginx nginx run=my-nginx 3 +$ ./cluster/kubectl.sh get deployments +NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE +my-nginx 3 3 3 3 1m ``` -We did not start any services, hence there are none listed. But we see three replicas displayed properly. -Check the [guestbook](../../../examples/guestbook/README.md) application to learn how to create a service. -You can already play with scaling the replicas with: +We did not start any Services, hence there are none listed. But we see three +replicas displayed properly. Check the +[guestbook](https://github.com/kubernetes/kubernetes/tree/%7B%7Bpage.githubbranch%7D%7D/examples/guestbook) +application to learn how to create a Service. 
You can already play with scaling
+the replicas with:

-```console
-$ ./cluster/kubectl.sh scale rc my-nginx --replicas=2
+```shell
+$ ./cluster/kubectl.sh scale deployments my-nginx --replicas=2

 $ ./cluster/kubectl.sh get pods
-NAME READY STATUS RESTARTS AGE
-my-nginx-kqdjk 1/1 Running 0 13m
-my-nginx-nyj3x 1/1 Running 0 13m
+NAME READY STATUS RESTARTS AGE
+my-nginx-3800858182-4e6pe 1/1 Running 0 2m
+my-nginx-3800858182-8ko0s 1/1 Running 0 2m
 ```

 Congratulations!

 ### Testing

-The following will run all of the end-to-end testing scenarios assuming you set your environment in `cluster/kube-env.sh`:
+The following will run all of the end-to-end testing scenarios assuming you set
+your environment in `cluster/kube-env.sh`:

-```sh
+```shell
 NUM_NODES=3 go run hack/e2e.go -v --build --up --test --down
 ```

@@ -328,27 +288,74 @@ NUM_NODES=3 go run hack/e2e.go -v --build --up --test --down

 #### I keep downloading the same (large) box all the time!

-By default the Vagrantfile will download the box from S3. You can change this (and cache the box locally) by providing a name and an alternate URL when calling `kube-up.sh`
+By default the Vagrantfile will download the box from S3. You can change this
+(and cache the box locally) by providing a name and an alternate URL when
+calling `kube-up.sh`

-```sh
+```shell
 export KUBERNETES_BOX_NAME=choose_your_own_name_for_your_kuber_box
 export KUBERNETES_BOX_URL=path_of_your_kuber_box
 export KUBERNETES_PROVIDER=vagrant
 ./cluster/kube-up.sh
 ```

+#### I am getting timeouts when trying to curl the master from my host!
+
+During provisioning of the cluster, you may see the following message:
+
+```shell
+Validating node-1
+.............
+Waiting for each node to be registered with cloud provider
+error: couldn't read version from server: Get https://10.245.1.2/api: dial tcp 10.245.1.2:443: i/o timeout
+```
+
+Some users have reported VPNs may prevent traffic from being routed from the host
+machine into the virtual machine network.
+ +To debug, first verify that the master is binding to the proper IP address: + +``` +$ vagrant ssh master +$ ifconfig | grep eth1 -C 2 +eth1: flags=4163 mtu 1500 inet 10.245.1.2 netmask + 255.255.255.0 broadcast 10.245.1.255 +``` + +Then verify that your host machine has a network connection to a bridge that can +serve that address: + +```shell +$ ifconfig | grep 10.245.1 -C 2 + +vboxnet5: flags=4163 mtu 1500 + inet 10.245.1.1 netmask 255.255.255.0 broadcast 10.245.1.255 + inet6 fe80::800:27ff:fe00:5 prefixlen 64 scopeid 0x20 + ether 0a:00:27:00:00:05 txqueuelen 1000 (Ethernet) +``` + +If you do not see a response on your host machine, you will most likely need to +connect your host to the virtual network created by the virtualization provider. + +If you do see a network, but are still unable to ping the machine, check if your +VPN is blocking the request. + #### I just created the cluster, but I am getting authorization errors! -You probably have an incorrect ~/.kubernetes_vagrant_auth file for the cluster you are attempting to contact. +You probably have an incorrect ~/.kubernetes_vagrant_auth file for the cluster +you are attempting to contact. -```sh +```shell rm ~/.kubernetes_vagrant_auth ``` After using kubectl.sh make sure that the correct credentials are set: -```console -$ cat ~/.kubernetes_vagrant_auth +```shell +cat ~/.kubernetes_vagrant_auth +``` + +```json { "User": "vagrant", "Password": "vagrant" @@ -357,45 +364,88 @@ $ cat ~/.kubernetes_vagrant_auth #### I just created the cluster, but I do not see my container running! -If this is your first time creating the cluster, the kubelet on each node schedules a number of docker pull requests to fetch prerequisite images. This can take some time and as a result may delay your initial pod getting provisioned. - -#### I changed Kubernetes code, but it's not running! 
+If this is your first time creating the cluster, the kubelet on each node
+schedules a number of docker pull requests to fetch prerequisite images. This
+can take some time and as a result may delay your initial pod getting
+provisioned.

-Are you sure there was no build error? After running `$ vagrant provision`, scroll up and ensure that each Salt state was completed successfully on each box in the cluster.
-It's very likely you see a build error due to an error in your source files!

+#### I have Vagrant up but the nodes won't validate!

-#### I have brought Vagrant up but the nodes won't validate!
-
-Are you sure you built a release first? Did you install `net-tools`? For more clues, login to one of the nodes (`vagrant ssh node-1`) and inspect the salt minion log (`sudo cat /var/log/salt/minion`).
+Log on to one of the nodes (`vagrant ssh node-1`) and inspect the salt minion
+log (`sudo cat /var/log/salt/minion`).

 #### I want to change the number of nodes!

-You can control the number of nodes that are instantiated via the environment variable `NUM_NODES` on your host machine. If you plan to work with replicas, we strongly encourage you to work with enough nodes to satisfy your largest intended replica size. If you do not plan to work with replicas, you can save some system resources by running with a single node. You do this, by setting `NUM_NODES` to 1 like so:
+You can control the number of nodes that are instantiated via the environment
+variable `NUM_NODES` on your host machine. If you plan to work with replicas, we
+strongly encourage you to work with enough nodes to satisfy your largest
+intended replica size. If you do not plan to work with replicas, you can save
+some system resources by running with a single node. You do this by setting
+`NUM_NODES` to 1 like so:

-```sh
+```shell
 export NUM_NODES=1
 ```

 #### I want my VMs to have more memory!

-You can control the memory allotted to virtual machines with the `KUBERNETES_MEMORY` environment variable.
-Just set it to the number of megabytes you would like the machines to have. For example:
+You can control the memory allotted to virtual machines with the
+`KUBERNETES_MEMORY` environment variable. Just set it to the number of megabytes
+you would like the machines to have. For example:

-```sh
+```shell
 export KUBERNETES_MEMORY=2048
 ```

-If you need more granular control, you can set the amount of memory for the master and nodes independently. For example:
+If you need more granular control, you can set the amount of memory for the
+master and nodes independently. For example:

-```sh
+```shell
 export KUBERNETES_MASTER_MEMORY=1536
 export KUBERNETES_NODE_MEMORY=2048
 ```

+#### I want to set proxy settings for my Kubernetes cluster bootstrapping!
+
+If you are behind a proxy, you need to install the Vagrant proxy plugin and set
+the proxy settings:
+
+```shell
+vagrant plugin install vagrant-proxyconf
+export KUBERNETES_HTTP_PROXY=http://username:password@proxyaddr:proxyport
+export KUBERNETES_HTTPS_PROXY=https://username:password@proxyaddr:proxyport
+```
+
+You can also specify addresses that bypass the proxy, for example:
+
+```shell
+export KUBERNETES_NO_PROXY=127.0.0.1
+```
+
+If you are using sudo to build Kubernetes, use the `-E` flag to pass in the
+environment variables. For example, if running `make quick-release`, use:
+
+```shell
+sudo -E make quick-release
+```
+
 #### I ran vagrant suspend and nothing works!

-`vagrant suspend` seems to mess up the network. It's not supported at this time.
+`vagrant suspend` seems to mess up the network. It's not supported at this time.
+
+#### I want vagrant to sync folders via NFS!
+
+You can ensure that vagrant uses NFS to sync folders with virtual machines by
+setting the KUBERNETES_VAGRANT_USE_NFS environment variable to 'true'. NFS is
+faster than VirtualBox or VMware's 'shared folders' and does not require guest
+additions.
See the +[vagrant docs](http://docs.vagrantup.com/v2/synced-folders/nfs.html) for details +on configuring nfs on the host. This setting will have no effect on the libvirt +provider, which uses nfs by default. For example: + +```shell +export KUBERNETES_VAGRANT_USE_NFS=true +``` [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/developer-guides/vagrant.md?pixel)]() -- cgit v1.2.3 From b2a0bc70116a62b046542b20f0a6b594ba009851 Mon Sep 17 00:00:00 2001 From: Clayton Coleman Date: Fri, 22 Apr 2016 11:48:58 -0400 Subject: Protobuf doc changes --- adding-an-APIGroup.md | 5 +++++ api-conventions.md | 8 +++++--- api_changes.md | 17 +++++++++++++++++ 3 files changed, 27 insertions(+), 3 deletions(-) diff --git a/adding-an-APIGroup.md b/adding-an-APIGroup.md index dec5d3f0..2732ffa5 100644 --- a/adding-an-APIGroup.md +++ b/adding-an-APIGroup.md @@ -64,6 +64,11 @@ Step 2 and 3 are mechanical, we plan on autogenerate these using the cmd/libs/go 1. Touch types.generated.go in pkg/apis/``{/, ``}; 2. Run hack/update-codecgen.sh. +3. Generate protobuf objects: + + 1. Add your group to `cmd/libs/go2idl/go-to-protobuf/protobuf/cmd.go` to `New()` in the `Packages` field + 2. Run hack/update-generated-protobuf.sh + ### Client (optional): We are overhauling pkg/client, so this section might be outdated; see [#15730](https://github.com/kubernetes/kubernetes/pull/15730) for how the client package might evolve. Currently, to add your group to the client package, you need to diff --git a/api-conventions.md b/api-conventions.md index 343800af..10dad772 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -34,7 +34,7 @@ Documentation for other releases can be found at API Conventions =============== -Updated: 10/8/2015 +Updated: 4/22/2016 *This document is oriented at users who want a deeper understanding of the Kubernetes API structure, and developers wanting to extend the Kubernetes API. 
An introduction to @@ -497,6 +497,8 @@ resourceVersion may be used as a precondition for other operations (e.g., GET, D APIs may return alternative representations of any resource in response to an Accept header or under alternative endpoints, but the default serialization for input and output of API responses MUST be JSON. +Protobuf serialization of API objects are currently **EXPERIMENTAL** and will change without notice. + All dates should be serialized as RFC3339 strings. ## Units @@ -617,13 +619,13 @@ $ curl -v -k -H "Authorization: Bearer WhCDvq4VPpYhrcfmF6ei7V9qlbqTubUc" https:/ > Host: 10.240.122.184 > Accept: */* > Authorization: Bearer WhCDvq4VPpYhrcfmF6ei7V9qlbqTubUc -> +> < HTTP/1.1 404 Not Found < Content-Type: application/json < Date: Wed, 20 May 2015 18:10:42 GMT < Content-Length: 232 -< +< { "kind": "Status", "apiVersion": "v1", diff --git a/api_changes.md b/api_changes.md index 987d5576..703b1743 100644 --- a/api_changes.md +++ b/api_changes.md @@ -51,6 +51,7 @@ found at [API Conventions](api-conventions.md). - [Edit types.go](#edit-typesgo) - [Edit validation.go](#edit-validationgo) - [Edit version conversions](#edit-version-conversions) + - [Generate protobuf objects](#generate-protobuf-objects) - [Edit json (un)marshaling code](#edit-json-unmarshaling-code) - [Making a new API Group](#making-a-new-api-group) - [Update the fuzzer](#update-the-fuzzer) @@ -472,6 +473,22 @@ generator to create it from scratch. Unsurprisingly, adding manually written conversion also requires you to add tests to `pkg/api//conversion_test.go`. +## Generate protobuf objects + +For any core API object, we also need to generate the Protobuf IDL and marshallers. 
+That generation is done with
+
+```sh
+hack/update-generated-protobuf.sh
+```
+
+The vast majority of objects will not need any consideration when converting
+to protobuf, but be aware that if you depend on a Golang type in the standard
+library there may be additional work required, although in practice we typically
+use our own equivalents for JSON serialization. The `pkg/api/serialization_test.go`
+will verify that your protobuf serialization preserves all fields - be sure to
+run it several times to ensure there are no incompletely calculated fields.
+
 ## Edit json (un)marshaling code

 We are auto-generating code for marshaling and unmarshaling json representation
-- cgit v1.2.3
From 469f0872126bcc5e9e5291d292feaa9d65f7856c Mon Sep 17 00:00:00 2001
From: Chao Xu
Date: Thu, 21 Apr 2016 16:35:33 -0700
Subject: move pods.go to pods_test.go

---
 testing.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/testing.md b/testing.md
index dc6a8bd7..25d955dc 100644
--- a/testing.md
+++ b/testing.md
@@ -141,7 +141,7 @@ is [table driven testing](https://github.com/golang/go/wiki/TableDrivenTests)
   - Example: [TestNamespaceAuthorization](../../test/integration/auth_test.go)
 * Integration tests must run in parallel
   - Each test should create its own master, httpserver and config.
-  - Example: [TestPodUpdateActiveDeadlineSeconds](../../test/integration/pods.go)
+  - Example: [TestPodUpdateActiveDeadlineSeconds](../../test/integration/pods_test.go)
 * See [coding conventions](coding-conventions.md).
### Install etcd dependency -- cgit v1.2.3 From e1d3691293c2a3458e56379641cd0b704efca512 Mon Sep 17 00:00:00 2001 From: Morgan Bauer Date: Thu, 28 Apr 2016 18:41:45 -0700 Subject: more explicit requirements for pre-commit hook --- development.md | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/development.md b/development.md index 53706cad..415bb490 100644 --- a/development.md +++ b/development.md @@ -116,8 +116,11 @@ git remote set-url --push upstream no_push ### Committing changes to your fork -Before committing any changes, please link/copy these pre-commit hooks into your .git -directory. This will keep you from accidentally committing non-gofmt'd Go code. +Before committing any changes, please link/copy the pre-commit hook +into your .git directory. This will keep you from accidentally +committing non-gofmt'd Go code. In addition this hook will do a build. + +The hook requires both Godep and etcd on your `PATH`. ```sh cd kubernetes/.git/hooks/ -- cgit v1.2.3 From 223c20ecd2741be4760c5a19e283d524b6fd67ef Mon Sep 17 00:00:00 2001 From: Dan Lorenc Date: Wed, 27 Apr 2016 15:29:11 -0700 Subject: Add missing "--test" flag to conformance test instructions. 
--- e2e-tests.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/e2e-tests.md b/e2e-tests.md index 175c323b..e8dfce89 100644 --- a/e2e-tests.md +++ b/e2e-tests.md @@ -224,7 +224,7 @@ export KUBECONFIG=/path/to/kubeconfig export KUBERNETES_CONFORMANCE_TEST=y # run all conformance tests -go run hack/e2e.go -v --test_args="--ginkgo.focus=\[Conformance\]" +go run hack/e2e.go -v --test --test_args="--ginkgo.focus=\[Conformance\]" # run all parallel-safe conformance tests in parallel GINKGO_PARALLEL=y go run hack/e2e.go --v --test --test_args="--ginkgo.focus=\[Conformance\] --ginkgo.skip=\[Serial\]" -- cgit v1.2.3 From ce70f0b829ba01bc51800961484c095f952e9eed Mon Sep 17 00:00:00 2001 From: Janet Kuo Date: Tue, 3 May 2016 10:23:25 -0700 Subject: Update testing convention docs --- coding-conventions.md | 30 ++++++++++++++++++++++++++---- e2e-tests.md | 33 +++++++++++++++++++++++++++++++++ testing.md | 19 +++++++++++++++++++ 3 files changed, 78 insertions(+), 4 deletions(-) diff --git a/coding-conventions.md b/coding-conventions.md index 2603319c..ca4e8431 100644 --- a/coding-conventions.md +++ b/coding-conventions.md @@ -31,7 +31,24 @@ Documentation for other releases can be found at -Code conventions + +# Coding Conventions + +Updated: 5/3/2016 + +**Table of Contents** + + +- [Coding Conventions](#coding-conventions) + - [Code conventions](#code-conventions) + - [Testing conventions](#testing-conventions) + - [Directory and file conventions](#directory-and-file-conventions) + - [Coding advice](#coding-advice) + + + +## Code conventions + - Bash - https://google-styleguide.googlecode.com/svn/trunk/shell.xml - Ensure that build, release, test, and cluster-management scripts run on OS X @@ -58,15 +75,19 @@ Code conventions - [Kubectl conventions](kubectl-conventions.md) - [Logging conventions](logging.md) -Testing conventions +## Testing conventions + - All new packages and most new significant functionality must come with unit tests - Table-driven tests 
are preferred for testing multiple scenarios/inputs; for example, see [TestNamespaceAuthorization](../../test/integration/auth_test.go)
 - Significant features should come with integration (test/integration) and/or [end-to-end (test/e2e) tests](e2e-tests.md)
 - Including new kubectl commands and major features of existing commands
 - Unit tests must pass on OS X and Windows platforms - if you use Linux specific features, your test case must either be skipped on windows or compiled out (skipped is better when running Linux specific commands, compiled out is required when your code does not compile on Windows).
+ - Avoid relying on Docker Hub (e.g. pulling images from Docker Hub). Use gcr.io instead.
+ - Avoid waiting a short amount of time (or not waiting at all) and expecting an asynchronous thing to happen (e.g. waiting for 1 second and expecting a Pod to be running). Wait and retry instead.
 - See the [testing guide](testing.md) for additional testing advice.

-Directory and file conventions
+## Directory and file conventions
+
 - Avoid package sprawl. Find an appropriate subdirectory for new packages. (See [#4851](http://issues.k8s.io/4851) for discussion.)
 - Libraries with no more appropriate home belong in new package subdirectories of pkg/util
 - Avoid general utility packages. Packages called "util" are suspect. Instead, derive a name that describes your desired function. For example, the utility functions dealing with waiting for operations are in the "wait" package and include functionality like Poll.
So the full name is wait.Poll @@ -85,7 +106,8 @@ Directory and file conventions - Third-party code must include licenses - This includes modified third-party code and excerpts, as well -Coding advice +## Coding advice + - Go - [Go landmines](https://gist.github.com/lavalamp/4bd23295a9f32706a48f) diff --git a/e2e-tests.md b/e2e-tests.md index 175c323b..682c1980 100644 --- a/e2e-tests.md +++ b/e2e-tests.md @@ -34,6 +34,35 @@ Documentation for other releases can be found at # End-to-End Testing in Kubernetes +Updated: 5/3/2016 + +**Table of Contents** + + +- [End-to-End Testing in Kubernetes](#end-to-end-testing-in-kubernetes) + - [Overview](#overview) + - [Building and Running the Tests](#building-and-running-the-tests) + - [Cleaning up](#cleaning-up) + - [Advanced testing](#advanced-testing) + - [Bringing up a cluster for testing](#bringing-up-a-cluster-for-testing) + - [Debugging clusters](#debugging-clusters) + - [Local clusters](#local-clusters) + - [Testing against local clusters](#testing-against-local-clusters) + - [Kinds of tests](#kinds-of-tests) + - [Conformance tests](#conformance-tests) + - [Defining Conformance Subset](#defining-conformance-subset) + - [Continuous Integration](#continuous-integration) + - [What is CI?](#what-is-ci) + - [What runs in CI?](#what-runs-in-ci) + - [Non-default tests](#non-default-tests) + - [The PR-builder](#the-pr-builder) + - [Adding a test to CI](#adding-a-test-to-ci) + - [Moving a test out of CI](#moving-a-test-out-of-ci) + - [Performance Evaluation](#performance-evaluation) + - [One More Thing](#one-more-thing) + + + ## Overview End-to-end (e2e) tests for Kubernetes provide a mechanism to test end-to-end behavior of the system, and is the last signal to ensure end user operations match developer specifications. 
Although unit and integration tests should ideally provide a good signal, the reality is in a distributed system like Kubernetes it is not uncommon that a minor change may pass all unit and integration tests, but cause unforeseen changes at the system level. e2e testing is very costly, both in time to run tests and difficulty debugging, though: it takes a long time to build, deploy, and exercise a cluster. Thus, the primary objectives of the e2e tests are to ensure a consistent and reliable behavior of the kubernetes code base, and to catch hard-to-test bugs before users do, when unit and integration tests are insufficient. @@ -318,6 +347,10 @@ job: { Once prometheus is scraping the kubernetes endpoints, that data can then be plotted using promdash, and alerts can be created against the assortment of metrics that kubernetes provides. +## One More Thing + +You should also know the [testing conventions](coding-conventions.md#testing-conventions). + **HAPPY TESTING!** diff --git a/testing.md b/testing.md index 25d955dc..e415e442 100644 --- a/testing.md +++ b/testing.md @@ -29,6 +29,25 @@ Documentation for other releases can be found at # Testing guide +Updated: 5/3/2016 + +**Table of Contents** + + +- [Testing guide](#testing-guide) + - [Unit tests](#unit-tests) + - [Run all unit tests](#run-all-unit-tests) + - [Run some unit tests](#run-some-unit-tests) + - [Stress running unit tests](#stress-running-unit-tests) + - [Unit test coverage](#unit-test-coverage) + - [Benchmark unit tests](#benchmark-unit-tests) + - [Integration tests](#integration-tests) + - [Install etcd dependency](#install-etcd-dependency) + - [Run integration tests](#run-integration-tests) + - [End-to-End tests](#end-to-end-tests) + + + This assumes you already read the [development guide](development.md) to install go, godeps, and configure your git client. 
-- cgit v1.2.3 From 858de9f09c8764d2f2823ea8325db6ec95da0690 Mon Sep 17 00:00:00 2001 From: David McMahon Date: Mon, 2 May 2016 11:22:03 -0700 Subject: Update docs to describe new PR release-note block parsing. --- cherry-picks.md | 9 ++++++--- pull-requests.md | 10 +++++++--- 2 files changed, 13 insertions(+), 6 deletions(-) diff --git a/cherry-picks.md b/cherry-picks.md index 328ebe7c..81b8cd47 100644 --- a/cherry-picks.md +++ b/cherry-picks.md @@ -50,9 +50,12 @@ label changes to the **master** branch PR: * Add an appropriate release-note-(!label-needed) label * Add an appropriate milestone * Add the `cherrypick-candidate` label - * The PR title is the **release note** you want published at release time and -note that PR titles are mutable and should reflect a release note -friendly message for any `release-note-*` labeled PRs. +1. `release-note` labeled PRs generate a release note using the PR title by + default OR the release-note block in the PR template if filled in. + * See the [PR template](../../.github/PULL_REQUEST_TEMPLATE.md) for more + details. + * PR titles and body comments are mutable and can be modified at any time + prior to the release to reflect a release note friendly message. ### How do cherrypick-candidates make it to the release branch? diff --git a/pull-requests.md b/pull-requests.md index a5aeac76..64a1c2c6 100644 --- a/pull-requests.md +++ b/pull-requests.md @@ -75,14 +75,18 @@ The following will save time for both you and your reviewer: ## Release Notes This section applies only to pull requests on the master branch. +For cherry-pick PRs, see the [Cherrypick instructions](cherry-picks.md) 1. All pull requests are initiated with a `release-note-label-needed` label. 1. For a PR to be ready to merge, the `release-note-label-needed` label must be removed and one of the other `release-note-*` labels must be added. 1. `release-note-none` is a valid option if the PR does not need to be mentioned at release time. -1. 
The PR title is the **release note** you want published at release time. - * NOTE: PR titles are mutable and should reflect a release note friendly - message for any `release-note-*` labeled PRs. +1. `release-note` labeled PRs generate a release note using the PR title by + default OR the release-note block in the PR template if filled in. + * See the [PR template](../../.github/PULL_REQUEST_TEMPLATE.md) for more + details. + * PR titles and body comments are mutable and can be modified at any time + prior to the release to reflect a release note friendly message. The only exception to these rules is when a PR is not a cherry-pick and is targeted directly to the non-master branch. In this case, a `release-note-*` -- cgit v1.2.3 From 59f159feb47c309c45beff80c45521f813b51c3a Mon Sep 17 00:00:00 2001 From: Mike Brown Date: Wed, 4 May 2016 16:14:05 -0500 Subject: devel/ tree 80col wrap and other minor edits Signed-off-by: Mike Brown --- node-performance-testing.md | 89 +++++++++++++------------ on-call-build-cop.md | 156 ++++++++++++++++++++++++++++++++------------ on-call-rotations.md | 52 ++++++++++----- on-call-user-support.md | 83 ++++++++++++++++------- 4 files changed, 260 insertions(+), 120 deletions(-) diff --git a/node-performance-testing.md b/node-performance-testing.md index ae8789a7..54c15dee 100644 --- a/node-performance-testing.md +++ b/node-performance-testing.md @@ -34,69 +34,76 @@ Documentation for other releases can be found at # Measuring Node Performance -This document outlines the issues and pitfalls of measuring Node performance, as well as the tools -available. +This document outlines the issues and pitfalls of measuring Node performance, as +well as the tools available. ## Cluster Set-up -There are lots of factors which can affect node performance numbers, so care must be taken in -setting up the cluster to make the intended measurements. 
In addition to taking the following steps -into consideration, it is important to document precisely which setup was used. For example, -performance can vary wildly from commit-to-commit, so it is very important to **document which commit +There are lots of factors which can affect node performance numbers, so care +must be taken in setting up the cluster to make the intended measurements. In +addition to taking the following steps into consideration, it is important to +document precisely which setup was used. For example, performance can vary +wildly from commit-to-commit, so it is very important to **document which commit or version** of Kubernetes was used, which Docker version was used, etc. ### Addon pods -Be aware of which addon pods are running on which nodes. By default Kubernetes runs 8 addon pods, -plus another 2 per node (`fluentd-elasticsearch` and `kube-proxy`) in the `kube-system` -namespace. The addon pods can be disabled for more consistent results, but doing so can also have -performance implications. +Be aware of which addon pods are running on which nodes. By default Kubernetes +runs 8 addon pods, plus another 2 per node (`fluentd-elasticsearch` and +`kube-proxy`) in the `kube-system` namespace. The addon pods can be disabled for +more consistent results, but doing so can also have performance implications. -For example, Heapster polls each node regularly to collect stats data. Disabling Heapster will hide -the performance cost of serving those stats in the Kubelet. +For example, Heapster polls each node regularly to collect stats data. Disabling +Heapster will hide the performance cost of serving those stats in the Kubelet. #### Disabling Add-ons -Disabling addons is simple. Just ssh into the Kubernetes master and move the addon from -`/etc/kubernetes/addons/` to a backup location. More details [here](../../cluster/addons/). +Disabling addons is simple. 
Just ssh into the Kubernetes master and move the +addon from `/etc/kubernetes/addons/` to a backup location. More details +[here](../../cluster/addons/). ### Which / how many pods? -Performance will vary a lot between a node with 0 pods and a node with 100 pods. In many cases -you'll want to make measurements with several different amounts of pods. On a single node cluster -scaling a replication controller makes this easy, just make sure the system reaches a steady-state -before starting the measurement. E.g. `kubectl scale replicationcontroller pause --replicas=100` +Performance will vary a lot between a node with 0 pods and a node with 100 pods. +In many cases you'll want to make measurements with several different amounts of +pods. On a single node cluster scaling a replication controller makes this easy, +just make sure the system reaches a steady-state before starting the +measurement. E.g. `kubectl scale replicationcontroller pause --replicas=100` -In most cases pause pods will yield the most consistent measurements since the system will not be -affected by pod load. However, in some special cases Kubernetes has been tuned to optimize pods that -are not doing anything, such as the cAdvisor housekeeping (stats gathering). In these cases, -performing a very light task (such as a simple network ping) can make a difference. +In most cases pause pods will yield the most consistent measurements since the +system will not be affected by pod load. However, in some special cases +Kubernetes has been tuned to optimize pods that are not doing anything, such as +the cAdvisor housekeeping (stats gathering). In these cases, performing a very +light task (such as a simple network ping) can make a difference. -Finally, you should also consider which features yours pods should be using. For example, if you -want to measure performance with probing, you should obviously use pods with liveness or readiness -probes configured. Likewise for volumes, number of containers, etc. 
+Finally, you should also consider which features your pods should be using. For
+example, if you want to measure performance with probing, you should obviously
+use pods with liveness or readiness probes configured. Likewise for volumes,
+number of containers, etc.

### Other Tips

-**Number of nodes** - On the one hand, it can be easier to manage logs, pods, environment etc. with
- a single node to worry about. On the other hand, having multiple nodes will let you gather more
- data in parallel for more robust sampling.
+**Number of nodes** - On the one hand, it can be easier to manage logs, pods,
+environment etc. with a single node to worry about. On the other hand, having
+multiple nodes will let you gather more data in parallel for more robust
+sampling.

## E2E Performance Test

-There is an end-to-end test for collecting overall resource usage of node components:
-[kubelet_perf.go](../../test/e2e/kubelet_perf.go). To
-run the test, simply make sure you have an e2e cluster running (`go run hack/e2e.go -up`) and
-[set up](#cluster-set-up) correctly.
+There is an end-to-end test for collecting overall resource usage of node
+components: [kubelet_perf.go](../../test/e2e/kubelet_perf.go). To
+run the test, simply make sure you have an e2e cluster running (`go run
+hack/e2e.go -up`) and [set up](#cluster-set-up) correctly.

Run the test with `go run hack/e2e.go -v -test
---test_args="--ginkgo.focus=resource\susage\stracking"`. You may also wish to customise the number of
-pods or other parameters of the test (remember to rerun `make WHAT=test/e2e/e2e.test` after you do).
+--test_args="--ginkgo.focus=resource\susage\stracking"`. You may also wish to
+customise the number of pods or other parameters of the test (remember to rerun
+`make WHAT=test/e2e/e2e.test` after you do).
## Profiling

-Kubelet installs the [go pprof handlers](https://golang.org/pkg/net/http/pprof/), which can be
-queried for CPU profiles:
+Kubelet installs the
+[go pprof handlers](https://golang.org/pkg/net/http/pprof/), which can be
+queried for CPU profiles:

```console
$ kubectl proxy &
@@ -109,13 +116,15 @@ $ go tool pprof -web $KUBELET_BIN $OUTPUT

`pprof` can also provide heap usage, from the `/debug/pprof/heap` endpoint (e.g.
`http://localhost:8001/api/v1/proxy/nodes/${NODE}:10250/debug/pprof/heap`).

-More information on go profiling can be found [here](http://blog.golang.org/profiling-go-programs).
+More information on go profiling can be found
+[here](http://blog.golang.org/profiling-go-programs).

## Benchmarks

-Before jumping through all the hoops to measure a live Kubernetes node in a real cluster, it is
-worth considering whether the data you need can be gathered through a Benchmark test. Go provides a
-really simple benchmarking mechanism, just add a unit test of the form:
+Before jumping through all the hoops to measure a live Kubernetes node in a real
+cluster, it is worth considering whether the data you need can be gathered
+through a Benchmark test. Go provides a really simple benchmarking mechanism,
+just add a unit test of the form:

```go
// In foo_test.go
diff --git a/on-call-build-cop.md b/on-call-build-cop.md
index 32660b2b..cc5ff4f1 100644
--- a/on-call-build-cop.md
+++ b/on-call-build-cop.md
@@ -31,79 +31,155 @@ Documentation for other releases can be found at



-Kubernetes "Github and Build-cop" Rotation
-==========================================
-Preqrequisites
--------------- 
+## Kubernetes "Github and Build-cop" Rotation
+
+### Prerequisites

* Ensure you have [write access to http://github.com/kubernetes/kubernetes](https://github.com/orgs/kubernetes/teams/kubernetes-maintainers)
* Test your admin access by e.g. adding a label to an issue. 
-Traffic sources and responsibilities ------------------------------------- +### Traffic sources and responsibilities + +* GitHub Kubernetes [issues](https://github.com/kubernetes/kubernetes/issues) +and [pulls](https://github.com/kubernetes/kubernetes/pulls): Your job is to be +the first responder to all new issues and PRs. If you are not equipped to do +this (which is fine!), it is your job to seek guidance! + + * Support issues should be closed and redirected to Stackoverflow (see example +response below). + + * All incoming issues should be tagged with a team label +(team/{api,ux,control-plane,node,cluster,csi,redhat,mesosphere,gke,release-infra,test-infra,none}); +for issues that overlap teams, you can use multiple team labels + + * There is a related concept of "Github teams" which allow you to @ mention +a set of people; feel free to @ mention a Github team if you wish, but this is +not a substitute for adding a team/* label, which is required -* GitHub [https://github.com/kubernetes/kubernetes/issues](https://github.com/kubernetes/kubernetes/issues) and [https://github.com/kubernetes/kubernetes/pulls](https://github.com/kubernetes/kubernetes/pulls): Your job is to be the first responder to all new issues and PRs. If you are not equipped to do this (which is fine!), it is your job to seek guidance! - * Support issues should be closed and redirected to Stackoverflow (see example response below). 
- * All incoming issues should be tagged with a team label (team/{api,ux,control-plane,node,cluster,csi,redhat,mesosphere,gke,release-infra,test-infra,none}); for issues that overlap teams, you can use multiple team labels - * There is a related concept of "Github teams" which allow you to @ mention a set of people; feel free to @ mention a Github team if you wish, but this is not a substitute for adding a team/* label, which is required * [Google teams](https://github.com/orgs/kubernetes/teams?utf8=%E2%9C%93&query=goog-) * [Redhat teams](https://github.com/orgs/kubernetes/teams?utf8=%E2%9C%93&query=rh-) * [SIGs](https://github.com/orgs/kubernetes/teams?utf8=%E2%9C%93&query=sig-) - * If the issue is reporting broken builds, broken e2e tests, or other obvious P0 issues, label the issue with priority/P0 and assign it to someone. This is the only situation in which you should add a priority/* label + + * If the issue is reporting broken builds, broken e2e tests, or other +obvious P0 issues, label the issue with priority/P0 and assign it to someone. +This is the only situation in which you should add a priority/* label * non-P0 issues do not need a reviewer assigned initially - * Assign any issues related to Vagrant to @derekwaynecarr (and @mention him in the issue) + + * Assign any issues related to Vagrant to @derekwaynecarr (and @mention him +in the issue) + * All incoming PRs should be assigned a reviewer. + * unless it is a WIP (Work in Progress), RFC (Request for Comments), or design proposal. * An auto-assigner [should do this for you] (https://github.com/kubernetes/kubernetes/pull/12365/files) * When in doubt, choose a TL or team maintainer of the most relevant team; they can delegate - * Keep in mind that you can @ mention people in an issue/PR to bring it to their attention without assigning it to them. 
You can also @ mention github teams, such as @kubernetes/goog-ux or @kubernetes/kubectl - * If you need help triaging an issue or PR, consult with (or assign it to) @brendandburns, @thockin, @bgrant0607, @quinton-hoole, @davidopp, @dchen1107, @lavalamp (all U.S. Pacific Time) or @fgrzadkowski (Central European Time). - * At the beginning of your shift, please add team/* labels to any issues that have fallen through the cracks and don't have one. Likewise, be fair to the next person in rotation: try to ensure that every issue that gets filed while you are on duty is handled. The Github query to find issues with no team/* label is: [here](https://github.com/kubernetes/kubernetes/issues?utf8=%E2%9C%93&q=is%3Aopen+is%3Aissue+-label%3Ateam%2Fcontrol-plane+-label%3Ateam%2Fmesosphere+-label%3Ateam%2Fredhat+-label%3Ateam%2Frelease-infra+-label%3Ateam%2Fnone+-label%3Ateam%2Fnode+-label%3Ateam%2Fcluster+-label%3Ateam%2Fux+-label%3Ateam%2Fapi+-label%3Ateam%2Ftest-infra+-label%3Ateam%2Fgke+-label%3A"team%2FCSI-API+Machinery+SIG"+-label%3Ateam%2Fhuawei+-label%3Ateam%2Fsig-aws). + + * Keep in mind that you can @ mention people in an issue/PR to bring it to +their attention without assigning it to them. You can also @ mention github +teams, such as @kubernetes/goog-ux or @kubernetes/kubectl + + * If you need help triaging an issue or PR, consult with (or assign it to) +@brendandburns, @thockin, @bgrant0607, @quinton-hoole, @davidopp, @dchen1107, +@lavalamp (all U.S. Pacific Time) or @fgrzadkowski (Central European Time). + + * At the beginning of your shift, please add team/* labels to any issues that +have fallen through the cracks and don't have one. Likewise, be fair to the next +person in rotation: try to ensure that every issue that gets filed while you are +on duty is handled. 
The Github query to find issues with no team/* label is:
+[here](https://github.com/kubernetes/kubernetes/issues?utf8=%E2%9C%93&q=is%3Aopen+is%3Aissue+-label%3Ateam%2Fcontrol-plane+-label%3Ateam%2Fmesosphere+-label%3Ateam%2Fredhat+-label%3Ateam%2Frelease-infra+-label%3Ateam%2Fnone+-label%3Ateam%2Fnode+-label%3Ateam%2Fcluster+-label%3Ateam%2Fux+-label%3Ateam%2Fapi+-label%3Ateam%2Ftest-infra+-label%3Ateam%2Fgke+-label%3A"team%2FCSI-API+Machinery+SIG"+-label%3Ateam%2Fhuawei+-label%3Ateam%2Fsig-aws).

Example response for support issues:

- Please re-post your question to [stackoverflow](http://stackoverflow.com/questions/tagged/kubernetes).
+```code
+Please re-post your question to
+[stackoverflow](http://stackoverflow.com/questions/tagged/kubernetes).
+
+We are trying to consolidate the channels to which questions for help/support
+are posted so that we can improve our efficiency in responding to your requests,
+and to make it easier for you to find answers to frequently asked questions and
+how to address common use cases.
+
+We regularly see messages posted in multiple forums, with the full response
+thread only in one place or, worse, spread across multiple forums. Also, the
+large volume of support issues on github is making it difficult for us to use
+issues to identify real bugs.

- We are trying to consolidate the channels to which questions for help/support are posted so that we can improve our efficiency in responding to your requests, and to make it easier for you to find answers to frequently asked questions and how to address common use cases.
+The Kubernetes team scans stackoverflow on a regular basis, and will try to
+ensure your questions don't go unanswered.

- We regularly see messages posted in multiple forums, with the full response thread only in one place or, worse, spread across multiple forums. Also, the large volume of support issues on github is making it difficult for us to use issues to identify real bugs. 
+Before posting a new question, please search stackoverflow for answers to
+similar questions, and also familiarize yourself with:

- The Kubernetes team scans stackoverflow on a regular basis, and will try to ensure your questions don't go unanswered.

+ * [user guide](http://kubernetes.io/v1.0/)
+ * [troubleshooting guide](http://kubernetes.io/v1.0/docs/troubleshooting.html)

- Before posting a new question, please search stackoverflow for answers to similar questions, and also familiarize yourself with:
* [the user guide](http://kubernetes.io/v1.0/)
* [the troubleshooting guide](http://kubernetes.io/v1.0/docs/troubleshooting.html)
+Again, thanks for using Kubernetes.

- Again, thanks for using Kubernetes.
+The Kubernetes Team
+```

- The Kubernetes Team
+### Build-copping

-Build-copping
-------------- 
+* The [merge-bot submit queue](http://submit-queue.k8s.io/)
+([source](https://github.com/kubernetes/contrib/tree/master/mungegithub/mungers/submit-queue.go))
+should auto-merge all eligible PRs for you once they've passed all the relevant
+checks mentioned below and all
+[critical e2e tests](https://goto.google.com/k8s-test/view/Critical%20Builds/)
+are passing. If the merge-bot has been disabled for some reason, or tests are
+failing, you might need to do some manual merging to get things back on track.
+
+* Once a day or so, look at the
+[flaky test builds](https://goto.google.com/k8s-test/view/Flaky/); if they are
+timing out, clusters are failing to start, or tests are consistently failing
+(instead of just flaking), file an issue to get things back on track.
+
+* Jobs that are not in [critical e2e tests](https://goto.google.com/k8s-test/view/Critical%20Builds/)
+or [flaky test builds](https://goto.google.com/k8s-test/view/Flaky/) are not
+your responsibility to monitor. The `Test owner:` in the job description will be
+automatically emailed if the job is failing. 
+
+* If you are a weekday oncall, ensure that PRs conforming to the following
+pre-requisites are being merged at a reasonable rate:

-* The [merge-bot submit queue](http://submit-queue.k8s.io/) ([source](https://github.com/kubernetes/contrib/tree/master/mungegithub/mungers/submit-queue.go)) should auto-merge all eligible PRs for you once they've passed all the relevant checks mentioned below and all [critical e2e tests] (https://goto.google.com/k8s-test/view/Critical%20Builds/) are passing. If the merge-bot been disabled for some reason, or tests are failing, you might need to do some manual merging to get things back on track.
-* Once a day or so, look at the [flaky test builds](https://goto.google.com/k8s-test/view/Flaky/); if they are timing out, clusters are failing to start, or tests are consistently failing (instead of just flaking), file an issue to get things back on track.
-* Jobs that are not in [critical e2e tests] (https://goto.google.com/k8s-test/view/Critical%20Builds/) or [flaky test builds](https://goto.google.com/k8s-test/view/Flaky/) are not your responsibility to monitor. The `Test owner:` in the job description will be automatically emailed if the job is failing.
-* If you are a weekday oncall, ensure that PRs confirming to the following pre-requisites are being merged at a reasonable rate:
  * [Have been LGTMd](https://github.com/kubernetes/kubernetes/labels/lgtm)
  * Pass Travis and Jenkins per-PR tests.
  * Author has signed CLA if applicable.
-* If you are a weekend oncall, [never merge PRs manually](collab.md), instead add the label "lgtm" to the PRs once they have been LGTMd and passed Travis; this will cause merge-bot to merge them automatically (or make them easy to find by the next oncall, who will merge them). 
+ + +* If you are a weekend oncall, [never merge PRs manually](collab.md), instead +add the label "lgtm" to the PRs once they have been LGTMd and passed Travis; +this will cause merge-bot to merge them automatically (or make them easy to find +by the next oncall, who will merge them). + * When the build is broken, roll back the PRs responsible ASAP -* When E2E tests are unstable, a "merge freeze" may be instituted. During a merge freeze: - * Oncall should slowly merge LGTMd changes throughout the day while monitoring E2E to ensure stability. - * Ideally the E2E run should be green, but some tests are flaky and can fail randomly (not as a result of a particular change). - * If a large number of tests fail, or tests that normally pass fail, that is an indication that one or more of the PR(s) in that build might be problematic (and should be reverted). - * Use the Test Results Analyzer to see individual test history over time. + +* When E2E tests are unstable, a "merge freeze" may be instituted. During a +merge freeze: + + * Oncall should slowly merge LGTMd changes throughout the day while monitoring +E2E to ensure stability. + + * Ideally the E2E run should be green, but some tests are flaky and can fail +randomly (not as a result of a particular change). + * If a large number of tests fail, or tests that normally pass fail, that +is an indication that one or more of the PR(s) in that build might be +problematic (and should be reverted). + * Use the Test Results Analyzer to see individual test history over time. + + * Flake mitigation - * Tests that flake (fail a small percentage of the time) need an issue filed against them. Please read [this](flaky-tests.md#filing-issues-for-flaky-tests); the build cop is expected to file issues for any flaky tests they encounter. + + * Tests that flake (fail a small percentage of the time) need an issue filed +against them. 
Please read [this](flaky-tests.md#filing-issues-for-flaky-tests); +the build cop is expected to file issues for any flaky tests they encounter. + * It's reasonable to manually merge PRs that fix a flake or otherwise mitigate it. -Contact information -------------------- +### Contact information -[@k8s-oncall](https://github.com/k8s-oncall) will reach the current person on call. +[@k8s-oncall](https://github.com/k8s-oncall) will reach the current person on +call. [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/on-call-build-cop.md?pixel)]() diff --git a/on-call-rotations.md b/on-call-rotations.md index 46d5b75f..6cf8d0bf 100644 --- a/on-call-rotations.md +++ b/on-call-rotations.md @@ -31,23 +31,43 @@ Documentation for other releases can be found at -Kubernetes On-Call Rotations -==================== -Kubernetes "first responder" rotations --------------------------------------- - -Kubernetes has generated a lot of public traffic: email, pull-requests, bugs, etc. So much traffic that it's becoming impossible to keep up with it all! This is a fantastic problem to have. In order to be sure that SOMEONE, but not EVERYONE on the team is paying attention to public traffic, we have instituted two "first responder" rotations, listed below. Please read this page before proceeding to the pages linked below, which are specific to each rotation. - -Please also read our [notes on OSS collaboration](collab.md), particularly the bits about hours. Specifically, each rotation is expected to be active primarily during work hours, less so off hours. - -During regular workday work hours of your shift, your primary responsibility is to monitor the traffic sources specific to your rotation. You can check traffic in the evenings if you feel so inclined, but it is not expected to be as highly focused as work hours. For weekends, you should check traffic very occasionally (e.g. once or twice a day). 
Again, it is not expected to be as highly focused as workdays. It is assumed that over time, everyone will get weekday and weekend shifts, so the workload will balance out. - -If you can not serve your shift, and you know this ahead of time, it is your responsibility to find someone to cover and to change the rotation. If you have an emergency, your responsibilities fall on the primary of the other rotation, who acts as your secondary. If you need help to cover all of the tasks, partners with oncall rotations (e.g., [Redhat](https://github.com/orgs/kubernetes/teams/rh-oncall)). - -If you are not on duty you DO NOT need to do these things. You are free to focus on "real work". - -Note that Kubernetes will occasionally enter code slush/freeze, prior to milestones. When it does, there might be changes in the instructions (assigning milestones, for instance). +## Kubernetes On-Call Rotations + +### Kubernetes "first responder" rotations + +Kubernetes has generated a lot of public traffic: email, pull-requests, bugs, +etc. So much traffic that it's becoming impossible to keep up with it all! This +is a fantastic problem to have. In order to be sure that SOMEONE, but not +EVERYONE on the team is paying attention to public traffic, we have instituted +two "first responder" rotations, listed below. Please read this page before +proceeding to the pages linked below, which are specific to each rotation. + +Please also read our [notes on OSS collaboration](collab.md), particularly the +bits about hours. Specifically, each rotation is expected to be active primarily +during work hours, less so off hours. + +During regular workday work hours of your shift, your primary responsibility is +to monitor the traffic sources specific to your rotation. You can check traffic +in the evenings if you feel so inclined, but it is not expected to be as highly +focused as work hours. For weekends, you should check traffic very occasionally +(e.g. once or twice a day). 
Again, it is not expected to be as highly focused as
+workdays. It is assumed that over time, everyone will get weekday and weekend
+shifts, so the workload will balance out.
+
+If you cannot serve your shift, and you know this ahead of time, it is your
+responsibility to find someone to cover and to change the rotation. If you have
+an emergency, your responsibilities fall on the primary of the other rotation,
+who acts as your secondary. If you need help to cover all of the tasks, partner
+with oncall rotations (e.g.,
+[Redhat](https://github.com/orgs/kubernetes/teams/rh-oncall)).
+
+If you are not on duty you DO NOT need to do these things. You are free to focus
+on "real work".
+
+Note that Kubernetes will occasionally enter code slush/freeze, prior to
+milestones. When it does, there might be changes in the instructions (assigning
+milestones, for instance).

* [Github and Build Cop Rotation](on-call-build-cop.md)
* [User Support Rotation](on-call-user-support.md)
diff --git a/on-call-user-support.md b/on-call-user-support.md
index 1be99f17..1e9f3cb3 100644
--- a/on-call-user-support.md
+++ b/on-call-user-support.md
@@ -31,55 +31,90 @@ Documentation for other releases can be found at



-Kubernetes "User Support" Rotation
-==================================
-Traffic sources and responsibilities
------------------------------------- 
+## Kubernetes "User Support" Rotation
+
+### Traffic sources and responsibilities
+
+* [StackOverflow](http://stackoverflow.com/questions/tagged/kubernetes) and
+[ServerFault](http://serverfault.com/questions/tagged/google-kubernetes):
+Respond to any thread that has no responses and is more than 6 hours old (over
+time we will lengthen this timeout to allow community responses). If you are not
+equipped to respond, it is your job to redirect to someone who can. 
-* [StackOverflow](http://stackoverflow.com/questions/tagged/kubernetes) and [ServerFault](http://serverfault.com/questions/tagged/google-kubernetes): Respond to any thread that has no responses and is more than 6 hours old (over time we will lengthen this timeout to allow community responses). If you are not equipped to respond, it is your job to redirect to someone who can. * [Query for unanswered Kubernetes StackOverflow questions](http://stackoverflow.com/search?q=%5Bkubernetes%5D+answers%3A0) * [Query for unanswered Kubernetes ServerFault questions](http://serverfault.com/questions/tagged/google-kubernetes?sort=unanswered&pageSize=15) * Direct poorly formulated questions to [stackoverflow's tips about how to ask](http://stackoverflow.com/help/how-to-ask) * Direct off-topic questions to [stackoverflow's policy](http://stackoverflow.com/help/on-topic) -* [Slack](https://kubernetes.slack.com) ([registration](http://slack.k8s.io)): Your job is to be on Slack, watching for questions and answering or redirecting as needed. Also check out the [Slack Archive](http://kubernetes.slackarchive.io/). -* [Email/Groups](https://groups.google.com/forum/#!forum/google-containers): Respond to any thread that has no responses and is more than 6 hours old (over time we will lengthen this timeout to allow community responses). If you are not equipped to respond, it is your job to redirect to someone who can. -* [Legacy] [IRC](irc://irc.freenode.net/#google-containers) (irc.freenode.net #google-containers): watch IRC for questions and try to redirect users to Slack. Also check out the [IRC logs](https://botbot.me/freenode/google-containers/). + +* [Slack](https://kubernetes.slack.com) ([registration](http://slack.k8s.io)): +Your job is to be on Slack, watching for questions and answering or redirecting +as needed. Also check out the [Slack Archive](http://kubernetes.slackarchive.io/). 
+
+* [Email/Groups](https://groups.google.com/forum/#!forum/google-containers):
+Respond to any thread that has no responses and is more than 6 hours old (over
+time we will lengthen this timeout to allow community responses). If you are not
+equipped to respond, it is your job to redirect to someone who can.
+
+* [Legacy] [IRC](irc://irc.freenode.net/#google-containers)
+(irc.freenode.net #google-containers): watch IRC for questions and try to
+redirect users to Slack. Also check out the
+[IRC logs](https://botbot.me/freenode/google-containers/).

In general, try to direct support questions to:

-1. Documentation, such as the [user guide](../user-guide/README.md) and [troubleshooting guide](../troubleshooting.md)
+1. Documentation, such as the [user guide](../user-guide/README.md) and
+[troubleshooting guide](../troubleshooting.md)
+
2. Stackoverflow

-If you see questions on a forum other than Stackoverflow, try to redirect them to Stackoverflow. Example response:
+If you see questions on a forum other than Stackoverflow, try to redirect them
+to Stackoverflow. Example response:
+
+```code
+Please re-post your question to
+[stackoverflow](http://stackoverflow.com/questions/tagged/kubernetes).
+
+We are trying to consolidate the channels to which questions for help/support
+are posted so that we can improve our efficiency in responding to your requests,
+and to make it easier for you to find answers to frequently asked questions and
+how to address common use cases.

- Please re-post your question to [stackoverflow](http://stackoverflow.com/questions/tagged/kubernetes).
+We regularly see messages posted in multiple forums, with the full response
+thread only in one place or, worse, spread across multiple forums. Also, the
+large volume of support issues on github is making it difficult for us to use
+issues to identify real bugs. 
- We are trying to consolidate the channels to which questions for help/support are posted so that we can improve our efficiency in responding to your requests, and to make it easier for you to find answers to frequently asked questions and how to address common use cases. +The Kubernetes team scans stackoverflow on a regular basis, and will try to +ensure your questions don't go unanswered. - We regularly see messages posted in multiple forums, with the full response thread only in one place or, worse, spread across multiple forums. Also, the large volume of support issues on github is making it difficult for us to use issues to identify real bugs. +Before posting a new question, please search stackoverflow for answers to +similar questions, and also familiarize yourself with: - The Kubernetes team scans stackoverflow on a regular basis, and will try to ensure your questions don't go unanswered. + * [user guide](http://kubernetes.io/v1.1/) + * [troubleshooting guide](http://kubernetes.io/v1.1/docs/troubleshooting.html) - Before posting a new question, please search stackoverflow for answers to similar questions, and also familiarize yourself with: - * [the user guide](http://kubernetes.io/v1.1/) - * [the troubleshooting guide](http://kubernetes.io/v1.1/docs/troubleshooting.html) +Again, thanks for using Kubernetes. - Again, thanks for using Kubernetes. +The Kubernetes Team +``` - The Kubernetes Team +If you answer a question (in any of the above forums) that you think might be +useful for someone else in the future, *please add it to one of the FAQs in the +wiki*: -If you answer a question (in any of the above forums) that you think might be useful for someone else in the future, *please add it to one of the FAQs in the wiki*: * [User FAQ](https://github.com/kubernetes/kubernetes/wiki/User-FAQ) * [Developer FAQ](https://github.com/kubernetes/kubernetes/wiki/Developer-FAQ) * [Debugging FAQ](https://github.com/kubernetes/kubernetes/wiki/Debugging-FAQ). 
-Getting it into the FAQ is more important than polish. Please indicate the date it was added, so people can judge the likelihood that it is out-of-date (and please correct any FAQ entries that you see contain out-of-date information).
+Getting it into the FAQ is more important than polish. Please indicate the date
+it was added, so people can judge the likelihood that it is out-of-date (and
+please correct any FAQ entries that you see contain out-of-date information).

-Contact information
-------------------- 
+### Contact information

-[@k8s-support-oncall](https://github.com/k8s-support-oncall) will reach the current person on call.
+[@k8s-support-oncall](https://github.com/k8s-support-oncall) will reach the
+current person on call.
-- cgit v1.2.3 


From 1f7e8a462bccdcc4219d41d4ed94d024c66486b5 Mon Sep 17 00:00:00 2001
From: CJ Cullen 
Date: Sat, 7 May 2016 11:21:32 -0700
Subject: Update adding-an-APIGroup.md for #23110

---
 adding-an-APIGroup.md | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/adding-an-APIGroup.md b/adding-an-APIGroup.md
index 240c3eb1..2b318828 100644
--- a/adding-an-APIGroup.md
+++ b/adding-an-APIGroup.md
@@ -75,9 +75,7 @@ cmd/libs/go2idl/ tool.

1. Generate conversions and deep-copies:

  1. Add your "group/" or "group/version" into
-hack/after-build/{update-generated-conversions.sh,
-update-generated-deep-copies.sh, verify-generated-conversions.sh,
-verify-generated-deep-copies.sh};
+cmd/libs/go2idl/{conversion-gen, deep-copy-gen}/main.go;
  2. Make sure your pkg/apis/`<group>`/`<version>` directory has a doc.go file
with the comment `// +genconversion=true`, to catch the attention of our
gen-conversion script. 
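The hunk above points the conversion generators at packages whose doc.go carries the `+genconversion` tag. A minimal sketch of such a marker file (the package path and group name here are hypothetical, not from the Kubernetes tree):

```go
// doc.go — a hedged sketch of the marker file described above; the path
// pkg/apis/mygroup/v1 and the group name "mygroup" are assumptions.

// The gen-conversion tooling scans package doc files for this comment tag
// when deciding which packages get conversion functions generated.
// +genconversion=true

// Package v1 would hold the v1 types of the hypothetical "mygroup" API group.
package v1
```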
-- cgit v1.2.3 From f02a0dc5c12a0c24586bfd6d73820ee1a4551eaa Mon Sep 17 00:00:00 2001 From: Tim Hockin Date: Wed, 13 Apr 2016 23:30:15 -0700 Subject: Convert everything to use vendor/ --- development.md | 2 +- e2e-node-tests.md | 4 ++-- flaky-tests.md | 2 +- testing.md | 6 +++--- 4 files changed, 7 insertions(+), 7 deletions(-) diff --git a/development.md b/development.md index 415bb490..d08dc3d2 100644 --- a/development.md +++ b/development.md @@ -193,7 +193,7 @@ godep v53 (linux/amd64/go1.5.3) ### Using godep -Here's a quick walkthrough of one way to use godeps to add or update a Kubernetes dependency into Godeps/\_workspace. For more details, please see the instructions in [godep's documentation](https://github.com/tools/godep). +Here's a quick walkthrough of one way to use godeps to add or update a Kubernetes dependency into `vendor/`. For more details, please see the instructions in [godep's documentation](https://github.com/tools/godep). 1) Devote a directory to this endeavor: diff --git a/e2e-node-tests.md b/e2e-node-tests.md index 09189457..98450796 100644 --- a/e2e-node-tests.md +++ b/e2e-node-tests.md @@ -82,12 +82,12 @@ See [setup_host.sh](../../test/e2e_node/environment/setup_host.sh) * **Requires password-less ssh and sudo access** * Make sure this works - e.g. `ssh -- sudo echo "ok"` * If ssh flags are required (e.g. `-i`), they can be used and passed to the tests with `--ssh-options` - * `godep go run test/e2e_node/runner/run_e2e.go --logtostderr --hosts ` + * `go run test/e2e_node/runner/run_e2e.go --logtostderr --hosts ` * **Must be run from kubernetes root** * requires (go get): `github.com/tools/godep`, `github.com/onsi/gomega`, `github.com/onsi/ginkgo/ginkgo` 3. 
Alternatively, manually build and copy `e2e_node_test.tar.gz` to a remote host - * Build the tar.gz `godep go run test/e2e_node/runner/run_e2e.go --logtostderr --build-only` + * Build the tar.gz `go run test/e2e_node/runner/run_e2e.go --logtostderr --build-only` * requires (go get): `github.com/tools/godep`, `github.com/onsi/gomega`, `github.com/onsi/ginkgo/ginkgo` * Copy `e2e_node_test.tar.gz` to the remote host * Extract the archive on the remote host `tar -xzvf e2e_node_test.tar.gz` diff --git a/flaky-tests.md b/flaky-tests.md index cd27c200..e757021f 100644 --- a/flaky-tests.md +++ b/flaky-tests.md @@ -107,7 +107,7 @@ $ go install golang.org/x/tools/cmd/stress Then build your test binary ``` -$ godep go test -c -race +$ go test -c -race ``` Then run it under stress diff --git a/testing.md b/testing.md index e415e442..72f1c328 100644 --- a/testing.md +++ b/testing.md @@ -84,10 +84,10 @@ hack/test-go.sh # Run all unit tests. cd kubernetes # Run all tests under pkg (requires client to be in $GOPATH/src/k8s.io) -godep go test ./pkg/... +go test ./pkg/... # Run all tests in the pkg/api (but not subpackages) -godep go test ./pkg/api +go test ./pkg/api ``` ### Stress running unit tests @@ -135,7 +135,7 @@ To run benchmark tests, you'll typically use something like: ```sh cd kubernetes -godep go test ./pkg/apiserver -benchmem -run=XXX -bench=BenchmarkWatch +go test ./pkg/apiserver -benchmem -run=XXX -bench=BenchmarkWatch ``` This will do the following: -- cgit v1.2.3 From 973df9cfd8f4589bae610d1efca58cd944811630 Mon Sep 17 00:00:00 2001 From: Tim Hockin Date: Tue, 3 May 2016 22:00:27 -0700 Subject: Get rid of hack/after-build scripts The build is now fast enough to not need them. 
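The stress workflow referenced above (build a test binary with `go test -c -race`, then re-run it until it fails) can be sketched with a toy stand-in; `flaky_test` here is a hypothetical function standing in for a compiled test binary, not anything from the Kubernetes tree:

```shell
# Toy stand-in for a flaky test binary: fails roughly one run in three.
flaky_test() { [ $(( RANDOM % 3 )) -ne 0 ]; }

# Re-run until the first failure, the way `stress` hammers a test binary.
runs=0
while flaky_test; do
  runs=$(( runs + 1 ))
done
echo "first failure after ${runs} successful runs"
```

With a real binary you would run `stress ./mypkg.test -test.run TestFlaky` instead of a hand-rolled loop, since `stress` also saves the failing output for inspection.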
--- adding-an-APIGroup.md | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-) diff --git a/adding-an-APIGroup.md b/adding-an-APIGroup.md index 2b318828..e0f95fc7 100644 --- a/adding-an-APIGroup.md +++ b/adding-an-APIGroup.md @@ -50,8 +50,8 @@ We plan on improving the way the types are factored in the future; see in which this might evolve. 1. Create a folder in pkg/apis to hold your group. Create types.go in - pkg/apis/``/ and pkg/apis/``/``/ to define API objects - in your group; +pkg/apis/``/ and pkg/apis/``/``/ to define API objects +in your group; 2. Create pkg/apis/``/{register.go, ``/register.go} to register this group's API objects to the encoding/decoding scheme (e.g., @@ -75,10 +75,10 @@ cmd/libs/go2idl/ tool. 1. Generate conversions and deep-copies: 1. Add your "group/" or "group/version" into -cmd/libs/go2idl/{conversion-gen, deep-copy-gen}/main.go; + cmd/libs/go2idl/{conversion-gen, deep-copy-gen}/main.go; 2. Make sure your pkg/apis/``/`` directory has a doc.go file -with the comment `// +genconversion=true`, to catch the attention of our -gen-conversion script. + with the comment `// +genconversion=true`, to catch the attention of our + gen-conversion script. 3. Run hack/update-all.sh. @@ -89,7 +89,8 @@ gen-conversion script. 3. Generate protobuf objects: - 1. Add your group to `cmd/libs/go2idl/go-to-protobuf/protobuf/cmd.go` to `New()` in the `Packages` field + 1. Add your group to `cmd/libs/go2idl/go-to-protobuf/protobuf/cmd.go` to + `New()` in the `Packages` field 2.
Run hack/update-generated-protobuf.sh ### Client (optional): -- cgit v1.2.3 From c43b5ec40ca4a3d4fafdacfafaca5c4d142d42d2 Mon Sep 17 00:00:00 2001 From: deads2k Date: Wed, 4 May 2016 08:19:56 -0400 Subject: create command guidance --- kubectl-conventions.md | 15 +++++++++++++++ 1 file changed, 15 insertions(+) diff --git a/kubectl-conventions.md b/kubectl-conventions.md index cc69c78f..9b1d77ae 100644 --- a/kubectl-conventions.md +++ b/kubectl-conventions.md @@ -42,6 +42,7 @@ Updated: 8/27/2015 - [Principles](#principles) - [Command conventions](#command-conventions) + - [Create commands](#create-commands) - [Flag conventions](#flag-conventions) - [Output conventions](#output-conventions) - [Documentation conventions](#documentation-conventions) @@ -71,6 +72,19 @@ Updated: 8/27/2015 * Commands that generate resources, such as `run` or `expose`, should obey specific conventions, see [generators](#generators). * A command group (e.g., `kubectl config`) may be used to group related non-standard commands, such as custom generators, mutations, and computations. + +### Create commands + +`kubectl create ` commands fill the gap between "I want to try Kubernetes, but I don't know or care what gets created" (`kubectl run`) and "I want to create exactly this" (author yaml and run `kubectl create -f`). +They provide an easy way to create a valid object without having to know the vagaries of particular kinds, nested fields, and object key typos that are ignored by the yaml/json parser. +Because editing an already created object is easier than authoring one from scratch, these commands only need to have enough parameters to create a valid object and set common immutable fields. They should default as much as is reasonably possible. +Once that valid object is created, it can be further manipulated using `kubectl edit` or the eventual `kubectl set` commands.
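To make that gap concrete, a few illustrative invocations — the resource names are invented, and these assume a running cluster, so treat them as a sketch rather than a transcript:

```shell
# Minimal required input; everything else is defaulted by the command.
kubectl create namespace staging

# Preview the object that would be created, without creating it.
kubectl create namespace staging --dry-run -o yaml

# Further shaping happens after creation, via edit (or, eventually, set).
kubectl edit namespace staging
```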
+ +`kubectl create ` commands help in cases where you need to perform non-trivial configuration generation/transformation tailored for a common use case. +`kubectl create secret` is a good example: there's a `generic` flavor with keys mapping to files, then there's a `docker-registry` flavor that is tailored for creating an image pull secret, +and there's a `tls` flavor for creating tls secrets. You create these as separate commands to get distinct flags and separate help that is tailored for the particular usage. + + ## Flag conventions * Flags are all lowercase, with words separated by hyphens @@ -253,6 +267,7 @@ func (g *NamespaceGeneratorV1) validate() error { The generator struct (`NamespaceGeneratorV1`) holds the necessary fields for namespace generation. It also satisfies the `kubectl.StructuredGenerator` interface by implementing the `StructuredGenerate() (runtime.Object, error)` method which configures the generated namespace that callers of the generator (`kubectl create namespace` in our case) need to create. * `--dry-run` should output the resource that would be created, without creating it. + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/kubectl-conventions.md?pixel)]() -- cgit v1.2.3 From 182a990c16bb9ca1450313b9b250fe657dc8e747 Mon Sep 17 00:00:00 2001 From: David McMahon Date: Mon, 2 May 2016 17:35:10 -0700 Subject: Update pull request and cherrypick docs for release notes to more accurately reflect current process. --- cherry-picks.md | 14 ++++++-------- 1 file changed, 6 insertions(+), 8 deletions(-) diff --git a/cherry-picks.md b/cherry-picks.md index 81b8cd47..d5456a1a 100644 --- a/cherry-picks.md +++ b/cherry-picks.md @@ -42,14 +42,12 @@ depending on the point in the release cycle. 1. Cherrypicks are [managed with labels and milestones] (pull-requests.md#release-notes) -1. All label/milestone accounting happens on PRs on master. There's nothing to -do on PRs targeted to the release branches. -1.
When you want a PR to be merged to the release branch, make the following -label changes to the **master** branch PR: - * Remove release-note-label-needed - * Add an appropriate release-note-(!label-needed) label - * Add an appropriate milestone - * Add the `cherrypick-candidate` label +1. To get a PR merged to the release branch, first ensure the following labels + are on the original **master** branch PR: + * An appropriate milestone (e.g. v1.3) + * The `cherrypick-candidate` label +1. If `release-note-none` is set on the master PR, the cherrypick PR will need + to set the same label to confirm that no release note is needed. 1. `release-note` labeled PRs generate a release note using the PR title by default OR the release-note block in the PR template if filled in. * See the [PR template](../../.github/PULL_REQUEST_TEMPLATE.md) for more -- cgit v1.2.3 From 3358641372ab1602a228b0e618e78883e54a84e8 Mon Sep 17 00:00:00 2001 From: Mike Brown Date: Fri, 29 Apr 2016 15:04:03 -0500 Subject: devel/ tree 80col wrap and other minor edits Signed-off-by: Mike Brown --- coding-conventions.md | 118 ++++++++++++---- collab.md | 72 ++++++++-- development.md | 76 +++++++---- e2e-node-tests.md | 81 +++++++---- e2e-tests.md | 372 +++++++++++++++++++++++++++++++++++++------------- 5 files changed, 536 insertions(+), 183 deletions(-) diff --git a/coding-conventions.md b/coding-conventions.md index ca4e8431..3a59cd2a 100644 --- a/coding-conventions.md +++ b/coding-conventions.md @@ -50,65 +50,129 @@ Updated: 5/3/2016 ## Code conventions - Bash + - https://google-styleguide.googlecode.com/svn/trunk/shell.xml - - Ensure that build, release, test, and cluster-management scripts run on OS X + + - Ensure that build, release, test, and cluster-management scripts run on +OS X + - Go + - Ensure your code passes the [presubmit checks](development.md#hooks) - - [Go Code Review Comments](https://github.com/golang/go/wiki/CodeReviewComments) + + - [Go Code Review 
+Comments](https://github.com/golang/go/wiki/CodeReviewComments) + - [Effective Go](https://golang.org/doc/effective_go.html) + - Comment your code. - - [Go's commenting conventions](http://blog.golang.org/godoc-documenting-go-code) - - If reviewers ask questions about why the code is the way it is, that's a sign that comments might be helpful. + - [Go's commenting +conventions](http://blog.golang.org/godoc-documenting-go-code) + - If reviewers ask questions about why the code is the way it is, that's a +sign that comments might be helpful. + + - Command-line flags should use dashes, not underscores + + - Naming - - Please consider package name when selecting an interface name, and avoid redundancy. - - e.g.: `storage.Interface` is better than `storage.StorageInterface`. - - Do not use uppercase characters, underscores, or dashes in package names. + - Please consider package name when selecting an interface name, and avoid +redundancy. + + - e.g.: `storage.Interface` is better than `storage.StorageInterface`. + + - Do not use uppercase characters, underscores, or dashes in package +names. - Please consider parent directory name when choosing a package name. - - so pkg/controllers/autoscaler/foo.go should say `package autoscaler` not `package autoscalercontroller`. - - Unless there's a good reason, the `package foo` line should match the name of the directory in which the .go file exists. - - Importers can use a different name if they need to disambiguate. - - Locks should be called `lock` and should never be embedded (always `lock sync.Mutex`). When multiple locks are present, give each lock a distinct name following Go conventions - `stateLock`, `mapLock` etc. - - API conventions - - [API changes](api_changes.md) - - [API conventions](api-conventions.md) + + - so pkg/controllers/autoscaler/foo.go should say `package autoscaler` +not `package autoscalercontroller`. 
+ - Unless there's a good reason, the `package foo` line should match +the name of the directory in which the .go file exists. + - Importers can use a different name if they need to disambiguate. + + - Locks should be called `lock` and should never be embedded (always `lock +sync.Mutex`). When multiple locks are present, give each lock a distinct name +following Go conventions - `stateLock`, `mapLock` etc. + + - [API changes](api_changes.md) + + - [API conventions](api-conventions.md) + - [Kubectl conventions](kubectl-conventions.md) + - [Logging conventions](logging.md) ## Testing conventions - - All new packages and most new significant functionality must come with unit tests - - Table-driven tests are preferred for testing multiple scenarios/inputs; for example, see [TestNamespaceAuthorization](../../test/integration/auth_test.go) - - Significant features should come with integration (test/integration) and/or [end-to-end (test/e2e) tests](e2e-tests.md) + - All new packages and most new significant functionality must come with unit +tests + + - Table-driven tests are preferred for testing multiple scenarios/inputs; for +example, see [TestNamespaceAuthorization](../../test/integration/auth_test.go) + + - Significant features should come with integration (test/integration) and/or +[end-to-end (test/e2e) tests](e2e-tests.md) - Including new kubectl commands and major features of existing commands - - Unit tests must pass on OS X and Windows platforms - if you use Linux specific features, your test case must either be skipped on windows or compiled out (skipped is better when running Linux specific commands, compiled out is required when your code does not compile on Windows). + + - Unit tests must pass on OS X and Windows platforms - if you use Linux +specific features, your test case must either be skipped on windows or compiled +out (skipped is better when running Linux specific commands, compiled out is +required when your code does not compile on Windows). 
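The table-driven style preferred above can be sketched with a toy case table; `isDNSLabel` is an invented function used only for illustration, and a real test would report failures through `t.Errorf` on a `*testing.T` rather than print:

```go
package main

import (
	"fmt"
	"strings"
)

// isDNSLabel is a made-up function under test: lowercase alphanumerics and
// dashes, at most 63 characters, no leading or trailing dash.
func isDNSLabel(s string) bool {
	if s == "" || len(s) > 63 {
		return false
	}
	if strings.HasPrefix(s, "-") || strings.HasSuffix(s, "-") {
		return false
	}
	for _, r := range s {
		if r != '-' && (r < 'a' || r > 'z') && (r < '0' || r > '9') {
			return false
		}
	}
	return true
}

func main() {
	// Each row is one scenario; adding a new case is a one-line change.
	cases := []struct {
		in   string
		want bool
	}{
		{"frontend", true},
		{"web-1", true},
		{"", false},
		{"-leading-dash", false},
		{"Capitalized", false},
	}
	for _, c := range cases {
		if got := isDNSLabel(c.in); got != c.want {
			fmt.Printf("isDNSLabel(%q) = %v, want %v\n", c.in, got, c.want)
			return
		}
	}
	fmt.Println("all cases pass")
}
```

The payoff is that the inputs and expected outputs read as a single table, which is why reviewers ask for this shape when a function has many scenarios.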
+ - Avoid relying on Docker hub (e.g. pull from Docker hub). Use gcr.io instead. - - Avoid waiting for a short amount of time (or without waiting) and expect an asynchronous thing to happen (e.g. wait for 1 seconds and expect a Pod to be running). Wait and retry instead. + + - Avoid waiting for a short amount of time (or without waiting) and expect an +asynchronous thing to happen (e.g. wait for 1 seconds and expect a Pod to be +running). Wait and retry instead. + - See the [testing guide](testing.md) for additional testing advice. ## Directory and file conventions - - Avoid package sprawl. Find an appropriate subdirectory for new packages. (See [#4851](http://issues.k8s.io/4851) for discussion.) - - Libraries with no more appropriate home belong in new package subdirectories of pkg/util - - Avoid general utility packages. Packages called "util" are suspect. Instead, derive a name that describes your desired function. For example, the utility functions dealing with waiting for operations are in the "wait" package and include functionality like Poll. So the full name is wait.Poll + - Avoid package sprawl. Find an appropriate subdirectory for new packages. +(See [#4851](http://issues.k8s.io/4851) for discussion.) + - Libraries with no more appropriate home belong in new package +subdirectories of pkg/util + + - Avoid general utility packages. Packages called "util" are suspect. Instead, +derive a name that describes your desired function. For example, the utility +functions dealing with waiting for operations are in the "wait" package and +include functionality like Poll. So the full name is wait.Poll + - All filenames should be lowercase + - Go source files and directories use underscores, not dashes - - Package directories should generally avoid using separators as much as possible (when packages are multiple words, they usually should be in nested subdirectories). 
+ - Package directories should generally avoid using separators as much as +possible (when packages are multiple words, they usually should be in nested +subdirectories). + - Document directories and filenames should use dashes rather than underscores - - Contrived examples that illustrate system features belong in /docs/user-guide or /docs/admin, depending on whether it is a feature primarily intended for users that deploy applications or cluster administrators, respectively. Actual application examples belong in /examples. - - Examples should also illustrate - [best practices for configuration and using the system](../user-guide/config-best-practices.md) + + - Contrived examples that illustrate system features belong in +/docs/user-guide or /docs/admin, depending on whether it is a feature primarily +intended for users that deploy applications or cluster administrators, +respectively. Actual application examples belong in /examples. + - Examples should also illustrate [best practices for configuration and +using the system](../user-guide/config-best-practices.md) + - Third-party code - - Go code for normal third-party dependencies is managed using [Godeps](https://github.com/tools/godep) + + - Go code for normal third-party dependencies is managed using +[Godeps](https://github.com/tools/godep) + - Other third-party code belongs in `/third_party` - forked third party Go code goes in `/third_party/forked` - forked _golang stdlib_ code goes in `/third_party/golang` + - Third-party code must include licenses + - This includes modified third-party code and excerpts, as well ## Coding advice - Go + - [Go landmines](https://gist.github.com/lavalamp/4bd23295a9f32706a48f) diff --git a/collab.md b/collab.md index ab2e3337..0742b548 100644 --- a/collab.md +++ b/collab.md @@ -34,44 +34,86 @@ Documentation for other releases can be found at # On Collaborative Development -Kubernetes is open source, but many of the people working on it do so as their day job. 
In order to avoid forcing people to be "at work" effectively 24/7, we want to establish some semi-formal protocols around development. Hopefully these rules make things go more smoothly. If you find that this is not the case, please complain loudly. +Kubernetes is open source, but many of the people working on it do so as their +day job. In order to avoid forcing people to be "at work" effectively 24/7, we +want to establish some semi-formal protocols around development. Hopefully these +rules make things go more smoothly. If you find that this is not the case, +please complain loudly. ## Patches welcome -First and foremost: as a potential contributor, your changes and ideas are welcome at any hour of the day or night, weekdays, weekends, and holidays. Please do not ever hesitate to ask a question or send a PR. +First and foremost: as a potential contributor, your changes and ideas are +welcome at any hour of the day or night, weekdays, weekends, and holidays. +Please do not ever hesitate to ask a question or send a PR. ## Code reviews -All changes must be code reviewed. For non-maintainers this is obvious, since you can't commit anyway. But even for maintainers, we want all changes to get at least one review, preferably (for non-trivial changes obligatorily) from someone who knows the areas the change touches. For non-trivial changes we may want two reviewers. The primary reviewer will make this decision and nominate a second reviewer, if needed. Except for trivial changes, PRs should not be committed until relevant parties (e.g. owners of the subsystem affected by the PR) have had a reasonable chance to look at PR in their local business hours. +All changes must be code reviewed. For non-maintainers this is obvious, since +you can't commit anyway. But even for maintainers, we want all changes to get at +least one review, preferably (for non-trivial changes obligatorily) from someone +who knows the areas the change touches. 
For non-trivial changes we may want two +reviewers. The primary reviewer will make this decision and nominate a second +reviewer, if needed. Except for trivial changes, PRs should not be committed +until relevant parties (e.g. owners of the subsystem affected by the PR) have +had a reasonable chance to look at PR in their local business hours. -Most PRs will find reviewers organically. If a maintainer intends to be the primary reviewer of a PR they should set themselves as the assignee on GitHub and say so in a reply to the PR. Only the primary reviewer of a change should actually do the merge, except in rare cases (e.g. they are unavailable in a reasonable timeframe). +Most PRs will find reviewers organically. If a maintainer intends to be the +primary reviewer of a PR they should set themselves as the assignee on GitHub +and say so in a reply to the PR. Only the primary reviewer of a change should +actually do the merge, except in rare cases (e.g. they are unavailable in a +reasonable timeframe). -If a PR has gone 2 work days without an owner emerging, please poke the PR thread and ask for a reviewer to be assigned. +If a PR has gone 2 work days without an owner emerging, please poke the PR +thread and ask for a reviewer to be assigned. -Except for rare cases, such as trivial changes (e.g. typos, comments) or emergencies (e.g. broken builds), maintainers should not merge their own changes. +Except for rare cases, such as trivial changes (e.g. typos, comments) or +emergencies (e.g. broken builds), maintainers should not merge their own +changes. -Expect reviewers to request that you avoid [common go style mistakes](https://github.com/golang/go/wiki/CodeReviewComments) in your PRs. +Expect reviewers to request that you avoid [common go style +mistakes](https://github.com/golang/go/wiki/CodeReviewComments) in your PRs. ## Assigned reviews -Maintainers can assign reviews to other maintainers, when appropriate. 
The assignee becomes the shepherd for that PR and is responsible for merging the PR once they are satisfied with it or else closing it. The assignee might request reviews from non-maintainers. +Maintainers can assign reviews to other maintainers, when appropriate. The +assignee becomes the shepherd for that PR and is responsible for merging the PR +once they are satisfied with it or else closing it. The assignee might request +reviews from non-maintainers. ## Merge hours -Maintainers will do merges of appropriately reviewed-and-approved changes during their local "business hours" (typically 7:00 am Monday to 5:00 pm (17:00h) Friday). PRs that arrive over the weekend or on holidays will only be merged if there is a very good reason for it and if the code review requirements have been met. Concretely this means that nobody should merge changes immediately before going to bed for the night. +Maintainers will do merges of appropriately reviewed-and-approved changes during +their local "business hours" (typically 7:00 am Monday to 5:00 pm (17:00h) +Friday). PRs that arrive over the weekend or on holidays will only be merged if +there is a very good reason for it and if the code review requirements have been +met. Concretely this means that nobody should merge changes immediately before +going to bed for the night. -There may be discussion an even approvals granted outside of the above hours, but merges will generally be deferred. +There may be discussion and even approvals granted outside of the above hours, +but merges will generally be deferred. -If a PR is considered complex or controversial, the merge of that PR should be delayed to give all interested parties in all timezones the opportunity to provide feedback. Concretely, this means that such PRs should be held for 24 -hours before merging.
Of course "complex" and "controversial" are left to the judgment of the people involved, but we trust that part of being a committer is the judgment required to evaluate such things honestly, and not be -motivated by your desire (or your cube-mate's desire) to get their code merged. Also see "Holds" below, any reviewer can issue a "hold" to indicate that the PR is in fact complicated or complex and deserves further review. +If a PR is considered complex or controversial, the merge of that PR should be +delayed to give all interested parties in all timezones the opportunity to +provide feedback. Concretely, this means that such PRs should be held for 24 +hours before merging. Of course "complex" and "controversial" are left to the +judgment of the people involved, but we trust that part of being a committer is +the judgment required to evaluate such things honestly, and not be motivated by +your desire (or your cube-mate's desire) to get their code merged. Also see +"Holds" below; any reviewer can issue a "hold" to indicate that the PR is in +fact complicated or complex and deserves further review. -PRs that are incorrectly judged to be merge-able, may be reverted and subject to re-review, if subsequent reviewers believe that they in fact are controversial or complex. +PRs that are incorrectly judged to be merge-able may be reverted and subject to +re-review, if subsequent reviewers believe that they in fact are controversial +or complex. ## Holds -Any maintainer or core contributor who wants to review a PR but does not have time immediately may put a hold on a PR simply by saying so on the PR discussion and offering an ETA measured in single-digit days at most. Any PR that has a hold shall not be merged until the person who requested the hold acks the review, withdraws their hold, or is overruled by a preponderance of maintainers.
+Any maintainer or core contributor who wants to review a PR but does not have +time immediately may put a hold on a PR simply by saying so on the PR discussion +and offering an ETA measured in single-digit days at most. Any PR that has a +hold shall not be merged until the person who requested the hold acks the +review, withdraws their hold, or is overruled by a preponderance of maintainers. diff --git a/development.md b/development.md index d08dc3d2..3e782e03 100644 --- a/development.md +++ b/development.md @@ -47,7 +47,8 @@ branch, but release branches of Kubernetes should not change. ## Building Kubernetes Official releases are built using Docker containers. To build Kubernetes using -Docker please follow [these instructions](http://releases.k8s.io/HEAD/build/README.md). +Docker please follow [these +instructions](http://releases.k8s.io/HEAD/build/README.md). ### Go development environment @@ -55,14 +56,16 @@ Kubernetes is written in the [Go](http://golang.org) programming language. To build Kubernetes without using Docker containers, you'll need a Go development environment. Builds for Kubernetes 1.0 - 1.2 require Go version 1.4.2. Builds for Kubernetes 1.3 and higher require Go version 1.6.0. If you -haven't set up a Go development environment, please follow [these instructions](http://golang.org/doc/code.html) -to install the go tools and set up a GOPATH. +haven't set up a Go development environment, please follow [these +instructions](http://golang.org/doc/code.html) to install the go tools and set +up a GOPATH. To build Kubernetes using your local Go development environment (generate linux binaries): hack/build-go.sh -You may pass build options and packages to the script as necessary. To build binaries for all platforms: +You may pass build options and packages to the script as necessary. To build +binaries for all platforms: hack/build-cross.sh @@ -82,7 +85,9 @@ Other git workflows are also valid. 
### Clone your fork -The commands below require that you have $GOPATH set ([$GOPATH docs](https://golang.org/doc/code.html#GOPATH)). We highly recommend you put Kubernetes' code into your GOPATH. Note: the commands below will not work if +The commands below require that you have $GOPATH set ([$GOPATH +docs](https://golang.org/doc/code.html#GOPATH)). We highly recommend you put +Kubernetes' code into your GOPATH. Note: the commands below will not work if there is more than one directory in your `$GOPATH`. ```sh @@ -108,7 +113,9 @@ git fetch upstream git rebase upstream/master ``` -Note: If you have write access to the main repository at github.com/kubernetes/kubernetes, you should modify your git configuration so that you can't accidentally push to upstream: +Note: If you have write access to the main repository at +github.com/kubernetes/kubernetes, you should modify your git configuration so +that you can't accidentally push to upstream: ```sh git remote set-url --push upstream no_push @@ -116,9 +123,10 @@ git remote set-url --push upstream no_push ### Committing changes to your fork -Before committing any changes, please link/copy the pre-commit hook -into your .git directory. This will keep you from accidentally -committing non-gofmt'd Go code. In addition this hook will do a build. +Before committing any changes, please link/copy the pre-commit hook into your +.git directory. This will keep you from accidentally committing non-gofmt'd Go +code. This hook will also do a build and test whether documentation generation +scripts need to be executed. The hook requires both Godep and etcd on your `PATH`. @@ -156,15 +164,22 @@ See [Faster Reviews](faster_reviews.md) for more details. ## godep and dependency management -Kubernetes uses [godep](https://github.com/tools/godep) to manage dependencies. 
It is not strictly required for building Kubernetes but it is required when managing dependencies under the Godeps/ tree, and is required by a number of the build and test scripts. Please make sure that ``godep`` is installed and in your ``$PATH``. +Kubernetes uses [godep](https://github.com/tools/godep) to manage dependencies. +It is not strictly required for building Kubernetes but it is required when +managing dependencies under the Godeps/ tree, and is required by a number of the +build and test scripts. Please make sure that ``godep`` is installed and in your +``$PATH``. ### Installing godep -There are many ways to build and host Go binaries. Here is an easy way to get utilities like `godep` installed: +There are many ways to build and host Go binaries. Here is an easy way to get +utilities like `godep` installed: -1) Ensure that [mercurial](http://mercurial.selenic.com/wiki/Download) is installed on your system. (some of godep's dependencies use the mercurial -source control system). Use `apt-get install mercurial` or `yum install mercurial` on Linux, or [brew.sh](http://brew.sh) on OS X, or download -directly from mercurial. +1) Ensure that [mercurial](http://mercurial.selenic.com/wiki/Download) is +installed on your system. (some of godep's dependencies use the mercurial +source control system). Use `apt-get install mercurial` or `yum install +mercurial` on Linux, or [brew.sh](http://brew.sh) on OS X, or download directly +from mercurial. 2) Create a new GOPATH for your tools and install godep: @@ -182,7 +197,8 @@ export PATH=$PATH:$GOPATH/bin ``` Note: -At this time, godep update in the Kubernetes project only works properly if your version of godep is < 54. +At this time, godep update in the Kubernetes project only works properly if your +version of godep is < 54. 
To check your version of godep: @@ -193,11 +209,14 @@ godep v53 (linux/amd64/go1.5.3) ### Using godep -Here's a quick walkthrough of one way to use godeps to add or update a Kubernetes dependency into `vendor/`. For more details, please see the instructions in [godep's documentation](https://github.com/tools/godep). +Here's a quick walkthrough of one way to use godeps to add or update a +Kubernetes dependency into `vendor/`. For more details, please see the +instructions in [godep's documentation](https://github.com/tools/godep). 1) Devote a directory to this endeavor: -_Devoting a separate directory is not required, but it is helpful to separate dependency updates from other changes._ +_Devoting a separate directory is not required, but it is helpful to separate +dependency updates from other changes._ ```sh export KPATH=$HOME/code/kubernetes @@ -240,20 +259,27 @@ go get -u path/to/dependency godep update path/to/dependency/... ``` -_If `go get -u path/to/dependency` fails with compilation errors, instead try `go get -d -u path/to/dependency` -to fetch the dependencies without compiling them. This can happen when updating the cadvisor dependency._ +_If `go get -u path/to/dependency` fails with compilation errors, instead try +`go get -d -u path/to/dependency` to fetch the dependencies without compiling +them. This can happen when updating the cadvisor dependency._ -5) Before sending your PR, it's a good idea to sanity check that your Godeps.json file is ok by running `hack/verify-godeps.sh` +5) Before sending your PR, it's a good idea to sanity check that your +Godeps.json file is ok by running `hack/verify-godeps.sh` -_If hack/verify-godeps.sh fails after a `godep update`, it is possible that a transitive dependency was added or removed but not -updated by godeps. 
It then may be necessary to perform a `godep save ./...` to pick up the transitive dependency changes._ +_If hack/verify-godeps.sh fails after a `godep update`, it is possible that a +transitive dependency was added or removed but not updated by godep. It then +may be necessary to perform a `godep save ./...` to pick up the transitive +dependency changes._ -It is sometimes expedient to manually fix the /Godeps/godeps.json file to minimize the changes. +It is sometimes expedient to manually fix the /Godeps/godeps.json file to +minimize the changes. -Please send dependency updates in separate commits within your PR, for easier reviewing. +Please send dependency updates in separate commits within your PR, for easier +reviewing. -6) If you updated the Godeps, please also update `Godeps/LICENSES` by running `hack/update-godep-licenses.sh`. +6) If you updated the Godeps, please also update `Godeps/LICENSES` by running +`hack/update-godep-licenses.sh`. ## Testing diff --git a/e2e-node-tests.md b/e2e-node-tests.md index 98450796..840d3c3a 100644 --- a/e2e-node-tests.md +++ b/e2e-node-tests.md @@ -34,32 +34,39 @@ Documentation for other releases can be found at # Node End-To-End tests -Node e2e tests start kubelet and minimal supporting infrastructure to validate the kubelet on a host. -Tests can be run either locally, against a remote host or against a GCE image. +Node e2e tests start kubelet and minimal supporting infrastructure to validate +the kubelet on a host. Tests can be run either locally, against a remote host or +against a GCE image. *Note: Linux only. Mac and Windows unsupported.* ## Running tests locally -etcd must be installed and on the PATH to run the node e2e tests. To verify etcd is installed: `which etcd`. -You can find instructions for installing etcd [on the etcd releases page](https://github.com/coreos/etcd/releases). +etcd must be installed and on the PATH to run the node e2e tests. To verify +etcd is installed: `which etcd`.
You can find instructions for installing etcd +[on the etcd releases page](https://github.com/coreos/etcd/releases). Run the tests locally: `make test_e2e_node` -Running the node e2e tests locally will build the kubernetes go source files and then start the -kubelet, kube-apiserver, and etcd binaries on localhost before executing the ginkgo tests under -test/e2e_node against the local kubelet instance. +Running the node e2e tests locally will build the kubernetes go source files and +then start the kubelet, kube-apiserver, and etcd binaries on localhost before +executing the ginkgo tests under test/e2e_node against the local kubelet +instance. ## Running tests against a remote host -The node e2e tests can be run against one or more remote hosts using one of -* [e2e-node-jenkins.sh](../../test/e2e_node/jenkins/e2e-node-jenkins.sh) (gce only) -* [run_e2e.go](../../test/e2e_node/runner/run_e2e.go) (requires passwordless ssh and remote passwordless sudo access over ssh) -* using [run_e2e.go](../../test/e2e_node/runner/run_e2e.go) to build a tar.gz and executing on host (requires host access w/ remote sudo) +The node e2e tests can be run against one or more remote hosts using one of: +* [e2e-node-jenkins.sh](../../test/e2e_node/jenkins/e2e-node-jenkins.sh) (gce +only) +* [run_e2e.go](../../test/e2e_node/runner/run_e2e.go) (requires passwordless ssh +and remote passwordless sudo access over ssh) +* using [run_e2e.go](../../test/e2e_node/runner/run_e2e.go) to build a tar.gz +and executing on host (requires host access w/ remote sudo) ### Configuring a new remote host for testing -The host must contain a environment capable of supporting a mini-kubernetes cluster. Includes: +The host must contain an environment capable of supporting a mini-kubernetes +cluster. This includes: * install etcd * install docker * install lxc and update grub commandline @@ -70,35 +77,60 @@ See [setup_host.sh](../../test/e2e_node/environment/setup_host.sh) ### Running the tests 1.
If running against a host on gce + * Copy [template.properties](../../test/e2e_node/jenkins/template.properties) + * Fill in `GCE_HOSTS` * Set `INSTALL_GODEP=true` to install `godep`, `gomega`, `ginkgo` + * Make sure host names are resolvable to ssh `ssh `. - * If needed, you can run `gcloud compute config-ssh` to add gce hostnames to your .ssh/config so they are resolvable by ssh. + + * If needed, you can run `gcloud compute config-ssh` to add gce hostnames to +your .ssh/config so they are resolvable by ssh. + * Run `test/e2e_node/jenkins/e2e-node-jenkins.sh ` * **Must be run from kubernetes root** 2. If running against a host anywhere else + * **Requires password-less ssh and sudo access** + * Make sure this works - e.g. `ssh -- sudo echo "ok"` - * If ssh flags are required (e.g. `-i`), they can be used and passed to the tests with `--ssh-options` - * `go run test/e2e_node/runner/run_e2e.go --logtostderr --hosts ` + * If ssh flags are required (e.g. `-i`), they can be used and passed to the +tests with `--ssh-options` + + * `go run test/e2e_node/runner/run_e2e.go --logtostderr --hosts ` + * **Must be run from kubernetes root** - * requires (go get): `github.com/tools/godep`, `github.com/onsi/gomega`, `github.com/onsi/ginkgo/ginkgo` + * requires (go get): `github.com/tools/godep`, `github.com/onsi/gomega`, +`github.com/onsi/ginkgo/ginkgo` + +3. Alternatively, manually build and copy `e2e_node_test.tar.gz` to a remote +host + + * Build the tar.gz `go run test/e2e_node/runner/run_e2e.go --logtostderr +--build-only` + + * requires (go get): `github.com/tools/godep`, `github.com/onsi/gomega`, +`github.com/onsi/ginkgo/ginkgo` -3. 
Alternatively, manually build and copy `e2e_node_test.tar.gz` to a remote host - * Build the tar.gz `go run test/e2e_node/runner/run_e2e.go --logtostderr --build-only` - * requires (go get): `github.com/tools/godep`, `github.com/onsi/gomega`, `github.com/onsi/ginkgo/ginkgo` * Copy `e2e_node_test.tar.gz` to the remote host + * Extract the archive on the remote host `tar -xzvf e2e_node_test.tar.gz` - * Run the tests `./e2e_node.test --logtostderr --vmodule=*=2 --build-services=false --node-name=` - * Note: This must be run from the directory containing the kubelet and kube-apiserver binaries. + + * Run the tests `./e2e_node.test --logtostderr --vmodule=*=2 +--build-services=false --node-name=` + + * Note: This must be run from the directory containing the kubelet and +kube-apiserver binaries. ## Running tests against a gce image * Build a gce image from a prepared gce host * Create the host from a base image and configure it (see above) - * Run tests against this remote host to ensure that it is setup correctly before doing anything else + * Run tests against this remote host to ensure that it is set up correctly +before doing anything else * Create a gce *snapshot* of the instance * Create a gce *disk* from the snapshot * Create a gce *image* from the disk @@ -112,8 +144,9 @@ See [setup_host.sh](../../test/e2e_node/environment/setup_host.sh) ## Kubernetes Jenkins CI and PR builder -Node e2e tests are run against a static list of host environments continuously or when manually triggered on a github.com -pull requests using the trigger phrase `@k8s-bot test node e2e experimental` - *results not yet publish, pending +Node e2e tests are run against a static list of host environments continuously +or when manually triggered on github.com pull requests using the trigger +phrase `@k8s-bot test node e2e experimental` - *results not yet published, pending evaluation of test stability.*
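Returning to the tar.gz workflow from step 3 above, the pack/copy/extract shape can be rehearsed locally before a real remote host is involved. This is only a sketch: a placeholder file stands in for the real `e2e_node.test` binary, and a second local directory stands in for the remote host.

```sh
# Rehearse step 3's pack/copy/extract locally (placeholder content only).
workdir=$(mktemp -d)
mkdir -p "$workdir/build" "$workdir/remote"
echo 'placeholder for the real test binary' > "$workdir/build/e2e_node.test"
# "Build the tar.gz" (normally produced by run_e2e.go --build-only):
tar -czf "$workdir/e2e_node_test.tar.gz" -C "$workdir/build" e2e_node.test
# "Copy to the remote host" (here: a sibling directory) and extract:
cp "$workdir/e2e_node_test.tar.gz" "$workdir/remote/"
tar -xzf "$workdir/remote/e2e_node_test.tar.gz" -C "$workdir/remote"
ls -l "$workdir/remote/e2e_node.test"
rm -rf "$workdir"
```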
diff --git a/e2e-tests.md b/e2e-tests.md index 1a40ab73..d09ab9e7 100644 --- a/e2e-tests.md +++ b/e2e-tests.md @@ -65,19 +65,40 @@ Updated: 5/3/2016 ## Overview -End-to-end (e2e) tests for Kubernetes provide a mechanism to test end-to-end behavior of the system, and is the last signal to ensure end user operations match developer specifications. Although unit and integration tests should ideally provide a good signal, the reality is in a distributed system like Kubernetes it is not uncommon that a minor change may pass all unit and integration tests, but cause unforeseen changes at the system level. e2e testing is very costly, both in time to run tests and difficulty debugging, though: it takes a long time to build, deploy, and exercise a cluster. Thus, the primary objectives of the e2e tests are to ensure a consistent and reliable behavior of the kubernetes code base, and to catch hard-to-test bugs before users do, when unit and integration tests are insufficient. - -The e2e tests in kubernetes are built atop of [Ginkgo](http://onsi.github.io/ginkgo/) and [Gomega](http://onsi.github.io/gomega/). There are a host of features that this BDD testing framework provides, and it is recommended that the developer read the documentation prior to diving into the tests. - -The purpose of *this* document is to serve as a primer for developers who are looking to execute or add tests using a local development environment. - -Before writing new tests or making substantive changes to existing tests, you should also read [Writing Good e2e Tests](writing-good-e2e-tests.md) +End-to-end (e2e) tests for Kubernetes provide a mechanism to test end-to-end +behavior of the system, and are the last signal to ensure end user operations +match developer specifications.
Although unit and integration tests provide a +good signal, in a distributed system like Kubernetes it is not uncommon that a +minor change may pass all unit and integration tests, but cause unforeseen +changes at the system level. + +The primary objectives of the e2e tests are to ensure a consistent and reliable +behavior of the kubernetes code base, and to catch hard-to-test bugs before +users do, when unit and integration tests are insufficient. + +The e2e tests in kubernetes are built atop +[Ginkgo](http://onsi.github.io/ginkgo/) and +[Gomega](http://onsi.github.io/gomega/). There are a host of features that this +Behavior-Driven Development (BDD) testing framework provides, and it is +recommended that the developer read the documentation prior to diving into the +tests. + +The purpose of *this* document is to serve as a primer for developers who are +looking to execute or add tests using a local development environment. + +Before writing new tests or making substantive changes to existing tests, you +should also read [Writing Good e2e Tests](writing-good-e2e-tests.md). ## Building and Running the Tests -There are a variety of ways to run e2e tests, but we aim to decrease the number of ways to run e2e tests to a canonical way: `hack/e2e.go`. +There are a variety of ways to run e2e tests, but we aim to decrease the number +of ways to run e2e tests to a canonical way: `hack/e2e.go`. -You can run an end-to-end test which will bring up a master and nodes, perform some tests, and then tear everything down. Make sure you have followed the getting started steps for your chosen cloud platform (which might involve changing the `KUBERNETES_PROVIDER` environment variable to something other than "gce"). +You can run an end-to-end test which will bring up a master and nodes, perform +some tests, and then tear everything down.
Make sure you have followed the +getting started steps for your chosen cloud platform (which might involve +changing the `KUBERNETES_PROVIDER` environment variable to something other than +"gce"). To build Kubernetes, up a cluster, run tests, and tear everything down, use: @@ -130,11 +151,16 @@ go run hack/e2e.go -v -ctl='get events' go run hack/e2e.go -v -ctl='delete pod foobar' ``` -The tests are built into a single binary which can be run used to deploy a Kubernetes system or run tests against an already-deployed Kubernetes system. See `go run hack/e2e.go --help` (or the flag definitions in `hack/e2e.go`) for more options, such as reusing an existing cluster. +The tests are built into a single binary which can be used to deploy a +Kubernetes system or run tests against an already-deployed Kubernetes system. +See `go run hack/e2e.go --help` (or the flag definitions in `hack/e2e.go`) for +more options, such as reusing an existing cluster. ### Cleaning up -During a run, pressing `control-C` should result in an orderly shutdown, but if something goes wrong and you still have some VMs running you can force a cleanup with this command: +During a run, pressing `control-C` should result in an orderly shutdown, but if +something goes wrong and you still have some VMs running you can force a cleanup +with this command: ```sh go run hack/e2e.go -v --down @@ -144,24 +170,49 @@ go run hack/e2e.go -v --down ### Bringing up a cluster for testing -If you want, you may bring up a cluster in some other manner and run tests against it. To do so, or to do other non-standard test things, you can pass arguments into Ginkgo using `--test_args` (e.g. see above). For the purposes of brevity, we will look at a subset of the options, which are listed below: +If you want, you may bring up a cluster in some other manner and run tests +against it. To do so, or to do other non-standard test things, you can pass +arguments into Ginkgo using `--test_args` (e.g. see above).
For the purposes of +brevity, we will look at a subset of the options, which are listed below: ``` --ginkgo.dryRun=false: If set, ginkgo will walk the test hierarchy without actually running anything. Best paired with -v. --ginkgo.failFast=false: If set, ginkgo will stop running a test suite after a failure occurs. --ginkgo.failOnPending=false: If set, ginkgo will mark the test suite as failed if any specs are pending. --ginkgo.focus="": If set, ginkgo will only run specs that match this regular expression. --ginkgo.skip="": If set, ginkgo will only run specs that do not match this regular expression. --ginkgo.trace=false: If set, default reporter prints out the full stack trace when a failure occurs +-ginkgo.dryRun=false: If set, ginkgo will walk the test hierarchy without +actually running anything. Best paired with -v. + +-ginkgo.failFast=false: If set, ginkgo will stop running a test suite after a +failure occurs. + +-ginkgo.failOnPending=false: If set, ginkgo will mark the test suite as failed +if any specs are pending. + +-ginkgo.focus="": If set, ginkgo will only run specs that match this regular +expression. + +-ginkgo.skip="": If set, ginkgo will only run specs that do not match this +regular expression. + +-ginkgo.trace=false: If set, default reporter prints out the full stack trace +when a failure occurs + -ginkgo.v=false: If set, default reporter print out all specs as they begin. + -host="": The host, or api-server, to connect to + -kubeconfig="": Path to kubeconfig containing embedded authinfo. --prom-push-gateway="": The URL to prometheus gateway, so that metrics can be pushed during e2es and scraped by prometheus. Typically something like 127.0.0.1:9091. --provider="": The name of the Kubernetes provider (gce, gke, local, vagrant, etc.) --repo-root="../../": Root directory of kubernetes repository, for finding test files. + +-prom-push-gateway="": The URL to prometheus gateway, so that metrics can be +pushed during e2es and scraped by prometheus. 
Typically something like +127.0.0.1:9091. + +-provider="": The name of the Kubernetes provider (gce, gke, local, vagrant, +etc.) + +-repo-root="../../": Root directory of kubernetes repository, for finding test +files. ``` -Prior to running the tests, you may want to first create a simple auth file in your home directory, e.g. `$HOME/.kube/config` , with the following: +Prior to running the tests, you may want to first create a simple auth file in +your home directory, e.g. `$HOME/.kube/config`, with the following: ``` { @@ -170,12 +221,16 @@ Prior to running the tests, you may want to first create a simple auth file in y } ``` -As mentioned earlier there are a host of other options that are available, but they are left to the developer. +As mentioned earlier there are a host of other options that are available, but +they are left to the developer. + +**NOTE:** If you are running tests on a local cluster repeatedly, you may need +to periodically perform some manual cleanup: -**NOTE:** If you are running tests on a local cluster repeatedly, you may need to periodically perform some manual cleanup. + - `rm -rf /var/run/kubernetes`, clear kube generated credentials, sometimes +stale permissions can cause problems. -- `rm -rf /var/run/kubernetes`, clear kube generated credentials, sometimes stale permissions can cause problems. -- `sudo iptables -F`, clear ip tables rules left by the kube-proxy. + - `sudo iptables -F`, clear ip tables rules left by the kube-proxy. ### Debugging clusters @@ -184,22 +239,22 @@ state to debug a failed e2e test, you can use the `cluster/log-dump.sh` script to gather logs. This script requires that the cluster provider supports ssh. Assuming it does, -running +running: ``` cluster/log-dump.sh ```` -will ssh to the master and all nodes -and download a variety of useful logs to the provided directory (which should -already exist). 
+will ssh to the master and all nodes and download a variety of useful logs to +the provided directory (which should already exist). The Google-run Jenkins builds automatically collect these logs for every build, saving them in the `artifacts` directory uploaded to GCS. ### Local clusters -It can be much faster to iterate on a local cluster instead of a cloud-based one. To start a local cluster, you can run: +It can be much faster to iterate on a local cluster instead of a cloud-based +one. To start a local cluster, you can run: ```sh # The PATH construction is needed because PATH is one of the special-cased @@ -207,11 +262,13 @@ It can be much faster to iterate on a local cluster instead of a cloud-based one sudo PATH=$PATH hack/local-up-cluster.sh ``` -This will start a single-node Kubernetes cluster than runs pods using the local docker daemon. Press Control-C to stop the cluster. +This will start a single-node Kubernetes cluster that runs pods using the local +docker daemon. Press Control-C to stop the cluster. #### Testing against local clusters -In order to run an E2E test against a locally running cluster, point the tests at a custom host directly: +In order to run an E2E test against a locally running cluster, point the tests +at a custom host directly: ```sh export KUBECONFIG=/path/to/kubeconfig @@ -226,26 +283,72 @@ go run hack/e2e.go -v --test_args="--host=http://127.0.0.1:8080" --ginkgo.focus= ## Kinds of tests -We are working on implementing clearer partitioning of our e2e tests to make running a known set of tests easier (#10548). Tests can be labeled with any of the following labels, in order of increasing precedence (that is, each label listed below supersedes the previous ones): - -- If a test has no labels, it is expected to run fast (under five minutes), be able to be run in parallel, and be consistent. -- `[Slow]`: If a test takes more than five minutes to run (by itself or in parallel with many other tests), it is labeled `[Slow]`.
This partition allows us to run almost all of our tests quickly in parallel, without waiting for the stragglers to finish. -- `[Serial]`: If a test cannot be run in parallel with other tests (e.g. it takes too many resources or restarts nodes), it is labeled `[Serial]`, and should be run in serial as part of a separate suite. -- `[Disruptive]`: If a test restarts components that might cause other tests to fail or break the cluster completely, it is labeled `[Disruptive]`. Any `[Disruptive]` test is also assumed to qualify for the `[Serial]` label, but need not be labeled as both. These tests are not run against soak clusters to avoid restarting components. -- `[Flaky]`: If a test is found to be flaky and we have decided that it's too hard to fix in the short term (e.g. it's going to take a full engineer-week), it receives the `[Flaky]` label until it is fixed. The `[Flaky]` label should be used very sparingly, and should be accompanied with a reference to the issue for de-flaking the test, because while a test remains labeled `[Flaky]`, it is not monitored closely in CI. `[Flaky]` tests are by default not run, unless a `focus` or `skip` argument is explicitly given. -- `[Feature:.+]`: If a test has non-default requirements to run or targets some non-core functionality, and thus should not be run as part of the standard suite, it receives a `[Feature:.+]` label, e.g. `[Feature:Performance]` or `[Feature:Ingress]`. `[Feature:.+]` tests are not run in our core suites, instead running in custom suites. If a feature is experimental or alpha and is not enabled by default due to being incomplete or potentially subject to breaking changes, it does *not* block the merge-queue, and thus should run in some separate test suites owned by the feature owner(s) (see #continuous_integration below). +We are working on implementing clearer partitioning of our e2e tests to make +running a known set of tests easier (#10548). 
Tests can be labeled with any of +the following labels, in order of increasing precedence (that is, each label +listed below supersedes the previous ones): + + - If a test has no labels, it is expected to run fast (under five minutes), be +able to be run in parallel, and be consistent. + + - `[Slow]`: If a test takes more than five minutes to run (by itself or in +parallel with many other tests), it is labeled `[Slow]`. This partition allows +us to run almost all of our tests quickly in parallel, without waiting for the +stragglers to finish. + + - `[Serial]`: If a test cannot be run in parallel with other tests (e.g. it +takes too many resources or restarts nodes), it is labeled `[Serial]`, and +should be run in serial as part of a separate suite. + + - `[Disruptive]`: If a test restarts components that might cause other tests +to fail or break the cluster completely, it is labeled `[Disruptive]`. Any +`[Disruptive]` test is also assumed to qualify for the `[Serial]` label, but +need not be labeled as both. These tests are not run against soak clusters to +avoid restarting components. + + - `[Flaky]`: If a test is found to be flaky and we have decided that it's too +hard to fix in the short term (e.g. it's going to take a full engineer-week), it +receives the `[Flaky]` label until it is fixed. The `[Flaky]` label should be +used very sparingly, and should be accompanied with a reference to the issue for +de-flaking the test, because while a test remains labeled `[Flaky]`, it is not +monitored closely in CI. `[Flaky]` tests are by default not run, unless a +`focus` or `skip` argument is explicitly given. + + - `[Feature:.+]`: If a test has non-default requirements to run or targets +some non-core functionality, and thus should not be run as part of the standard +suite, it receives a `[Feature:.+]` label, e.g. `[Feature:Performance]` or +`[Feature:Ingress]`. `[Feature:.+]` tests are not run in our core suites, +instead running in custom suites. 
If a feature is experimental or alpha and is +not enabled by default due to being incomplete or potentially subject to +breaking changes, it does *not* block the merge-queue, and thus should run in +some separate test suites owned by the feature owner(s) +(see [Continuous Integration](#continuous-integration) below). ### Conformance tests -Finally, `[Conformance]` tests represent a subset of the e2e-tests we expect to pass on **any** Kubernetes cluster. The `[Conformance]` label does not supersede any other labels. +Finally, `[Conformance]` tests represent a subset of the e2e-tests we expect to +pass on **any** Kubernetes cluster. The `[Conformance]` label does not supersede +any other labels. -As each new release of Kubernetes providers new functionality, the subset of tests necessary to demonstrate conformance grows with each release. Conformance is thus considered versioned, with the same backwards compatibility guarantees as laid out in [our versioning policy](../design/versioning.md#supported-releases). Conformance tests for a given version should be run off of the release branch that corresponds to that version. Thus `v1.2` conformance tests would be run from the head of the `release-1.2` branch. eg: +As each new release of Kubernetes provides new functionality, the subset of +tests necessary to demonstrate conformance grows with each release. Conformance +is thus considered versioned, with the same backwards compatibility guarantees +as laid out in [our versioning policy](../design/versioning.md#supported-releases). +Conformance tests for a given version should be run off of the release branch +that corresponds to that version. Thus `v1.2` conformance tests would be run +from the head of the `release-1.2` branch.
e.g.: - A v1.3 development cluster should pass v1.1, v1.2 conformance tests + - A v1.2 cluster should pass v1.1, v1.2 conformance tests - - A v1.1 cluster should pass v1.0, v1.1 conformance tests, and fail v1.2 conformance tests -Conformance tests are designed to be run with no cloud provider configured. Conformance tests can be run against clusters that have not been created with `hack/e2e.go`, just provide a kubeconfig with the appropriate endpoint and credentials. + - A v1.1 cluster should pass v1.0, v1.1 conformance tests, and fail v1.2 +conformance tests + +Conformance tests are designed to be run with no cloud provider configured. +Conformance tests can be run against clusters that have not been created with +`hack/e2e.go`, just provide a kubeconfig with the appropriate endpoint and +credentials. ```sh # setup for conformance tests @@ -257,20 +360,30 @@ go run hack/e2e.go -v --test --test_args="--ginkgo.focus=\[Conformance\]" # run all parallel-safe conformance tests in parallel GINKGO_PARALLEL=y go run hack/e2e.go --v --test --test_args="--ginkgo.focus=\[Conformance\] --ginkgo.skip=\[Serial\]" + # ... and finish up with remaining tests in serial go run hack/e2e.go --v --test --test_args="--ginkgo.focus=\[Serial\].*\[Conformance\]" ``` ### Defining Conformance Subset -It is impossible to define the entire space of Conformance tests without knowing the future, so instead, we define the compliment of conformance tests, below. +It is impossible to define the entire space of Conformance tests without knowing +the future, so instead, we define the complement of conformance tests, below +(`Please update this with companion PRs as necessary`): + + - A conformance test cannot test cloud provider specific features (i.e. GCE +monitoring, S3 Bucketing, ...) + + - A conformance test cannot rely on any particular non-standard file system +permissions granted to containers or users (i.e.
sharing writable host /tmp with +a container) -Please update this with companion PRs as necessary. + - A conformance test cannot rely on any binaries that are not required for the +linux kernel or for a kubelet to run (i.e. git) - - A conformance test cannot test cloud provider specific features (i.e. GCE monitoring, S3 Bucketing, ...) - - A conformance test cannot rely on any particular non-standard file system permissions granted to containers or users (i.e. sharing writable host /tmp with a container) - - A conformance test cannot rely on any binaries that are not required for the linux kernel or for a kubelet to run (i.e. git) - - A conformance test cannot test a feature which obviously cannot be supported on a broad range of platforms (i.e. testing of multiple disk mounts, GPUs, high density) + - A conformance test cannot test a feature which obviously cannot be supported +on a broad range of platforms (i.e. testing of multiple disk mounts, GPUs, high +density) ## Continuous Integration @@ -278,74 +391,149 @@ A quick overview of how we run e2e CI on Kubernetes. ### What is CI? -We run a battery of `e2e` tests against `HEAD` of the master branch on a continuous basis, and block merges via the [submit queue](http://submit-queue.k8s.io/) on a subset of those tests if they fail (the subset is defined in the [munger config](https://github.com/kubernetes/contrib/blob/master/mungegithub/mungers/submit-queue.go) via the `jenkins-jobs` flag; note we also block on `kubernetes-build` and `kubernetes-test-go` jobs for build and unit and integration tests). 
We run a battery of `e2e` tests against `HEAD` of the master branch on a +continuous basis, and block merges via the [submit +queue](http://submit-queue.k8s.io/) on a subset of those tests if they fail (the +subset is defined in the [munger config](https://github.com/kubernetes/contrib/blob/master/mungegithub/mungers/submit-queue.go) +via the `jenkins-jobs` flag; note we also block on `kubernetes-build` and +`kubernetes-test-go` jobs for build and unit and integration tests). -CI results can be found at [ci-test.k8s.io](http://ci-test.k8s.io), e.g. [ci-test.k8s.io/kubernetes-e2e-gce/10594](http://ci-test.k8s.io/kubernetes-e2e-gce/10594). +CI results can be found at [ci-test.k8s.io](http://ci-test.k8s.io), e.g. +[ci-test.k8s.io/kubernetes-e2e-gce/10594](http://ci-test.k8s.io/kubernetes-e2e-gce/10594). ### What runs in CI? -We run all default tests (those that aren't marked `[Flaky]` or `[Feature:.+]`) against GCE and GKE. To minimize the time from regression-to-green-run, we partition tests across different jobs: +We run all default tests (those that aren't marked `[Flaky]` or `[Feature:.+]`) +against GCE and GKE. To minimize the time from regression-to-green-run, we +partition tests across different jobs: -- `kubernetes-e2e-` runs all non-`[Slow]`, non-`[Serial]`, non-`[Disruptive]`, non-`[Flaky]`, non-`[Feature:.+]` tests in parallel. -- `kubernetes-e2e--slow` runs all `[Slow]`, non-`[Serial]`, non-`[Disruptive]`, non-`[Flaky]`, non-`[Feature:.+]` tests in parallel. -- `kubernetes-e2e--serial` runs all `[Serial]` and `[Disruptive]`, non-`[Flaky]`, non-`[Feature:.+]` tests in serial. + - `kubernetes-e2e-` runs all non-`[Slow]`, non-`[Serial]`, +non-`[Disruptive]`, non-`[Flaky]`, non-`[Feature:.+]` tests in parallel.
+ - `kubernetes-e2e--slow` runs all `[Slow]`, non-`[Serial]`, +non-`[Disruptive]`, non-`[Flaky]`, non-`[Feature:.+]` tests in parallel. + + - `kubernetes-e2e--serial` runs all `[Serial]` and `[Disruptive]`, +non-`[Flaky]`, non-`[Feature:.+]` tests in serial. + +We also run non-default tests if the tests exercise general-availability ("GA") +features that require a special environment to run in, e.g. +`kubernetes-e2e-gce-scalability` and `kubernetes-kubemark-gce`, which test for +Kubernetes performance. #### Non-default tests -Many `[Feature:.+]` tests we don't run in CI. These tests are for features that are experimental (often in the `experimental` API), and aren't enabled by default. +Many `[Feature:.+]` tests we don't run in CI. These tests are for features that +are experimental (often in the `experimental` API), and aren't enabled by +default. ### The PR-builder -We also run a battery of tests against every PR before we merge it. These tests are equivalent to `kubernetes-gce`: it runs all non-`[Slow]`, non-`[Serial]`, non-`[Disruptive]`, non-`[Flaky]`, non-`[Feature:.+]` tests in parallel. These tests are considered "smoke tests" to give a decent signal that the PR doesn't break most functionality. Results for you PR can be found at [pr-test.k8s.io](http://pr-test.k8s.io), e.g. [pr-test.k8s.io/20354](http://pr-test.k8s.io/20354) for #20354. +We also run a battery of tests against every PR before we merge it. These tests +are equivalent to `kubernetes-gce`: it runs all non-`[Slow]`, non-`[Serial]`, +non-`[Disruptive]`, non-`[Flaky]`, non-`[Feature:.+]` tests in parallel. These +tests are considered "smoke tests" to give a decent signal that the PR doesn't +break most functionality. Results for your PR can be found at +[pr-test.k8s.io](http://pr-test.k8s.io), e.g. +[pr-test.k8s.io/20354](http://pr-test.k8s.io/20354) for #20354. 
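The suite partitioning above can be illustrated with plain `grep` over some made-up spec names. The patterns mirror the `--ginkgo.focus`/`--ginkgo.skip` regex conventions used elsewhere in this document; all spec names below are hypothetical:

```sh
# Hypothetical spec names carrying the labels discussed above.
specs='Pods should start quickly
Services should work [Conformance]
Reboot node [Disruptive] [Serial]
Big deployment scales [Slow]
Old bug repro [Flaky]'

# Default parallel suite: no special labels allowed.
echo "$specs" | grep -Ev '\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:'

# Slow suite: [Slow], but not serial/disruptive/flaky/feature.
echo "$specs" | grep -E '\[Slow\]' | grep -Ev '\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:'

# Serial suite: [Serial] or [Disruptive], but not flaky/feature.
echo "$specs" | grep -E '\[Serial\]|\[Disruptive\]' | grep -Ev '\[Flaky\]|\[Feature:'
```

Note that every spec lands in exactly one suite, which is what keeps the jobs disjoint and fast.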
### Adding a test to CI -As mentioned above, prior to adding a new test, it is a good idea to perform a `-ginkgo.dryRun=true` on the system, in order to see if a behavior is already being tested, or to determine if it may be possible to augment an existing set of tests for a specific use case. - -If a behavior does not currently have coverage and a developer wishes to add a new e2e test, navigate to the ./test/e2e directory and create a new test using the existing suite as a guide. - -TODO(#20357): Create a self-documented example which has been disabled, but can be copied to create new tests and outlines the capabilities and libraries used. - -When writing a test, consult #kinds_of_tests above to determine how your test should be marked, (e.g. `[Slow]`, `[Serial]`; remember, by default we assume a test can run in parallel with other tests!). - -When first adding a test it should *not* go straight into CI, because failures block ordinary development. A test should only be added to CI after is has been running in some non-CI suite long enough to establish a track record showing that the test does not fail when run against *working* software. Note also that tests running in CI are generally running on a well-loaded cluster, so must contend for resources; see above about [kinds of tests](#kinds_of_tests). - -Generally, a feature starts as `experimental`, and will be run in some suite owned by the team developing the feature. If a feature is in beta or GA, it *should* block the merge-queue. In moving from experimental to beta or GA, tests that are expected to pass by default should simply remove the `[Feature:.+]` label, and will be incorporated into our core suites. If tests are not expected to pass by default, (e.g. 
they require a special environment such as added quota,) they should remain with the `[Feature:.+]` label, and the suites that run them should be incorporated into the [munger config](https://github.com/kubernetes/contrib/blob/master/mungegithub/mungers/submit-queue.go) via the `jenkins-jobs` flag. - -Occasionally, we'll want to add tests to better exercise features that are already GA. These tests also shouldn't go straight to CI. They should begin by being marked as `[Flaky]` to be run outside of CI, and once a track-record for them is established, they may be promoted out of `[Flaky]`. +As mentioned above, prior to adding a new test, it is a good idea to perform a +`-ginkgo.dryRun=true` on the system, in order to see if a behavior is already +being tested, or to determine if it may be possible to augment an existing set +of tests for a specific use case. + +If a behavior does not currently have coverage and a developer wishes to add a +new e2e test, navigate to the ./test/e2e directory and create a new test using +the existing suite as a guide. + +TODO(#20357): Create a self-documented example which has been disabled, but can +be copied to create new tests and outlines the capabilities and libraries used. + +When writing a test, consult #kinds_of_tests above to determine how your test +should be marked, (e.g. `[Slow]`, `[Serial]`; remember, by default we assume a +test can run in parallel with other tests!). + +When first adding a test it should *not* go straight into CI, because failures +block ordinary development. A test should only be added to CI after is has been +running in some non-CI suite long enough to establish a track record showing +that the test does not fail when run against *working* software. Note also that +tests running in CI are generally running on a well-loaded cluster, so must +contend for resources; see above about [kinds of tests](#kinds_of_tests). 
+ +Generally, a feature starts as `experimental`, and will be run in some suite +owned by the team developing the feature. If a feature is in beta or GA, it +*should* block the merge-queue. In moving from experimental to beta or GA, tests +that are expected to pass by default should simply remove the `[Feature:.+]` +label, and will be incorporated into our core suites. If tests are not expected +to pass by default, (e.g. they require a special environment such as added +quota,) they should remain with the `[Feature:.+]` label, and the suites that +run them should be incorporated into the +[munger config](https://github.com/kubernetes/contrib/blob/master/mungegithub/mungers/submit-queue.go) +via the `jenkins-jobs` flag. + +Occasionally, we'll want to add tests to better exercise features that are +already GA. These tests also shouldn't go straight to CI. They should begin by +being marked as `[Flaky]` to be run outside of CI, and once a track-record for +them is established, they may be promoted out of `[Flaky]`. ### Moving a test out of CI -If we have determined that a test is known-flaky and cannot be fixed in the short-term, we may move it out of CI indefinitely. This move should be used sparingly, as it effectively means that we have no coverage of that test. When a test if demoted, it should be marked `[Flaky]` with a comment accompanying the label with a reference to an issue opened to fix the test. +If we have determined that a test is known-flaky and cannot be fixed in the +short-term, we may move it out of CI indefinitely. This move should be used +sparingly, as it effectively means that we have no coverage of that test. When a +test is demoted, it should be marked `[Flaky]` with a comment accompanying the +label with a reference to an issue opened to fix the test. 
## Performance Evaluation -Another benefit of the e2e tests is the ability to create reproducible loads on the system, which can then be used to determine the responsiveness, or analyze other characteristics of the system. For example, the density tests load the system to 30,50,100 pods per/node and measures the different characteristics of the system, such as throughput, api-latency, etc. - -For a good overview of how we analyze performance data, please read the following [post](http://blog.kubernetes.io/2015/09/kubernetes-performance-measurements-and.html) - -For developers who are interested in doing their own performance analysis, we recommend setting up [prometheus](http://prometheus.io/) for data collection, and using [promdash](http://prometheus.io/docs/visualization/promdash/) to visualize the data. There also exists the option of pushing your own metrics in from the tests using a [prom-push-gateway](http://prometheus.io/docs/instrumenting/pushing/). Containers for all of these components can be found [here](https://hub.docker.com/u/prom/). - -For more accurate measurements, you may wish to set up prometheus external to kubernetes in an environment where it can access the major system components (api-server, controller-manager, scheduler). This is especially useful when attempting to gather metrics in a load-balanced api-server environment, because all api-servers can be analyzed independently as well as collectively. On startup, configuration file is passed to prometheus that specifies the endpoints that prometheus will scrape, as well as the sampling interval. +Another benefit of the e2e tests is the ability to create reproducible loads on +the system, which can then be used to determine the responsiveness, or analyze +other characteristics of the system. For example, the density tests load the +system to 30,50,100 pods per/node and measures the different characteristics of +the system, such as throughput, api-latency, etc. 
+ +For a good overview of how we analyze performance data, please read the +following [post](http://blog.kubernetes.io/2015/09/kubernetes-performance-measurements-and.html) + +For developers who are interested in doing their own performance analysis, we +recommend setting up [prometheus](http://prometheus.io/) for data collection, +and using [promdash](http://prometheus.io/docs/visualization/promdash/) to +visualize the data. There also exists the option of pushing your own metrics in +from the tests using a +[prom-push-gateway](http://prometheus.io/docs/instrumenting/pushing/). +Containers for all of these components can be found +[here](https://hub.docker.com/u/prom/). + +For more accurate measurements, you may wish to set up prometheus external to +kubernetes in an environment where it can access the major system components +(api-server, controller-manager, scheduler). This is especially useful when +attempting to gather metrics in a load-balanced api-server environment, because +all api-servers can be analyzed independently as well as collectively. On +startup, configuration file is passed to prometheus that specifies the endpoints +that prometheus will scrape, as well as the sampling interval. ``` #prometheus.conf job: { - name: "kubernetes" - scrape_interval: "1s" - target_group: { - # apiserver(s) - target: "http://localhost:8080/metrics" - # scheduler - target: "http://localhost:10251/metrics" - # controller-manager - target: "http://localhost:10252/metrics" - } + name: "kubernetes" + scrape_interval: "1s" + target_group: { + # apiserver(s) + target: "http://localhost:8080/metrics" + # scheduler + target: "http://localhost:10251/metrics" + # controller-manager + target: "http://localhost:10252/metrics" + } +} ``` -Once prometheus is scraping the kubernetes endpoints, that data can then be plotted using promdash, and alerts can be created against the assortment of metrics that kubernetes provides. 
+Once prometheus is scraping the kubernetes endpoints, that data can then be +plotted using promdash, and alerts can be created against the assortment of +metrics that kubernetes provides. ## One More Thing -- cgit v1.2.3 From 5cd8a2f393da062256247ccef04dfa780d6be453 Mon Sep 17 00:00:00 2001 From: Janet Kuo Date: Fri, 6 May 2016 10:40:45 -0700 Subject: Document that kubectl commands shouldn't have aliases --- kubectl-conventions.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/kubectl-conventions.md b/kubectl-conventions.md index cc69c78f..394503cf 100644 --- a/kubectl-conventions.md +++ b/kubectl-conventions.md @@ -56,15 +56,16 @@ Updated: 8/27/2015 * Explicit should always override implicit * Environment variables should override default values * Command-line flags should override default values and environment variables - * --namespace should also override the value specified in a specified resource + * `--namespace` should also override the value specified in a specified resource ## Command conventions * Command names are all lowercase, and hyphenated if multiple words. * kubectl VERB NOUNs for commands that apply to multiple resource types. +* Command itself should not have built-in aliases. * NOUNs may be specified as `TYPE name1 name2` or `TYPE/name1 TYPE/name2` or `TYPE1,TYPE2,TYPE3/name1`; TYPE is omitted when only a single type is expected. * Resource types are all lowercase, with no hyphens; both singular and plural forms are accepted. -* NOUNs may also be specified by one or more file arguments: -f file1 -f file2 ... +* NOUNs may also be specified by one or more file arguments: `-f file1 -f file2 ...` * Resource types may have 2- or 3-letter aliases. * Business logic should be decoupled from the command framework, so that it can be reused independently of kubectl, cobra, etc. 
* Ideally, commonly needed functionality would be implemented server-side in order to avoid problems typical of "fat" clients and to make it readily available to non-Go clients. @@ -75,7 +76,7 @@ Updated: 8/27/2015 * Flags are all lowercase, with words separated by hyphens * Flag names and single-character aliases should have the same meaning across all commands -* Command-line flags corresponding to API fields should accept API enums exactly (e.g., --restart=Always) +* Command-line flags corresponding to API fields should accept API enums exactly (e.g., `--restart=Always`) * Do not reuse flags for different semantic purposes, and do not use different flag names for the same semantic purpose -- grep for `"Flags()"` before adding a new flag * Use short flags sparingly, only for the most frequently used options, prefer lowercase over uppercase for the most common cases, try to stick to well known conventions for UNIX commands and/or Docker, where they exist, and update this list when adding new short flags * `-f`: Resource file @@ -87,7 +88,6 @@ Updated: 8/27/2015 * also used for `--client` in `version`, but should be deprecated * `-i`: Attach stdin * `-t`: Allocate TTY - * also used for `--template`, but deprecated * `-w`: Watch (currently also used for `--www` in `proxy`, but should be deprecated) * `-p`: Previous * also used for `--pod` in `exec`, but deprecated @@ -97,8 +97,8 @@ Updated: 8/27/2015 * `-r`: Replicas * `-u`: Unix socket * `-v`: Verbose logging level -* `--dry-run`: Don't modify the live state; simulate the mutation and display the output -* `--local`: Don't contact the server; just do local read, transformation, generation, etc. and display the output +* `--dry-run`: Don't modify the live state; simulate the mutation and display the output. All mutations should support it. 
+* `--local`: Don't contact the server; just do local read, transformation, generation, etc., and display the output * `--output-version=...`: Convert the output to a different API group/version * `--validate`: Validate the resource schema -- cgit v1.2.3 From 1e78d934a1aa2819d27a8f1fbc7f347cb96fc605 Mon Sep 17 00:00:00 2001 From: Phillip Wittrock Date: Wed, 27 Apr 2016 11:35:06 -0700 Subject: How to update docs - doc --- updating-docs-for-feature-changes.md | 40 ++++++++++++++++++++++++++++++++++++ 1 file changed, 40 insertions(+) create mode 100644 updating-docs-for-feature-changes.md diff --git a/updating-docs-for-feature-changes.md b/updating-docs-for-feature-changes.md new file mode 100644 index 00000000..763b618d --- /dev/null +++ b/updating-docs-for-feature-changes.md @@ -0,0 +1,40 @@ +# How to update docs for new kubernetes features + +Docs github repo: https://github.com/kubernetes/kubernetes.github.io + +Instructions for updating the website: http://kubernetes.io/editdocs/ + +**cc *@kubernetes/docs* on your docs update PRs** + +## Docs Types To Consider +* Guides + * Walkthroughs + * Other Content +* Reference / Glossary +* Examples + +## Content Areas +* API Objects (Pod / Deployment / Service) +* Tools (kubectl / kube-dashboard) +* Cluster Creation + Management + +## Questions to ask yourself +* Does this change how any commands are run or the results of running those commands? + * *Update documentation specifying those commands* +* Should this be present in (or require an update to) one of the walkthroughs? + * Hellonode + * K8s101 / k8s201 + * Thorough Walkthrough +* Should this have an overview / dedicated [glossary](http://kubernetes.io/docs/user-guide/images/) section? + * *Yes for new APIs and kubectl commands* +* Should an existing overview / [glossary](http://kubernetes.io/docs/user-guide/images/) section be updated these changes? 
+ * *Yes for updates to existing APIs and kubectl commands* +* Should [cluster setup / management](http://kubernetes.io/docs/admin/cluster-management/) guides be updated (which)? Does this impact all or just some clusters? +* Should [cluster / application debug](https://github.com/kubernetes/kubernetes/wiki/Services-FAQ) guides be updated? +* Should any [tool](http://kubernetes.io/docs/user-guide/kubectl-overview/) guides be updated (kubectl, dashboard)? +* Are there any downstream effects / Does this replace another methodology? (PetSet -> PVC, Deployment -> ReplicationController) - *Which docs for those need to be updated*? + * Update tutorials to use new style + * Update examples to use new style + * Update how tos to use new style + * Promote new content over old content that it replaces + -- cgit v1.2.3 From 16886ed9d8ed3f495953642d00ddf4b4a2c96594 Mon Sep 17 00:00:00 2001 From: Phillip Wittrock Date: Mon, 9 May 2016 13:37:42 -0700 Subject: Address PR comments --- updating-docs-for-feature-changes.md | 97 ++++++++++++++++++++++-------------- 1 file changed, 60 insertions(+), 37 deletions(-) diff --git a/updating-docs-for-feature-changes.md b/updating-docs-for-feature-changes.md index 763b618d..3db10a64 100644 --- a/updating-docs-for-feature-changes.md +++ b/updating-docs-for-feature-changes.md @@ -1,40 +1,63 @@ # How to update docs for new kubernetes features -Docs github repo: https://github.com/kubernetes/kubernetes.github.io - -Instructions for updating the website: http://kubernetes.io/editdocs/ - -**cc *@kubernetes/docs* on your docs update PRs** - -## Docs Types To Consider -* Guides - * Walkthroughs - * Other Content -* Reference / Glossary -* Examples - -## Content Areas -* API Objects (Pod / Deployment / Service) -* Tools (kubectl / kube-dashboard) -* Cluster Creation + Management - -## Questions to ask yourself -* Does this change how any commands are run or the results of running those commands? 
- * *Update documentation specifying those commands* -* Should this be present in (or require an update to) one of the walkthroughs? - * Hellonode - * K8s101 / k8s201 - * Thorough Walkthrough -* Should this have an overview / dedicated [glossary](http://kubernetes.io/docs/user-guide/images/) section? - * *Yes for new APIs and kubectl commands* -* Should an existing overview / [glossary](http://kubernetes.io/docs/user-guide/images/) section be updated these changes? - * *Yes for updates to existing APIs and kubectl commands* -* Should [cluster setup / management](http://kubernetes.io/docs/admin/cluster-management/) guides be updated (which)? Does this impact all or just some clusters? -* Should [cluster / application debug](https://github.com/kubernetes/kubernetes/wiki/Services-FAQ) guides be updated? -* Should any [tool](http://kubernetes.io/docs/user-guide/kubectl-overview/) guides be updated (kubectl, dashboard)? -* Are there any downstream effects / Does this replace another methodology? (PetSet -> PVC, Deployment -> ReplicationController) - *Which docs for those need to be updated*? - * Update tutorials to use new style - * Update examples to use new style - * Update how tos to use new style - * Promote new content over old content that it replaces +This document describes things to consider when updating Kubernetes docs for new features or changes to existing features (including removing features). +## Who should read this doc? +Anyone making user facing changes to kubernetes. This is especially important for Api changes or anything impacting the getting started experience. + +## What docs changes are needed when adding or updating a feature in kubernetes? + +### When making Api changes +*e.g. 
adding Deployments* +* Always make sure docs for downstream effects are updated *(PetSet -> PVC, Deployment -> ReplicationController)* +* Add or update the corresponding *[Glossary](http://kubernetes.io/docs/reference/)* item +* Verify the guides / walkthroughs do not require any changes: + * **If your change will be recommended over the approaches shown in these guides, then they must be updated to reflect your change** + * [Hello Node](http://kubernetes.io/docs/hellonode/) + * [K8s101](http://kubernetes.io/docs/user-guide/walkthrough/) + * [K8S201](http://kubernetes.io/docs/user-guide/walkthrough/k8s201/) + * [Guest-book](https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/guestbook/) + * [Thorough-walkthrough](http://kubernetes.io/docs/user-guide/) +* Verify the [landing page examples](http://kubernetes.io/docs/samples/) do not require any changes (those under "Recently updated samples") + * **If your change will be recommended over the approaches shown in the "Updated" examples, then they must be updated to reflect your change** + * If you are aware that your change will be recommended over the approaches shown in non-"Updated" examples, create an Issue +* Verify the collection of docs under the "Guides" section do not require updates (may need to use grep for this until are docs are more organized) + +### When making Tools changes +*e.g. 
updating kube-dash or kubectl* +* If changing kubectl, verify the guides / walkthroughs do not require any changes: + * **If your change will be recommended over the approaches shown in these guides, then they must be updated to reflect your change** + * [Hello Node](http://kubernetes.io/docs/hellonode/) + * [K8s101](http://kubernetes.io/docs/user-guide/walkthrough/) + * [K8S201](http://kubernetes.io/docs/user-guide/walkthrough/k8s201/) + * [Guest-book](https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/guestbook/) + * [Thorough-walkthrough](http://kubernetes.io/docs/user-guide/) +* If updating an existing tool + * Search for any docs about the tool and update them +* If adding a new tool for end users + * Add a new page under [Guides](http://kubernetes.io/docs/) +* **If removing a tool (kube-ui), make sure documentation that references it is updated appropriately!** + +### When making cluster setup changes +*e.g. adding Multi-AZ support* +* Update the relevant [Administering Clusters](http://kubernetes.io/docs/) pages + +### When making Kubernetes binary changes +*e.g. adding a flag, changing Pod GC behavior, etc* +* Add or update a page under [Configuring Kubernetes](http://kubernetes.io/docs/) + +## Where do the docs live? +1. Most external user facing docs live in the [kubernetes/docs](https://github.com/kubernetes/kubernetes.github.io) repo + * Also see the *[general instructions](http://kubernetes.io/editdocs/)* for making changes to the docs website +2. Internal design and development docs live in the [kubernetes/kubernetes](https://github.com/kubernetes/kubernetes) repo + +## Who should help review docs changes? 
+* cc *@kubernetes/docs* +* Changes to [kubernetes/docs](https://github.com/kubernetes/kubernetes.github.io) repo must have both a Technical Review and a Docs Review + +## Tips for writing new docs +* Try to keep new docs small and focused +* Document pre-requisites (if they exist) +* Document what concepts will be covered in the document +* Include screen shots or pictures in documents for GUIs +* *TODO once we have a standard widget set we are happy with* - include diagrams to help describe complex ideas (not required yet) -- cgit v1.2.3 From f1eeaef7d9826371e8298095eceffb1bd8a43b6e Mon Sep 17 00:00:00 2001 From: Phillip Wittrock Date: Mon, 9 May 2016 23:20:08 +0000 Subject: Address PR comments --- updating-docs-for-feature-changes.md | 46 ++++++++++++++++++++++++++++++++++-- 1 file changed, 44 insertions(+), 2 deletions(-) diff --git a/updating-docs-for-feature-changes.md b/updating-docs-for-feature-changes.md index 3db10a64..f0f3197d 100644 --- a/updating-docs-for-feature-changes.md +++ b/updating-docs-for-feature-changes.md @@ -1,13 +1,44 @@ + + + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

PLEASE NOTE: This document applies to the HEAD of the source tree

+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). +
+-- + + + + + # How to update docs for new kubernetes features This document describes things to consider when updating Kubernetes docs for new features or changes to existing features (including removing features). ## Who should read this doc? + Anyone making user facing changes to kubernetes. This is especially important for Api changes or anything impacting the getting started experience. ## What docs changes are needed when adding or updating a feature in kubernetes? ### When making Api changes + *e.g. adding Deployments* * Always make sure docs for downstream effects are updated *(PetSet -> PVC, Deployment -> ReplicationController)* * Add or update the corresponding *[Glossary](http://kubernetes.io/docs/reference/)* item @@ -16,7 +47,7 @@ Anyone making user facing changes to kubernetes. This is especially important f * [Hello Node](http://kubernetes.io/docs/hellonode/) * [K8s101](http://kubernetes.io/docs/user-guide/walkthrough/) * [K8S201](http://kubernetes.io/docs/user-guide/walkthrough/k8s201/) - * [Guest-book](https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/guestbook/) + * [Guest-book](https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/guestbook) * [Thorough-walkthrough](http://kubernetes.io/docs/user-guide/) * Verify the [landing page examples](http://kubernetes.io/docs/samples/) do not require any changes (those under "Recently updated samples") * **If your change will be recommended over the approaches shown in the "Updated" examples, then they must be updated to reflect your change** @@ -24,13 +55,14 @@ Anyone making user facing changes to kubernetes. This is especially important f * Verify the collection of docs under the "Guides" section do not require updates (may need to use grep for this until are docs are more organized) ### When making Tools changes + *e.g. 
updating kube-dash or kubectl* * If changing kubectl, verify the guides / walkthroughs do not require any changes: * **If your change will be recommended over the approaches shown in these guides, then they must be updated to reflect your change** * [Hello Node](http://kubernetes.io/docs/hellonode/) * [K8s101](http://kubernetes.io/docs/user-guide/walkthrough/) * [K8S201](http://kubernetes.io/docs/user-guide/walkthrough/k8s201/) - * [Guest-book](https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/guestbook/) + * [Guest-book](https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/guestbook) * [Thorough-walkthrough](http://kubernetes.io/docs/user-guide/) * If updating an existing tool * Search for any docs about the tool and update them @@ -39,25 +71,35 @@ Anyone making user facing changes to kubernetes. This is especially important f * **If removing a tool (kube-ui), make sure documentation that references it is updated appropriately!** ### When making cluster setup changes + *e.g. adding Multi-AZ support* * Update the relevant [Administering Clusters](http://kubernetes.io/docs/) pages ### When making Kubernetes binary changes + *e.g. adding a flag, changing Pod GC behavior, etc* * Add or update a page under [Configuring Kubernetes](http://kubernetes.io/docs/) ## Where do the docs live? + 1. Most external user facing docs live in the [kubernetes/docs](https://github.com/kubernetes/kubernetes.github.io) repo * Also see the *[general instructions](http://kubernetes.io/editdocs/)* for making changes to the docs website 2. Internal design and development docs live in the [kubernetes/kubernetes](https://github.com/kubernetes/kubernetes) repo ## Who should help review docs changes? 
+ * cc *@kubernetes/docs* * Changes to [kubernetes/docs](https://github.com/kubernetes/kubernetes.github.io) repo must have both a Technical Review and a Docs Review ## Tips for writing new docs + * Try to keep new docs small and focused * Document pre-requisites (if they exist) * Document what concepts will be covered in the document * Include screen shots or pictures in documents for GUIs * *TODO once we have a standard widget set we are happy with* - include diagrams to help describe complex ideas (not required yet) + + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/updating-docs-for-feature-changes.md?pixel)]() + -- cgit v1.2.3 From 6ce7973d9bd77c7423da30344f43d039513c004a Mon Sep 17 00:00:00 2001 From: Mike Brown Date: Wed, 4 May 2016 14:52:32 -0500 Subject: devel/ tree 80col updates; and other minor edits Signed-off-by: Mike Brown --- issues.md | 64 +++++++++---- kubectl-conventions.md | 235 +++++++++++++++++++++++++++++++++++++---------- kubemark-guide.md | 198 +++++++++++++++++++++++++-------------- logging.md | 14 ++- making-release-notes.md | 18 +++- mesos-style.md | 240 ++++++++++++++++++++++++++++++++---------------- 6 files changed, 544 insertions(+), 225 deletions(-) diff --git a/issues.md b/issues.md index ed541adc..1a068faa 100644 --- a/issues.md +++ b/issues.md @@ -31,34 +31,62 @@ Documentation for other releases can be found at -GitHub Issues for the Kubernetes Project -======================================== -A quick overview of how we will review and prioritize incoming issues at https://github.com/kubernetes/kubernetes/issues +## GitHub Issues for the Kubernetes Project -Priorities ----------- +A quick overview of how we will review and prioritize incoming issues at +https://github.com/kubernetes/kubernetes/issues -We use GitHub issue labels for prioritization. The absence of a -priority label means the bug has not been reviewed and prioritized -yet. 
+### Priorities -We try to apply these priority labels consistently across the entire project, but if you notice an issue that you believe to be misprioritized, please do let us know and we will evaluate your counter-proposal. +We use GitHub issue labels for prioritization. The absence of a priority label +means the bug has not been reviewed and prioritized yet. -- **priority/P0**: Must be actively worked on as someone's top priority right now. Stuff is burning. If it's not being actively worked on, someone is expected to drop what they're doing immediately to work on it. TL's of teams are responsible for making sure that all P0's in their area are being actively worked on. Examples include user-visible bugs in core features, broken builds or tests and critical security issues. -- **priority/P1**: Must be staffed and worked on either currently, or very soon, ideally in time for the next release. -- **priority/P2**: There appears to be general agreement that this would be good to have, but we don't have anyone available to work on it right now or in the immediate future. Community contributions would be most welcome in the mean time (although it might take a while to get them reviewed if reviewers are fully occupied with higher priority issues, for example immediately before a release). -- **priority/P3**: Possibly useful, but not yet enough support to actually get it done. These are mostly place-holders for potentially good ideas, so that they don't get completely forgotten, and can be referenced/deduped every time they come up. +We try to apply these priority labels consistently across the entire project, +but if you notice an issue that you believe to be incorrectly prioritized, +please do let us know and we will evaluate your counter-proposal. -Milestones ----------- +- **priority/P0**: Must be actively worked on as someone's top priority right +now. Stuff is burning. 
If it's not being actively worked on, someone is expected +to drop what they're doing immediately to work on it. Team leaders are +responsible for making sure that all P0's in their area are being actively +worked on. Examples include user-visible bugs in core features, broken builds or +tests and critical security issues. -We additionally use milestones, based on minor version, for determining if a bug should be fixed for the next release. These milestones will be especially scrutinized as we get to the weeks just before a release. We can release a new version of Kubernetes once they are empty. We will have two milestones per minor release. +- **priority/P1**: Must be staffed and worked on either currently, or very soon, +ideally in time for the next release. + +- **priority/P2**: There appears to be general agreement that this would be good +to have, but we may not have anyone available to work on it right now or in the +immediate future. Community contributions would be most welcome in the mean time +(although it might take a while to get them reviewed if reviewers are fully +occupied with higher priority issues, for example immediately before a release). + +- **priority/P3**: Possibly useful, but not yet enough support to actually get +it done. These are mostly place-holders for potentially good ideas, so that they +don't get completely forgotten, and can be referenced/deduped every time they +come up. + +### Milestones + +We additionally use milestones, based on minor version, for determining if a bug +should be fixed for the next release. These milestones will be especially +scrutinized as we get to the weeks just before a release. We can release a new +version of Kubernetes once they are empty. We will have two milestones per minor +release. - **vX.Y**: The list of bugs that will be merged for that milestone once ready. -- **vX.Y-candidate**: The list of bug that we might merge for that milestone. 
A bug shouldn't be in this milestone for moe than a day or two towards the end of a milestone. It should be triaged either into vX.Y, or moved out of the release milestones.
-The above priority scheme still applies, so P0 and P1 bugs are work we feel must get done before release, while P2 and P3 represent work we would merge into the release if it gets done, but we wouldn't block the release on it. A few days before release, we will probably move all P2 and P3 bugs out of that milestone tag in bulk.
+- **vX.Y-candidate**: The list of bugs that we might merge for that milestone. A
+bug shouldn't be in this milestone for more than a day or two towards the end of
+a milestone. It should be triaged either into vX.Y, or moved out of the release
+milestones.
+
+The above priority scheme still applies. P0 and P1 issues are work we feel must
+get done before release. P2 and P3 issues are work we would merge into the
+release if it gets done, but we wouldn't block the release on it. A few days
+before release, we will probably move all P2 and P3 bugs out of that milestone
+in bulk.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/issues.md?pixel)]() diff --git a/kubectl-conventions.md b/kubectl-conventions.md index 9b9db4b6..2833ed37 100644 --- a/kubectl-conventions.md +++ b/kubectl-conventions.md @@ -32,14 +32,14 @@ Documentation for other releases can be found at -Kubectl Conventions -=================== +# Kubectl Conventions Updated: 8/27/2015 **Table of Contents** +- [Kubectl Conventions](#kubectl-conventions) - [Principles](#principles) - [Command conventions](#command-conventions) - [Create commands](#create-commands) @@ -54,45 +54,89 @@ Updated: 8/27/2015 ## Principles * Strive for consistency across commands + * Explicit should always override implicit + * Environment variables should override default values + * Command-line flags should override default values and environment variables - * `--namespace` should also override the value specified in a specified resource + + * `--namespace` should also override the value specified in a specified +resource ## Command conventions * Command names are all lowercase, and hyphenated if multiple words. + * kubectl VERB NOUNs for commands that apply to multiple resource types. + * Command itself should not have built-in aliases. -* NOUNs may be specified as `TYPE name1 name2` or `TYPE/name1 TYPE/name2` or `TYPE1,TYPE2,TYPE3/name1`; TYPE is omitted when only a single type is expected. -* Resource types are all lowercase, with no hyphens; both singular and plural forms are accepted. -* NOUNs may also be specified by one or more file arguments: `-f file1 -f file2 ...` + +* NOUNs may be specified as `TYPE name1 name2` or `TYPE/name1 TYPE/name2` or +`TYPE1,TYPE2,TYPE3/name1`; TYPE is omitted when only a single type is expected. + +* Resource types are all lowercase, with no hyphens; both singular and plural +forms are accepted. + +* NOUNs may also be specified by one or more file arguments: `-f file1 -f file2 +...` + * Resource types may have 2- or 3-letter aliases. 
-* Business logic should be decoupled from the command framework, so that it can be reused independently of kubectl, cobra, etc. - * Ideally, commonly needed functionality would be implemented server-side in order to avoid problems typical of "fat" clients and to make it readily available to non-Go clients. -* Commands that generate resources, such as `run` or `expose`, should obey specific conventions, see [generators](#generators). -* A command group (e.g., `kubectl config`) may be used to group related non-standard commands, such as custom generators, mutations, and computations. +* Business logic should be decoupled from the command framework, so that it can +be reused independently of kubectl, cobra, etc. + * Ideally, commonly needed functionality would be implemented server-side in +order to avoid problems typical of "fat" clients and to make it readily +available to non-Go clients. -### Create commands +* Commands that generate resources, such as `run` or `expose`, should obey +specific conventions, see [generators](#generators). -`kubectl create ` commands fill the gap between "I want to try Kubernetes, but I don't know or care what gets created" (`kubectl run`) and "I want to create exactly this" (author yaml and run `kubectl create -f`). -They provide an easy way to create a valid object without having to know the vagaries of particular kinds, nested fields, and object key typos that are ignored by the yaml/json parser. -Because editing an already created object is easier than authoring one from scratch, these commands only need to have enough parameters to create a valid object and set common immutable fields. It should default as much as is reasonably possible. -Once that valid object is created, it can be further manipulated using `kubectl edit` or the eventual `kubectl set` commands. +* A command group (e.g., `kubectl config`) may be used to group related +non-standard commands, such as custom generators, mutations, and computations. 
+ + +### Create commands -`kubectl create ` commands help in cases where you need to perform non-trivial configuration generation/transformation tailored for a common use case. -`kubectl create secret` is a good example, there's a `generic` flavor with keys mapping to files, then there's a `docker-registry` flavor that is tailored for creating an image pull secret, -and there's a `tls` flavor for creating tls secrets. You create these as separate commands to get distinct flags and separate help that is tailored for the particular usage. +`kubectl create ` commands fill the gap between "I want to try +Kubernetes, but I don't know or care what gets created" (`kubectl run`) and "I +want to create exactly this" (author yaml and run `kubectl create -f`). They +provide an easy way to create a valid object without having to know the vagaries +of particular kinds, nested fields, and object key typos that are ignored by the +yaml/json parser. Because editing an already created object is easier than +authoring one from scratch, these commands only need to have enough parameters +to create a valid object and set common immutable fields. It should default as +much as is reasonably possible. Once that valid object is created, it can be +further manipulated using `kubectl edit` or the eventual `kubectl set` commands. + +`kubectl create ` commands help in cases where you need +to perform non-trivial configuration generation/transformation tailored for a +common use case. `kubectl create secret` is a good example, there's a `generic` +flavor with keys mapping to files, then there's a `docker-registry` flavor that +is tailored for creating an image pull secret, and there's a `tls` flavor for +creating tls secrets. You create these as separate commands to get distinct +flags and separate help that is tailored for the particular usage. 
## Flag conventions * Flags are all lowercase, with words separated by hyphens -* Flag names and single-character aliases should have the same meaning across all commands -* Command-line flags corresponding to API fields should accept API enums exactly (e.g., `--restart=Always`) -* Do not reuse flags for different semantic purposes, and do not use different flag names for the same semantic purpose -- grep for `"Flags()"` before adding a new flag -* Use short flags sparingly, only for the most frequently used options, prefer lowercase over uppercase for the most common cases, try to stick to well known conventions for UNIX commands and/or Docker, where they exist, and update this list when adding new short flags + +* Flag names and single-character aliases should have the same meaning across +all commands + +* Command-line flags corresponding to API fields should accept API enums +exactly (e.g., `--restart=Always`) + +* Do not reuse flags for different semantic purposes, and do not use different +flag names for the same semantic purpose -- grep for `"Flags()"` before adding a +new flag + +* Use short flags sparingly, only for the most frequently used options, prefer +lowercase over uppercase for the most common cases, try to stick to well known +conventions for UNIX commands and/or Docker, where they exist, and update this +list when adding new short flags + * `-f`: Resource file * also used for `--follow` in `logs`, but should be deprecated in favor of `-F` * `-l`: Label selector @@ -111,51 +155,116 @@ and there's a `tls` flavor for creating tls secrets. You create these as separa * `-r`: Replicas * `-u`: Unix socket * `-v`: Verbose logging level -* `--dry-run`: Don't modify the live state; simulate the mutation and display the output. All mutations should support it. 
-* `--local`: Don't contact the server; just do local read, transformation, generation, etc., and display the output + + +* `--dry-run`: Don't modify the live state; simulate the mutation and display +the output. All mutations should support it. + +* `--local`: Don't contact the server; just do local read, transformation, +generation, etc., and display the output + * `--output-version=...`: Convert the output to a different API group/version + * `--validate`: Validate the resource schema ## Output conventions * By default, output is intended for humans rather than programs * However, affordances are made for simple parsing of `get` output + * Only errors should be directed to stderr + * `get` commands should output one row per resource, and one resource per row - * Column titles and values should not contain spaces in order to facilitate commands that break lines into fields: cut, awk, etc. Instead, use `-` as the word separator. + + * Column titles and values should not contain spaces in order to facilitate +commands that break lines into fields: cut, awk, etc. Instead, use `-` as the +word separator. 
+ * By default, `get` output should fit within about 80 columns + * Eventually we could perhaps auto-detect width * `-o wide` may be used to display additional columns - * The first column should be the resource name, titled `NAME` (may change this to an abbreviation of resource type) - * NAMESPACE should be displayed as the first column when --all-namespaces is specified + + + * The first column should be the resource name, titled `NAME` (may change this +to an abbreviation of resource type) + + * NAMESPACE should be displayed as the first column when --all-namespaces is +specified + * The last default column should be time since creation, titled `AGE` - * `-Lkey` should append a column containing the value of label with key `key`, with `` if not present - * json, yaml, Go template, and jsonpath template formats should be supported and encouraged for subsequent processing - * Users should use --api-version or --output-version to ensure the output uses the version they expect -* `describe` commands may output on multiple lines and may include information from related resources, such as events. Describe should add additional information from related resources that a normal user may need to know - if a user would always run "describe resource1" and the immediately want to run a "get type2" or "describe resource2", consider including that info. Examples, persistent volume claims for pods that reference claims, events for most resources, nodes and the pods scheduled on them. When fetching related resources, a targeted field selector should be used in favor of client side filtering of related resources. -* For fields that can be explicitly unset (booleans, integers, structs), the output should say ``. Likewise, for arrays `` should be used. Lastly `` should be used where unrecognized field type was specified. 
-* Mutations should output TYPE/name verbed by default, where TYPE is singular; `-o name` may be used to just display TYPE/name, which may be used to specify resources in other commands
+
+ * `-Lkey` should append a column containing the value of label with key `key`,
+with `` if not present
+
+ * json, yaml, Go template, and jsonpath template formats should be supported
+and encouraged for subsequent processing
+
+ * Users should use --api-version or --output-version to ensure the output
+uses the version they expect
+
+
+* `describe` commands may output on multiple lines and may include information
+from related resources, such as events. Describe should add additional
+information from related resources that a normal user may need to know - if a
+user would always run "describe resource1" and then immediately want to run a
+"get type2" or "describe resource2", consider including that info. Examples:
+persistent volume claims for pods that reference claims, events for most
+resources, nodes and the pods scheduled on them. When fetching related
+resources, a targeted field selector should be used in favor of client side
+filtering of related resources.
+
+* For fields that can be explicitly unset (booleans, integers, structs), the
+output should say ``. Likewise, for arrays `` should be used.
+Lastly `` should be used where an unrecognized field type was specified.
+
+* Mutations should output TYPE/name verbed by default, where TYPE is singular;
+`-o name` may be used to just display TYPE/name, which may be used to specify
+resources in other commands

## Documentation conventions

-* Commands are documented using Cobra; docs are then auto-generated by `hack/update-generated-docs.sh`.
- * Use should contain a short usage string for the most common use case(s), not an exhaustive specification
+* Commands are documented using Cobra; docs are then auto-generated by
+`hack/update-generated-docs.sh`.
+ + * Use should contain a short usage string for the most common use case(s), not +an exhaustive specification + * Short should contain a one-line explanation of what the command does - * Long may contain multiple lines, including additional information about input, output, commonly used flags, etc. + + * Long may contain multiple lines, including additional information about +input, output, commonly used flags, etc. + * Example should contain examples * Start commands with `$` * A comment should precede each example command, and should begin with `#` + + * Use "FILENAME" for filenames -* Use "TYPE" for the particular flavor of resource type accepted by kubectl, rather than "RESOURCE" or "KIND" + +* Use "TYPE" for the particular flavor of resource type accepted by kubectl, +rather than "RESOURCE" or "KIND" + * Use "NAME" for resource names ## Command implementation conventions -For every command there should be a `NewCmd` function that creates the command and returns a pointer to a `cobra.Command`, which can later be added to other parent commands to compose the structure tree. There should also be a `Config` struct with a variable to every flag and argument declared by the command (and any other variable required for the command to run). This makes tests and mocking easier. The struct ideally exposes three methods: +For every command there should be a `NewCmd` function that creates +the command and returns a pointer to a `cobra.Command`, which can later be added +to other parent commands to compose the structure tree. There should also be a +`Config` struct with a variable to every flag and argument declared +by the command (and any other variable required for the command to run). This +makes tests and mocking easier. The struct ideally exposes three methods: + +* `Complete`: Completes the struct fields with values that may or may not be +directly provided by the user, for example, by flags pointers, by the `args` +slice, by using the Factory, etc. 
-* `Complete`: Completes the struct fields with values that may or may not be directly provided by the user, for example, by flags pointers, by the `args` slice, by using the Factory, etc. -* `Validate`: performs validation on the struct fields and returns appropriate errors. -* `Run`: runs the actual logic of the command, taking as assumption that the struct is complete with all required values to run, and they are valid. +* `Validate`: performs validation on the struct fields and returns appropriate +errors. + +* `Run`: runs the actual logic of the command, taking as assumption +that the struct is complete with all required values to run, and they are valid. Sample command skeleton: @@ -221,19 +330,41 @@ func (o MineConfig) RunMine() error { } ``` -The `Run` method should contain the business logic of the command and as noted in [command conventions](#command-conventions), ideally that logic should exist server-side so any client could take advantage of it. Notice that this is not a mandatory structure and not every command is implemented this way, but this is a nice convention so try to be compliant with it. As an example, have a look at how [kubectl logs](../../pkg/kubectl/cmd/logs.go) is implemented. +The `Run` method should contain the business logic of the command +and as noted in [command conventions](#command-conventions), ideally that logic +should exist server-side so any client could take advantage of it. Notice that +this is not a mandatory structure and not every command is implemented this way, +but this is a nice convention so try to be compliant with it. As an example, +have a look at how [kubectl logs](../../pkg/kubectl/cmd/logs.go) is implemented. ## Generators -Generators are kubectl commands that generate resources based on a set of inputs (other resources, flags, or a combination of both). +Generators are kubectl commands that generate resources based on a set of inputs +(other resources, flags, or a combination of both). 
The point of generators is:

-* to enable users using kubectl in a scripted fashion to pin to a particular behavior which may change in the future. Explicit use of a generator will always guarantee that the expected behavior stays the same.
-* to enable potential expansion of the generated resources for scenarios other than just creation, similar to how -f is supported for most general-purpose commands.
+
+* to enable users using kubectl in a scripted fashion to pin to a particular
+behavior which may change in the future. Explicit use of a generator will always
+guarantee that the expected behavior stays the same.
+
+* to enable potential expansion of the generated resources for scenarios other
+than just creation, similar to how -f is supported for most general-purpose
+commands.

Generator commands should obey the following conventions:

-* A `--generator` flag should be defined. Users then can choose between different generators, if the command supports them (for example, `kubectl run` currently supports generators for pods, jobs, replication controllers, and deployments), or between different versions of a generator so that users depending on a specific behavior may pin to that version (for example, `kubectl expose` currently supports two different versions of a service generator).
-* Generation should be decoupled from creation. A generator should implement the `kubectl.StructuredGenerator` interface and have no dependencies on cobra or the Factory. See, for example, how the first version of the namespace generator is defined:
+
+* A `--generator` flag should be defined. 
Users then can choose between +different generators, if the command supports them (for example, `kubectl run` +currently supports generators for pods, jobs, replication controllers, and +deployments), or between different versions of a generator so that users +depending on a specific behavior may pin to that version (for example, `kubectl +expose` currently supports two different versions of a service generator). + +* Generation should be decoupled from creation. A generator should implement the +`kubectl.StructuredGenerator` interface and have no dependencies on cobra or the +Factory. See, for example, how the first version of the namespace generator is +defined: ```go // NamespaceGeneratorV1 supports stable generation of a namespace @@ -264,8 +395,14 @@ func (g *NamespaceGeneratorV1) validate() error { } ``` -The generator struct (`NamespaceGeneratorV1`) holds the necessary fields for namespace generation. It also satisfies the `kubectl.StructuredGenerator` interface by implementing the `StructuredGenerate() (runtime.Object, error)` method which configures the generated namespace that callers of the generator (`kubectl create namespace` in our case) need to create. -* `--dry-run` should output the resource that would be created, without creating it. +The generator struct (`NamespaceGeneratorV1`) holds the necessary fields for +namespace generation. It also satisfies the `kubectl.StructuredGenerator` +interface by implementing the `StructuredGenerate() (runtime.Object, error)` +method which configures the generated namespace that callers of the generator +(`kubectl create namespace` in our case) need to create. + +* `--dry-run` should output the resource that would be created, without +creating it. 
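The decoupling convention above can be illustrated with a self-contained sketch. The types below are stand-ins: the real `kubectl.StructuredGenerator` returns a `runtime.Object` and lives in the kubectl package, so this only shows the shape of the pattern, not the actual kubectl code.

```go
package main

import (
	"errors"
	"fmt"
)

// Namespace is a stand-in for the real API type; kubectl's interface
// returns a runtime.Object instead.
type Namespace struct {
	Name string
}

// StructuredGenerator mirrors the shape of the interface described above
// (simplified return type for this sketch).
type StructuredGenerator interface {
	StructuredGenerate() (interface{}, error)
}

// NamespaceGeneratorV1 holds the fields needed to generate a namespace.
type NamespaceGeneratorV1 struct {
	Name string
}

// StructuredGenerate validates the inputs and returns the generated object.
// Note: no dependency on cobra or the Factory.
func (g *NamespaceGeneratorV1) StructuredGenerate() (interface{}, error) {
	if err := g.validate(); err != nil {
		return nil, err
	}
	return &Namespace{Name: g.Name}, nil
}

func (g *NamespaceGeneratorV1) validate() error {
	if len(g.Name) == 0 {
		return errors.New("name must be specified")
	}
	return nil
}

func main() {
	// A caller such as `kubectl create namespace` would hand the returned
	// object to an API client for creation; here we just print its name.
	var g StructuredGenerator = &NamespaceGeneratorV1{Name: "demo"}
	obj, err := g.StructuredGenerate()
	if err != nil {
		panic(err)
	}
	fmt.Println(obj.(*Namespace).Name)
}
```

Because the generator depends on neither cobra nor the Factory, the same `StructuredGenerate` call can back the create command, tests, or any other caller.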
diff --git a/kubemark-guide.md b/kubemark-guide.md index e5c8fdc4..3f93cd36 100644 --- a/kubemark-guide.md +++ b/kubemark-guide.md @@ -36,27 +36,37 @@ Documentation for other releases can be found at ## Introduction -Kubemark is a performance testing tool which allows users to run experiments on simulated clusters. The primary use case is scalability testing, as simulated clusters can be -much bigger than the real ones. The objective is to expose problems with the master components (API server, controller manager or scheduler) that appear only on bigger -clusters (e.g. small memory leaks). +Kubemark is a performance testing tool which allows users to run experiments on +simulated clusters. The primary use case is scalability testing, as simulated +clusters can be much bigger than the real ones. The objective is to expose +problems with the master components (API server, controller manager or +scheduler) that appear only on bigger clusters (e.g. small memory leaks). -This document serves as a primer to understand what Kubemark is, what it is not, and how to use it. +This document serves as a primer to understand what Kubemark is, what it is not, +and how to use it. ## Architecture -On a very high level Kubemark cluster consists of two parts: real master components and a set of “Hollow” Nodes. The prefix “Hollow” means an implementation/instantiation of a -component with all “moving” parts mocked out. The best example is HollowKubelet, which pretends to be an ordinary Kubelet, but does not start anything, nor mount any volumes - -it just lies it does. More detailed design and implementation details are at the end of this document. +On a very high level Kubemark cluster consists of two parts: real master +components and a set of “Hollow” Nodes. The prefix “Hollow” means an +implementation/instantiation of a component with all “moving” parts mocked out. 
+The best example is HollowKubelet, which pretends to be an ordinary Kubelet, but
+does not start anything, nor mount any volumes - it just lies that it does. More
+detailed design and implementation details are at the end of this document.

-Currently master components run on a dedicated machine(s), and HollowNodes run on an ‘external’ Kubernetes cluster. This design has a slight advantage, over running master
-components on external cluster, of completely isolating master resources from everything else.
+Currently master components run on dedicated machine(s), and HollowNodes run
+on an ‘external’ Kubernetes cluster. This design has a slight advantage over
+running master components on the external cluster: it completely isolates
+master resources from everything else.

## Requirements

-To run Kubemark you need a Kubernetes cluster for running all your HollowNodes and a dedicated machine for a master. Master machine has to be directly routable from
-HollowNodes. You also need an access to some Docker repository.
+To run Kubemark you need a Kubernetes cluster for running all your HollowNodes
+and a dedicated machine for a master. The master machine has to be directly
+routable from HollowNodes. You also need access to some Docker repository.

-Currently scripts are written to be easily usable by GCE, but it should be relatively straightforward to port them to different providers or bare metal.
+Currently the scripts are written to be easily usable on GCE, but it should be
+relatively straightforward to port them to different providers or bare metal.

## Common use cases and helper scripts

@@ -66,71 +76,116 @@ Common workflow for Kubemark is:
 - monitoring test execution and debugging problems
 - turning down Kubemark cluster

-Included in descrptions there will be comments helpful for anyone who’ll want to port Kubemark to different providers.
+The descriptions include comments helpful for anyone who’ll want to port
+Kubemark to different providers.
### Starting a Kubemark cluster -To start a Kubemark cluster on GCE you need to create an external cluster (it can be GCE, GKE or any other cluster) by yourself, build a kubernetes release (e.g. by running -`make quick-release`) and run `test/kubemark/start-kubemark.sh` script. This script will create a VM for master components, Pods for HollowNodes and do all the setup necessary -to let them talk to each other. It will use the configuration stored in `cluster/kubemark/config-default.sh` - you can tweak it however you want, but note that some features -may not be implemented yet, as implementation of Hollow components/mocks will probably be lagging behind ‘real’ one. For performance tests interesting variables are -`NUM_NODES` and `MASTER_SIZE`. After start-kubemark script is finished you’ll have a ready Kubemark cluster, a kubeconfig file for talking to the Kubemark -cluster is stored in `test/kubemark/kubeconfig.loc`. - -Currently we're running HollowNode with limit of 0.05 a CPU core and ~60MB or memory, which taking into account default cluster addons and fluentD running on an 'external' -cluster, allows running ~17.5 HollowNodes per core. +To start a Kubemark cluster on GCE you need to create an external cluster (it +can be GCE, GKE or any other cluster) by yourself, build a kubernetes release +(e.g. by running `make quick-release`) and run `test/kubemark/start-kubemark.sh` +script. This script will create a VM for master components, Pods for HollowNodes +and do all the setup necessary to let them talk to each other. It will use the +configuration stored in `cluster/kubemark/config-default.sh` - you can tweak it +however you want, but note that some features may not be implemented yet, as +implementation of Hollow components/mocks will probably be lagging behind ‘real’ +one. For performance tests interesting variables are `NUM_NODES` and +`MASTER_SIZE`. 
After the start-kubemark script is finished you’ll have a ready
+Kubemark cluster; a kubeconfig file for talking to the Kubemark cluster is
+stored in `test/kubemark/kubeconfig.loc`.
+
+Currently we're running HollowNode with a limit of 0.05 of a CPU core and ~60MB
+of memory which, taking into account default cluster addons and fluentD running
+on an 'external' cluster, allows running ~17.5 HollowNodes per core.

#### Behind the scenes details:

The start-kubemark script does quite a lot of things:

-- Creates a master machine called hollow-cluster-master and PD for it (*uses gcloud, should be easy to do outside of GCE*)
-- Creates a firewall rule which opens port 443\* on the master machine (*uses gcloud, should be easy to do outside of GCE*)
-- Builds a Docker image for HollowNode from the current repository and pushes it to the Docker repository (*GCR for us, using scripts from `cluster/gce/util.sh` - it may get
-tricky outside of GCE*)
-- Generates certificates and kubeconfig files, writes a kubeconfig locally to `test/kubemark/kubeconfig.loc` and creates a Secret which stores kubeconfig for HollowKubelet/
-HollowProxy use (*used gcloud to transfer files to Master, should be easy to do outside of GCE*).
-- Creates a ReplicationController for HollowNodes and starts them up. (*will work exactly the same everywhere as long as MASTER_IP will be populated correctly, but you’ll need
-to update docker image address if you’re not using GCR and default image name*)
-- Waits until all HollowNodes are in the Running phase (*will work exactly the same everywhere*)
-
-\* Port 443 is a secured port on the master machine which is used for all external communication with the API server. In the last sentence *external* means all traffic
-coming from other machines, including all the Nodes, not only from outside of the cluster. Currently local components, i.e. ControllerManager and Scheduler talk with API server using insecure port 8080.
-### Running e2e tests on Kubemark cluster +- Creates a master machine called hollow-cluster-master and PD for it (*uses +gcloud, should be easy to do outside of GCE*) -To run standard e2e test on your Kubemark cluster created in the previous step you execute `test/kubemark/run-e2e-tests.sh` script. It will configure ginkgo to -use Kubemark cluster instead of something else and start an e2e test. This script should not need any changes to work on other cloud providers. +- Creates a firewall rule which opens port 443\* on the master machine (*uses +gcloud, should be easy to do outside of GCE*) -By default (if nothing will be passed to it) the script will run a Density '30 test. If you want to run a different e2e test you just need to provide flags you want to be -passed to `hack/ginkgo-e2e.sh` script, e.g. `--ginkgo.focus="Load"` to run the Load test. +- Builds a Docker image for HollowNode from the current repository and pushes it +to the Docker repository (*GCR for us, using scripts from +`cluster/gce/util.sh` - it may get tricky outside of GCE*) -By default, at the end of each test, it will delete namespaces and everything under it (e.g. events, replication controllers) on Kubemark master, which takes a lot of time. -Such work aren't needed in most cases: if you delete your Kubemark cluster after running `run-e2e-tests.sh`; -you don't care about namespace deletion performance, specifically related to etcd; etc. -There is a flag that enables you to avoid namespace deletion: `--delete-namespace=false`. -Adding the flag should let you see in logs: `Found DeleteNamespace=false, skipping namespace deletion!` +- Generates certificates and kubeconfig files, writes a kubeconfig locally to +`test/kubemark/kubeconfig.loc` and creates a Secret which stores kubeconfig for +HollowKubelet/HollowProxy use (*used gcloud to transfer files to Master, should +be easy to do outside of GCE*). 
-### Monitoring test execution and debugging problems +- Creates a ReplicationController for HollowNodes and starts them up. (*will +work exactly the same everywhere as long as MASTER_IP will be populated +correctly, but you’ll need to update docker image address if you’re not using +GCR and default image name*) -Run-e2e-tests prints the same output on Kubemark as on ordinary e2e cluster, but if you need to dig deeper you need to learn how to debug HollowNodes and how Master -machine (currently) differs from the ordinary one. +- Waits until all HollowNodes are in the Running phase (*will work exactly the +same everywhere*) -If you need to debug master machine you can do similar things as you do on your ordinary master. The difference between Kubemark setup and ordinary setup is that in Kubemark -etcd is run as a plain docker container, and all master components are run as normal processes. There’s no Kubelet overseeing them. Logs are stored in exactly the same place, -i.e. `/var/logs/` directory. Because binaries are not supervised by anything they won't be restarted in the case of a crash. +\* Port 443 is a secured port on the master machine which is used for all +external communication with the API server. In the last sentence *external* +means all traffic coming from other machines, including all the Nodes, not only +from outside of the cluster. Currently local components, i.e. ControllerManager +and Scheduler talk with API server using insecure port 8080. -To help you with debugging from inside the cluster startup script puts a `~/configure-kubectl.sh` script on the master. It downloads `gcloud` and `kubectl` tool and configures -kubectl to work on unsecured master port (useful if there are problems with security). After the script is run you can use kubectl command from the master machine to play with -the cluster. 
+### Running e2e tests on Kubemark cluster
-Debugging HollowNodes is a bit more tricky, as if you experience a problem on one of them you need to learn which hollow-node pod corresponds to a given HollowNode known by
-the Master. During self-registeration HollowNodes provide their cluster IPs as Names, which means that if you need to find a HollowNode named `10.2.4.5` you just need to find a
-Pod in external cluster with this cluster IP. There’s a helper script `test/kubemark/get-real-pod-for-hollow-node.sh` that does this for you.
+To run a standard e2e test on your Kubemark cluster created in the previous
+step, execute the `test/kubemark/run-e2e-tests.sh` script. It configures ginkgo
+to use the Kubemark cluster instead of a regular one and starts an e2e test.
+This script should not need any changes to work on other cloud providers.
+
+By default (if nothing is passed to it) the script runs a Density '30 test. If
+you want to run a different e2e test, just provide the flags you want passed to
+the `hack/ginkgo-e2e.sh` script, e.g. `--ginkgo.focus="Load"` to run the Load
+test.
+
+By default, at the end of each test it deletes all namespaces and everything
+under them (e.g. events, replication controllers) on the Kubemark master, which
+takes a lot of time. This work isn't needed in most cases, e.g. if you delete
+your Kubemark cluster right after running `run-e2e-tests.sh`, or if you don't
+care about namespace deletion performance (specifically as related to etcd).
+There is a flag that lets you skip namespace deletion:
+`--delete-namespace=false`. Adding the flag should let you see in the logs:
+`Found DeleteNamespace=false, skipping namespace deletion!`
-When you have a Pod name you can use `kubectl logs` on external cluster to get logs, or use a `kubectl describe pod` call to find an external Node on which this particular
-HollowNode is running so you can ssh to it.
-E.g.
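The flag plumbing described above can be sketched as a small wrapper. This is illustrative only: the variable names are made up, the flags are the ones named in the text, and actually running the command needs a live Kubemark cluster, so the sketch just assembles and prints it.

```sh
# Illustrative sketch: collect the ginkgo flags discussed above and show the
# command that would forward them through run-e2e-tests.sh to hack/ginkgo-e2e.sh.
E2E_FLAGS='--ginkgo.focus=Load --delete-namespace=false'
CMD="test/kubemark/run-e2e-tests.sh $E2E_FLAGS"
echo "would run: $CMD"
```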
you want to see the logs of HollowKubelet on which pod `my-pod` is running. To do so you can execute:
+`run-e2e-tests.sh` prints the same output on Kubemark as on an ordinary e2e
+cluster, but if you need to dig deeper you need to learn how to debug
+HollowNodes and how the Master machine (currently) differs from an ordinary one.
+
+If you need to debug the master machine you can do similar things to what you
+would do on an ordinary master. The difference between a Kubemark setup and an
+ordinary one is that in Kubemark etcd runs as a plain docker container, and all
+master components run as normal processes. There’s no Kubelet overseeing them.
+Logs are stored in exactly the same place, i.e. the `/var/logs/` directory.
+Because the binaries are not supervised by anything, they won't be restarted in
+the case of a crash.
+
+To help you with debugging from inside the cluster, the startup script puts a
+`~/configure-kubectl.sh` script on the master. It downloads the `gcloud` and
+`kubectl` tools and configures kubectl to work on the unsecured master port
+(useful if there are problems with security). After the script has run you can
+use the kubectl command from the master machine to play with the cluster.
+
+Debugging HollowNodes is a bit trickier: if you experience a problem on one of
+them you need to learn which hollow-node pod corresponds to a given HollowNode
+known by the Master. During self-registration HollowNodes provide their cluster
+IPs as Names, which means that if you need to find a HollowNode named `10.2.4.5`
+you just need to find the Pod in the external cluster with this cluster IP.
+There’s a helper script `test/kubemark/get-real-pod-for-hollow-node.sh` that
+does this for you.
+
+When you have a Pod name you can use `kubectl logs` on the external cluster to
+get logs, or use a `kubectl describe pod` call to find the external Node on
+which this particular HollowNode is running so you can ssh to it.
+
+E.g. you want to see the logs of the HollowKubelet on which pod `my-pod` is
+running.
+To do so you can execute:

```
$ kubectl --kubeconfig=kubernetes/test/kubemark/kubeconfig.loc describe pod my-pod
@@ -142,7 +197,8 @@ Which outputs pod description and among it a line:

Node: 1.2.3.4/1.2.3.4
```

-To learn the `hollow-node` pod corresponding to node `1.2.3.4` you use aforementioned script:
+To learn the `hollow-node` pod corresponding to node `1.2.3.4` you use the
+aforementioned script:

```
$ kubernetes/test/kubemark/get-real-pod-for-hollow-node.sh 1.2.3.4
@@ -164,17 +220,23 @@ All those things should work exactly the same on all cloud providers.

### Turning down Kubemark cluster

-On GCE you just need to execute `test/kubemark/stop-kubemark.sh` script, which will delete HollowNode ReplicationController and all the resources for you. On other providers
-you’ll need to delete all this stuff by yourself.
+On GCE you just need to execute the `test/kubemark/stop-kubemark.sh` script,
+which will delete the HollowNode ReplicationController and all the other
+resources for you. On other providers you’ll need to delete all this stuff
+yourself.

## Some current implementation details

-Kubemark master uses exactly the same binaries as ordinary Kubernetes does. This means that it will never be out of date. On the other hand HollowNodes use existing fake for
-Kubelet (called SimpleKubelet), which mocks its runtime manager with `pkg/kubelet/fake-docker-manager.go`, where most logic sits. Because there’s no easy way of mocking other
-managers (e.g. VolumeManager), they are not supported in Kubemark (e.g. we can’t schedule Pods with volumes in them yet).
-
-As the time passes more fakes will probably be plugged into HollowNodes, but it’s crucial to make it as simple as possible to allow running a big number of Hollows on a single
-core.
+Kubemark master uses exactly the same binaries as ordinary Kubernetes does. This
+means that it will never be out of date.
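The name-equals-cluster-IP lookup that `test/kubemark/get-real-pod-for-hollow-node.sh` performs can be sketched roughly as below. The pod listing is canned sample data standing in for `kubectl get pods -o wide`-style output from the external cluster, so the snippet runs without one.

```sh
# Rough sketch of the lookup: a HollowNode's name is its cluster IP, so find
# the hollow-node pod whose IP column matches that name.
listing='hollow-node-abc12  1/1  Running  0  1h  1.2.3.4
hollow-node-def34  1/1  Running  0  1h  5.6.7.8'
node_name='1.2.3.4'
pod=$(printf '%s\n' "$listing" | awk -v ip="$node_name" '$NF == ip { print $1 }')
echo "$pod"
```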
On the other hand HollowNodes use an
+existing fake Kubelet (called SimpleKubelet), which mocks its runtime manager
+with `pkg/kubelet/fake-docker-manager.go`, where most of the logic sits. Because
+there’s no easy way of mocking the other managers (e.g. the VolumeManager), they
+are not supported in Kubemark (e.g. we can’t schedule Pods with volumes in them
+yet).
+
+As time passes, more fakes will probably be plugged into HollowNodes, but it’s
+crucial to keep them as simple as possible to allow running a big number of
+Hollows on a single core.
diff --git a/logging.md b/logging.md
index e0869980..f0350dca 100644
--- a/logging.md
+++ b/logging.md
@@ -31,13 +31,17 @@ Documentation for other releases can be found at

-Logging Conventions
-===================
-The following conventions for the glog levels to use. [glog](http://godoc.org/github.com/golang/glog) is globally preferred to [log](http://golang.org/pkg/log/) for better runtime control.
+## Logging Conventions
+
+The following are the conventions for which glog levels to use.
+[glog](http://godoc.org/github.com/golang/glog) is globally preferred to
+[log](http://golang.org/pkg/log/) for better runtime control.

* glog.Errorf() - Always an error
+
* glog.Warningf() - Something unexpected, but probably not an error
+
* glog.Infof() has multiple levels:
* glog.V(0) - Generally useful for this to ALWAYS be visible to an operator
* Programmer errors
@@ -56,7 +60,9 @@ The following conventions for the glog levels to use. [glog](http://godoc.org/g
* glog.V(4) - Debug level verbosity (for now)
* Logging in particularly thorny parts of code where you may want to come back later and check it
-As per the comments, the practical default level is V(2). Developers and QE environments may wish to run at V(3) or V(4).
+As per the comments, the practical default level is V(2). Developers and QE
+environments may wish to run at V(3) or V(4).
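The level gate these conventions rely on can be pictured with a tiny shell analogue (illustrative only: glog itself is a Go library, and `logv` and `V` here are made-up names). A message logged at level L is emitted only when L is at or below the configured threshold.

```sh
# Toy analogue of glog's verbosity gate: V plays the role of the -v flag.
V=2
logv() {
  # Emit the message only if its level ($1) is <= the threshold.
  if [ "$1" -le "$V" ]; then echo "V($1): $2"; fi
}
logv 0 "always visible to an operator"
logv 2 "steady state of the system"
logv 4 "debug-level detail (suppressed at V=2)"
```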
If you wish to change the log +level, you can pass in `-v=X` where X is the desired maximum level to log. diff --git a/making-release-notes.md b/making-release-notes.md index 3418258e..01ef369e 100644 --- a/making-release-notes.md +++ b/making-release-notes.md @@ -38,10 +38,14 @@ This documents the process for making release notes for a release. ### 1) Note the PR number of the previous release -Find the most-recent PR that was merged with the previous .0 release. Remember this as $LASTPR. -_TODO_: Figure out a way to record this somewhere to save the next release engineer time. +Find the most-recent PR that was merged with the previous .0 release. Remember +this as $LASTPR. -Find the most-recent PR that was merged with the current .0 release. Remember this as $CURRENTPR. +- _TODO_: Figure out a way to record this somewhere to save the next +release engineer time. + +Find the most-recent PR that was merged with the current .0 release. Remember +this as $CURRENTPR. ### 2) Run the release-notes tool @@ -52,7 +56,7 @@ ${KUBERNETES_ROOT}/build/make-release-notes.sh $LASTPR $CURRENTPR ### 3) Trim the release notes This generates a list of the entire set of PRs merged since the last minor -release. It is likely long and many PRs aren't worth mentioning. If any of the +release. It is likely long and many PRs aren't worth mentioning. If any of the PRs were cherrypicked into patches on the last minor release, you should exclude them from the current release's notes. @@ -67,9 +71,13 @@ With the final markdown all set, cut and paste it to the top of `CHANGELOG.md` ### 5) Update the Release page - * Switch to the [releases](https://github.com/kubernetes/kubernetes/releases) page. + * Switch to the [releases](https://github.com/kubernetes/kubernetes/releases) +page. + * Open up the release you are working on. + * Cut and paste the final markdown from above into the release notes + * Press Save. 
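Steps 1 and 2 of the release-notes process above boil down to something like the following. The PR numbers are placeholders; substitute the ones you actually found.

```sh
# Placeholders only: use the real PR numbers identified in step 1.
LASTPR=12345     # hypothetical: newest PR merged into the previous .0 release
CURRENTPR=23456  # hypothetical: newest PR merged into the current .0 release
echo "would run: \${KUBERNETES_ROOT}/build/make-release-notes.sh $LASTPR $CURRENTPR"
```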
diff --git a/mesos-style.md b/mesos-style.md index 9616dc31..fdf9da08 100644 --- a/mesos-style.md +++ b/mesos-style.md @@ -36,129 +36,207 @@ Documentation for other releases can be found at ## Introduction -We have observed two different cluster management architectures, which can be categorized as "Borg-style" and "Mesos/Omega-style." -(In the remainder of this document, we will abbreviate the latter as "Mesos-style.") -Although out-of-the box Kubernetes uses a Borg-style architecture, it can also be configured in a Mesos-style architecture, -and in fact can support both styles at the same time. This document describes the two approaches and describes how -to deploy a Mesos-style architecture on Kubernetes. +We have observed two different cluster management architectures, which can be +categorized as "Borg-style" and "Mesos/Omega-style." In the remainder of this +document, we will abbreviate the latter as "Mesos-style." Although out-of-the +box Kubernetes uses a Borg-style architecture, it can also be configured in a +Mesos-style architecture, and in fact can support both styles at the same time. +This document describes the two approaches and describes how to deploy a +Mesos-style architecture on Kubernetes. -(As an aside, the converse is also true: one can deploy a Borg/Kubernetes-style architecture on Mesos.) +As an aside, the converse is also true: one can deploy a Borg/Kubernetes-style +architecture on Mesos. -This document is NOT intended to provide a comprehensive comparison of Borg and Mesos. For example, we omit discussion -of the tradeoffs between scheduling with full knowledge of cluster state vs. scheduling using the "offer" model. -(That issue is discussed in some detail in the Omega paper (see references section at the end of this doc).) +This document is NOT intended to provide a comprehensive comparison of Borg and +Mesos. For example, we omit discussion of the tradeoffs between scheduling with +full knowledge of cluster state vs. 
scheduling using the "offer" model. That +issue is discussed in some detail in the Omega paper. +(See [references](#references) below.) ## What is a Borg-style architecture? A Borg-style architecture is characterized by: -* a single logical API endpoint for clients, where some amount of processing is done on requests, such as admission control and applying defaults -* generic (non-application-specific) collection abstractions described declaratively, -* generic controllers/state machines that manage the lifecycle of the collection abstractions and the containers spawned from them + +* a single logical API endpoint for clients, where some amount of processing is +done on requests, such as admission control and applying defaults + +* generic (non-application-specific) collection abstractions described +declaratively, + +* generic controllers/state machines that manage the lifecycle of the collection +abstractions and the containers spawned from them + * a generic scheduler -For example, Borg's primary collection abstraction is a Job, and every application that runs on Borg--whether it's a user-facing -service like the GMail front-end, a batch job like a MapReduce, or an infrastructure service like GFS--must represent itself as -a Job. Borg has corresponding state machine logic for managing Jobs and their instances, and a scheduler that's responsible -for assigning the instances to machines. +For example, Borg's primary collection abstraction is a Job, and every +application that runs on Borg--whether it's a user-facing service like the GMail +front-end, a batch job like a MapReduce, or an infrastructure service like +GFS--must represent itself as a Job. Borg has corresponding state machine logic +for managing Jobs and their instances, and a scheduler that's responsible for +assigning the instances to machines. The flow of a request in Borg is: 1. Client submits a collection object to the Borgmaster API endpoint + 1. Admission control, quota, applying defaults, etc. 
run on the collection -1. If the collection is admitted, it is persisted, and the collection state machine creates the underlying instances -1. The scheduler assigns a hostname to the instance, and tells the Borglet to start the instance's container(s) + +1. If the collection is admitted, it is persisted, and the collection state +machine creates the underlying instances + +1. The scheduler assigns a hostname to the instance, and tells the Borglet to +start the instance's container(s) + 1. Borglet starts the container(s) -1. The instance state machine manages the instances and the collection state machine manages the collection during their lifetimes -Out-of-the-box Kubernetes has *workload-specific* abstractions (ReplicaSet, Job, DaemonSet, etc.) and corresponding controllers, -and in the future may have [workload-specific schedulers](../../docs/proposals/multiple-schedulers.md), -e.g. different schedulers for long-running services vs. short-running batch. But these abstractions, controllers, and -schedulers are not *application-specific*. +1. The instance state machine manages the instances and the collection state +machine manages the collection during their lifetimes + +Out-of-the-box Kubernetes has *workload-specific* abstractions (ReplicaSet, Job, +DaemonSet, etc.) and corresponding controllers, and in the future may have +[workload-specific schedulers](../../docs/proposals/multiple-schedulers.md), +e.g. different schedulers for long-running services vs. short-running batch. But +these abstractions, controllers, and schedulers are not *application-specific*. The usual request flow in Kubernetes is very similar, namely -1. Client submits a collection object (e.g. ReplicaSet, Job, ...) to the API server +1. Client submits a collection object (e.g. ReplicaSet, Job, ...) to the API +server + 1. Admission control, quota, applying defaults, etc. run on the collection -1. 
If the collection is admitted, it is persisted, and the corresponding collection controller creates the underlying pods -1. Admission control, quota, applying defaults, etc. runs on each pod; if there are multiple schedulers, one of the admission -controllers will write the scheduler name as an annotation based on a policy + +1. If the collection is admitted, it is persisted, and the corresponding +collection controller creates the underlying pods + +1. Admission control, quota, applying defaults, etc. runs on each pod; if there +are multiple schedulers, one of the admission controllers will write the +scheduler name as an annotation based on a policy + 1. If a pod is admitted, it is persisted -1. The appropriate scheduler assigns a nodeName to the instance, which triggers the Kubelet to start the pod's container(s) + +1. The appropriate scheduler assigns a nodeName to the instance, which triggers +the Kubelet to start the pod's container(s) + 1. Kubelet starts the container(s) -1. The controller corresponding to the collection manages the pod and the collection during their lifetime -In the Borg model, application-level scheduling and cluster-level scheduling are handled by separate -components. For example, a MapReduce master might request Borg to create a job with a certain number of instances -with a particular resource shape, where each instance corresponds to a MapReduce worker; the MapReduce master would -then schedule individual units of work onto those workers. +1. The controller corresponding to the collection manages the pod and the +collection during their lifetime + +In the Borg model, application-level scheduling and cluster-level scheduling are +handled by separate components. 
For example, a MapReduce master might request +Borg to create a job with a certain number of instances with a particular +resource shape, where each instance corresponds to a MapReduce worker; the +MapReduce master would then schedule individual units of work onto those +workers. ## What is a Mesos-style architecture? -Mesos is fundamentally designed to support multiple application-specific "frameworks." A framework is -composed of a "framework scheduler" and a "framework executor." We will abbreviate "framework scheduler" -as "framework" since "scheduler" means something very different in Kubernetes (something that just -assigns pods to nodes). +Mesos is fundamentally designed to support multiple application-specific +"frameworks." A framework is composed of a "framework scheduler" and a +"framework executor." We will abbreviate "framework scheduler" as "framework" +since "scheduler" means something very different in Kubernetes (something that +just assigns pods to nodes). + +Unlike Borg and Kubernetes, where there is a single logical endpoint that +receives all API requests (the Borgmaster and API server, respectively), in +Mesos every framework is a separate API endpoint. Mesos does not have any +standard set of collection abstractions, controllers/state machines, or +schedulers; the logic for all of these things is contained in each +[application-specific framework](http://mesos.apache.org/documentation/latest/frameworks/) +individually. (Note that the notion of application-specific does sometimes blur +into the realm of workload-specific, for example +[Chronos](https://github.com/mesos/chronos) is a generic framework for batch +jobs. However, regardless of what set of Mesos frameworks you are using, the key +properties remain: each framework is its own API endpoint with its own +client-facing and internal abstractions, state machines, and scheduler). 
+ +A Mesos framework can integrate application-level scheduling and cluster-level +scheduling into a single component. + +Note: Although Mesos frameworks expose their own API endpoints to clients, they +consume a common infrastructure via a common API endpoint for controlling tasks +(launching, detecting failure, etc.) and learning about available cluster +resources. More details +[here](http://mesos.apache.org/documentation/latest/scheduler-http-api/). -Unlike Borg and Kubernetes, where there is a single logical endpoint that receives all API requests (the Borgmaster and API server, -respectively), in Mesos every framework is a separate API endpoint. Mesos does not have any standard set of -collection abstractions, controllers/state machines, or schedulers; the logic for all of these things is contained -in each [application-specific framework](http://mesos.apache.org/documentation/latest/frameworks/) individually. -(Note that the notion of application-specific does sometimes blur into the realm of workload-specific, -for example [Chronos](https://github.com/mesos/chronos) is a generic framework for batch jobs. -However, regardless of what set of Mesos frameworks you are using, the key properties remain: each -framework is its own API endpoint with its own client-facing and internal abstractions, state machines, and scheduler). +## Building a Mesos-style framework on Kubernetes -A Mesos framework can integrate application-level scheduling and cluster-level scheduling into a single component. +Implementing the Mesos model on Kubernetes boils down to enabling +application-specific collection abstractions, controllers/state machines, and +scheduling. There are just three steps: -Note: Although Mesos frameworks expose their own API endpoints to clients, they consume a common -infrastructure via a common API endpoint for controlling tasks (launching, detecting failure, etc.) and learning about available -cluster resources. 
More details [here](http://mesos.apache.org/documentation/latest/scheduler-http-api/). +* Use API plugins to create API resources for your new application-specific +collection abstraction(s) -## Building a Mesos-style framework on Kubernetes +* Implement controllers for the new abstractions (and for managing the lifecycle +of the pods the controllers generate) -Implementing the Mesos model on Kubernetes boils down to enabling application-specific collection abstractions, -controllers/state machines, and scheduling. There are just three steps: -* Use API plugins to create API resources for your new application-specific collection abstraction(s) -* Implement controllers for the new abstractions (and for managing the lifecycle of the pods the controllers generate) * Implement a scheduler with the application-specific scheduling logic -Note that the last two can be combined: a Kubernetes controller can do the scheduling for the pods it creates, -by writing node name to the pods when it creates them. +Note that the last two can be combined: a Kubernetes controller can do the +scheduling for the pods it creates, by writing node name to the pods when it +creates them. + +Once you've done this, you end up with an architecture that is extremely similar +to the Mesos-style--the Kubernetes controller is effectively a Mesos framework. +The remaining differences are: -Once you've done this, you end up with an architecture that is extremely similar to the Mesos-style--the -Kubernetes controller is effectively a Mesos framework. The remaining differences are -* In Kubernetes, all API operations go through a single logical endpoint, the API server (we say logical because the API server can be replicated). -In contrast, in Mesos, API operations go to a particular framework. However, the Kubernetes API plugin model makes this difference fairly small. -* In Kubernetes, application-specific admission control, quota, defaulting, etc. 
rules can be implemented -in the API server rather than in the controller. Of course you can choose to make these operations be no-ops for -your application-specific collection abstractions, and handle them in your controller. -* On the node level, Mesos allows application-specific executors, whereas Kubernetes only has -executors for Docker and rkt containers. +* In Kubernetes, all API operations go through a single logical endpoint, the +API server (we say logical because the API server can be replicated). In +contrast, in Mesos, API operations go to a particular framework. However, the +Kubernetes API plugin model makes this difference fairly small. -The end-to-end flow is +* In Kubernetes, application-specific admission control, quota, defaulting, etc. +rules can be implemented in the API server rather than in the controller. Of +course you can choose to make these operations be no-ops for your +application-specific collection abstractions, and handle them in your controller. + +* On the node level, Mesos allows application-specific executors, whereas +Kubernetes only has executors for Docker and rkt containers. + +The end-to-end flow is: 1. Client submits an application-specific collection object to the API server -2. The API server plugin for that collection object forwards the request to the API server that handles that collection type -3. Admission control, quota, applying defaults, etc. runs on the collection object + +2. The API server plugin for that collection object forwards the request to the +API server that handles that collection type + +3. Admission control, quota, applying defaults, etc. runs on the collection +object + 4. If the collection is admitted, it is persisted -5. The collection controller sees the collection object and in response creates the underlying pods and chooses which nodes they will run on by setting node name + +5. 
The collection controller sees the collection object and in response creates
+the underlying pods and chooses which nodes they will run on by setting node
+name
+
6. Kubelet sees the pods with node name set and starts the container(s)
-7. The collection controller manages the pods and the collection during their lifetimes
-(note that if the controller and scheduler are separated, then step 5 breaks down into multiple steps:
-(5a) collection controller creates pods with empty node name. (5b) API server admission control, quota, defaulting,
-etc. runs on the pods; one of the admission controller steps writes the scheduler name as an annotation on each pods
-(see #18262 for more details).
-(5c) The corresponding application-specific scheduler chooses a node and writes node name, which triggers the Kubelet to start the pod's container(s).)
+7. The collection controller manages the pods and the collection during their
+lifetimes
+
+*Note: if the controller and scheduler are separated, then step 5 breaks
+down into multiple steps:*
+
+(5a) collection controller creates pods with empty node name.
+
+(5b) API server admission control, quota, defaulting, etc. runs on the
+pods; one of the admission controller steps writes the scheduler name as an
+annotation on each pod (see pull request `#18262` for more details).
-As a final note, the Kubernetes model allows multiple levels of iterative refinement of runtime abstractions,
-as long as the lowest level is the pod. For example, clients of application Foo might create a `FooSet`
-which is picked up by the FooController which in turn creates `BatchFooSet` and `ServiceFooSet` objects,
-which are picked up by the BatchFoo controller and ServiceFoo controller respectively, which in turn
-create pods. In between each of these steps there is an opportunity for object-specific admission control,
-quota, and defaulting to run in the API server, though these can instead be handled by the controllers.
+(5c) The corresponding application-specific scheduler chooses a node and +writes node name, which triggers the Kubelet to start the pod's container(s). +As a final note, the Kubernetes model allows multiple levels of iterative +refinement of runtime abstractions, as long as the lowest level is the pod. For +example, clients of application Foo might create a `FooSet` which is picked up +by the FooController which in turn creates `BatchFooSet` and `ServiceFooSet` +objects, which are picked up by the BatchFoo controller and ServiceFoo +controller respectively, which in turn create pods. In between each of these +steps there is an opportunity for object-specific admission control, quota, and +defaulting to run in the API server, though these can instead be handled by the +controllers. ## References -- cgit v1.2.3 From 78a50e8e28c292fa8225d6aa19ac51e1979fa565 Mon Sep 17 00:00:00 2001 From: Robert Bailey Date: Sun, 8 May 2016 00:09:10 -0700 Subject: Update links for the user and troubleshooting guides for the build cop to copy-paste from the oncall documentation. --- on-call-build-cop.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/on-call-build-cop.md b/on-call-build-cop.md index cc5ff4f1..7a91e5cb 100644 --- a/on-call-build-cop.md +++ b/on-call-build-cop.md @@ -111,8 +111,8 @@ ensure your questions don't go unanswered. Before posting a new question, please search stackoverflow for answers to similar questions, and also familiarize yourself with: - * [user guide](http://kubernetes.io/v1.0/) - * [troubleshooting guide](http://kubernetes.io/v1.0/docs/troubleshooting.html) + * [user guide](http://kubernetes.io/docs/user-guide/) + * [troubleshooting guide](http://kubernetes.io/docs/admin/cluster-troubleshooting/) Again, thanks for using Kubernetes. 
-- cgit v1.2.3 From 7f53c8425862d6e3174a8d94fa45cb59396bb1d2 Mon Sep 17 00:00:00 2001 From: Eric Paris Date: Thu, 10 Mar 2016 13:30:53 -0500 Subject: Stop pinning to version v53 --- development.md | 11 ----------- 1 file changed, 11 deletions(-) diff --git a/development.md b/development.md index 3e782e03..1d541520 100644 --- a/development.md +++ b/development.md @@ -196,17 +196,6 @@ export GOPATH=$HOME/go-tools export PATH=$PATH:$GOPATH/bin ``` -Note: -At this time, godep update in the Kubernetes project only works properly if your -version of godep is < 54. - -To check your version of godep: - -```sh -$ godep version -godep v53 (linux/amd64/go1.5.3) -``` - ### Using godep Here's a quick walkthrough of one way to use godeps to add or update a -- cgit v1.2.3 From b2bfe549a34be0ae48fb7360c9431b03201afc5d Mon Sep 17 00:00:00 2001 From: Tim Hockin Date: Wed, 11 May 2016 20:52:10 -0700 Subject: Update docs re: godep --- development.md | 54 +++++++++++++++++++++++++++++++++--------------------- 1 file changed, 33 insertions(+), 21 deletions(-) diff --git a/development.md b/development.md index 1d541520..46020fb5 100644 --- a/development.md +++ b/development.md @@ -166,9 +166,9 @@ See [Faster Reviews](faster_reviews.md) for more details. Kubernetes uses [godep](https://github.com/tools/godep) to manage dependencies. It is not strictly required for building Kubernetes but it is required when -managing dependencies under the Godeps/ tree, and is required by a number of the -build and test scripts. Please make sure that ``godep`` is installed and in your -``$PATH``. +managing dependencies under the vendor/ tree, and is required by a number of the +build and test scripts. Please make sure that `godep` is installed and in your +`$PATH`, and that `godep version` says it is at least v63. ### Installing godep @@ -186,16 +186,29 @@ from mercurial. 
```sh
export GOPATH=$HOME/go-tools
mkdir -p $GOPATH
-go get github.com/tools/godep
+go get -u github.com/tools/godep
```

-3) Add $GOPATH/bin to your path. Typically you'd add this to your ~/.profile:
+3) Add $GOPATH/bin to your $PATH. Typically you'd add this to your ~/.profile:

```sh
export GOPATH=$HOME/go-tools
export PATH=$PATH:$GOPATH/bin
```

+Note:
+At this time, godep version >= v63 is known to work in the Kubernetes project.
+
+To check your version of godep:
+
+```sh
+$ godep version
+godep v66 (linux/amd64/go1.6.2)
+```
+
+If it is not a valid version, make sure you have updated the godep repo
+with `go get -u github.com/tools/godep`.
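The "at least v63" requirement can also be checked in a script. Below is a rough sketch of parsing the `godep version` output shown above; the version string is hard-coded sample output so the snippet runs even without godep installed, and in practice you would capture `$(godep version)` instead.

```sh
# Sketch: pull "66" out of "godep v66 (linux/amd64/go1.6.2)" and compare to 63.
version_line='godep v66 (linux/amd64/go1.6.2)'  # stand-in for: $(godep version)
ver=${version_line#godep v}  # -> "66 (linux/amd64/go1.6.2)"
ver=${ver%% *}               # -> "66"
if [ "$ver" -ge 63 ]; then
  echo "godep v$ver is new enough"
else
  echo "godep v$ver is too old; run: go get -u github.com/tools/godep" >&2
fi
```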
+hack/godep-save.sh

 # To update an existing dependency, do:
 cd $KPATH/src/k8s.io/kubernetes
 go get -u path/to/dependency
 # Change code in Kubernetes accordingly if necessary.
-godep update path/to/dependency/...
+godep update path/to/dependency
```

_If `go get -u path/to/dependency` fails with compilation errors, instead try
`go get -d -u path/to/dependency` to fetch the dependencies without compiling
-them. This can happen when updating the cadvisor dependency._
+them. This is unusual, but has been observed._

5) Before sending your PR, it's a good idea to sanity check that your
-Godeps.json file is ok by running `hack/verify-godeps.sh`
+Godeps.json file and the contents of `vendor/` are ok by running `hack/verify-godeps.sh`

-_If hack/verify-godeps.sh fails after a `godep update`, it is possible that a
+_If `hack/verify-godeps.sh` fails after a `godep update`, it is possible that a
transitive dependency was added or removed but not updated by godeps. It then
-may be necessary to perform a `godep save ./...` to pick up the transitive
+may be necessary to perform a `hack/godep-save.sh` to pick up the transitive
dependency changes._

-It is sometimes expedient to manually fix the /Godeps/godeps.json file to
-minimize the changes.
+It is sometimes expedient to manually fix the /Godeps/Godeps.json file to
+minimize the changes. However, without great care this can lead to failures
+with `hack/verify-godeps.sh`. This must pass for every PR.

Please send dependency updates in separate commits within your PR, for easier
reviewing.
-- cgit v1.2.3

From f6cab74b41cf2d0c06850ca7a3e0b4e20fcca7ce Mon Sep 17 00:00:00 2001
From: Klaus Ma
Date: Sat, 14 May 2016 11:10:29 +0800
Subject: Update kubectl service output.
--- kubectl-conventions.md | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/kubectl-conventions.md b/kubectl-conventions.md
index 2833ed37..23a73f11 100644
--- a/kubectl-conventions.md
+++ b/kubectl-conventions.md
@@ -215,8 +215,10 @@ resources, a targeted field selector should be used in favor of client side
 filtering of related resources.

 * For fields that can be explicitly unset (booleans, integers, structs), the
-output should say `<unset>`. Likewise, for arrays `<none>` should be used.
-Lastly `<unknown>` should be used where unrecognized field type was specified.
+output should say `<unset>`. Likewise, for arrays `<none>` should be used; for
+external IP, `<none>` should be used; for load balancer, `<pending>` should be
+used. Lastly `<unknown>` should be used where an unrecognized field type was
+specified.

 * Mutations should output TYPE/name verbed by default, where TYPE is singular;
 `-o name` may be used to just display TYPE/name, which may be used to specify
-- cgit v1.2.3

From 9532507be58198e6d94c85e2dc44c690213370ae Mon Sep 17 00:00:00 2001
From: Isaac Hollander McCreery
Date: Mon, 16 May 2016 07:29:04 -0700
Subject: Fix link to Jenkins
---
 releasing.md | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/releasing.md b/releasing.md
index 0f08caf6..5747ed6b 100644
--- a/releasing.md
+++ b/releasing.md
@@ -82,11 +82,11 @@ from, and other prerequisites.
 * You should still look for green tests, (see below).

 No matter what you're cutting, you're going to want to look at
-[Jenkins](http://go/k8s-test/). Figure out what branch you're cutting from,
-(see above,) and look at the critical jobs building from that branch. First
-glance through builds and look for nice solid rows of green builds, and then
-check temporally with the other critical builds to make sure they're solid
-around then as well.
+[Jenkins](http://kubekins.dls.corp.google.com/) (Google internal only).
Figure
+out what branch you're cutting from (see above), and look at the critical jobs
+building from that branch. First glance through builds and look for nice solid
+rows of green builds, and then check temporally with the other critical builds
+to make sure they're solid around then as well.

 If you're doing an alpha release or cutting a new release series, you can
 choose an arbitrary build. If you are doing an official release, you have to
-- cgit v1.2.3

From f11086ed30e2d7db46de6e3cd7017b02ee366849 Mon Sep 17 00:00:00 2001
From: Tim Hockin
Date: Mon, 16 May 2016 12:26:55 -0700
Subject: Document godep updates better

`godep update` doesn't work. It just says 'nothing to update'. Drop it, and
use The Big Hammer instead.
---
 development.md | 38 ++++++++++++++++++++++++++++----------
 1 file changed, 28 insertions(+), 10 deletions(-)

diff --git a/development.md b/development.md
index 46020fb5..9e008191 100644
--- a/development.md
+++ b/development.md
@@ -244,25 +244,46 @@ godep restore

 4) Next, you can either add a new dependency or update an existing one.

+To add a new dependency is simple (if a bit slow):
+
```sh
-# To add a new dependency, do:
 cd $KPATH/src/k8s.io/kubernetes
-godep get path/to/dependency
+DEP=example.com/path/to/dependency
+godep get $DEP/...
 # Now change code in Kubernetes to use the dependency.
-hack/godep-save.sh
+./hack/godep-save.sh
+```

+To update an existing dependency is a bit more complicated. Godep has an
+`update` command, but none of us can figure out how to actually make it work.
+Instead, this procedure seems to work reliably:

-# To update an existing dependency, do:
+```sh
 cd $KPATH/src/k8s.io/kubernetes
-go get -u path/to/dependency
-# Change code in Kubernetes accordingly if necessary.
-godep update path/to/dependency
+DEP=example.com/path/to/dependency
+# NB: For the next step, $DEP is assumed to be the repo root. If it is actually a
This is required to keep godep
+# from getting angry because `godep restore` left the tree in a "detached head"
+# state.
+rm -rf $KPATH/src/$DEP # repo root
+godep get $DEP/...
+# Change code in Kubernetes, if necessary.
+rm -rf Godeps
+rm -rf vendor
+./hack/godep-save.sh
+git checkout -- $(git status -s | grep "^ D" | awk '{print $2}' | grep ^Godeps)
```

 _If `go get -u path/to/dependency` fails with compilation errors, instead try
 `go get -d -u path/to/dependency` to fetch the dependencies without compiling
 them. This is unusual, but has been observed._

+After all of this is done, `git status` should show you what files have been
+modified and added/removed. Make sure to `git add` and `git rm` them. It is
+commonly advised to make one `git commit` which includes just the dependency
+update and Godeps files, and another `git commit` that includes changes to
+Kubernetes code to use the new/updated dependency. These commits can go into a
+single pull request.

5) Before sending your PR, it's a good idea to sanity check that your
Godeps.json file and the contents of `vendor/` are ok by running `hack/verify-godeps.sh`

_If `hack/verify-godeps.sh` fails after a `godep update`, it is possible that a
transitive dependency was added or removed but not updated by godeps. It then
may be necessary to perform a `hack/godep-save.sh` to pick up the transitive
dependency changes._

It is sometimes expedient to manually fix the /Godeps/Godeps.json file to
minimize the changes. However, without great care this can lead to failures
with `hack/verify-godeps.sh`. This must pass for every PR.

6) If you updated the Godeps, please also update `Godeps/LICENSES` by running
`hack/update-godep-licenses.sh`.
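The last line of the update recipe above restores files under `Godeps/` that the save step deleted. Its filtering pipeline can be tried in isolation with plain shell; the file paths below are made-up examples, not real repository contents:

```sh
# Sample `git status -s` output: lines beginning with " D" are files
# deleted in the working tree but not yet staged. Paths are hypothetical.
status=' D Godeps/LICENSES
 D Godeps/OWNERS
 M Godeps/Godeps.json
 D vendor/example.com/somedep/LICENSE'

# Keep only the unstaged deletions, take the path column, and keep only
# paths under Godeps/ -- these are the files handed back to git to restore.
restore=$(printf '%s\n' "$status" | grep "^ D" | awk '{print $2}' | grep ^Godeps)

echo "$restore"
```

Only the two deleted files under `Godeps/` survive the pipeline; the modified `Godeps.json` and the deletion under `vendor/` are filtered out.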
-- cgit v1.2.3 From 97c046019f6e927098858589d5d8fced3199e12c Mon Sep 17 00:00:00 2001 From: Paul Morie Date: Tue, 17 May 2016 12:31:58 -0400 Subject: Add notes about endgame for test flakes --- flaky-tests.md | 13 +++++++++++-- 1 file changed, 11 insertions(+), 2 deletions(-) diff --git a/flaky-tests.md b/flaky-tests.md index e757021f..b599f80f 100644 --- a/flaky-tests.md +++ b/flaky-tests.md @@ -88,11 +88,20 @@ we have the following guidelines: 3. If you can reproduce it (or it's obvious from the logs what happened), you should then be able to fix it, or in the case where someone is clearly more qualified to fix it, reassign it with very clear instructions. -4. If you can't reproduce it: __don't just close it!__ Every time a flake comes +4. PRs that fix or help debug flakes may have the P0 priority set to get them + through the merge queue as fast as possible. +5. Once you have made a change that you believe fixes a flake, it is conservative + to keep the issue for the flake open and see if it manifests again after the + change is merged. +6. If you can't reproduce a flake: __don't just close it!__ Every time a flake comes back, at least 2 hours of merge time is wasted. So we need to make monotonic progress towards narrowing it down every time a flake occurs. If you can't figure it out from the logs, add log messages that would have help you figure - it out. + it out. If you make changes to make a flake more reproducible, please link + your pull request to the flake you're working on. +7. If a flake has been open, could not be reproduced, and has not manifested in + 3 months, it is reasonable to close the flake issue with a note saying + why. # Reproducing unit test flakes -- cgit v1.2.3 From a9712c656007b24d7aa504c947cda164fd56221b Mon Sep 17 00:00:00 2001 From: Vishnu kannan Date: Thu, 1 Oct 2015 11:57:17 -0700 Subject: Updating QoS policy to be per-pod instead of per-resource. 
Signed-off-by: Vishnu kannan --- scheduler_algorithm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/scheduler_algorithm.md b/scheduler_algorithm.md index 63206c8b..b6c7ea01 100755 --- a/scheduler_algorithm.md +++ b/scheduler_algorithm.md @@ -42,7 +42,7 @@ The purpose of filtering the nodes is to filter out the nodes that do not meet c - `NoDiskConflict`: Evaluate if a pod can fit due to the volumes it requests, and those that are already mounted. - `NoVolumeZoneConflict`: Evaluate if the volumes a pod requests are available on the node, given the Zone restrictions. -- `PodFitsResources`: Check if the free resource (CPU and Memory) meets the requirement of the Pod. The free resource is measured by the capacity minus the sum of requests of all Pods on the node. To learn more about the resource QoS in Kubernetes, please check [QoS proposal](../proposals/resource-qos.md). +- `PodFitsResources`: Check if the free resource (CPU and Memory) meets the requirement of the Pod. The free resource is measured by the capacity minus the sum of requests of all Pods on the node. To learn more about the resource QoS in Kubernetes, please check [QoS proposal](../design/resource-qos.md). - `PodFitsHostPorts`: Check if any HostPort required by the Pod is already occupied on the node. - `HostName`: Filter out all nodes except the one specified in the PodSpec's NodeName field. - `MatchNodeSelector`: Check if the labels of the node match the labels specified in the Pod's `nodeSelector` field and, as of Kubernetes v1.2, also match the `scheduler.alpha.kubernetes.io/affinity` pod annotation if present. See [here](../user-guide/node-selection/) for more details on both. -- cgit v1.2.3 From c3d5cfb6c45213fd9645115f25322a26ecdcbc1e Mon Sep 17 00:00:00 2001 From: Jan Chaloupka Date: Thu, 12 May 2016 14:01:33 +0200 Subject: Scheduler: introduce CheckNodeMemoryPressurePredicate, don't schedule pods for nodes that reports memory pressury. 
Introduce unit-test for CheckNodeMemoryPressurePredicate Following work done in #14943 --- scheduler_algorithm.md | 1 + 1 file changed, 1 insertion(+) diff --git a/scheduler_algorithm.md b/scheduler_algorithm.md index 63206c8b..7e79e24b 100755 --- a/scheduler_algorithm.md +++ b/scheduler_algorithm.md @@ -48,6 +48,7 @@ The purpose of filtering the nodes is to filter out the nodes that do not meet c - `MatchNodeSelector`: Check if the labels of the node match the labels specified in the Pod's `nodeSelector` field and, as of Kubernetes v1.2, also match the `scheduler.alpha.kubernetes.io/affinity` pod annotation if present. See [here](../user-guide/node-selection/) for more details on both. - `MaxEBSVolumeCount`: Ensure that the number of attached ElasticBlockStore volumes does not exceed a maximum value (by default, 39, since Amazon recommends a maximum of 40 with one of those 40 reserved for the root volume -- see [Amazon's documentation](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/volume_limits.html#linux-specific-volume-limits)). The maximum value can be controlled by setting the `KUBE_MAX_PD_VOLS` environment variable. - `MaxGCEPDVolumeCount`: Ensure that the number of attached GCE PersistentDisk volumes does not exceed a maximum value (by default, 16, which is the maximum GCE allows -- see [GCE's documentation](https://cloud.google.com/compute/docs/disks/persistent-disks#limits_for_predefined_machine_types)). The maximum value can be controlled by setting the `KUBE_MAX_PD_VOLS` environment variable. +- `CheckNodeMemoryPressure`: Check if a pod can be scheduled on a node reporting memory pressure condition. Currently, no ``BestEffort`` should be placed on a node under memory pressure as it gets automatically evicted by kubelet. The details of the above predicates can be found in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/predicates/predicates.go). 
All predicates mentioned above can be used in combination to perform a sophisticated filtering policy. Kubernetes uses some, but not all, of these predicates by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). -- cgit v1.2.3 From cc10b53fc0c6e12c1f7f2ac69b34774f94fbb5df Mon Sep 17 00:00:00 2001 From: Girish Kalele Date: Fri, 27 May 2016 12:05:24 -0700 Subject: Switch DNS addons from skydns to kubedns Unified skydns templates using a simple underscore based template and added transform sed scripts to transform into salt and sed yaml templates Moved all content out of cluster/addons/dns into build/kube-dns and saltbase/salt/kube-dns --- running-locally.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/running-locally.md b/running-locally.md index 98df8cfc..6999e588 100644 --- a/running-locally.md +++ b/running-locally.md @@ -189,7 +189,7 @@ KUBE_DNS_DOMAIN="cluster.local" KUBE_DNS_REPLICAS=1 ``` -To know more on DNS service you can look [here](http://issue.k8s.io/6667). Related documents can be found [here](../../cluster/addons/dns/#how-do-i-configure-it) +To know more on DNS service you can look [here](http://issue.k8s.io/6667). Related documents can be found [here](../../build/kube-dns/#how-do-i-configure-it) -- cgit v1.2.3 From 3577ba87d6a4fa0eecbdc5176463b7e99ba22183 Mon Sep 17 00:00:00 2001 From: Quinton Hoole Date: Thu, 2 Jun 2016 11:30:31 -0700 Subject: Add note to development guide regarding GNU tools versions, especially on Mac OS X. --- development.md | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/development.md b/development.md index 9e008191..d64b6cba 100644 --- a/development.md +++ b/development.md @@ -50,6 +50,14 @@ Official releases are built using Docker containers. 
To build Kubernetes using Docker please follow [these instructions](http://releases.k8s.io/HEAD/build/README.md). +### Local OS/shell environment + +Many of the Kubernetes development helper scripts rely on a fairly up-to-date GNU tools +environment, so most recent Linux distros should work just fine +out-of-the-box. Note that Mac OS X ships with somewhat outdated +BSD-based tools, some of which may be incompatible in subtle ways, so we recommend +[replacing those with modern GNU tools](https://www.topbug.net/blog/2013/04/14/install-and-use-gnu-command-line-tools-in-mac-os-x/). + ### Go development environment Kubernetes is written in the [Go](http://golang.org) programming language. -- cgit v1.2.3 From 5ebb3f2f88d2f7188cc411c6459f90d9f9aec89e Mon Sep 17 00:00:00 2001 From: pwittrock Date: Tue, 31 May 2016 16:35:10 +0000 Subject: Node e2e use vendored testing packages. --- e2e-node-tests.md | 51 +++++++++++++++++++++++++++------------------------ 1 file changed, 27 insertions(+), 24 deletions(-) diff --git a/e2e-node-tests.md b/e2e-node-tests.md index 840d3c3a..d2634aa9 100644 --- a/e2e-node-tests.md +++ b/e2e-node-tests.md @@ -63,30 +63,40 @@ and remote passwordless sudo access over ssh) * using [run_e2e.go](../../test/e2e_node/runner/run_e2e.go) to build a tar.gz and executing on host (requires host access w/ remote sudo) -### Configuring a new remote host for testing +### Option 1: Configuring a new remote host from scratch for testing -The host must contain a environment capable of supporting a mini-kubernetes -cluster. Includes: +The host must contain an environment capable of running a minimal kubernetes cluster +consisting of etcd, the kube-apiserver, and kubelet. 
The steps required to set up a host vary between distributions
+(coreos, rhel, ubuntu, etc), but may include:

 * install etcd
 * install docker
+* add user running tests to docker group
 * install lxc and update grub commandline
 * enable tty-less sudo access

-See [setup_host.sh](../../test/e2e_node/environment/setup_host.sh)
+These steps should be captured in [setup_host.sh](../../test/e2e_node/environment/setup_host.sh)

-### Running the tests
+### Option 2: Copying an existing host image from another project

-1. If running against a host on gce
+If there is an existing image in another project you would like to use, you can use the script
+[copy-e2e-image.sh](../../test/e2e_node/jenkins/copy-e2e-image.sh) to copy an image
+from one GCE project to another.

- * Copy [template.properties](../../test/e2e_node/jenkins/template.properties)
+```sh
+copy-e2e-image.sh <image-name> <from-gce-project> <to-gce-project>
+```

- * Fill in `GCE_HOSTS`
- * Set `INSTALL_GODEP=true` to install `godep`, `gomega`, `ginkgo`
+### Running the tests

- * Make sure host names are resolvable to ssh `ssh <hostname>`.
+1. If running tests against a running host on gce

- * If needed, you can run `gcloud compute config-ssh` to add gce hostnames to
-your .ssh/config so they are resolvable by ssh.
+ * Make sure host names are resolvable to ssh by running `gcloud compute config-ssh` to
+ update ~/.ssh/config with the GCE hosts. After running this command, check the hostnames
+ in the ~/.ssh/config file and verify you have the correct access by running `ssh <hostname>`.
+
+ * Copy [template.properties](../../test/e2e_node/jenkins/template.properties)
+
+ * Fill in `GCE_HOSTS` with the name of the host

 * Run `test/e2e_node/jenkins/e2e-node-jenkins.sh <path to properties file>`

 * **Must be run from kubernetes root**

@@ -103,8 +113,6 @@ tests with `--ssh-options <ssh-options>`
separated hosts>`

 * **Must be run from kubernetes root**

- * requires (go get): `github.com/tools/godep`, `github.com/onsi/gomega`,
-`github.com/onsi/ginkgo/ginkgo`

3.
Alternatively, manually build and copy `e2e_node_test.tar.gz` to a remote
host

 * Build the tar.gz `go run test/e2e_node/runner/run_e2e.go --logtostderr
---build-only`

 * Copy `e2e_node_test.tar.gz` to the remote host

 * Extract the archive on the remote host `tar -xzvf e2e_node_test.tar.gz`

 * Run the tests `./e2e_node.test --logtostderr --vmodule=*=2
---build-services=false --node-name=<node-name>`

 * Note: This must be run from the directory containing the kubelet and
kube-apiserver binaries.

## Running tests against a gce image

-* Build a gce image from a prepared gce host
+* Option 1: Build a gce image from a prepared gce host
 * Create the host from a base image and configure it (see above)
 * Run tests against this remote host to ensure that it is setup correctly
before doing anything else
 * Create a gce *snapshot* of the instance
 * Create a gce *disk* from the snapshot
 * Create a gce *image* from the disk
+* Option 2: Copy a prepared image from another project
+ * Instructions above
 * Test that the necessary gcloud credentials are setup for the project
 * `gcloud compute --project <project> --zone <zone> images list`
 * Verify that your image appears in the list
@@ -146,9 +153,7 @@

Node e2e tests are run against a static list of host environments continuously
or when manually triggered on github.com pull requests using the trigger
-phrase `@k8s-bot test node e2e experimental` - *results not yet publish, pending
-evaluation of test stability.*.
- +phrase `@k8s-bot test node e2e` ### CI Host environments @@ -159,14 +164,12 @@ TBD | linux distro | distro version | docker version | etcd version | cloud provider | |-----------------|----------------|----------------|--------------|----------------| | containervm | | 1.8 | | gce | -| rhel | 7 | 1.10 | | gce | -| centos | 7 | 1.10 | | gce | | coreos | stable | 1.8 | | gce | | debian | jessie | 1.10 | | gce | | ubuntu | trusty | 1.8 | | gce | | ubuntu | trusty | 1.9 | | gce | | ubuntu | trusty | 1.10 | | gce | -| ubuntu | wily | 1.10 | | gce | + -- cgit v1.2.3 From 8ea330781d1b32d5f144aea01e0a708806e1e59b Mon Sep 17 00:00:00 2001 From: Eric Paris Date: Tue, 7 Jun 2016 17:30:50 -0400 Subject: update automation.md --- automation.md | 94 ++++++++++++++++++++++------------------------------------- 1 file changed, 35 insertions(+), 59 deletions(-) diff --git a/automation.md b/automation.md index 2b3f5437..6ba74fd0 100644 --- a/automation.md +++ b/automation.md @@ -59,7 +59,9 @@ The submit-queue does the following: ```go for _, pr := range readyToMergePRs() { if testsAreStable() { - mergePR(pr) + if retestPR(pr) == success { + mergePR(pr) + } } } ``` @@ -68,92 +70,66 @@ The status of the submit-queue is [online.](http://submit-queue.k8s.io/) ### Ready to merge status +The submit-queue lists what it believes are required on the [merge requirements tab](http://submit-queue.k8s.io/#/info) of the info page. That may be more up to date. + A PR is considered "ready for merging" if it matches the following: - * it has the `lgtm` label, and that `lgtm` is newer than the latest commit - * it has passed the cla pre-submit and has the `cla:yes` label - * it has passed the travis pre-submit tests - * one (or all) of - * its author is in kubernetes/contrib/submit-queue/whitelist.txt - * its author is in contributors.txt via the github API. 
- * the PR has the `ok-to-merge` label
 - * One (or both of)
 - * it has passed the Jenkins e2e test
 - * it has the `e2e-not-required` label
-
-Note that the combined whitelist/committer list is available at
-[submit-queue.k8s.io](http://submit-queue.k8s.io)
+ * The PR must have the label "cla: yes" or "cla: human-approved"
+ * The PR must be mergeable, i.e. it cannot need a rebase
+ * All of the following github statuses must be green:
+ * Jenkins GCE Node e2e
+ * Jenkins GCE e2e
+ * Jenkins unit/integration
+ * The PR cannot have any prohibited future milestones (such as a v1.5 milestone during v1.4 code freeze)
+ * The PR must have the "lgtm" label
+ * The PR must not have been updated since the "lgtm" label was applied
+ * The PR must not have the "do-not-merge" label

 ### Merge process

-Merges _only_ occur when the `critical builds` (Jenkins e2e for gce, gke,
-scalability, upgrade) are passing. We're open to including more builds here, let
-us know...
+Merges _only_ occur when the [critical builds](http://submit-queue.k8s.io/#/e2e)
+are passing. We're open to including more builds here, let us know...

 Merges are serialized, so only a single PR is merged at a time, to ensure
 against races.

-If the PR has the `e2e-not-required` label, it is simply merged. If the PR does
-not have this label, e2e tests are re-run, if these new tests pass, the PR is
-merged.
-
-If e2e flakes or is currently buggy, the PR will not be merged, but it will be
-re-run on the following pass.
+If the PR has the `retest-not-required` label, it is simply merged. If the PR does
+not have this label, the e2e, unit/integration, and node tests are re-run. If these
+tests pass a second time, the PR will be merged as long as the `critical builds` are
+green when this PR finishes retesting.

 ## Github Munger

-We also run a [github "munger."]
-(https://github.com/kubernetes/contrib/tree/master/mungegithub)
+We run a [github "munger"](https://github.com/kubernetes/contrib/tree/master/mungegithub).
This runs repeatedly over github pulls and issues and runs modular "mungers"
-similar to "mungedocs."
-
-Currently this runs:
- * blunderbuss - Tries to automatically find an owner for a PR without an
-owner, uses mapping file here:
- https://github.com/kubernetes/contrib/blob/master/mungegithub/blunderbuss.yml
- * needs-rebase - Adds `needs-rebase` to PRs that aren't currently mergeable,
-and removes it from those that are.
- * size - Adds `size/xs` - `size/xxl` labels to PRs
- * ok-to-test - Adds the `ok-to-test` message to PRs that have an `lgtm` but
-the e2e-builder would otherwise not test due to whitelist
- * ping-ci - Attempts to ping the ci systems (Travis) if they are missing from
-a PR.
- * lgtm-after-commit - Removes the `lgtm` label from PRs where there are
-commits that are newer than the `lgtm` label
-
-In the works:
- * issue-detector - machine learning for determining if an issue that has been
-filed is a `support` issue, `bug` or `feature`
+similar to "mungedocs." The mungers include the 'submit-queue' referenced above along
+with numerous other functions. See the README in the link above.

Please feel free to unleash your creativity on this tool, send us new mungers
that you think will help support the Kubernetes development process.

## PR builder

-We also run a robotic PR builder that attempts to run e2e tests for each PR.
+We also run a robotic PR builder that attempts to run tests for each PR.
Before a PR from an unknown user is run, the PR builder bot (`k8s-bot`) asks for
a message from a contributor that a PR is "ok to test", the contributor replies
-with that message. Contributors can also add users to the whitelist by replying
-with the message "add to whitelist" ("please" is optional, but remember to treat
-your robots with kindness...)
-
-If a PR is approved for testing, and tests either haven't run, or need to be
-re-run, you can ask the PR builder to re-run the tests.
To do this, reply to the
-PR with a message that begins with `@k8s-bot test this`, this should trigger a
-re-build/re-test.
-
+with that message. ("please" is optional, but remember to treat your robots with
+kindness...)

## FAQ:

#### How can I ask my PR to be tested again for Jenkins failures?

-Right now you have to ask a contributor (this may be you!) to re-run the test
-with "@k8s-bot test this"
-
-### How can I kick Travis to re-test on a failure?
+PRs should only need to be manually re-tested if you believe there was a flake
+during the original test. All flakes should be filed as an
+[issue](https://github.com/kubernetes/kubernetes/issues?q=is%3Aopen+is%3Aissue+label%3Akind%2Fflake).
+Once you find or file a flake, a contributor (this may be you!) should request
+a retest with "@k8s-bot test this issue: #NNNNN", where NNNNN is replaced with
+the issue number you found or filed.

-Right now the easiest way is to close and then immediately re-open the PR.
+Any pushes of new code to the PR will automatically trigger a new test. No human
+interaction is required.

[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/automation.md?pixel)]()
-- cgit v1.2.3

From 304fe5515360ec94ef094977177a40825cacb2a7 Mon Sep 17 00:00:00 2001
From: Aaron Levy
Date: Mon, 6 Jun 2016 19:25:36 -0700
Subject: Use a skeleton provider for unimplemented functionality
---
 e2e-tests.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/e2e-tests.md b/e2e-tests.md
index d09ab9e7..7720e2d8 100644
--- a/e2e-tests.md
+++ b/e2e-tests.md
@@ -354,6 +354,7 @@ credentials.
# setup for conformance tests export KUBECONFIG=/path/to/kubeconfig export KUBERNETES_CONFORMANCE_TEST=y +export KUBERNETES_PROVIDER=skeleton # run all conformance tests go run hack/e2e.go -v --test --test_args="--ginkgo.focus=\[Conformance\]" -- cgit v1.2.3 From be58a6eddd0a78a9069875126e755c30aace2885 Mon Sep 17 00:00:00 2001 From: Phillip Wittrock Date: Fri, 3 Jun 2016 17:50:21 -0700 Subject: Node e2e Makefile support for running remote tests against kubernetes-node-e2e-images. Also includes other improvements: - Makefile rule to run tests against remote instance using existing host or image - Makefile will reuse an instance created from an image if it was not torn down - Runner starts gce instances in parallel with building source - Runner uses instance ip instead of hostname so that it doesn't need to resolve - Runner supports cleaning up files and processes on an instance without stopping / deleting it - Runner runs tests using `ginkgo` binary to support running tests in parallel --- e2e-node-tests.md | 228 ++++++++++++++++++++++++++++++++---------------------- 1 file changed, 134 insertions(+), 94 deletions(-) diff --git a/e2e-node-tests.md b/e2e-node-tests.md index d2634aa9..f2869134 100644 --- a/e2e-node-tests.md +++ b/e2e-node-tests.md @@ -34,147 +34,187 @@ Documentation for other releases can be found at # Node End-To-End tests -Node e2e tests start kubelet and minimal supporting infrastructure to validate -the kubelet on a host. Tests can be run either locally, against a remote host or -against a GCE image. +Node e2e tests are component tests meant for testing the Kubelet code on a custom host environment. + +Tests can be run either locally or against a host running on GCE. + +Node e2e tests are run as both pre- and post- submit tests by the Kubernetes project. *Note: Linux only. Mac and Windows unsupported.* -## Running tests locally +# Running tests -etcd must be installed and on the PATH to run the node e2e tests. 
To verify -etcd is installed: `which etcd`. You can find instructions for installing etcd -[on the etcd releases page](https://github.com/coreos/etcd/releases). +## Locally -Run the tests locally: `make test_e2e_node` +Why run tests *Locally*? Much faster than running tests Remotely. -Running the node e2e tests locally will build the kubernetes go source files and -then start the kubelet, kube-apiserver, and etcd binaries on localhost before -executing the ginkgo tests under test/e2e_node against the local kubelet -instance. +Prerequisites: +- [Install etcd](https://github.com/coreos/etcd/releases) on your PATH + - Verify etcd is installed correctly by running `which etcd` +- [Install ginkgo](https://github.com/onsi/ginkgo) on your PATH + - Verify ginkgo is installed correctly by running `which ginkgo` -## Running tests against a remote host +From the Kubernetes base directory, run: -The node e2e tests can be run against one or more remote hosts using one of: -* [e2e-node-jenkins.sh](../../test/e2e_node/jenkins/e2e-node-jenkins.sh) (gce -only) -* [run_e2e.go](../../test/e2e_node/runner/run_e2e.go) (requires passwordless ssh -and remote passwordless sudo access over ssh) -* using [run_e2e.go](../../test/e2e_node/runner/run_e2e.go) to build a tar.gz -and executing on host (requires host access w/ remote sudo) +```sh +make test_e2e_node +``` -### Option 1: Configuring a new remote host from scratch for testing +This will: run the *ginkgo* binary against the subdirectory *test/e2e_node*, which will in turn: +- Ask for sudo access (needed for running some of the processes) +- Build the Kubernetes source code +- Pre-pull docker images used by the tests +- Start a local instance of *etcd* +- Start a local instance of *kube-apiserver* +- Start a local instance of *kubelet* +- Run the test using the locally started processes +- Output the test results to STDOUT +- Stop *kubelet*, *kube-apiserver*, and *etcd* -The host must contain an environment capable of running a minimal 
kubernetes cluster
-consisting of etcd, the kube-apiserver, and kubelet. The steps required to step a host vary between distributions
-(coreos, rhel, ubuntu, etc), but may include:
+Why run tests *Remotely*? Tests will be run in a customized pristine environment that closely mimics
+the pre- and post-submit testing performed by the project.

-* install etcd
-* install docker
-* add user running tests to docker group
-* install lxc and update grub commandline
-* enable tty-less sudo access
+Prerequisites:
+- [join the googlegroup](https://groups.google.com/forum/#!forum/kubernetes-dev)
+`kubernetes-dev@googlegroups.com`
+ - *This provides read access to the node test images.*
+- Set up a [Google Cloud Platform](https://cloud.google.com/) account and project with Google Compute Engine enabled
+- Install and set up the [gcloud sdk](https://cloud.google.com/sdk/downloads)
+ - Verify the sdk is set up correctly by running `gcloud compute instances list` and `gcloud compute images list --project kubernetes-node-e2e-images`

-These steps should be captured in [setup_host.sh](../../test/e2e_node/environment/setup_host.sh)

-### Option 2: Copying an existing host image from another project

-If there is an existing image in another project you would like to use, you can use the script
-[copy-e2e-image.sh](../../test/e2e_node/jenkins/copy-e2e-image.sh) to copy an image
-from one GCE project to another.
+Run: ```sh -copy-e2e-image.sh +make test_e2e_node REMOTE=true ``` -### Running the tests +This will: +- Build the Kubernetes source code +- Create a new GCE instance using the default test image + - Instance will be called **test-e2e-node-containervm-v20160321-image** +- Lookup the instance public ip address +- Copy a compressed archive file to the host containing the following binaries: + - ginkgo + - kubelet + - kube-apiserver + - e2e_node.test (this binary contains the actual tests to be run) +- Unzip the archive to a directory under **/tmp/gcloud** +- Run the tests using the `ginkgo` command + - Starts etcd, kube-apiserver, kubelet + - The ginkgo command is used because this supports more features than running the test binary directly +- Output the remote test results to STDOUT +- `scp` the log files back to the local host under /tmp/_artifacts/e2e-node-containervm-v20160321-image +- Stop the processes on the remote host +- **Leave the GCE instance running** -1. If running tests against a running host on gce +**Note: Subsequent tests run using the same image will *reuse the existing host* instead of deleting it and +provisioning a new one. To delete the GCE instance after each test see +*[DELETE_INSTANCE](#delete-instance-after-tests-run)*.** - * Make sure host names are resolvable to ssh by running `gcloud compute config-ssh` to - update ~/.ssh/config with the GCE hosts. After running this command, check the hostnames - in the ~/.ssh/config file and verify you have the correct access by running `ssh `. - * Copy [template.properties](../../test/e2e_node/jenkins/template.properties) +# Additional Remote Options - * Fill in `GCE_HOSTS` with the name of the host +## Run tests using different images - * Run `test/e2e_node/jenkins/e2e-node-jenkins.sh ` - * **Must be run from kubernetes root** +This is useful if you want to run tests against a host using a different OS distro or container runtime than +provided by the default image. -2. 
If running against a host anywhere else +List the available test images using gcloud. - * **Requires password-less ssh and sudo access** +```sh +make test_e2e_node LIST_IMAGES=true +``` - * Make sure this works - e.g. `ssh -- sudo echo "ok"` - * If ssh flags are required (e.g. `-i`), they can be used and passed to the -tests with `--ssh-options` +This will output a list of the available images for the default image project. - * `go run test/e2e_node/runner/run_e2e.go --logtostderr --hosts ` +Then run: - * **Must be run from kubernetes root** +```sh +make test_e2e_node REMOTE=true IMAGES="" +``` -3. Alternatively, manually build and copy `e2e_node_test.tar.gz` to a remote -host +## Run tests against a running GCE instance (not an image) - * Build the tar.gz `go run test/e2e_node/runner/run_e2e.go --logtostderr ---build-only` +This is useful if you have a host instance already running and want to run the tests there instead of on a new instance. - * Copy `e2e_node_test.tar.gz` to the remote host +```sh +make test_e2e_node REMOTE=true HOSTS="" +``` - * Extract the archive on the remote host `tar -xzvf e2e_node_test.tar.gz` +## Delete instance after tests run - * Run the tests `./e2e_node.test --logtostderr --vmodule=*=2 ---build-services=false --node-name=` +This is useful if you want to recreate the instance for each test run to trigger flakes related to starting the instance. - * Note: This must be run from the directory containing the kubelet and -kube-apiserver binaries.
+```sh +make test_e2e_node REMOTE=true DELETE_INSTANCES=true +``` -## Running tests against a gce image +## Keep instance, test binaries, and *processes* around after tests run -* Option 1: Build a gce image from a prepared gce host - * Create the host from a base image and configure it (see above) - * Run tests against this remote host to ensure that it is setup correctly -before doing anything else - * Create a gce *snapshot* of the instance - * Create a gce *disk* from the snapshot - * Create a gce *image* from the disk -* Option 2: Copy a prepared image from another project - * Instructions above -* Test that the necessary gcloud credentials are setup for the project - * `gcloud compute --project --zone images list` - * Verify that your image appears in the list -* Copy [template.properties](../../test/e2e_node/jenkins/template.properties) - * Fill in `GCE_PROJECT`, `GCE_ZONE`, `GCE_IMAGES` -* Run `test/e2e_node/jenkins/e2e-node-jenkins.sh ` - * **Must be run from kubernetes root** +This is useful if you want to manually inspect or debug the kubelet process run as part of the tests. -## Kubernetes Jenkins CI and PR builder +```sh +make test_e2e_node REMOTE=true CLEANUP=false +``` -Node e2e tests are run against a static list of host environments continuously -or when manually triggered on a github.com pull requests using the trigger -phrase `@k8s-bot test node e2e` +## Run tests using an image in another project -### CI Host environments +This is useful if you want to create your own host image in another project and use it for testing. -TBD +```sh +make test_e2e_node REMOTE=true IMAGE_PROJECT="" IMAGES="" +``` -### PR builder host environments +Setting up your own host image may require additional steps such as installing etcd or docker. See +[setup_host.sh](../../test/e2e_node/environment/setup_host.sh) for common steps to setup hosts to run node tests. 
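+The remote options above can be combined in a single invocation. The sketch below shows one possible combination; the image and project names are hypothetical placeholders, not real test images:

```sh
# Run the node e2e tests remotely against a custom image hosted in another
# project, and delete the GCE instance once the run finishes.
make test_e2e_node REMOTE=true \
  IMAGE_PROJECT="my-image-project" \
  IMAGES="my-node-image" \
  DELETE_INSTANCES=true
```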
-| linux distro | distro version | docker version | etcd version | cloud provider | -|-----------------|----------------|----------------|--------------|----------------| -| containervm | | 1.8 | | gce | -| coreos | stable | 1.8 | | gce | -| debian | jessie | 1.10 | | gce | -| ubuntu | trusty | 1.8 | | gce | -| ubuntu | trusty | 1.9 | | gce | -| ubuntu | trusty | 1.10 | | gce | +## Create instances using a different instance name prefix +This is useful if you want to create instances using a different name so that you can run multiple copies of the +test in parallel against different instances of the same image. +```sh +make test_e2e_node REMOTE=true INSTANCE_PREFIX="my-prefix" +``` + +# Additional Test Options for both Remote and Local execution + +## Only run a subset of the tests + +To run tests matching a regex: + +```sh +make test_e2e_node REMOTE=true FOCUS="" +``` + +To run tests NOT matching a regex: + +```sh +make test_e2e_node REMOTE=true SKIP="" +``` + +## Run tests continually until they fail + +This is useful if you are trying to debug a flaky test failure. This will cause ginkgo to continually +run the tests until they fail. **Note: this will only perform test setup once (e.g. creating the instance) and is +less useful for catching flakes related to creating the instance from an image.** + +```sh +make test_e2e_node REMOTE=true RUN_UNTIL_FAILURE=true +``` +# Notes on tests run by the Kubernetes project during pre- and post-submit +The node e2e tests are run by the PR builder for each Pull Request and the results are published at +the bottom of the comments section.
To re-run just the node e2e tests from the PR builder add the comment +`@k8s-bot node e2e test this issue: #` and **include a link to the test +failure logs if caused by a flake.** +The PR builder runs tests against the images listed in [jenkins-pull.properties](../../test/e2e_node/jenkins/jenkins-pull.properties) +The post submit tests run against the images listed in [jenkins-ci.properties](../../test/e2e_node/jenkins/jenkins-ci.properties) -- cgit v1.2.3 From ff8d9af6f8da3622764b6c81209360fdfab69633 Mon Sep 17 00:00:00 2001 From: Daniel Smith Date: Fri, 3 Jun 2016 16:49:35 -0700 Subject: update documentation & hooks --- how-to-doc.md | 24 ++++++++++++++++++------ pull-requests.md | 3 ++- 2 files changed, 20 insertions(+), 7 deletions(-) diff --git a/how-to-doc.md b/how-to-doc.md index 2c508611..67bffe15 100644 --- a/how-to-doc.md +++ b/how-to-doc.md @@ -50,12 +50,13 @@ Updated: 11/3/2015 - [Unversioned Warning](#unversioned-warning) - [Is Versioned](#is-versioned) - [Generate Analytics](#generate-analytics) +- [Generated documentation](#generated-documentation) ## General Concepts -Each document needs to be munged to ensure its format is correct, links are valid, etc. To munge a document, simply run `hack/update-generated-docs.sh`. We verify that all documents have been munged using `hack/verify-generated-docs.sh`. The scripts for munging documents are called mungers, see the [mungers section](#what-are-mungers) below if you're curious about how mungers are implemented or if you want to write one. +Each document needs to be munged to ensure its format is correct, links are valid, etc. To munge a document, simply run `hack/update-munge-docs.sh`. We verify that all documents have been munged using `hack/verify-munge-docs.sh`. The scripts for munging documents are called mungers, see the [mungers section](#what-are-mungers) below if you're curious about how mungers are implemented or if you want to write one. 
## How to Get a Table of Contents @@ -66,7 +67,7 @@ Instead of writing table of contents by hand, insert the following code in your ``` -After running `hack/update-generated-docs.sh`, you'll see a table of contents generated for you, layered based on the headings. +After running `hack/update-munge-docs.sh`, you'll see a table of contents generated for you, layered based on the headings. ## How to Write Links @@ -99,7 +100,7 @@ While writing examples, you may want to show the content of certain example file ``` -Note that you should replace `path/to/file` with the relative path to the example file. Then `hack/update-generated-docs.sh` will generate a code block with the content of the specified file, and a link to download it. This way, you save the time to do the copy-and-paste; what's better, the content won't become out-of-date every time you update the example file. +Note that you should replace `path/to/file` with the relative path to the example file. Then `hack/update-munge-docs.sh` will generate a code block with the content of the specified file, and a link to download it. This way, you save the time to do the copy-and-paste; what's better, the content won't become out-of-date every time you update the example file. For example, the following: @@ -108,7 +109,7 @@ For example, the following: ``` -generates the following after `hack/update-generated-docs.sh`: +generates the following after `hack/update-munge-docs.sh`: @@ -169,11 +170,11 @@ Mungers are like gofmt for md docs which we use to format documents. To use it, ``` -in your md files. Note that xxxx is the placeholder for a specific munger. Appropriate content will be generated and inserted between two brackets after you run `hack/update-generated-docs.sh`. See [munger document](http://releases.k8s.io/HEAD/cmd/mungedocs/) for more details. +in your md files. Note that xxxx is the placeholder for a specific munger. 
Appropriate content will be generated and inserted between two brackets after you run `hack/update-munge-docs.sh`. See [munger document](http://releases.k8s.io/HEAD/cmd/mungedocs/) for more details. ## Auto-added Mungers -After running `hack/update-generated-docs.sh`, you may see some code / mungers in your md file that are auto-added. You don't have to add them manually. It's recommended to just read this section as a reference instead of messing up with the following mungers. +After running `hack/update-munge-docs.sh`, you may see some code / mungers in your md file that are auto-added. You don't have to add them manually. It's recommended to just read this section as a reference instead of messing up with the following mungers. ### Unversioned Warning @@ -207,6 +208,17 @@ ANALYTICS munger inserts a Google Anaylytics link for this page. ``` +# Generated documentation + +Some documents can be generated automatically. Run `hack/generate-docs.sh` to +populate your repository with these generated documents, and a list of the files +it generates is placed in `.generated_docs`. To reduce merge conflicts, we do +not want to check these documents in; however, to make the link checker in the +munger happy, we check in a placeholder. `hack/update-generated-docs.sh` puts a +placeholder in the location where each generated document would go, and +`hack/verify-generated-docs.sh` verifies that the placeholder is in place. + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/how-to-doc.md?pixel)]() diff --git a/pull-requests.md b/pull-requests.md index 64a1c2c6..6803c464 100644 --- a/pull-requests.md +++ b/pull-requests.md @@ -69,8 +69,9 @@ Additionally, for infrequent or new contributors, we require the on call to appl The following will save time for both you and your reviewer: * Enable [pre-commit hooks](development.md#committing-changes-to-your-fork) and verify they pass. -* Verify `hack/verify-generated-docs.sh` passes. 
+* Verify `hack/verify-all.sh` passes. * Verify `hack/test-go.sh` passes. +* Verify `hack/test-integration.sh` passes. ## Release Notes -- cgit v1.2.3 From e1cb35afcdaa20d4cb4b02b80b795aa19243bc50 Mon Sep 17 00:00:00 2001 From: Colin Hom Date: Wed, 8 Jun 2016 13:04:45 -0700 Subject: document federation e2e cli flow --- e2e-tests.md | 121 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 121 insertions(+) diff --git a/e2e-tests.md b/e2e-tests.md index d09ab9e7..b67d3a5e 100644 --- a/e2e-tests.md +++ b/e2e-tests.md @@ -45,6 +45,14 @@ Updated: 5/3/2016 - [Cleaning up](#cleaning-up) - [Advanced testing](#advanced-testing) - [Bringing up a cluster for testing](#bringing-up-a-cluster-for-testing) + - [Federation e2e tests](#federation-e2e-tests) + - [Configuring federation e2e tests](#configuring-federation-e2e-tests) + - [Image Push Repository](#image-push-repository) + - [Build](#build) + - [Deploy federation control plane](#deploy-federation-control-plane) + - [Run the Tests](#run-the-tests) + - [Teardown](#teardown) + - [Shortcuts for test developers](#shortcuts-for-test-developers) - [Debugging clusters](#debugging-clusters) - [Local clusters](#local-clusters) - [Testing against local clusters](#testing-against-local-clusters) @@ -232,6 +240,119 @@ stale permissions can cause problems. - `sudo iptables -F`, clear ip tables rules left by the kube-proxy. +### Federation e2e tests + +By default, `e2e.go` provisions a single Kubernetes cluster, and any `Feature:Federation` ginkgo tests will be skipped. + +Federation e2e testing involves bringing up multiple "underlying" Kubernetes clusters, +and deploying the federation control plane as a Kubernetes application on the underlying clusters. + +The federation e2e tests are still managed via `e2e.go`, but require some extra configuration items. + +#### Configuring federation e2e tests + +The following environment variables will enable federation e2e building, provisioning and testing.
+ +```sh +$ export FEDERATION=true +$ export E2E_ZONES="us-central1-a us-central1-b us-central1-f" +``` + +A Kubernetes cluster will be provisioned in each zone listed in `E2E_ZONES`. A zone can only appear once in the `E2E_ZONES` list. + +#### Image Push Repository + +Next, specify the docker repository where your ci images will be pushed. + +* **If `KUBERNETES_PROVIDER=gce` or `KUBERNETES_PROVIDER=gke`**: + + You can simply set your push repo base based on your project name, and the necessary repositories will be auto-created when you + first push your container images. + + ```sh + $ export FEDERATION_PUSH_REPO_BASE="gcr.io/${GCE_PROJECT_NAME}" + ``` + + Skip ahead to the **Build** section. + +* **For all other providers**: + + You'll be responsible for creating and managing access to the repositories manually. + + ```sh + $ export FEDERATION_PUSH_REPO_BASE="quay.io/colin_hom" + ``` + + Given this example, the `federation-apiserver` container image will be pushed to the repository + `quay.io/colin_hom/federation-apiserver`. + + The docker client on the machine running `e2e.go` must have push access for the following pre-existing repositories: + + * `${FEDERATION_PUSH_REPO_BASE}/federation-apiserver` + * `${FEDERATION_PUSH_REPO_BASE}/federation-controller-manager` + + These repositories must allow public read access, as the e2e node docker daemons will not have any credentials. If you're using + gce/gke as your provider, the repositories will have read-access by default. 
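+Putting the configuration steps together, a minimal environment for a GCE-backed federation e2e run might look like the following sketch (the project name is a placeholder for your own GCE project):

```sh
export FEDERATION=true
# One underlying cluster is created per zone; the federation control plane
# is deployed into the cluster in the last zone listed.
export E2E_ZONES="us-central1-a us-central1-b us-central1-f"
export FEDERATION_PUSH_REPO_BASE="gcr.io/my-gce-project"
```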
+ +#### Build + +* Compile the binaries and build container images: + + ```sh + $ KUBE_RELEASE_RUN_TESTS=n KUBE_FASTBUILD=true go run hack/e2e.go -v -build + ``` + +* Push the federation container images + + ```sh + $ build/push-federation-images.sh + ``` + +#### Deploy federation control plane + +The following command will create the underlying Kubernetes clusters in each of `E2E_ZONES`, and then provision the +federation control plane in the cluster occupying the last zone in the `E2E_ZONES` list. + +```sh +$ go run hack/e2e.go -v -up +``` + +#### Run the Tests + +This will run only the `Feature:Federation` e2e tests. You can omit the `ginkgo.focus` argument to run the entire e2e suite. + +```sh +$ go run hack/e2e.go -v -test --test_args="--ginkgo.focus=\[Feature:Federation\]" +``` + +#### Teardown + +```sh +$ go run hack/e2e.go -v -down +``` + +#### Shortcuts for test developers + +* To speed up `e2e.go -up`, provision a single-node kubernetes cluster in a single e2e zone: + + `NUM_NODES=1 E2E_ZONES="us-central1-f"` + + Keep in mind that some tests may require multiple underlying clusters and/or minimum compute resource availability. + +* You can quickly recompile the e2e testing framework via `go install ./test/e2e`. This will not do anything besides + allow you to verify that the go code compiles. + +* If you want to run your e2e testing framework without re-provisioning the e2e setup, you can do so via + `make WHAT=test/e2e/e2e.test` and then re-running the ginkgo tests. + +* If you're hacking around with the federation control plane deployment itself, + you can quickly re-deploy the federation control plane Kubernetes manifests without tearing any resources down. 
+ To re-deploy the federation control plane after running `-up` for the first time: + + ```sh + $ federation/cluster/federation-up.sh + ``` + ### Debugging clusters If a cluster fails to initialize, or you'd like to better understand cluster -- cgit v1.2.3 From 31b4c511aaa0bf76cc0acaa422d3f864306470c3 Mon Sep 17 00:00:00 2001 From: David McMahon Date: Fri, 10 Jun 2016 14:21:20 -0700 Subject: Versioning docs and examples for v1.4.0-alpha.0. --- README.md | 36 ++++++--------------------- adding-an-APIGroup.md | 36 ++++++--------------------- api-conventions.md | 36 ++++++--------------------- api_changes.md | 36 ++++++--------------------- automation.md | 36 ++++++--------------------- cherry-picks.md | 38 ++++++---------------------- cli-roadmap.md | 36 ++++++--------------------- client-libraries.md | 38 ++++++---------------------- coding-conventions.md | 36 ++++++--------------------- collab.md | 36 ++++++--------------------- developer-guides/vagrant.md | 36 ++++++--------------------- development.md | 38 ++++++---------------------- e2e-node-tests.md | 36 ++++++--------------------- e2e-tests.md | 36 ++++++--------------------- faster_reviews.md | 36 ++++++--------------------- flaky-tests.md | 36 ++++++--------------------- generating-clientset.md | 36 ++++++--------------------- getting-builds.md | 38 ++++++---------------------- how-to-doc.md | 34 +++---------------------- instrumentation.md | 36 ++++++--------------------- issues.md | 36 ++++++--------------------- kubectl-conventions.md | 36 ++++++--------------------- kubemark-guide.md | 36 ++++++--------------------- logging.md | 36 ++++++--------------------- making-release-notes.md | 36 ++++++--------------------- mesos-style.md | 36 ++++++--------------------- node-performance-testing.md | 36 ++++++--------------------- on-call-build-cop.md | 36 ++++++--------------------- on-call-rotations.md | 36 ++++++--------------------- on-call-user-support.md | 36 ++++++--------------------- owners.md | 36 
++++++--------------------- profiling.md | 36 ++++++--------------------- pull-requests.md | 36 ++++++--------------------- releasing.md | 36 ++++++--------------------- running-locally.md | 36 ++++++--------------------- scheduler.md | 48 ++++++++++-------------------------- scheduler_algorithm.md | 40 +++++++----------------------- testing.md | 31 ++++++----------------- update-release-docs.md | 36 ++++++--------------------- updating-docs-for-feature-changes.md | 31 ++++++----------------- writing-a-getting-started-guide.md | 36 ++++++--------------------- writing-good-e2e-tests.md | 36 ++++++--------------------- 42 files changed, 303 insertions(+), 1221 deletions(-) diff --git a/README.md b/README.md index b1af07df..9a563619 100644 --- a/README.md +++ b/README.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

PLEASE NOTE: This document applies to the HEAD of the source tree

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/README.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -116,6 +87,13 @@ Guide](../admin/README.md). and how the version information gets embedded into the built binaries. + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/README.md?pixel)]() diff --git a/adding-an-APIGroup.md b/adding-an-APIGroup.md index e0f95fc7..17fe534a 100644 --- a/adding-an-APIGroup.md +++ b/adding-an-APIGroup.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

PLEASE NOTE: This document applies to the HEAD of the source tree

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/adding-an-APIGroup.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -123,6 +94,13 @@ TODO: Add a troubleshooting section. + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/adding-an-APIGroup.md?pixel)]() diff --git a/api-conventions.md b/api-conventions.md index 757de2dd..97ee0d86 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

PLEASE NOTE: This document applies to the HEAD of the source tree

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - API Conventions @@ -1377,6 +1348,13 @@ be less than 256", "must be greater than or equal to 0". Do not use words like "larger than", "bigger than", "more than", "higher than", etc. * When specifying numeric ranges, use inclusive ranges when possible. + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/api-conventions.md?pixel)]() diff --git a/api_changes.md b/api_changes.md index 4ec383e7..a561d356 100644 --- a/api_changes.md +++ b/api_changes.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

PLEASE NOTE: This document applies to the HEAD of the source tree

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/api_changes.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -758,6 +729,13 @@ A releated issue is how a cluster manager can roll back from a new version with a new feature, that is already being used by users. See https://github.com/kubernetes/kubernetes/issues/4855. + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/api_changes.md?pixel)]() diff --git a/automation.md b/automation.md index 6ba74fd0..b0cb4438 100644 --- a/automation.md +++ b/automation.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

PLEASE NOTE: This document applies to the HEAD of the source tree

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/automation.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -131,6 +102,13 @@ the issue number you found or filed. Any pushes of new code to the PR will automatically trigger a new test. No human interraction is required. + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/automation.md?pixel)]() diff --git a/cherry-picks.md b/cherry-picks.md index d5456a1a..7fd8be13 100644 --- a/cherry-picks.md +++ b/cherry-picks.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

PLEASE NOTE: This document applies to the HEAD of the source tree

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/cherry-picks.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -106,11 +77,18 @@ requested - this should not be the norm, but it may happen. See the [cherrypick queue dashboard](http://cherrypick.k8s.io/#/queue) for status of PRs labeled as `cherrypick-candidate`. -[Contributor License Agreements](http://releases.k8s.io/HEAD/CONTRIBUTING.md) is +[Contributor License Agreements](http://releases.k8s.io/v1.4.0-alpha.0/CONTRIBUTING.md) is considered implicit for all code within cherry-pick pull requests, ***unless there is a large conflict***. + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/cherry-picks.md?pixel)]() diff --git a/cli-roadmap.md b/cli-roadmap.md index 7a7791b8..8d000ad4 100644 --- a/cli-roadmap.md +++ b/cli-roadmap.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

PLEASE NOTE: This document applies to the HEAD of the source tree

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/cli-roadmap.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -40,6 +11,13 @@ See github issues with the following labels: * [component/clientlib](https://github.com/kubernetes/kubernetes/labels/component/clientlib) + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/cli-roadmap.md?pixel)]() diff --git a/client-libraries.md b/client-libraries.md index 95a3dfeb..c36b1ad1 100644 --- a/client-libraries.md +++ b/client-libraries.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

PLEASE NOTE: This document applies to the HEAD of the source tree

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/client-libraries.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -36,7 +7,7 @@ Documentation for other releases can be found at ### Supported - * [Go](http://releases.k8s.io/HEAD/pkg/client/) + * [Go](http://releases.k8s.io/v1.4.0-alpha.0/pkg/client/) ### User Contributed @@ -55,6 +26,13 @@ the core Kubernetes team* * [Ruby](https://github.com/abonas/kubeclient) * [Scala](https://github.com/doriordan/skuber) + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/client-libraries.md?pixel)]() diff --git a/coding-conventions.md b/coding-conventions.md index 3a59cd2a..1d7d75ac 100644 --- a/coding-conventions.md +++ b/coding-conventions.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

PLEASE NOTE: This document applies to the HEAD of the source tree

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/coding-conventions.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -176,6 +147,13 @@ using the system](../user-guide/config-best-practices.md) - [Go landmines](https://gist.github.com/lavalamp/4bd23295a9f32706a48f) + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/coding-conventions.md?pixel)]() diff --git a/collab.md b/collab.md index 0742b548..68c13df5 100644 --- a/collab.md +++ b/collab.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

PLEASE NOTE: This document applies to the HEAD of the source tree

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/collab.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -116,6 +87,13 @@ hold shall not be merged until the person who requested the hold acks the review, withdraws their hold, or is overruled by a preponderance of maintainers. + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/collab.md?pixel)]() diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md index 64bfa13f..d87152ef 100644 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

PLEASE NOTE: This document applies to the HEAD of the source tree

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/developer-guides/vagrant.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -447,6 +418,13 @@ provider, which uses nfs by default. For example: export KUBERNETES_VAGRANT_USE_NFS=true ``` + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/developer-guides/vagrant.md?pixel)]() diff --git a/development.md b/development.md index 9e008191..550e4878 100644 --- a/development.md +++ b/development.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

-PLEASE NOTE: This document applies to the HEAD of the source tree
-
-If you are using a released version of Kubernetes, you should
-refer to the docs that go with that version.
-
-
-
-The latest release of this document can be found
-[here](http://releases.k8s.io/release-1.2/docs/devel/development.md).
-
-Documentation for other releases can be found at
-[releases.k8s.io](http://releases.k8s.io).
-
---
-
-
@@ -48,7 +19,7 @@ branch, but release branches of Kubernetes should not change.
 Official releases are built using Docker containers. To build Kubernetes using
 Docker please follow [these
-instructions](http://releases.k8s.io/HEAD/build/README.md).
+instructions](http://releases.k8s.io/v1.4.0-alpha.0/build/README.md).
 
 ### Go development environment
@@ -320,6 +291,13 @@ hack/update-generated-docs.sh
 ```
+
+
+
+
+
+
+
 [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/development.md?pixel)]()
diff --git a/e2e-node-tests.md b/e2e-node-tests.md
index f2869134..876ad526 100644
--- a/e2e-node-tests.md
+++ b/e2e-node-tests.md
@@ -1,34 +1,5 @@
-
-
-WARNING
-WARNING
-WARNING
-WARNING
-WARNING
-
-
-PLEASE NOTE: This document applies to the HEAD of the source tree
-
-If you are using a released version of Kubernetes, you should
-refer to the docs that go with that version.
-
-
-
-The latest release of this document can be found
-[here](http://releases.k8s.io/release-1.2/docs/devel/e2e-node-tests.md).
-
-Documentation for other releases can be found at
-[releases.k8s.io](http://releases.k8s.io).
-
---
-
-
@@ -217,6 +188,13 @@ The PR builder runs tests against the images listed in [jenkins-pull.properties]
 The post submit tests run against the images listed in
 [jenkins-ci.properties](../../test/e2e_node/jenkins/jenkins-ci.properties)
+
+
+
+
+
+
+
 [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/e2e-node-tests.md?pixel)]()
diff --git a/e2e-tests.md b/e2e-tests.md
index b67d3a5e..f1137a26 100644
--- a/e2e-tests.md
+++ b/e2e-tests.md
@@ -1,34 +1,5 @@
-
-
-WARNING
-WARNING
-WARNING
-WARNING
-WARNING
-
-
-PLEASE NOTE: This document applies to the HEAD of the source tree
-
-If you are using a released version of Kubernetes, you should
-refer to the docs that go with that version.
-
-
-
-The latest release of this document can be found
-[here](http://releases.k8s.io/release-1.2/docs/devel/e2e-tests.md).
-
-Documentation for other releases can be found at
-[releases.k8s.io](http://releases.k8s.io).
-
---
-
-
@@ -664,6 +635,13 @@ You should also know the [testing conventions](coding-conventions.md#testing-con
+
+
+
+
+
+
+
 [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/e2e-tests.md?pixel)]()
diff --git a/faster_reviews.md b/faster_reviews.md
index eb7416d6..d6bc43bd 100644
--- a/faster_reviews.md
+++ b/faster_reviews.md
@@ -1,34 +1,5 @@
-
-
-WARNING
-WARNING
-WARNING
-WARNING
-WARNING
-
-
-PLEASE NOTE: This document applies to the HEAD of the source tree
-
-If you are using a released version of Kubernetes, you should
-refer to the docs that go with that version.
-
-
-
-The latest release of this document can be found
-[here](http://releases.k8s.io/release-1.2/docs/devel/faster_reviews.md).
-
-Documentation for other releases can be found at
-[releases.k8s.io](http://releases.k8s.io).
-
---
-
-
@@ -235,6 +206,13 @@ a bit of thought into how your work can be made easier to review. If you do
 these things your PRs will flow much more easily.
+
+
+
+
+
+
+
 [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/faster_reviews.md?pixel)]()
diff --git a/flaky-tests.md b/flaky-tests.md
index b599f80f..e6563220 100644
--- a/flaky-tests.md
+++ b/flaky-tests.md
@@ -1,34 +1,5 @@
-
-
-WARNING
-WARNING
-WARNING
-WARNING
-WARNING
-
-
-PLEASE NOTE: This document applies to the HEAD of the source tree
-
-If you are using a released version of Kubernetes, you should
-refer to the docs that go with that version.
-
-
-
-The latest release of this document can be found
-[here](http://releases.k8s.io/release-1.2/docs/devel/flaky-tests.md).
-
-Documentation for other releases can be found at
-[releases.k8s.io](http://releases.k8s.io).
-
---
-
-
@@ -197,6 +168,13 @@ If you do a final check for flakes with `docker ps -a`, ignore tasks that exited
 Happy flake hunting!
+
+
+
+
+
+
+
 [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/flaky-tests.md?pixel)]()
diff --git a/generating-clientset.md b/generating-clientset.md
index 42851cc9..ea411103 100644
--- a/generating-clientset.md
+++ b/generating-clientset.md
@@ -1,34 +1,5 @@
-
-
-WARNING
-WARNING
-WARNING
-WARNING
-WARNING
-
-
-PLEASE NOTE: This document applies to the HEAD of the source tree
-
-If you are using a released version of Kubernetes, you should
-refer to the docs that go with that version.
-
-
-
-The latest release of this document can be found
-[here](http://releases.k8s.io/release-1.2/docs/devel/generating-clientset.md).
-
-Documentation for other releases can be found at
-[releases.k8s.io](http://releases.k8s.io).
-
---
-
-
@@ -64,6 +35,13 @@ At the 1.2 release, we have two released clientsets in the repo: internalclients
+
+
+
+
+
+
+
 [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/generating-clientset.md?pixel)]()
diff --git a/getting-builds.md b/getting-builds.md
index ab1df171..bc3223f9 100644
--- a/getting-builds.md
+++ b/getting-builds.md
@@ -1,40 +1,11 @@
-
-
-WARNING
-WARNING
-WARNING
-WARNING
-WARNING
-
-
-PLEASE NOTE: This document applies to the HEAD of the source tree
-
-If you are using a released version of Kubernetes, you should
-refer to the docs that go with that version.
-
-
-
-The latest release of this document can be found
-[here](http://releases.k8s.io/release-1.2/docs/devel/getting-builds.md).
-
-Documentation for other releases can be found at
-[releases.k8s.io](http://releases.k8s.io).
-
---
-
-
 # Getting Kubernetes Builds
 
-You can use [hack/get-build.sh](http://releases.k8s.io/HEAD/hack/get-build.sh) to or use as a reference on how to get the most recent builds with curl. With `get-build.sh` you can grab the most recent stable build, the most recent release candidate, or the most recent build to pass our ci and gce e2e tests (essentially a nightly build).
+You can use [hack/get-build.sh](http://releases.k8s.io/v1.4.0-alpha.0/hack/get-build.sh) to or use as a reference on how to get the most recent builds with curl. With `get-build.sh` you can grab the most recent stable build, the most recent release candidate, or the most recent build to pass our ci and gce e2e tests (essentially a nightly build).
 
 Run `./hack/get-build.sh -h` for its usage.
 
@@ -74,6 +45,13 @@ $ curl -sSL https://storage.googleapis.com/pub/gsutil.tar.gz | sudo tar -xz -C /
 $ sudo ln -s /usr/local/src/gsutil/gsutil /usr/bin/gsutil
 ```
+
+
+
+
+
+
+
 [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/getting-builds.md?pixel)]()
diff --git a/how-to-doc.md b/how-to-doc.md
index 67bffe15..a978b908 100644
--- a/how-to-doc.md
+++ b/how-to-doc.md
@@ -1,29 +1,5 @@
-
-
-WARNING
-WARNING
-WARNING
-WARNING
-WARNING
-
-
-PLEASE NOTE: This document applies to the HEAD of the source tree
-
-If you are using a released version of Kubernetes, you should
-refer to the docs that go with that version.
-
-Documentation for other releases can be found at
-[releases.k8s.io](http://releases.k8s.io).
-
---
-
-
@@ -73,7 +49,7 @@ After running `hack/update-munge-docs.sh`, you'll see a table of contents genera
 It's important to follow the rules when writing links. It helps us correctly versionize documents for each release.
 
-Use inline links instead of urls at all times. When you add internal links to `docs/` or `examples/`, use relative links; otherwise, use `http://releases.k8s.io/HEAD/`. For example, avoid using:
+Use inline links instead of urls at all times. When you add internal links to `docs/` or `examples/`, use relative links; otherwise, use `http://releases.k8s.io/v1.4.0-alpha.0/`. For example, avoid using:
 
 ```
 [GCE](https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/gce.md) # note that it's under docs/
@@ -85,11 +61,11 @@ Instead, use:
 
 ```
 [GCE](../getting-started-guides/gce.md) # note that it's under docs/
-[Kubernetes package](http://releases.k8s.io/HEAD/pkg/) # note that it's under pkg/
+[Kubernetes package](http://releases.k8s.io/v1.4.0-alpha.0/pkg/) # note that it's under pkg/
 [Kubernetes](http://kubernetes.io/) # external link
 ```
 
-The above example generates the following links: [GCE](../getting-started-guides/gce.md), [Kubernetes package](http://releases.k8s.io/HEAD/pkg/), and [Kubernetes](http://kubernetes.io/).
+The above example generates the following links: [GCE](../getting-started-guides/gce.md), [Kubernetes package](http://releases.k8s.io/v1.4.0-alpha.0/pkg/), and [Kubernetes](http://kubernetes.io/).
 
 ## How to Include an Example
 
@@ -170,7 +146,7 @@ Mungers are like gofmt for md docs which we use to format documents. To use it,
 ```
 
-in your md files. Note that xxxx is the placeholder for a specific munger. Appropriate content will be generated and inserted between two brackets after you run `hack/update-munge-docs.sh`. See [munger document](http://releases.k8s.io/HEAD/cmd/mungedocs/) for more details.
+in your md files. Note that xxxx is the placeholder for a specific munger. Appropriate content will be generated and inserted between two brackets after you run `hack/update-munge-docs.sh`. See [munger document](http://releases.k8s.io/v1.4.0-alpha.0/cmd/mungedocs/) for more details.
 
 ## Auto-added Mungers
 
@@ -183,8 +159,6 @@ UNVERSIONED_WARNING munger inserts unversioned warning which warns the users whe
 ```
-
-
 ```
diff --git a/instrumentation.md b/instrumentation.md
index 5e195f6b..b8f15333 100644
--- a/instrumentation.md
+++ b/instrumentation.md
@@ -1,34 +1,5 @@
-
-
-WARNING
-WARNING
-WARNING
-WARNING
-WARNING
-
-
-PLEASE NOTE: This document applies to the HEAD of the source tree
-
-If you are using a released version of Kubernetes, you should
-refer to the docs that go with that version.
-
-
-
-The latest release of this document can be found
-[here](http://releases.k8s.io/release-1.2/docs/devel/instrumentation.md).
-
-Documentation for other releases can be found at
-[releases.k8s.io](http://releases.k8s.io).
-
---
-
-
 Instrumenting Kubernetes with a new metric
@@ -66,6 +37,13 @@ https://github.com/prometheus/client_golang/blob/master/prometheus/histogram.go
 https://github.com/prometheus/client_golang/blob/master/prometheus/summary.go
+
+
+
+
+
+
+
 [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/instrumentation.md?pixel)]()
diff --git a/issues.md b/issues.md
index 1a068faa..e59025ea 100644
--- a/issues.md
+++ b/issues.md
@@ -1,34 +1,5 @@
-
-
-WARNING
-WARNING
-WARNING
-WARNING
-WARNING
-
-
-PLEASE NOTE: This document applies to the HEAD of the source tree
-
-If you are using a released version of Kubernetes, you should
-refer to the docs that go with that version.
-
-
-
-The latest release of this document can be found
-[here](http://releases.k8s.io/release-1.2/docs/devel/issues.md).
-
-Documentation for other releases can be found at
-[releases.k8s.io](http://releases.k8s.io).
-
---
-
-
@@ -88,6 +59,13 @@ release if it gets done, but we wouldn't block the release on it. A few days
 before release, we will probably move all P2 and P3 bugs out of that milestone in bulk.
+
+
+
+
+
+
+
 [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/issues.md?pixel)]()
diff --git a/kubectl-conventions.md b/kubectl-conventions.md
index 23a73f11..258e25cb 100644
--- a/kubectl-conventions.md
+++ b/kubectl-conventions.md
@@ -1,34 +1,5 @@
-
-
-WARNING
-WARNING
-WARNING
-WARNING
-WARNING
-
-
-PLEASE NOTE: This document applies to the HEAD of the source tree
-
-If you are using a released version of Kubernetes, you should
-refer to the docs that go with that version.
-
-
-
-The latest release of this document can be found
-[here](http://releases.k8s.io/release-1.2/docs/devel/kubectl-conventions.md).
-
-Documentation for other releases can be found at
-[releases.k8s.io](http://releases.k8s.io).
-
---
-
-
@@ -407,6 +378,13 @@ method which configures the generated namespace that callers of the generator
 creating it.
+
+
+
+
+
+
+
 [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/kubectl-conventions.md?pixel)]()
diff --git a/kubemark-guide.md b/kubemark-guide.md
index 3f93cd36..ed9ff7ee 100644
--- a/kubemark-guide.md
+++ b/kubemark-guide.md
@@ -1,34 +1,5 @@
-
-
-WARNING
-WARNING
-WARNING
-WARNING
-WARNING
-
-
-PLEASE NOTE: This document applies to the HEAD of the source tree
-
-If you are using a released version of Kubernetes, you should
-refer to the docs that go with that version.
-
-
-
-The latest release of this document can be found
-[here](http://releases.k8s.io/release-1.2/docs/devel/kubemark-guide.md).
-
-Documentation for other releases can be found at
-[releases.k8s.io](http://releases.k8s.io).
-
---
-
-
@@ -239,6 +210,13 @@ it’s crucial to make it as simple as possible to allow running a big number of
 Hollows on a single core.
+
+
+
+
+
+
+
 [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/kubemark-guide.md?pixel)]()
diff --git a/logging.md b/logging.md
index f0350dca..bbeba7c0 100644
--- a/logging.md
+++ b/logging.md
@@ -1,34 +1,5 @@
-
-
-WARNING
-WARNING
-WARNING
-WARNING
-WARNING
-
-
-PLEASE NOTE: This document applies to the HEAD of the source tree
-
-If you are using a released version of Kubernetes, you should
-refer to the docs that go with that version.
-
-
-
-The latest release of this document can be found
-[here](http://releases.k8s.io/release-1.2/docs/devel/logging.md).
-
-Documentation for other releases can be found at
-[releases.k8s.io](http://releases.k8s.io).
-
---
-
-
@@ -65,6 +36,13 @@ environments may wish to run at V(3) or V(4). If you wish to change the log
 level, you can pass in `-v=X` where X is the desired maximum level to log.
+
+
+
+
+
+
+
 [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/logging.md?pixel)]()
diff --git a/making-release-notes.md b/making-release-notes.md
index 01ef369e..88998312 100644
--- a/making-release-notes.md
+++ b/making-release-notes.md
@@ -1,34 +1,5 @@
-
-
-WARNING
-WARNING
-WARNING
-WARNING
-WARNING
-
-
-PLEASE NOTE: This document applies to the HEAD of the source tree
-
-If you are using a released version of Kubernetes, you should
-refer to the docs that go with that version.
-
-
-
-The latest release of this document can be found
-[here](http://releases.k8s.io/release-1.2/docs/devel/making-release-notes.md).
-
-Documentation for other releases can be found at
-[releases.k8s.io](http://releases.k8s.io).
-
---
-
-
@@ -81,6 +52,13 @@ page.
 * Press Save.
+
+
+
+
+
+
+
 [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/making-release-notes.md?pixel)]()
diff --git a/mesos-style.md b/mesos-style.md
index fdf9da08..25055dea 100644
--- a/mesos-style.md
+++ b/mesos-style.md
@@ -1,34 +1,5 @@
-
-
-WARNING
-WARNING
-WARNING
-WARNING
-WARNING
-
-
-PLEASE NOTE: This document applies to the HEAD of the source tree
-
-If you are using a released version of Kubernetes, you should
-refer to the docs that go with that version.
-
-
-
-The latest release of this document can be found
-[here](http://releases.k8s.io/release-1.2/docs/devel/mesos-style.md).
-
-Documentation for other releases can be found at
-[releases.k8s.io](http://releases.k8s.io).
-
---
-
-
@@ -247,6 +218,13 @@ Borg is described [here](http://research.google.com/pubs/pub43438.html).
+
+
+
+
+
+
+
 [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/mesos-style.md?pixel)]()
diff --git a/node-performance-testing.md b/node-performance-testing.md
index 54c15dee..4fec2764 100644
--- a/node-performance-testing.md
+++ b/node-performance-testing.md
@@ -1,34 +1,5 @@
-
-
-WARNING
-WARNING
-WARNING
-WARNING
-WARNING
-
-
-PLEASE NOTE: This document applies to the HEAD of the source tree
-
-If you are using a released version of Kubernetes, you should
-refer to the docs that go with that version.
-
-
-
-The latest release of this document can be found
-[here](http://releases.k8s.io/release-1.2/docs/devel/node-performance-testing.md).
-
-Documentation for other releases can be found at
-[releases.k8s.io](http://releases.k8s.io).
-
---
-
-
@@ -156,6 +127,13 @@ More details on benchmarking [here](https://golang.org/pkg/testing/).
+
+
+
+
+
+
+
 [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/node-performance-testing.md?pixel)]()
diff --git a/on-call-build-cop.md b/on-call-build-cop.md
index 7a91e5cb..e0bfd240 100644
--- a/on-call-build-cop.md
+++ b/on-call-build-cop.md
@@ -1,34 +1,5 @@
-
-
-WARNING
-WARNING
-WARNING
-WARNING
-WARNING
-
-
-PLEASE NOTE: This document applies to the HEAD of the source tree
-
-If you are using a released version of Kubernetes, you should
-refer to the docs that go with that version.
-
-
-
-The latest release of this document can be found
-[here](http://releases.k8s.io/release-1.2/docs/devel/on-call-build-cop.md).
-
-Documentation for other releases can be found at
-[releases.k8s.io](http://releases.k8s.io).
-
---
-
-
@@ -181,6 +152,13 @@ the build cop is expected to file issues for any flaky tests they encounter.
 [@k8s-oncall](https://github.com/k8s-oncall) will reach the current person on call.
+
+
+
+
+
+
+
 [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/on-call-build-cop.md?pixel)]()
diff --git a/on-call-rotations.md b/on-call-rotations.md
index 6cf8d0bf..a7ace0c0 100644
--- a/on-call-rotations.md
+++ b/on-call-rotations.md
@@ -1,34 +1,5 @@
-
-
-WARNING
-WARNING
-WARNING
-WARNING
-WARNING
-
-
-PLEASE NOTE: This document applies to the HEAD of the source tree
-
-If you are using a released version of Kubernetes, you should
-refer to the docs that go with that version.
-
-
-
-The latest release of this document can be found
-[here](http://releases.k8s.io/release-1.2/docs/devel/on-call-rotations.md).
-
-Documentation for other releases can be found at
-[releases.k8s.io](http://releases.k8s.io).
-
---
-
-
@@ -72,6 +43,13 @@ milestones, for instance).
 * [Github and Build Cop Rotation](on-call-build-cop.md)
 * [User Support Rotation](on-call-user-support.md)
+
+
+
+
+
+
+
 [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/on-call-rotations.md?pixel)]()
diff --git a/on-call-user-support.md b/on-call-user-support.md
index 1e9f3cb3..f8d38866 100644
--- a/on-call-user-support.md
+++ b/on-call-user-support.md
@@ -1,34 +1,5 @@
-
-
-WARNING
-WARNING
-WARNING
-WARNING
-WARNING
-
-
-PLEASE NOTE: This document applies to the HEAD of the source tree
-
-If you are using a released version of Kubernetes, you should
-refer to the docs that go with that version.
-
-
-
-The latest release of this document can be found
-[here](http://releases.k8s.io/release-1.2/docs/devel/on-call-user-support.md).
-
-Documentation for other releases can be found at
-[releases.k8s.io](http://releases.k8s.io).
-
---
-
-
@@ -118,6 +89,13 @@ current person on call.
+
+
+
+
+
+
+
 [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/on-call-user-support.md?pixel)]()
diff --git a/owners.md b/owners.md
index dcd14483..70acf71d 100644
--- a/owners.md
+++ b/owners.md
@@ -1,34 +1,5 @@
-
-
-WARNING
-WARNING
-WARNING
-WARNING
-WARNING
-
-
-PLEASE NOTE: This document applies to the HEAD of the source tree
-
-If you are using a released version of Kubernetes, you should
-refer to the docs that go with that version.
-
-
-
-The latest release of this document can be found
-[here](http://releases.k8s.io/release-1.2/docs/devel/owners.md).
-
-Documentation for other releases can be found at
-[releases.k8s.io](http://releases.k8s.io).
-
---
-
-
@@ -127,6 +98,13 @@ parent's OWNERS file is used instead. There will be a top-level OWNERS file to
 Obviously changing the OWNERS file requires OWNERS permission.
+
+
+
+
+
+
+
 [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/owners.md?pixel)]()
diff --git a/profiling.md b/profiling.md
index 5e74d25f..a95840f4 100644
--- a/profiling.md
+++ b/profiling.md
@@ -1,34 +1,5 @@
-
-
-WARNING
-WARNING
-WARNING
-WARNING
-WARNING
-
-
-PLEASE NOTE: This document applies to the HEAD of the source tree
-
-If you are using a released version of Kubernetes, you should
-refer to the docs that go with that version.
-
-
-
-The latest release of this document can be found
-[here](http://releases.k8s.io/release-1.2/docs/devel/profiling.md).
-
-Documentation for other releases can be found at
-[releases.k8s.io](http://releases.k8s.io).
-
---
-
-
@@ -75,6 +46,13 @@ to get 30 sec. CPU profile.
 To enable contention profiling you need to add line `rt.SetBlockProfileRate(1)` in addition to `m.mux.HandleFunc(...)` added before (`rt` stands for `runtime` in `master.go`).
 This enables 'debug/pprof/block' subpage, which can be used as an input to `go tool pprof`.
+
+
+
+
+
+
+
 [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/profiling.md?pixel)]()
diff --git a/pull-requests.md b/pull-requests.md
index 6803c464..4f3c6018 100644
--- a/pull-requests.md
+++ b/pull-requests.md
@@ -1,34 +1,5 @@
-
-
-WARNING
-WARNING
-WARNING
-WARNING
-WARNING
-
-
-PLEASE NOTE: This document applies to the HEAD of the source tree
-
-If you are using a released version of Kubernetes, you should
-refer to the docs that go with that version.
-
-
-
-The latest release of this document can be found
-[here](http://releases.k8s.io/release-1.2/docs/devel/pull-requests.md).
-
-Documentation for other releases can be found at
-[releases.k8s.io](http://releases.k8s.io).
-
---
-
-
@@ -135,6 +106,13 @@ We use a variety of automation to manage pull requests. This automation is desc
 [elsewhere.](automation.md)
+
+
+
+
+
+
+
 [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/pull-requests.md?pixel)]()
diff --git a/releasing.md b/releasing.md
index 5747ed6b..635a7387 100644
--- a/releasing.md
+++ b/releasing.md
@@ -1,34 +1,5 @@
-
-
-WARNING
-WARNING
-WARNING
-WARNING
-WARNING
-
-
-PLEASE NOTE: This document applies to the HEAD of the source tree
-
-If you are using a released version of Kubernetes, you should
-refer to the docs that go with that version.
-
-
-
-The latest release of this document can be found
-[here](http://releases.k8s.io/release-1.2/docs/devel/releasing.md).
-
-Documentation for other releases can be found at
-[releases.k8s.io](http://releases.k8s.io).
-
---
-
-
@@ -302,6 +273,13 @@ can, for instance, tell it to override `gitVersion` and set it to
 is the complete SHA1 of the (dirty) tree used at build time.
+
+
+
+
+
+
+
 [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/releasing.md?pixel)]()
diff --git a/running-locally.md b/running-locally.md
index 6999e588..f1f0f192 100644
--- a/running-locally.md
+++ b/running-locally.md
@@ -1,34 +1,5 @@
-
-
-WARNING
-WARNING
-WARNING
-WARNING
-WARNING
-
-
-PLEASE NOTE: This document applies to the HEAD of the source tree
-
-If you are using a released version of Kubernetes, you should
-refer to the docs that go with that version.
-
-
-
-The latest release of this document can be found
-[here](http://releases.k8s.io/release-1.2/docs/devel/running-locally.md).
-
-Documentation for other releases can be found at
-[releases.k8s.io](http://releases.k8s.io).
-
---
-
-
 Getting started locally
@@ -192,6 +163,13 @@ KUBE_DNS_REPLICAS=1
 To know more on DNS service you can look [here](http://issue.k8s.io/6667).
 Related documents can be found [here](../../build/kube-dns/#how-do-i-configure-it)
+
+
+
+
+
+
+
 [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/running-locally.md?pixel)]()
diff --git a/scheduler.md b/scheduler.md
index 778fd087..d9b77e9a 100755
--- a/scheduler.md
+++ b/scheduler.md
@@ -1,34 +1,5 @@
-
-
-WARNING
-WARNING
-WARNING
-WARNING
-WARNING
-
-
-PLEASE NOTE: This document applies to the HEAD of the source tree
-
-If you are using a released version of Kubernetes, you should
-refer to the docs that go with that version.
-
-
-
-The latest release of this document can be found
-[here](http://releases.k8s.io/release-1.2/docs/devel/scheduler.md).
-
-Documentation for other releases can be found at
-[releases.k8s.io](http://releases.k8s.io).
-
---
-
-
@@ -54,30 +25,37 @@ divided by the node's capacity).
 Finally, the node with the highest priority is chosen (or, if there are multiple such nodes, then one of them is chosen at random).
 The code for this main scheduling loop is in the function `Schedule()` in
-[plugin/pkg/scheduler/generic_scheduler.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/generic_scheduler.go)
+[plugin/pkg/scheduler/generic_scheduler.go](http://releases.k8s.io/v1.4.0-alpha.0/plugin/pkg/scheduler/generic_scheduler.go)
 
 ## Scheduler extensibility
 
 The scheduler is extensible: the cluster administrator can choose which of the pre-defined
 scheduling policies to apply, and can add new ones. The built-in predicates and priorities are
-defined in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/predicates/predicates.go) and
-[plugin/pkg/scheduler/algorithm/priorities/priorities.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/priorities/priorities.go), respectively.
+defined in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/v1.4.0-alpha.0/plugin/pkg/scheduler/algorithm/predicates/predicates.go) and
+[plugin/pkg/scheduler/algorithm/priorities/priorities.go](http://releases.k8s.io/v1.4.0-alpha.0/plugin/pkg/scheduler/algorithm/priorities/priorities.go), respectively.
 The policies that are applied when scheduling can be chosen in one of two ways.
 Normally, the policies used are selected by the functions `defaultPredicates()` and `defaultPriorities()` in
-[plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go).
+[plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/v1.4.0-alpha.0/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go).
 However, the choice of policies can be overridden by passing the command-line flag `--policy-config-file` to the scheduler, pointing to a JSON file specifying which scheduling policies to use.
 See [examples/scheduler-policy-config.json](../../examples/scheduler-policy-config.json) for an example config file.
 (Note that the config file format is versioned; the API is defined in
-[plugin/pkg/scheduler/api](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/api/)).
+[plugin/pkg/scheduler/api](http://releases.k8s.io/v1.4.0-alpha.0/plugin/pkg/scheduler/api/)).
 Thus to add a new scheduling policy, you should modify predicates.go or priorities.go,
 and either register the policy in `defaultPredicates()` or `defaultPriorities()`, or use a policy config file.
 
 ## Exploring the code
 
 If you want to get a global picture of how the scheduler works, you can start in
-[plugin/cmd/kube-scheduler/app/server.go](http://releases.k8s.io/HEAD/plugin/cmd/kube-scheduler/app/server.go)
+[plugin/cmd/kube-scheduler/app/server.go](http://releases.k8s.io/v1.4.0-alpha.0/plugin/cmd/kube-scheduler/app/server.go)
+
+
+
+
+
+
+
diff --git a/scheduler_algorithm.md b/scheduler_algorithm.md
index 807f0600..2cf0c54d 100755
--- a/scheduler_algorithm.md
+++ b/scheduler_algorithm.md
@@ -1,34 +1,5 @@
-
-
-WARNING
-WARNING
-WARNING
-WARNING
-WARNING
-
-
-PLEASE NOTE: This document applies to the HEAD of the source tree
-
-If you are using a released version of Kubernetes, you should
-refer to the docs that go with that version.
-
-
-
-The latest release of this document can be found
-[here](http://releases.k8s.io/release-1.2/docs/devel/scheduler_algorithm.md).
-
-Documentation for other releases can be found at
-[releases.k8s.io](http://releases.k8s.io).
-
---
-
-
@@ -50,7 +21,7 @@ The purpose of filtering the nodes is to filter out the nodes that do not meet c
 - `MaxGCEPDVolumeCount`: Ensure that the number of attached GCE PersistentDisk volumes does not exceed a maximum value (by default, 16, which is the maximum GCE allows -- see [GCE's documentation](https://cloud.google.com/compute/docs/disks/persistent-disks#limits_for_predefined_machine_types)). The maximum value can be controlled by setting the `KUBE_MAX_PD_VOLS` environment variable.
 - `CheckNodeMemoryPressure`: Check if a pod can be scheduled on a node reporting memory pressure condition. Currently, no ``BestEffort`` should be placed on a node under memory pressure as it gets automatically evicted by kubelet.
 
-The details of the above predicates can be found in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/predicates/predicates.go). All predicates mentioned above can be used in combination to perform a sophisticated filtering policy. Kubernetes uses some, but not all, of these predicates by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go).
+The details of the above predicates can be found in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/v1.4.0-alpha.0/plugin/pkg/scheduler/algorithm/predicates/predicates.go). All predicates mentioned above can be used in combination to perform a sophisticated filtering policy. Kubernetes uses some, but not all, of these predicates by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/v1.4.0-alpha.0/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go).
 
 ## Ranking the nodes
 
@@ -69,7 +40,14 @@ Currently, Kubernetes scheduler provides some practical priority functions, incl
 - `ImageLocalityPriority`: Nodes are prioritized based on locality of images requested by a pod. Nodes with larger size of already-installed packages required by the pod will be preferred over nodes with no already-installed packages required by the pod or a small total size of already-installed packages required by the pod.
 - `NodeAffinityPriority`: (Kubernetes v1.2) Implements `preferredDuringSchedulingIgnoredDuringExecution` node affinity; see [here](../user-guide/node-selection/) for more details.
 
-The details of the above priority functions can be found in [plugin/pkg/scheduler/algorithm/priorities](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/priorities/). Kubernetes uses some, but not all, of these priority functions by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). Similar as predicates, you can combine the above priority functions and assign weight factors (positive number) to them as you want (check [scheduler.md](scheduler.md) for how to customize).
+The details of the above priority functions can be found in [plugin/pkg/scheduler/algorithm/priorities](http://releases.k8s.io/v1.4.0-alpha.0/plugin/pkg/scheduler/algorithm/priorities/). Kubernetes uses some, but not all, of these priority functions by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/v1.4.0-alpha.0/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). Similar as predicates, you can combine the above priority functions and assign weight factors (positive number) to them as you want (check [scheduler.md](scheduler.md) for how to customize).
+
+
+
+
+
+
+
diff --git a/testing.md b/testing.md
index 72f1c328..faa1f66a 100644
--- a/testing.md
+++ b/testing.md
@@ -1,29 +1,5 @@
-
-
-WARNING
-WARNING
-WARNING
-WARNING
-WARNING
-
-
-PLEASE NOTE: This document applies to the HEAD of the source tree
-
-If you are using a released version of Kubernetes, you should
-refer to the docs that go with that version.
-
-Documentation for other releases can be found at
-[releases.k8s.io](http://releases.k8s.io).
-
---
-
-
@@ -195,6 +171,13 @@ hack/test-integration.sh # Run all integration tests.
 Please refer to [End-to-End Testing in Kubernetes](e2e-tests.md).
+
+
+
+
+
+
+
 [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/testing.md?pixel)]()
diff --git a/update-release-docs.md b/update-release-docs.md
index 1dbb20a8..21165997 100644
--- a/update-release-docs.md
+++ b/update-release-docs.md
@@ -1,34 +1,5 @@
-
-
-WARNING
-WARNING
-WARNING
-WARNING
-WARNING
-
-
PLEASE NOTE: This document applies to the HEAD of the source tree

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/update-release-docs.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -144,6 +115,13 @@ docs go away. If the change added or deleted a doc, then update the corresponding `_includes/nav_vX.Y.html` file as well. + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/update-release-docs.md?pixel)]() diff --git a/updating-docs-for-feature-changes.md b/updating-docs-for-feature-changes.md index f0f3197d..6d7ec92b 100644 --- a/updating-docs-for-feature-changes.md +++ b/updating-docs-for-feature-changes.md @@ -1,29 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -
-<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>
- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -100,6 +76,13 @@ Anyone making user facing changes to kubernetes. This is especially important f * *TODO once we have a standard widget set we are happy with* - include diagrams to help describe complex ideas (not required yet) + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/updating-docs-for-feature-changes.md?pixel)]() diff --git a/writing-a-getting-started-guide.md b/writing-a-getting-started-guide.md index fbe5aa1b..f4f86b54 100644 --- a/writing-a-getting-started-guide.md +++ b/writing-a-getting-started-guide.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -
-<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>
- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/writing-a-getting-started-guide.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -130,6 +101,13 @@ These guidelines say *what* to do. See the Rationale section for *why*. + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/writing-a-getting-started-guide.md?pixel)]() diff --git a/writing-good-e2e-tests.md b/writing-good-e2e-tests.md index 2cb0fe47..18c6877f 100644 --- a/writing-good-e2e-tests.md +++ b/writing-good-e2e-tests.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -
-<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>
- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/writing-good-e2e-tests.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -264,6 +235,13 @@ Note that opening issues for specific better tooling is welcome, and code implementing that tooling is even more welcome :-). + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/writing-good-e2e-tests.md?pixel)]() -- cgit v1.2.3 From 2f5640b79159111a3be6291fca8080043ff9cf6c Mon Sep 17 00:00:00 2001 From: David McMahon Date: Fri, 10 Jun 2016 14:21:20 -0700 Subject: Versioning docs and examples for v1.4.0-alpha.0. --- README.md | 36 ++++++--------------------- adding-an-APIGroup.md | 36 ++++++--------------------- api-conventions.md | 36 ++++++--------------------- api_changes.md | 36 ++++++--------------------- automation.md | 36 ++++++--------------------- cherry-picks.md | 38 ++++++---------------------- cli-roadmap.md | 36 ++++++--------------------- client-libraries.md | 38 ++++++---------------------- coding-conventions.md | 36 ++++++--------------------- collab.md | 36 ++++++--------------------- developer-guides/vagrant.md | 36 ++++++--------------------- development.md | 38 ++++++---------------------- e2e-node-tests.md | 36 ++++++--------------------- e2e-tests.md | 36 ++++++--------------------- faster_reviews.md | 36 ++++++--------------------- flaky-tests.md | 36 ++++++--------------------- generating-clientset.md | 36 ++++++--------------------- getting-builds.md | 38 ++++++---------------------- how-to-doc.md | 34 +++---------------------- instrumentation.md | 36 ++++++--------------------- issues.md | 36 ++++++--------------------- kubectl-conventions.md | 36 ++++++--------------------- kubemark-guide.md | 36 ++++++--------------------- 
logging.md | 36 ++++++--------------------- making-release-notes.md | 36 ++++++--------------------- mesos-style.md | 36 ++++++--------------------- node-performance-testing.md | 36 ++++++--------------------- on-call-build-cop.md | 36 ++++++--------------------- on-call-rotations.md | 36 ++++++--------------------- on-call-user-support.md | 36 ++++++--------------------- owners.md | 36 ++++++--------------------- profiling.md | 36 ++++++--------------------- pull-requests.md | 36 ++++++--------------------- releasing.md | 36 ++++++--------------------- running-locally.md | 36 ++++++--------------------- scheduler.md | 48 ++++++++++-------------------------- scheduler_algorithm.md | 40 +++++++----------------------- testing.md | 31 ++++++----------------- update-release-docs.md | 36 ++++++--------------------- updating-docs-for-feature-changes.md | 31 ++++++----------------- writing-a-getting-started-guide.md | 36 ++++++--------------------- writing-good-e2e-tests.md | 36 ++++++--------------------- 42 files changed, 303 insertions(+), 1221 deletions(-) diff --git a/README.md b/README.md index b1af07df..9a563619 100644 --- a/README.md +++ b/README.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -
-<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>
- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/README.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -116,6 +87,13 @@ Guide](../admin/README.md). and how the version information gets embedded into the built binaries. + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/README.md?pixel)]() diff --git a/adding-an-APIGroup.md b/adding-an-APIGroup.md index e0f95fc7..17fe534a 100644 --- a/adding-an-APIGroup.md +++ b/adding-an-APIGroup.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -
-<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>
- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/adding-an-APIGroup.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -123,6 +94,13 @@ TODO: Add a troubleshooting section. + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/adding-an-APIGroup.md?pixel)]() diff --git a/api-conventions.md b/api-conventions.md index 757de2dd..97ee0d86 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -
-<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>
- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - API Conventions @@ -1377,6 +1348,13 @@ be less than 256", "must be greater than or equal to 0". Do not use words like "larger than", "bigger than", "more than", "higher than", etc. * When specifying numeric ranges, use inclusive ranges when possible. + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/api-conventions.md?pixel)]() diff --git a/api_changes.md b/api_changes.md index 4ec383e7..a561d356 100644 --- a/api_changes.md +++ b/api_changes.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -
-<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>
- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/api_changes.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -758,6 +729,13 @@ A releated issue is how a cluster manager can roll back from a new version with a new feature, that is already being used by users. See https://github.com/kubernetes/kubernetes/issues/4855. + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/api_changes.md?pixel)]() diff --git a/automation.md b/automation.md index 6ba74fd0..b0cb4438 100644 --- a/automation.md +++ b/automation.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -
-<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>
- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/automation.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -131,6 +102,13 @@ the issue number you found or filed. Any pushes of new code to the PR will automatically trigger a new test. No human interraction is required. + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/automation.md?pixel)]() diff --git a/cherry-picks.md b/cherry-picks.md index d5456a1a..7fd8be13 100644 --- a/cherry-picks.md +++ b/cherry-picks.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -
-<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>
- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/cherry-picks.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -106,11 +77,18 @@ requested - this should not be the norm, but it may happen. See the [cherrypick queue dashboard](http://cherrypick.k8s.io/#/queue) for status of PRs labeled as `cherrypick-candidate`. -[Contributor License Agreements](http://releases.k8s.io/HEAD/CONTRIBUTING.md) is +[Contributor License Agreements](http://releases.k8s.io/v1.4.0-alpha.0/CONTRIBUTING.md) is considered implicit for all code within cherry-pick pull requests, ***unless there is a large conflict***. + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/cherry-picks.md?pixel)]() diff --git a/cli-roadmap.md b/cli-roadmap.md index 7a7791b8..8d000ad4 100644 --- a/cli-roadmap.md +++ b/cli-roadmap.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -
-<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>
- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/cli-roadmap.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -40,6 +11,13 @@ See github issues with the following labels: * [component/clientlib](https://github.com/kubernetes/kubernetes/labels/component/clientlib) + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/cli-roadmap.md?pixel)]() diff --git a/client-libraries.md b/client-libraries.md index 95a3dfeb..c36b1ad1 100644 --- a/client-libraries.md +++ b/client-libraries.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -
-<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>
- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/client-libraries.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -36,7 +7,7 @@ Documentation for other releases can be found at ### Supported - * [Go](http://releases.k8s.io/HEAD/pkg/client/) + * [Go](http://releases.k8s.io/v1.4.0-alpha.0/pkg/client/) ### User Contributed @@ -55,6 +26,13 @@ the core Kubernetes team* * [Ruby](https://github.com/abonas/kubeclient) * [Scala](https://github.com/doriordan/skuber) + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/client-libraries.md?pixel)]() diff --git a/coding-conventions.md b/coding-conventions.md index 3a59cd2a..1d7d75ac 100644 --- a/coding-conventions.md +++ b/coding-conventions.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -
-<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>
- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/coding-conventions.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -176,6 +147,13 @@ using the system](../user-guide/config-best-practices.md) - [Go landmines](https://gist.github.com/lavalamp/4bd23295a9f32706a48f) + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/coding-conventions.md?pixel)]() diff --git a/collab.md b/collab.md index 0742b548..68c13df5 100644 --- a/collab.md +++ b/collab.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -
-<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>
- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/collab.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -116,6 +87,13 @@ hold shall not be merged until the person who requested the hold acks the review, withdraws their hold, or is overruled by a preponderance of maintainers. + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/collab.md?pixel)]() diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md index 64bfa13f..d87152ef 100644 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -
-<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>
- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/developer-guides/vagrant.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -447,6 +418,13 @@ provider, which uses nfs by default. For example: export KUBERNETES_VAGRANT_USE_NFS=true ``` + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/developer-guides/vagrant.md?pixel)]() diff --git a/development.md b/development.md index 9e008191..550e4878 100644 --- a/development.md +++ b/development.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -
-<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>
- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/development.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -48,7 +19,7 @@ branch, but release branches of Kubernetes should not change. Official releases are built using Docker containers. To build Kubernetes using Docker please follow [these -instructions](http://releases.k8s.io/HEAD/build/README.md). +instructions](http://releases.k8s.io/v1.4.0-alpha.0/build/README.md). ### Go development environment @@ -320,6 +291,13 @@ hack/update-generated-docs.sh ``` + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/development.md?pixel)]() diff --git a/e2e-node-tests.md b/e2e-node-tests.md index f2869134..876ad526 100644 --- a/e2e-node-tests.md +++ b/e2e-node-tests.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -
-<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>
- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/e2e-node-tests.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -217,6 +188,13 @@ The PR builder runs tests against the images listed in [jenkins-pull.properties] The post submit tests run against the images listed in [jenkins-ci.properties](../../test/e2e_node/jenkins/jenkins-ci.properties) + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/e2e-node-tests.md?pixel)]() diff --git a/e2e-tests.md b/e2e-tests.md index b67d3a5e..f1137a26 100644 --- a/e2e-tests.md +++ b/e2e-tests.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -
-<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>
- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/e2e-tests.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -664,6 +635,13 @@ You should also know the [testing conventions](coding-conventions.md#testing-con + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/e2e-tests.md?pixel)]() diff --git a/faster_reviews.md b/faster_reviews.md index eb7416d6..d6bc43bd 100644 --- a/faster_reviews.md +++ b/faster_reviews.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -
-<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>
- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/faster_reviews.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -235,6 +206,13 @@ a bit of thought into how your work can be made easier to review. If you do these things your PRs will flow much more easily. + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/faster_reviews.md?pixel)]() diff --git a/flaky-tests.md b/flaky-tests.md index b599f80f..e6563220 100644 --- a/flaky-tests.md +++ b/flaky-tests.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -
-<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>
- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/flaky-tests.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -197,6 +168,13 @@ If you do a final check for flakes with `docker ps -a`, ignore tasks that exited Happy flake hunting! + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/flaky-tests.md?pixel)]() diff --git a/generating-clientset.md b/generating-clientset.md index 42851cc9..ea411103 100644 --- a/generating-clientset.md +++ b/generating-clientset.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -
-<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>
- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/generating-clientset.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -64,6 +35,13 @@ At the 1.2 release, we have two released clientsets in the repo: internalclients + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/generating-clientset.md?pixel)]() diff --git a/getting-builds.md b/getting-builds.md index ab1df171..bc3223f9 100644 --- a/getting-builds.md +++ b/getting-builds.md @@ -1,40 +1,11 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -
-<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>
- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/getting-builds.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - # Getting Kubernetes Builds -You can use [hack/get-build.sh](http://releases.k8s.io/HEAD/hack/get-build.sh) to or use as a reference on how to get the most recent builds with curl. With `get-build.sh` you can grab the most recent stable build, the most recent release candidate, or the most recent build to pass our ci and gce e2e tests (essentially a nightly build). +You can use [hack/get-build.sh](http://releases.k8s.io/v1.4.0-alpha.0/hack/get-build.sh) to or use as a reference on how to get the most recent builds with curl. With `get-build.sh` you can grab the most recent stable build, the most recent release candidate, or the most recent build to pass our ci and gce e2e tests (essentially a nightly build). Run `./hack/get-build.sh -h` for its usage. @@ -74,6 +45,13 @@ $ curl -sSL https://storage.googleapis.com/pub/gsutil.tar.gz | sudo tar -xz -C / $ sudo ln -s /usr/local/src/gsutil/gsutil /usr/bin/gsutil ``` + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/getting-builds.md?pixel)]() diff --git a/how-to-doc.md b/how-to-doc.md index 67bffe15..a978b908 100644 --- a/how-to-doc.md +++ b/how-to-doc.md @@ -1,29 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -
-<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>
- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -73,7 +49,7 @@ After running `hack/update-munge-docs.sh`, you'll see a table of contents genera It's important to follow the rules when writing links. It helps us correctly versionize documents for each release. -Use inline links instead of urls at all times. When you add internal links to `docs/` or `examples/`, use relative links; otherwise, use `http://releases.k8s.io/HEAD/`. For example, avoid using: +Use inline links instead of urls at all times. When you add internal links to `docs/` or `examples/`, use relative links; otherwise, use `http://releases.k8s.io/v1.4.0-alpha.0/`. For example, avoid using: ``` [GCE](https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/gce.md) # note that it's under docs/ @@ -85,11 +61,11 @@ Instead, use: ``` [GCE](../getting-started-guides/gce.md) # note that it's under docs/ -[Kubernetes package](http://releases.k8s.io/HEAD/pkg/) # note that it's under pkg/ +[Kubernetes package](http://releases.k8s.io/v1.4.0-alpha.0/pkg/) # note that it's under pkg/ [Kubernetes](http://kubernetes.io/) # external link ``` -The above example generates the following links: [GCE](../getting-started-guides/gce.md), [Kubernetes package](http://releases.k8s.io/HEAD/pkg/), and [Kubernetes](http://kubernetes.io/). +The above example generates the following links: [GCE](../getting-started-guides/gce.md), [Kubernetes package](http://releases.k8s.io/v1.4.0-alpha.0/pkg/), and [Kubernetes](http://kubernetes.io/). ## How to Include an Example @@ -170,7 +146,7 @@ Mungers are like gofmt for md docs which we use to format documents. To use it, ``` -in your md files. Note that xxxx is the placeholder for a specific munger. 
Appropriate content will be generated and inserted between two brackets after you run `hack/update-munge-docs.sh`. See [munger document](http://releases.k8s.io/HEAD/cmd/mungedocs/) for more details. +in your md files. Note that xxxx is the placeholder for a specific munger. Appropriate content will be generated and inserted between two brackets after you run `hack/update-munge-docs.sh`. See [munger document](http://releases.k8s.io/v1.4.0-alpha.0/cmd/mungedocs/) for more details. ## Auto-added Mungers @@ -183,8 +159,6 @@ UNVERSIONED_WARNING munger inserts unversioned warning which warns the users whe ``` - - ``` diff --git a/instrumentation.md b/instrumentation.md index 5e195f6b..b8f15333 100644 --- a/instrumentation.md +++ b/instrumentation.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -
-<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>
- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/instrumentation.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - Instrumenting Kubernetes with a new metric @@ -66,6 +37,13 @@ https://github.com/prometheus/client_golang/blob/master/prometheus/histogram.go https://github.com/prometheus/client_golang/blob/master/prometheus/summary.go + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/instrumentation.md?pixel)]() diff --git a/issues.md b/issues.md index 1a068faa..e59025ea 100644 --- a/issues.md +++ b/issues.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -
-<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>
- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/issues.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -88,6 +59,13 @@ release if it gets done, but we wouldn't block the release on it. A few days before release, we will probably move all P2 and P3 bugs out of that milestone in bulk. + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/issues.md?pixel)]() diff --git a/kubectl-conventions.md b/kubectl-conventions.md index 23a73f11..258e25cb 100644 --- a/kubectl-conventions.md +++ b/kubectl-conventions.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -
-<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>
- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/kubectl-conventions.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -407,6 +378,13 @@ method which configures the generated namespace that callers of the generator creating it. + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/kubectl-conventions.md?pixel)]() diff --git a/kubemark-guide.md b/kubemark-guide.md index 3f93cd36..ed9ff7ee 100644 --- a/kubemark-guide.md +++ b/kubemark-guide.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -
-<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>
- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/kubemark-guide.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -239,6 +210,13 @@ it’s crucial to make it as simple as possible to allow running a big number of Hollows on a single core. + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/kubemark-guide.md?pixel)]() diff --git a/logging.md b/logging.md index f0350dca..bbeba7c0 100644 --- a/logging.md +++ b/logging.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -
-<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>
- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/logging.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -65,6 +36,13 @@ environments may wish to run at V(3) or V(4). If you wish to change the log level, you can pass in `-v=X` where X is the desired maximum level to log. + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/logging.md?pixel)]() diff --git a/making-release-notes.md b/making-release-notes.md index 01ef369e..88998312 100644 --- a/making-release-notes.md +++ b/making-release-notes.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -
-<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>
- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/making-release-notes.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -81,6 +52,13 @@ page. * Press Save. + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/making-release-notes.md?pixel)]() diff --git a/mesos-style.md b/mesos-style.md index fdf9da08..25055dea 100644 --- a/mesos-style.md +++ b/mesos-style.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -
-<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>
- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/mesos-style.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -247,6 +218,13 @@ Borg is described [here](http://research.google.com/pubs/pub43438.html). + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/mesos-style.md?pixel)]() diff --git a/node-performance-testing.md b/node-performance-testing.md index 54c15dee..4fec2764 100644 --- a/node-performance-testing.md +++ b/node-performance-testing.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -
-<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>
- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/node-performance-testing.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -156,6 +127,13 @@ More details on benchmarking [here](https://golang.org/pkg/testing/). + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/node-performance-testing.md?pixel)]() diff --git a/on-call-build-cop.md b/on-call-build-cop.md index 7a91e5cb..e0bfd240 100644 --- a/on-call-build-cop.md +++ b/on-call-build-cop.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -


- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/on-call-build-cop.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -181,6 +152,13 @@ the build cop is expected to file issues for any flaky tests they encounter. [@k8s-oncall](https://github.com/k8s-oncall) will reach the current person on call. + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/on-call-build-cop.md?pixel)]() diff --git a/on-call-rotations.md b/on-call-rotations.md index 6cf8d0bf..a7ace0c0 100644 --- a/on-call-rotations.md +++ b/on-call-rotations.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -


- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/on-call-rotations.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -72,6 +43,13 @@ milestones, for instance). * [Github and Build Cop Rotation](on-call-build-cop.md) * [User Support Rotation](on-call-user-support.md) + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/on-call-rotations.md?pixel)]() diff --git a/on-call-user-support.md b/on-call-user-support.md index 1e9f3cb3..f8d38866 100644 --- a/on-call-user-support.md +++ b/on-call-user-support.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -


- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/on-call-user-support.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -118,6 +89,13 @@ current person on call. + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/on-call-user-support.md?pixel)]() diff --git a/owners.md b/owners.md index dcd14483..70acf71d 100644 --- a/owners.md +++ b/owners.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -


- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/owners.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -127,6 +98,13 @@ parent's OWNERS file is used instead. There will be a top-level OWNERS file to Obviously changing the OWNERS file requires OWNERS permission. + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/owners.md?pixel)]() diff --git a/profiling.md b/profiling.md index 5e74d25f..a95840f4 100644 --- a/profiling.md +++ b/profiling.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -


- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/profiling.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -75,6 +46,13 @@ to get 30 sec. CPU profile. To enable contention profiling you need to add line `rt.SetBlockProfileRate(1)` in addition to `m.mux.HandleFunc(...)` added before (`rt` stands for `runtime` in `master.go`). This enables 'debug/pprof/block' subpage, which can be used as an input to `go tool pprof`. + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/profiling.md?pixel)]() diff --git a/pull-requests.md b/pull-requests.md index 6803c464..4f3c6018 100644 --- a/pull-requests.md +++ b/pull-requests.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -


- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/pull-requests.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -135,6 +106,13 @@ We use a variety of automation to manage pull requests. This automation is desc [elsewhere.](automation.md) + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/pull-requests.md?pixel)]() diff --git a/releasing.md b/releasing.md index 5747ed6b..635a7387 100644 --- a/releasing.md +++ b/releasing.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -


- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/releasing.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -302,6 +273,13 @@ can, for instance, tell it to override `gitVersion` and set it to is the complete SHA1 of the (dirty) tree used at build time. + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/releasing.md?pixel)]() diff --git a/running-locally.md b/running-locally.md index 6999e588..f1f0f192 100644 --- a/running-locally.md +++ b/running-locally.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -


- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/running-locally.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - Getting started locally @@ -192,6 +163,13 @@ KUBE_DNS_REPLICAS=1 To know more on DNS service you can look [here](http://issue.k8s.io/6667). Related documents can be found [here](../../build/kube-dns/#how-do-i-configure-it) + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/running-locally.md?pixel)]() diff --git a/scheduler.md b/scheduler.md index 778fd087..d9b77e9a 100755 --- a/scheduler.md +++ b/scheduler.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -


- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/scheduler.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -54,30 +25,37 @@ divided by the node's capacity). Finally, the node with the highest priority is chosen (or, if there are multiple such nodes, then one of them is chosen at random). The code for this main scheduling loop is in the function `Schedule()` in -[plugin/pkg/scheduler/generic_scheduler.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/generic_scheduler.go) +[plugin/pkg/scheduler/generic_scheduler.go](http://releases.k8s.io/v1.4.0-alpha.0/plugin/pkg/scheduler/generic_scheduler.go) ## Scheduler extensibility The scheduler is extensible: the cluster administrator can choose which of the pre-defined scheduling policies to apply, and can add new ones. The built-in predicates and priorities are -defined in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/predicates/predicates.go) and -[plugin/pkg/scheduler/algorithm/priorities/priorities.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/priorities/priorities.go), respectively. +defined in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/v1.4.0-alpha.0/plugin/pkg/scheduler/algorithm/predicates/predicates.go) and +[plugin/pkg/scheduler/algorithm/priorities/priorities.go](http://releases.k8s.io/v1.4.0-alpha.0/plugin/pkg/scheduler/algorithm/priorities/priorities.go), respectively. The policies that are applied when scheduling can be chosen in one of two ways. 
Normally, the policies used are selected by the functions `defaultPredicates()` and `defaultPriorities()` in -[plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). +[plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/v1.4.0-alpha.0/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). However, the choice of policies can be overridden by passing the command-line flag `--policy-config-file` to the scheduler, pointing to a JSON file specifying which scheduling policies to use. See [examples/scheduler-policy-config.json](../../examples/scheduler-policy-config.json) for an example config file. (Note that the config file format is versioned; the API is defined in -[plugin/pkg/scheduler/api](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/api/)). +[plugin/pkg/scheduler/api](http://releases.k8s.io/v1.4.0-alpha.0/plugin/pkg/scheduler/api/)). Thus to add a new scheduling policy, you should modify predicates.go or priorities.go, and either register the policy in `defaultPredicates()` or `defaultPriorities()`, or use a policy config file. ## Exploring the code If you want to get a global picture of how the scheduler works, you can start in -[plugin/cmd/kube-scheduler/app/server.go](http://releases.k8s.io/HEAD/plugin/cmd/kube-scheduler/app/server.go) +[plugin/cmd/kube-scheduler/app/server.go](http://releases.k8s.io/v1.4.0-alpha.0/plugin/cmd/kube-scheduler/app/server.go) + + + + + + + diff --git a/scheduler_algorithm.md b/scheduler_algorithm.md index 807f0600..2cf0c54d 100755 --- a/scheduler_algorithm.md +++ b/scheduler_algorithm.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -


- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/scheduler_algorithm.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -50,7 +21,7 @@ The purpose of filtering the nodes is to filter out the nodes that do not meet c - `MaxGCEPDVolumeCount`: Ensure that the number of attached GCE PersistentDisk volumes does not exceed a maximum value (by default, 16, which is the maximum GCE allows -- see [GCE's documentation](https://cloud.google.com/compute/docs/disks/persistent-disks#limits_for_predefined_machine_types)). The maximum value can be controlled by setting the `KUBE_MAX_PD_VOLS` environment variable. - `CheckNodeMemoryPressure`: Check if a pod can be scheduled on a node reporting memory pressure condition. Currently, no ``BestEffort`` should be placed on a node under memory pressure as it gets automatically evicted by kubelet. -The details of the above predicates can be found in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/predicates/predicates.go). All predicates mentioned above can be used in combination to perform a sophisticated filtering policy. Kubernetes uses some, but not all, of these predicates by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). +The details of the above predicates can be found in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/v1.4.0-alpha.0/plugin/pkg/scheduler/algorithm/predicates/predicates.go). All predicates mentioned above can be used in combination to perform a sophisticated filtering policy. 
Kubernetes uses some, but not all, of these predicates by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/v1.4.0-alpha.0/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). ## Ranking the nodes @@ -69,7 +40,14 @@ Currently, Kubernetes scheduler provides some practical priority functions, incl - `ImageLocalityPriority`: Nodes are prioritized based on locality of images requested by a pod. Nodes with larger size of already-installed packages required by the pod will be preferred over nodes with no already-installed packages required by the pod or a small total size of already-installed packages required by the pod. - `NodeAffinityPriority`: (Kubernetes v1.2) Implements `preferredDuringSchedulingIgnoredDuringExecution` node affinity; see [here](../user-guide/node-selection/) for more details. -The details of the above priority functions can be found in [plugin/pkg/scheduler/algorithm/priorities](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/priorities/). Kubernetes uses some, but not all, of these priority functions by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). Similar as predicates, you can combine the above priority functions and assign weight factors (positive number) to them as you want (check [scheduler.md](scheduler.md) for how to customize). +The details of the above priority functions can be found in [plugin/pkg/scheduler/algorithm/priorities](http://releases.k8s.io/v1.4.0-alpha.0/plugin/pkg/scheduler/algorithm/priorities/). Kubernetes uses some, but not all, of these priority functions by default. 
You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/v1.4.0-alpha.0/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). As with predicates, you can combine the above priority functions and assign weight factors (positive numbers) to them as you wish (check [scheduler.md](scheduler.md) for how to customize).


- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -195,6 +171,13 @@ hack/test-integration.sh # Run all integration tests. Please refer to [End-to-End Testing in Kubernetes](e2e-tests.md). + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/testing.md?pixel)]() diff --git a/update-release-docs.md b/update-release-docs.md index 1dbb20a8..21165997 100644 --- a/update-release-docs.md +++ b/update-release-docs.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -


- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/update-release-docs.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -144,6 +115,13 @@ docs go away. If the change added or deleted a doc, then update the corresponding `_includes/nav_vX.Y.html` file as well. + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/update-release-docs.md?pixel)]() diff --git a/updating-docs-for-feature-changes.md b/updating-docs-for-feature-changes.md index f0f3197d..6d7ec92b 100644 --- a/updating-docs-for-feature-changes.md +++ b/updating-docs-for-feature-changes.md @@ -1,29 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -


- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -100,6 +76,13 @@ Anyone making user facing changes to kubernetes. This is especially important f * *TODO once we have a standard widget set we are happy with* - include diagrams to help describe complex ideas (not required yet) + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/updating-docs-for-feature-changes.md?pixel)]() diff --git a/writing-a-getting-started-guide.md b/writing-a-getting-started-guide.md index fbe5aa1b..f4f86b54 100644 --- a/writing-a-getting-started-guide.md +++ b/writing-a-getting-started-guide.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -


- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/writing-a-getting-started-guide.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -130,6 +101,13 @@ These guidelines say *what* to do. See the Rationale section for *why*. + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/writing-a-getting-started-guide.md?pixel)]() diff --git a/writing-good-e2e-tests.md b/writing-good-e2e-tests.md index 2cb0fe47..18c6877f 100644 --- a/writing-good-e2e-tests.md +++ b/writing-good-e2e-tests.md @@ -1,34 +1,5 @@ - - -WARNING -WARNING -WARNING -WARNING -WARNING - -


- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/writing-good-e2e-tests.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - @@ -264,6 +235,13 @@ Note that opening issues for specific better tooling is welcome, and code implementing that tooling is even more welcome :-). + + + + + + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/writing-good-e2e-tests.md?pixel)]() -- cgit v1.2.3 From 94ac5ff7f0a7f0fb8a86b4fd2cc2eb86427d781e Mon Sep 17 00:00:00 2001 From: Dawn Chen Date: Fri, 10 Jun 2016 16:46:46 -0700 Subject: Revert "Versioning docs and examples for v1.4.0-alpha.0." This reverts commit cce9db3aa9555671c5ddf69549b46ed0fd7e472a. --- README.md | 36 +++++++++++++++++++++------ adding-an-APIGroup.md | 36 +++++++++++++++++++++------ api-conventions.md | 36 +++++++++++++++++++++------ api_changes.md | 36 +++++++++++++++++++++------ automation.md | 36 +++++++++++++++++++++------ cherry-picks.md | 38 ++++++++++++++++++++++------ cli-roadmap.md | 36 +++++++++++++++++++++------ client-libraries.md | 38 ++++++++++++++++++++++------ coding-conventions.md | 36 +++++++++++++++++++++------ collab.md | 36 +++++++++++++++++++++------ developer-guides/vagrant.md | 36 +++++++++++++++++++++------ development.md | 38 ++++++++++++++++++++++------ e2e-node-tests.md | 36 +++++++++++++++++++++------ e2e-tests.md | 36 +++++++++++++++++++++------ faster_reviews.md | 36 +++++++++++++++++++++------ flaky-tests.md | 36 +++++++++++++++++++++------ generating-clientset.md | 36 +++++++++++++++++++++------ getting-builds.md | 38 ++++++++++++++++++++++------ how-to-doc.md | 34 ++++++++++++++++++++++--- instrumentation.md | 36 +++++++++++++++++++++------ issues.md | 36 +++++++++++++++++++++------ kubectl-conventions.md | 36 
+++++++++++++++++++++------ kubemark-guide.md | 36 +++++++++++++++++++++------ logging.md | 36 +++++++++++++++++++++------ making-release-notes.md | 36 +++++++++++++++++++++------ mesos-style.md | 36 +++++++++++++++++++++------ node-performance-testing.md | 36 +++++++++++++++++++++------ on-call-build-cop.md | 36 +++++++++++++++++++++------ on-call-rotations.md | 36 +++++++++++++++++++++------ on-call-user-support.md | 36 +++++++++++++++++++++------ owners.md | 36 +++++++++++++++++++++------ profiling.md | 36 +++++++++++++++++++++------ pull-requests.md | 36 +++++++++++++++++++++------ releasing.md | 36 +++++++++++++++++++++------ running-locally.md | 36 +++++++++++++++++++++------ scheduler.md | 48 ++++++++++++++++++++++++++---------- scheduler_algorithm.md | 40 +++++++++++++++++++++++------- testing.md | 31 +++++++++++++++++------ update-release-docs.md | 36 +++++++++++++++++++++------ updating-docs-for-feature-changes.md | 31 +++++++++++++++++------ writing-a-getting-started-guide.md | 36 +++++++++++++++++++++------ writing-good-e2e-tests.md | 36 +++++++++++++++++++++------ 42 files changed, 1221 insertions(+), 303 deletions(-) diff --git a/README.md b/README.md index 9a563619..b1af07df 100644 --- a/README.md +++ b/README.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +


+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/README.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -87,13 +116,6 @@ Guide](../admin/README.md). and how the version information gets embedded into the built binaries. - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/README.md?pixel)]() diff --git a/adding-an-APIGroup.md b/adding-an-APIGroup.md index 17fe534a..e0f95fc7 100644 --- a/adding-an-APIGroup.md +++ b/adding-an-APIGroup.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +


+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/adding-an-APIGroup.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -94,13 +123,6 @@ TODO: Add a troubleshooting section. - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/adding-an-APIGroup.md?pixel)]() diff --git a/api-conventions.md b/api-conventions.md index 97ee0d86..757de2dd 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +


+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + API Conventions @@ -1348,13 +1377,6 @@ be less than 256", "must be greater than or equal to 0". Do not use words like "larger than", "bigger than", "more than", "higher than", etc. * When specifying numeric ranges, use inclusive ranges when possible. - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/api-conventions.md?pixel)]() diff --git a/api_changes.md b/api_changes.md index a561d356..4ec383e7 100644 --- a/api_changes.md +++ b/api_changes.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +


+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/api_changes.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -729,13 +758,6 @@ A releated issue is how a cluster manager can roll back from a new version with a new feature, that is already being used by users. See https://github.com/kubernetes/kubernetes/issues/4855. - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/api_changes.md?pixel)]() diff --git a/automation.md b/automation.md index b0cb4438..6ba74fd0 100644 --- a/automation.md +++ b/automation.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +


+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/automation.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -102,13 +131,6 @@ the issue number you found or filed. Any pushes of new code to the PR will automatically trigger a new test. No human interraction is required. - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/automation.md?pixel)]() diff --git a/cherry-picks.md b/cherry-picks.md index 7fd8be13..d5456a1a 100644 --- a/cherry-picks.md +++ b/cherry-picks.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +


+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/cherry-picks.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -77,18 +106,11 @@ requested - this should not be the norm, but it may happen. See the [cherrypick queue dashboard](http://cherrypick.k8s.io/#/queue) for status of PRs labeled as `cherrypick-candidate`. -[Contributor License Agreements](http://releases.k8s.io/v1.4.0-alpha.0/CONTRIBUTING.md) is +[Contributor License Agreements](http://releases.k8s.io/HEAD/CONTRIBUTING.md) is considered implicit for all code within cherry-pick pull requests, ***unless there is a large conflict***. - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/cherry-picks.md?pixel)]() diff --git a/cli-roadmap.md b/cli-roadmap.md index 8d000ad4..7a7791b8 100644 --- a/cli-roadmap.md +++ b/cli-roadmap.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +


+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/cli-roadmap.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -11,13 +40,6 @@ See github issues with the following labels: * [component/clientlib](https://github.com/kubernetes/kubernetes/labels/component/clientlib) - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/cli-roadmap.md?pixel)]() diff --git a/client-libraries.md b/client-libraries.md index c36b1ad1..95a3dfeb 100644 --- a/client-libraries.md +++ b/client-libraries.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +


+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/client-libraries.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -7,7 +36,7 @@ ### Supported - * [Go](http://releases.k8s.io/v1.4.0-alpha.0/pkg/client/) + * [Go](http://releases.k8s.io/HEAD/pkg/client/) ### User Contributed @@ -26,13 +55,6 @@ the core Kubernetes team* * [Ruby](https://github.com/abonas/kubeclient) * [Scala](https://github.com/doriordan/skuber) - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/client-libraries.md?pixel)]() diff --git a/coding-conventions.md b/coding-conventions.md index 1d7d75ac..3a59cd2a 100644 --- a/coding-conventions.md +++ b/coding-conventions.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +


+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/coding-conventions.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -147,13 +176,6 @@ using the system](../user-guide/config-best-practices.md) - [Go landmines](https://gist.github.com/lavalamp/4bd23295a9f32706a48f) - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/coding-conventions.md?pixel)]() diff --git a/collab.md b/collab.md index 68c13df5..0742b548 100644 --- a/collab.md +++ b/collab.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +


+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/collab.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -87,13 +116,6 @@ hold shall not be merged until the person who requested the hold acks the review, withdraws their hold, or is overruled by a preponderance of maintainers. - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/collab.md?pixel)]() diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md index d87152ef..64bfa13f 100644 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +


+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/developer-guides/vagrant.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -418,13 +447,6 @@ provider, which uses nfs by default. For example: export KUBERNETES_VAGRANT_USE_NFS=true ``` - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/developer-guides/vagrant.md?pixel)]() diff --git a/development.md b/development.md index 550e4878..9e008191 100644 --- a/development.md +++ b/development.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
+
+PLEASE NOTE: This document applies to the HEAD of the source tree
+
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/development.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -19,7 +48,7 @@ branch, but release branches of Kubernetes should not change. Official releases are built using Docker containers. To build Kubernetes using Docker please follow [these -instructions](http://releases.k8s.io/v1.4.0-alpha.0/build/README.md). +instructions](http://releases.k8s.io/HEAD/build/README.md). ### Go development environment @@ -291,13 +320,6 @@ hack/update-generated-docs.sh ``` - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/development.md?pixel)]() diff --git a/e2e-node-tests.md b/e2e-node-tests.md index 876ad526..f2869134 100644 --- a/e2e-node-tests.md +++ b/e2e-node-tests.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
+
+PLEASE NOTE: This document applies to the HEAD of the source tree
+
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/e2e-node-tests.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -188,13 +217,6 @@ The PR builder runs tests against the images listed in [jenkins-pull.properties] The post submit tests run against the images listed in [jenkins-ci.properties](../../test/e2e_node/jenkins/jenkins-ci.properties) - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/e2e-node-tests.md?pixel)]() diff --git a/e2e-tests.md b/e2e-tests.md index f1137a26..b67d3a5e 100644 --- a/e2e-tests.md +++ b/e2e-tests.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
+
+PLEASE NOTE: This document applies to the HEAD of the source tree
+
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/e2e-tests.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -635,13 +664,6 @@ You should also know the [testing conventions](coding-conventions.md#testing-con - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/e2e-tests.md?pixel)]() diff --git a/faster_reviews.md b/faster_reviews.md index d6bc43bd..eb7416d6 100644 --- a/faster_reviews.md +++ b/faster_reviews.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
+
+PLEASE NOTE: This document applies to the HEAD of the source tree
+
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/faster_reviews.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -206,13 +235,6 @@ a bit of thought into how your work can be made easier to review. If you do these things your PRs will flow much more easily. - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/faster_reviews.md?pixel)]() diff --git a/flaky-tests.md b/flaky-tests.md index e6563220..b599f80f 100644 --- a/flaky-tests.md +++ b/flaky-tests.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
+
+PLEASE NOTE: This document applies to the HEAD of the source tree
+
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/flaky-tests.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -168,13 +197,6 @@ If you do a final check for flakes with `docker ps -a`, ignore tasks that exited Happy flake hunting! - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/flaky-tests.md?pixel)]() diff --git a/generating-clientset.md b/generating-clientset.md index ea411103..42851cc9 100644 --- a/generating-clientset.md +++ b/generating-clientset.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
+
+PLEASE NOTE: This document applies to the HEAD of the source tree
+
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/generating-clientset.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -35,13 +64,6 @@ At the 1.2 release, we have two released clientsets in the repo: internalclients - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/generating-clientset.md?pixel)]() diff --git a/getting-builds.md b/getting-builds.md index bc3223f9..ab1df171 100644 --- a/getting-builds.md +++ b/getting-builds.md @@ -1,11 +1,40 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
+
+PLEASE NOTE: This document applies to the HEAD of the source tree
+
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/getting-builds.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + # Getting Kubernetes Builds -You can use [hack/get-build.sh](http://releases.k8s.io/v1.4.0-alpha.0/hack/get-build.sh) to or use as a reference on how to get the most recent builds with curl. With `get-build.sh` you can grab the most recent stable build, the most recent release candidate, or the most recent build to pass our ci and gce e2e tests (essentially a nightly build). +You can use [hack/get-build.sh](http://releases.k8s.io/HEAD/hack/get-build.sh) to or use as a reference on how to get the most recent builds with curl. With `get-build.sh` you can grab the most recent stable build, the most recent release candidate, or the most recent build to pass our ci and gce e2e tests (essentially a nightly build). Run `./hack/get-build.sh -h` for its usage. @@ -45,13 +74,6 @@ $ curl -sSL https://storage.googleapis.com/pub/gsutil.tar.gz | sudo tar -xz -C / $ sudo ln -s /usr/local/src/gsutil/gsutil /usr/bin/gsutil ``` - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/getting-builds.md?pixel)]() diff --git a/how-to-doc.md b/how-to-doc.md index a978b908..67bffe15 100644 --- a/how-to-doc.md +++ b/how-to-doc.md @@ -1,5 +1,29 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
+
+PLEASE NOTE: This document applies to the HEAD of the source tree
+
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -49,7 +73,7 @@ After running `hack/update-munge-docs.sh`, you'll see a table of contents genera It's important to follow the rules when writing links. It helps us correctly versionize documents for each release. -Use inline links instead of urls at all times. When you add internal links to `docs/` or `examples/`, use relative links; otherwise, use `http://releases.k8s.io/v1.4.0-alpha.0/`. For example, avoid using: +Use inline links instead of urls at all times. When you add internal links to `docs/` or `examples/`, use relative links; otherwise, use `http://releases.k8s.io/HEAD/`. For example, avoid using: ``` [GCE](https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/gce.md) # note that it's under docs/ @@ -61,11 +85,11 @@ Instead, use: ``` [GCE](../getting-started-guides/gce.md) # note that it's under docs/ -[Kubernetes package](http://releases.k8s.io/v1.4.0-alpha.0/pkg/) # note that it's under pkg/ +[Kubernetes package](http://releases.k8s.io/HEAD/pkg/) # note that it's under pkg/ [Kubernetes](http://kubernetes.io/) # external link ``` -The above example generates the following links: [GCE](../getting-started-guides/gce.md), [Kubernetes package](http://releases.k8s.io/v1.4.0-alpha.0/pkg/), and [Kubernetes](http://kubernetes.io/). +The above example generates the following links: [GCE](../getting-started-guides/gce.md), [Kubernetes package](http://releases.k8s.io/HEAD/pkg/), and [Kubernetes](http://kubernetes.io/). ## How to Include an Example @@ -146,7 +170,7 @@ Mungers are like gofmt for md docs which we use to format documents. To use it, ``` -in your md files. Note that xxxx is the placeholder for a specific munger. 
Appropriate content will be generated and inserted between two brackets after you run `hack/update-munge-docs.sh`. See [munger document](http://releases.k8s.io/v1.4.0-alpha.0/cmd/mungedocs/) for more details. +in your md files. Note that xxxx is the placeholder for a specific munger. Appropriate content will be generated and inserted between two brackets after you run `hack/update-munge-docs.sh`. See [munger document](http://releases.k8s.io/HEAD/cmd/mungedocs/) for more details. ## Auto-added Mungers @@ -159,6 +183,8 @@ UNVERSIONED_WARNING munger inserts unversioned warning which warns the users whe ``` + + ``` diff --git a/instrumentation.md b/instrumentation.md index b8f15333..5e195f6b 100644 --- a/instrumentation.md +++ b/instrumentation.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
+
+PLEASE NOTE: This document applies to the HEAD of the source tree
+
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/instrumentation.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + Instrumenting Kubernetes with a new metric @@ -37,13 +66,6 @@ https://github.com/prometheus/client_golang/blob/master/prometheus/histogram.go https://github.com/prometheus/client_golang/blob/master/prometheus/summary.go - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/instrumentation.md?pixel)]() diff --git a/issues.md b/issues.md index e59025ea..1a068faa 100644 --- a/issues.md +++ b/issues.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
+
+PLEASE NOTE: This document applies to the HEAD of the source tree
+
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/issues.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -59,13 +88,6 @@ release if it gets done, but we wouldn't block the release on it. A few days before release, we will probably move all P2 and P3 bugs out of that milestone in bulk. - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/issues.md?pixel)]() diff --git a/kubectl-conventions.md b/kubectl-conventions.md index 258e25cb..23a73f11 100644 --- a/kubectl-conventions.md +++ b/kubectl-conventions.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
+
+PLEASE NOTE: This document applies to the HEAD of the source tree
+
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/kubectl-conventions.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -378,13 +407,6 @@ method which configures the generated namespace that callers of the generator creating it. - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/kubectl-conventions.md?pixel)]() diff --git a/kubemark-guide.md b/kubemark-guide.md index ed9ff7ee..3f93cd36 100644 --- a/kubemark-guide.md +++ b/kubemark-guide.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
+
+PLEASE NOTE: This document applies to the HEAD of the source tree
+
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/kubemark-guide.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -210,13 +239,6 @@ it’s crucial to make it as simple as possible to allow running a big number of Hollows on a single core. - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/kubemark-guide.md?pixel)]() diff --git a/logging.md b/logging.md index bbeba7c0..f0350dca 100644 --- a/logging.md +++ b/logging.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
+
+PLEASE NOTE: This document applies to the HEAD of the source tree
+
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/logging.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -36,13 +65,6 @@ environments may wish to run at V(3) or V(4). If you wish to change the log level, you can pass in `-v=X` where X is the desired maximum level to log. - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/logging.md?pixel)]() diff --git a/making-release-notes.md b/making-release-notes.md index 88998312..01ef369e 100644 --- a/making-release-notes.md +++ b/making-release-notes.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
+
+PLEASE NOTE: This document applies to the HEAD of the source tree
+
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/making-release-notes.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -52,13 +81,6 @@ page. * Press Save. - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/making-release-notes.md?pixel)]() diff --git a/mesos-style.md b/mesos-style.md index 25055dea..fdf9da08 100644 --- a/mesos-style.md +++ b/mesos-style.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
+
+PLEASE NOTE: This document applies to the HEAD of the source tree
+
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/mesos-style.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -218,13 +247,6 @@ Borg is described [here](http://research.google.com/pubs/pub43438.html). - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/mesos-style.md?pixel)]() diff --git a/node-performance-testing.md b/node-performance-testing.md index 4fec2764..54c15dee 100644 --- a/node-performance-testing.md +++ b/node-performance-testing.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
+
+PLEASE NOTE: This document applies to the HEAD of the source tree
+
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/node-performance-testing.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -127,13 +156,6 @@ More details on benchmarking [here](https://golang.org/pkg/testing/). - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/node-performance-testing.md?pixel)]() diff --git a/on-call-build-cop.md b/on-call-build-cop.md index e0bfd240..7a91e5cb 100644 --- a/on-call-build-cop.md +++ b/on-call-build-cop.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
+
+PLEASE NOTE: This document applies to the HEAD of the source tree
+
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/on-call-build-cop.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -152,13 +181,6 @@ the build cop is expected to file issues for any flaky tests they encounter. [@k8s-oncall](https://github.com/k8s-oncall) will reach the current person on call. - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/on-call-build-cop.md?pixel)]() diff --git a/on-call-rotations.md b/on-call-rotations.md index a7ace0c0..6cf8d0bf 100644 --- a/on-call-rotations.md +++ b/on-call-rotations.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
+
+PLEASE NOTE: This document applies to the HEAD of the source tree
+
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/on-call-rotations.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -43,13 +72,6 @@ milestones, for instance). * [Github and Build Cop Rotation](on-call-build-cop.md) * [User Support Rotation](on-call-user-support.md) - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/on-call-rotations.md?pixel)]() diff --git a/on-call-user-support.md b/on-call-user-support.md index f8d38866..1e9f3cb3 100644 --- a/on-call-user-support.md +++ b/on-call-user-support.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
+
+PLEASE NOTE: This document applies to the HEAD of the source tree
+
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/on-call-user-support.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -89,13 +118,6 @@ current person on call. - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/on-call-user-support.md?pixel)]() diff --git a/owners.md b/owners.md index 70acf71d..dcd14483 100644 --- a/owners.md +++ b/owners.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
+
+PLEASE NOTE: This document applies to the HEAD of the source tree
+
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/owners.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -98,13 +127,6 @@ parent's OWNERS file is used instead. There will be a top-level OWNERS file to Obviously changing the OWNERS file requires OWNERS permission. - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/owners.md?pixel)]() diff --git a/profiling.md b/profiling.md index a95840f4..5e74d25f 100644 --- a/profiling.md +++ b/profiling.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
+
+PLEASE NOTE: This document applies to the HEAD of the source tree
+
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/profiling.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -46,13 +75,6 @@ to get 30 sec. CPU profile. To enable contention profiling you need to add line `rt.SetBlockProfileRate(1)` in addition to `m.mux.HandleFunc(...)` added before (`rt` stands for `runtime` in `master.go`). This enables 'debug/pprof/block' subpage, which can be used as an input to `go tool pprof`. - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/profiling.md?pixel)]() diff --git a/pull-requests.md b/pull-requests.md index 4f3c6018..6803c464 100644 --- a/pull-requests.md +++ b/pull-requests.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
+
+PLEASE NOTE: This document applies to the HEAD of the source tree
+
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/pull-requests.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -106,13 +135,6 @@ We use a variety of automation to manage pull requests. This automation is desc [elsewhere.](automation.md) - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/pull-requests.md?pixel)]() diff --git a/releasing.md b/releasing.md index 635a7387..5747ed6b 100644 --- a/releasing.md +++ b/releasing.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
+
+PLEASE NOTE: This document applies to the HEAD of the source tree
+
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/releasing.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -273,13 +302,6 @@ can, for instance, tell it to override `gitVersion` and set it to is the complete SHA1 of the (dirty) tree used at build time. - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/releasing.md?pixel)]() diff --git a/running-locally.md b/running-locally.md index f1f0f192..6999e588 100644 --- a/running-locally.md +++ b/running-locally.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
+
+PLEASE NOTE: This document applies to the HEAD of the source tree
+
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/running-locally.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + Getting started locally @@ -163,13 +192,6 @@ KUBE_DNS_REPLICAS=1 To know more on DNS service you can look [here](http://issue.k8s.io/6667). Related documents can be found [here](../../build/kube-dns/#how-do-i-configure-it) - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/running-locally.md?pixel)]() diff --git a/scheduler.md b/scheduler.md index d9b77e9a..778fd087 100755 --- a/scheduler.md +++ b/scheduler.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
+
+PLEASE NOTE: This document applies to the HEAD of the source tree
+
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/scheduler.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -25,37 +54,30 @@ divided by the node's capacity). Finally, the node with the highest priority is chosen (or, if there are multiple such nodes, then one of them is chosen at random). The code for this main scheduling loop is in the function `Schedule()` in -[plugin/pkg/scheduler/generic_scheduler.go](http://releases.k8s.io/v1.4.0-alpha.0/plugin/pkg/scheduler/generic_scheduler.go) +[plugin/pkg/scheduler/generic_scheduler.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/generic_scheduler.go) ## Scheduler extensibility The scheduler is extensible: the cluster administrator can choose which of the pre-defined scheduling policies to apply, and can add new ones. The built-in predicates and priorities are -defined in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/v1.4.0-alpha.0/plugin/pkg/scheduler/algorithm/predicates/predicates.go) and -[plugin/pkg/scheduler/algorithm/priorities/priorities.go](http://releases.k8s.io/v1.4.0-alpha.0/plugin/pkg/scheduler/algorithm/priorities/priorities.go), respectively. +defined in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/predicates/predicates.go) and +[plugin/pkg/scheduler/algorithm/priorities/priorities.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/priorities/priorities.go), respectively. The policies that are applied when scheduling can be chosen in one of two ways. 
Normally, the policies used are selected by the functions `defaultPredicates()` and `defaultPriorities()` in -[plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/v1.4.0-alpha.0/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). +[plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). However, the choice of policies can be overridden by passing the command-line flag `--policy-config-file` to the scheduler, pointing to a JSON file specifying which scheduling policies to use. See [examples/scheduler-policy-config.json](../../examples/scheduler-policy-config.json) for an example config file. (Note that the config file format is versioned; the API is defined in -[plugin/pkg/scheduler/api](http://releases.k8s.io/v1.4.0-alpha.0/plugin/pkg/scheduler/api/)). +[plugin/pkg/scheduler/api](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/api/)). Thus to add a new scheduling policy, you should modify predicates.go or priorities.go, and either register the policy in `defaultPredicates()` or `defaultPriorities()`, or use a policy config file. ## Exploring the code If you want to get a global picture of how the scheduler works, you can start in -[plugin/cmd/kube-scheduler/app/server.go](http://releases.k8s.io/v1.4.0-alpha.0/plugin/cmd/kube-scheduler/app/server.go) - - - - - - - +[plugin/cmd/kube-scheduler/app/server.go](http://releases.k8s.io/HEAD/plugin/cmd/kube-scheduler/app/server.go) diff --git a/scheduler_algorithm.md b/scheduler_algorithm.md index 2cf0c54d..807f0600 100755 --- a/scheduler_algorithm.md +++ b/scheduler_algorithm.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
+
+PLEASE NOTE: This document applies to the HEAD of the source tree
+
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/scheduler_algorithm.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -21,7 +50,7 @@ The purpose of filtering the nodes is to filter out the nodes that do not meet c - `MaxGCEPDVolumeCount`: Ensure that the number of attached GCE PersistentDisk volumes does not exceed a maximum value (by default, 16, which is the maximum GCE allows -- see [GCE's documentation](https://cloud.google.com/compute/docs/disks/persistent-disks#limits_for_predefined_machine_types)). The maximum value can be controlled by setting the `KUBE_MAX_PD_VOLS` environment variable. - `CheckNodeMemoryPressure`: Check if a pod can be scheduled on a node reporting memory pressure condition. Currently, no ``BestEffort`` should be placed on a node under memory pressure as it gets automatically evicted by kubelet. -The details of the above predicates can be found in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/v1.4.0-alpha.0/plugin/pkg/scheduler/algorithm/predicates/predicates.go). All predicates mentioned above can be used in combination to perform a sophisticated filtering policy. Kubernetes uses some, but not all, of these predicates by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/v1.4.0-alpha.0/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). +The details of the above predicates can be found in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/predicates/predicates.go). All predicates mentioned above can be used in combination to perform a sophisticated filtering policy. 
Kubernetes uses some, but not all, of these predicates by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). ## Ranking the nodes @@ -40,14 +69,7 @@ Currently, Kubernetes scheduler provides some practical priority functions, incl - `ImageLocalityPriority`: Nodes are prioritized based on locality of images requested by a pod. Nodes with larger size of already-installed packages required by the pod will be preferred over nodes with no already-installed packages required by the pod or a small total size of already-installed packages required by the pod. - `NodeAffinityPriority`: (Kubernetes v1.2) Implements `preferredDuringSchedulingIgnoredDuringExecution` node affinity; see [here](../user-guide/node-selection/) for more details. -The details of the above priority functions can be found in [plugin/pkg/scheduler/algorithm/priorities](http://releases.k8s.io/v1.4.0-alpha.0/plugin/pkg/scheduler/algorithm/priorities/). Kubernetes uses some, but not all, of these priority functions by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/v1.4.0-alpha.0/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). Similar as predicates, you can combine the above priority functions and assign weight factors (positive number) to them as you want (check [scheduler.md](scheduler.md) for how to customize). - - - - - - - +The details of the above priority functions can be found in [plugin/pkg/scheduler/algorithm/priorities](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/priorities/). Kubernetes uses some, but not all, of these priority functions by default. 
You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). As with predicates, you can combine the above priority functions and assign weight factors (positive number) to them as you want (check [scheduler.md](scheduler.md) for how to customize).

diff --git a/testing.md b/testing.md
index faa1f66a..72f1c328 100644
--- a/testing.md
+++ b/testing.md
@@ -1,5 +1,29 @@
+
+
+WARNING
+WARNING
+WARNING
+WARNING
+WARNING
+
+

PLEASE NOTE: This document applies to the HEAD of the source tree

+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -171,13 +195,6 @@ hack/test-integration.sh # Run all integration tests. Please refer to [End-to-End Testing in Kubernetes](e2e-tests.md). - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/testing.md?pixel)]() diff --git a/update-release-docs.md b/update-release-docs.md index 21165997..1dbb20a8 100644 --- a/update-release-docs.md +++ b/update-release-docs.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

PLEASE NOTE: This document applies to the HEAD of the source tree

+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/update-release-docs.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -115,13 +144,6 @@ docs go away. If the change added or deleted a doc, then update the corresponding `_includes/nav_vX.Y.html` file as well. - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/update-release-docs.md?pixel)]() diff --git a/updating-docs-for-feature-changes.md b/updating-docs-for-feature-changes.md index 6d7ec92b..f0f3197d 100644 --- a/updating-docs-for-feature-changes.md +++ b/updating-docs-for-feature-changes.md @@ -1,5 +1,29 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

PLEASE NOTE: This document applies to the HEAD of the source tree

+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -76,13 +100,6 @@ Anyone making user facing changes to kubernetes. This is especially important f * *TODO once we have a standard widget set we are happy with* - include diagrams to help describe complex ideas (not required yet) - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/updating-docs-for-feature-changes.md?pixel)]() diff --git a/writing-a-getting-started-guide.md b/writing-a-getting-started-guide.md index f4f86b54..fbe5aa1b 100644 --- a/writing-a-getting-started-guide.md +++ b/writing-a-getting-started-guide.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

PLEASE NOTE: This document applies to the HEAD of the source tree

+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/writing-a-getting-started-guide.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -101,13 +130,6 @@ These guidelines say *what* to do. See the Rationale section for *why*. - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/writing-a-getting-started-guide.md?pixel)]() diff --git a/writing-good-e2e-tests.md b/writing-good-e2e-tests.md index 18c6877f..2cb0fe47 100644 --- a/writing-good-e2e-tests.md +++ b/writing-good-e2e-tests.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

PLEASE NOTE: This document applies to the HEAD of the source tree

+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/writing-good-e2e-tests.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -235,13 +264,6 @@ Note that opening issues for specific better tooling is welcome, and code implementing that tooling is even more welcome :-). - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/writing-good-e2e-tests.md?pixel)]() -- cgit v1.2.3 From 72606c72a7841638348b45890e2b53cfc822864d Mon Sep 17 00:00:00 2001 From: "Madhusudan.C.S" Date: Mon, 13 Jun 2016 01:38:31 -0700 Subject: Default to GCR as the image registry if the provider is GCE or GKE. --- e2e-tests.md | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/e2e-tests.md b/e2e-tests.md index b67d3a5e..4a4a2634 100644 --- a/e2e-tests.md +++ b/e2e-tests.md @@ -266,8 +266,12 @@ Next, specify the docker repository where your ci images will be pushed. * **If `KUBERNETES_PROVIDER=gce` or `KUBERNETES_PROVIDER=gke`**: - You can simply set your push repo base based on your project name, and the necessary repositories will be auto-created when you - first push your container images. + If you use the same GCP project where you run the e2e tests as the container image repository, + the `FEDERATION_PUSH_REPO_BASE` environment variable defaults to "gcr.io/${DEFAULT_GCP_PROJECT_NAME}". + You can skip ahead to the **Build** section. + + You can simply set your push repo base based on your project name, and the necessary repositories will be + auto-created when you first push your container images. 
```sh $ export FEDERATION_PUSH_REPO_BASE="gcr.io/${GCE_PROJECT_NAME}" -- cgit v1.2.3 From c2022785bec40aa94300ddd277659f37540798f1 Mon Sep 17 00:00:00 2001 From: Paul Morie Date: Mon, 13 Jun 2016 15:10:46 -0400 Subject: Improve developer docs on unit and integration testing --- testing.md | 110 +++++++++++++++++++++++++++++++++++++++++++------------------ 1 file changed, 78 insertions(+), 32 deletions(-) diff --git a/testing.md b/testing.md index 72f1c328..c0e622f3 100644 --- a/testing.md +++ b/testing.md @@ -29,7 +29,7 @@ Documentation for other releases can be found at # Testing guide -Updated: 5/3/2016 +Updated: 5/21/2016 **Table of Contents** @@ -37,19 +37,23 @@ Updated: 5/3/2016 - [Testing guide](#testing-guide) - [Unit tests](#unit-tests) - [Run all unit tests](#run-all-unit-tests) - - [Run some unit tests](#run-some-unit-tests) + - [Set go flags during unit tests](#set-go-flags-during-unit-tests) + - [Run unit tests from certain packages](#run-unit-tests-from-certain-packages) + - [Run specific unit test cases in a package](#run-specific-unit-test-cases-in-a-package) - [Stress running unit tests](#stress-running-unit-tests) - [Unit test coverage](#unit-test-coverage) - [Benchmark unit tests](#benchmark-unit-tests) - [Integration tests](#integration-tests) - [Install etcd dependency](#install-etcd-dependency) - [Run integration tests](#run-integration-tests) + - [Run a specific integration test](#run-a-specific-integration-test) - [End-to-End tests](#end-to-end-tests) This assumes you already read the [development guide](development.md) to -install go, godeps, and configure your git client. +install go, godeps, and configure your git client. All command examples are +relative to the `kubernetes` root directory. Before sending pull requests you should at least make sure your changes have passed both unit and integration tests. @@ -62,8 +66,8 @@ passing, so it is often a good idea to make sure the e2e tests work as well. 
* Unit tests should be fully hermetic - Only access resources in the test binary. * All packages and any significant files require unit tests. -* The preferred method of testing multiple scenarios or inputs -is [table driven testing](https://github.com/golang/go/wiki/TableDrivenTests) +* The preferred method of testing multiple scenarios or inputs is + [table driven testing](https://github.com/golang/go/wiki/TableDrivenTests) - Example: [TestNamespaceAuthorization](../../test/integration/auth_test.go) * Unit tests must pass on OS X and Windows platforms. - Tests using linux-specific features must be skipped or compiled out. @@ -73,32 +77,59 @@ is [table driven testing](https://github.com/golang/go/wiki/TableDrivenTests) ### Run all unit tests +The `hack/test-go.sh` script is the entrypoint for running the unit tests and +ensures that `GOPATH` is set up correctly. If you have `GOPATH` set up +correctly, you can also just use `go test` directly. + ```sh cd kubernetes hack/test-go.sh # Run all unit tests. ``` -### Run some unit tests +### Set go flags during unit tests + +You can set [go flags](https://golang.org/cmd/go/) by setting the +`KUBE_GOFLAGS` environment variable. + +### Run unit tests from certain packages + +The `hack/test-go.sh` script accepts packages as arguments; the +`k8s.io/kubernetes` prefix is added automatically to these: ```sh -cd kubernetes +hack/test-go.sh pkg/api # run tests for pkg/api +hack/test-go.sh pkg/api pkg/kubelet # run tests for pkg/api and pkg/kubelet ``` -# Run all tests under pkg (requires client to be in $GOPATH/src/k8s.io) -go test ./pkg/... +In a shell, it's often handy to use brace expansion: -# Run all tests in the pkg/api (but not subpackages) -go test ./pkg/api ``` +```sh +hack/test-go.sh pkg/{api,kubelet} # run tests for pkg/api and pkg/kubelet +``` + ### Run specific unit test cases in a package + +You can set the test args using the `KUBE_TEST_ARGS` environment variable. 
+You can use this to pass the `-run` argument to `go test`, which accepts a +regular expression for the name of the test that should be run. + +```sh +# Runs TestValidatePod in pkg/api/validation with the verbose flag set +KUBE_GOFLAGS="-v" KUBE_TEST_ARGS='-run ^TestValidatePod$' hack/test-go.sh pkg/api/validation + +# Runs tests that match the regex ValidatePod|ValidateConfigMap in pkg/api/validation +KUBE_GOFLAGS="-v" KUBE_TEST_ARGS="-run ValidatePod\|ValidateConfigMap$" hack/test-go.sh pkg/api/validation +``` + +For other supported test flags, see the [golang +documentation](https://golang.org/cmd/go/#hdr-Description_of_testing_flags). + ### Stress running unit tests Running the same tests repeatedly is one way to root out flakes. You can do this efficiently. - ```sh -cd kubernetes - # Have 2 workers run all tests 5 times each (10 total iterations). hack/test-go.sh -p 2 -i 5 ``` @@ -112,43 +143,39 @@ Currently, collecting coverage is only supported for the Go unit tests. To run all unit tests and generate an HTML coverage report, run the following: ```sh -cd kubernetes KUBE_COVER=y hack/test-go.sh ``` -At the end of the run, an the HTML report will be generated with the path printed to stdout. +At the end of the run, the HTML report will be generated with the path +printed to stdout. -To run tests and collect coverage in only one package, pass its relative path under the `kubernetes` directory as an argument, for example: +To run tests and collect coverage in only one package, pass its relative path +under the `kubernetes` directory as an argument, for example: ```sh -cd kubernetes KUBE_COVER=y hack/test-go.sh pkg/kubectl ``` -Multiple arguments can be passed, in which case the coverage results will be combined for all tests run. -Coverage results for the project can also be viewed on [Coveralls](https://coveralls.io/r/kubernetes/kubernetes), and are continuously updated as commits are merged. 
Additionally, all pull requests which spawn a Travis build will report unit test coverage results to Coveralls. Coverage reports from before the Kubernetes Github organization was created can be found [here](https://coveralls.io/r/GoogleCloudPlatform/kubernetes). +Multiple arguments can be passed, in which case the coverage results will be +combined for all tests run. ### Benchmark unit tests To run benchmark tests, you'll typically use something like: ```sh -cd kubernetes go test ./pkg/apiserver -benchmem -run=XXX -bench=BenchmarkWatch ``` This will do the following: -1. `-run=XXX` will turn off regular unit tests - * Technically it will run test methods with XXX in the name. +1. `-run=XXX` is a regular expression filter on the name of test cases to run 2. `-bench=BenchmarkWatch` will run test methods with BenchmarkWatch in the name * See `grep -nr BenchmarkWatch .` for examples 3. `-benchmem` enables memory allocation stats See `go help test` and `go help testflag` for additional info. - ## Integration tests * Integration tests should only access other resources on the local machine @@ -158,26 +185,24 @@ See `go help test` and `go help testflag` for additional info. * The preferred method of testing multiple scenarios or inputs is [table driven testing](https://github.com/golang/go/wiki/TableDrivenTests) - Example: [TestNamespaceAuthorization](../../test/integration/auth_test.go) -* Integration tests must run in parallel - - Each test should create its own master, httpserver and config. +* Each test should create its own master, httpserver and config. - Example: [TestPodUpdateActiveDeadlineSeconds](../../test/integration/pods_test.go) * See [coding conventions](coding-conventions.md). ### Install etcd dependency -Kubernetes integration tests require your PATH to include an [etcd](https://github.com/coreos/etcd/releases) installation. -Kubernetes includes a script to help install etcd on your machine. 
+Kubernetes integration tests require your `PATH` to include an +[etcd](https://github.com/coreos/etcd/releases) installation. Kubernetes +includes a script to help install etcd on your machine. ```sh # Install etcd and add to PATH # Option a) install inside kubernetes root -cd kubernetes hack/install-etcd.sh # Installs in ./third_party/etcd echo export PATH="$PATH:$(pwd)/third_party/etcd" >> ~/.profile # Add to PATH # Option b) install manually -cd kubernetes grep -E "image.*etcd" cluster/saltbase/etcd/etcd.manifest # Find version # Install that version using yum/apt-get/etc echo export PATH="$PATH:" >> ~/.profile # Add to PATH @@ -185,11 +210,32 @@ echo export PATH="$PATH:" >> ~/.profile # Add to PATH ### Run integration tests +The integration tests are run using the `hack/test-integration.sh` script. +The Kubernetes integration tests are written using the normal golang testing +package but expect to have a running etcd instance to connect to. The +`test-integration.sh` script wraps `hack/test-go.sh` and sets up an etcd instance +for the integration tests to use. + ```sh -cd kubernetes hack/test-integration.sh # Run all integration tests. ``` +This script runs the golang tests in package +[`test/integration`](../../test/integration/) +and a special watch cache test in `cmd/integration/integration.go`. + +### Run a specific integration test + +You can also use the `KUBE_TEST_ARGS` environment variable with the +`hack/test-integration.sh` script to run a specific integration test case: + +```sh +# Run integration test TestPodUpdateActiveDeadlineSeconds with the verbose flag set. +KUBE_GOFLAGS="-v" KUBE_TEST_ARGS="-run ^TestPodUpdateActiveDeadlineSeconds$" hack/test-integration.sh +``` + +If you set `KUBE_TEST_ARGS`, the test case will be run with only the `v1` API +version and the watch cache test is skipped. 
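The table driven testing style this guide recommends for both unit and integration tests can be sketched as below. The validator and its cases are toy examples, not code from the Kubernetes tree; a real version would live in a `_test.go` file and report failures through `t.Errorf` rather than returning a string:

```go
package main

import "fmt"

// validateName is a toy stand-in for a function under test.
func validateName(name string) bool {
	return len(name) > 0 && len(name) <= 253
}

// checkTable walks a table of named cases and reports the first mismatch.
// The table makes it cheap to add a new scenario: just append a struct.
func checkTable() string {
	cases := []struct {
		desc  string
		input string
		want  bool
	}{
		{"empty name is invalid", "", false},
		{"simple name is valid", "my-pod", true},
		{"overlong name is invalid", string(make([]byte, 300)), false},
	}
	for _, c := range cases {
		if got := validateName(c.input); got != c.want {
			return fmt.Sprintf("FAIL %s: got %v, want %v", c.desc, got, c.want)
		}
	}
	return "ok"
}

func main() {
	fmt.Println(checkTable())
}
```

Each case carries its own description, so a failure message points straight at the scenario that broke, which is the main payoff of the table driven style.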
## End-to-End tests -- cgit v1.2.3 From 4a5d2099609356215ce6015cf574ebde34c281ab Mon Sep 17 00:00:00 2001 From: David McMahon Date: Mon, 13 Jun 2016 12:24:34 -0700 Subject: Updated docs and examples for release-1.3. --- README.md | 2 +- adding-an-APIGroup.md | 2 +- api-conventions.md | 2 +- api_changes.md | 2 +- automation.md | 2 +- cherry-picks.md | 2 +- cli-roadmap.md | 2 +- client-libraries.md | 2 +- coding-conventions.md | 2 +- collab.md | 2 +- developer-guides/vagrant.md | 2 +- development.md | 2 +- e2e-node-tests.md | 2 +- e2e-tests.md | 2 +- faster_reviews.md | 2 +- flaky-tests.md | 2 +- generating-clientset.md | 2 +- getting-builds.md | 2 +- instrumentation.md | 2 +- issues.md | 2 +- kubectl-conventions.md | 2 +- kubemark-guide.md | 2 +- logging.md | 2 +- making-release-notes.md | 2 +- mesos-style.md | 2 +- node-performance-testing.md | 2 +- on-call-build-cop.md | 2 +- on-call-rotations.md | 2 +- on-call-user-support.md | 2 +- owners.md | 2 +- profiling.md | 2 +- pull-requests.md | 2 +- releasing.md | 2 +- running-locally.md | 2 +- scheduler.md | 2 +- scheduler_algorithm.md | 2 +- testing.md | 5 +++++ update-release-docs.md | 2 +- updating-docs-for-feature-changes.md | 5 +++++ writing-a-getting-started-guide.md | 2 +- writing-good-e2e-tests.md | 2 +- 41 files changed, 49 insertions(+), 39 deletions(-) diff --git a/README.md b/README.md index b1af07df..6051933d 100644 --- a/README.md +++ b/README.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/README.md). +[here](http://releases.k8s.io/release-1.3/docs/devel/README.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/adding-an-APIGroup.md b/adding-an-APIGroup.md index e0f95fc7..c2197761 100644 --- a/adding-an-APIGroup.md +++ b/adding-an-APIGroup.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. 
The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/adding-an-APIGroup.md). +[here](http://releases.k8s.io/release-1.3/docs/devel/adding-an-APIGroup.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/api-conventions.md b/api-conventions.md index 757de2dd..a940dab7 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md). +[here](http://releases.k8s.io/release-1.3/docs/devel/api-conventions.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/api_changes.md b/api_changes.md index 4ec383e7..4af0bd7c 100644 --- a/api_changes.md +++ b/api_changes.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/api_changes.md). +[here](http://releases.k8s.io/release-1.3/docs/devel/api_changes.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/automation.md b/automation.md index 6ba74fd0..365bcdd9 100644 --- a/automation.md +++ b/automation.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/automation.md). +[here](http://releases.k8s.io/release-1.3/docs/devel/automation.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/cherry-picks.md b/cherry-picks.md index d5456a1a..f923e42f 100644 --- a/cherry-picks.md +++ b/cherry-picks.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. 
The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/cherry-picks.md). +[here](http://releases.k8s.io/release-1.3/docs/devel/cherry-picks.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/cli-roadmap.md b/cli-roadmap.md index 7a7791b8..9d0f9754 100644 --- a/cli-roadmap.md +++ b/cli-roadmap.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/cli-roadmap.md). +[here](http://releases.k8s.io/release-1.3/docs/devel/cli-roadmap.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/client-libraries.md b/client-libraries.md index 95a3dfeb..0d859541 100644 --- a/client-libraries.md +++ b/client-libraries.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/client-libraries.md). +[here](http://releases.k8s.io/release-1.3/docs/devel/client-libraries.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/coding-conventions.md b/coding-conventions.md index 3a59cd2a..c3f7d628 100644 --- a/coding-conventions.md +++ b/coding-conventions.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/coding-conventions.md). +[here](http://releases.k8s.io/release-1.3/docs/devel/coding-conventions.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/collab.md b/collab.md index 0742b548..782997d7 100644 --- a/collab.md +++ b/collab.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. 
The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/collab.md). +[here](http://releases.k8s.io/release-1.3/docs/devel/collab.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md index 64bfa13f..ff6b98f2 100644 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/developer-guides/vagrant.md). +[here](http://releases.k8s.io/release-1.3/docs/devel/developer-guides/vagrant.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/development.md b/development.md index 9e008191..5b3bce34 100644 --- a/development.md +++ b/development.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/development.md). +[here](http://releases.k8s.io/release-1.3/docs/devel/development.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/e2e-node-tests.md b/e2e-node-tests.md index f2869134..03ee4811 100644 --- a/e2e-node-tests.md +++ b/e2e-node-tests.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/e2e-node-tests.md). +[here](http://releases.k8s.io/release-1.3/docs/devel/e2e-node-tests.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/e2e-tests.md b/e2e-tests.md index b67d3a5e..1bab16ba 100644 --- a/e2e-tests.md +++ b/e2e-tests.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. 
The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/e2e-tests.md). +[here](http://releases.k8s.io/release-1.3/docs/devel/e2e-tests.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/faster_reviews.md b/faster_reviews.md index eb7416d6..97f4a8de 100644 --- a/faster_reviews.md +++ b/faster_reviews.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/faster_reviews.md). +[here](http://releases.k8s.io/release-1.3/docs/devel/faster_reviews.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/flaky-tests.md b/flaky-tests.md index b599f80f..68fe8a23 100644 --- a/flaky-tests.md +++ b/flaky-tests.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/flaky-tests.md). +[here](http://releases.k8s.io/release-1.3/docs/devel/flaky-tests.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/generating-clientset.md b/generating-clientset.md index 42851cc9..3142b9ea 100644 --- a/generating-clientset.md +++ b/generating-clientset.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/generating-clientset.md). +[here](http://releases.k8s.io/release-1.3/docs/devel/generating-clientset.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/getting-builds.md b/getting-builds.md index ab1df171..bd6143d5 100644 --- a/getting-builds.md +++ b/getting-builds.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. 
The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/getting-builds.md). +[here](http://releases.k8s.io/release-1.3/docs/devel/getting-builds.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/instrumentation.md b/instrumentation.md index 5e195f6b..ffef0e31 100644 --- a/instrumentation.md +++ b/instrumentation.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/instrumentation.md). +[here](http://releases.k8s.io/release-1.3/docs/devel/instrumentation.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/issues.md b/issues.md index 1a068faa..54acf508 100644 --- a/issues.md +++ b/issues.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/issues.md). +[here](http://releases.k8s.io/release-1.3/docs/devel/issues.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/kubectl-conventions.md b/kubectl-conventions.md index 23a73f11..4d11fc02 100644 --- a/kubectl-conventions.md +++ b/kubectl-conventions.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/kubectl-conventions.md). +[here](http://releases.k8s.io/release-1.3/docs/devel/kubectl-conventions.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/kubemark-guide.md b/kubemark-guide.md index 3f93cd36..aa5b3c8d 100644 --- a/kubemark-guide.md +++ b/kubemark-guide.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. 
The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/kubemark-guide.md). +[here](http://releases.k8s.io/release-1.3/docs/devel/kubemark-guide.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/logging.md b/logging.md index f0350dca..a941d309 100644 --- a/logging.md +++ b/logging.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/logging.md). +[here](http://releases.k8s.io/release-1.3/docs/devel/logging.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/making-release-notes.md b/making-release-notes.md index 01ef369e..2caee937 100644 --- a/making-release-notes.md +++ b/making-release-notes.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/making-release-notes.md). +[here](http://releases.k8s.io/release-1.3/docs/devel/making-release-notes.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/mesos-style.md b/mesos-style.md index fdf9da08..a2fa1959 100644 --- a/mesos-style.md +++ b/mesos-style.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/mesos-style.md). +[here](http://releases.k8s.io/release-1.3/docs/devel/mesos-style.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/node-performance-testing.md b/node-performance-testing.md index 54c15dee..04e7c06d 100644 --- a/node-performance-testing.md +++ b/node-performance-testing.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. 
The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/node-performance-testing.md). +[here](http://releases.k8s.io/release-1.3/docs/devel/node-performance-testing.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/on-call-build-cop.md b/on-call-build-cop.md index 7a91e5cb..b7609cbc 100644 --- a/on-call-build-cop.md +++ b/on-call-build-cop.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/on-call-build-cop.md). +[here](http://releases.k8s.io/release-1.3/docs/devel/on-call-build-cop.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/on-call-rotations.md b/on-call-rotations.md index 6cf8d0bf..0fb2cd9f 100644 --- a/on-call-rotations.md +++ b/on-call-rotations.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/on-call-rotations.md). +[here](http://releases.k8s.io/release-1.3/docs/devel/on-call-rotations.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/on-call-user-support.md b/on-call-user-support.md index 1e9f3cb3..ca5a6d76 100644 --- a/on-call-user-support.md +++ b/on-call-user-support.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/on-call-user-support.md). +[here](http://releases.k8s.io/release-1.3/docs/devel/on-call-user-support.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/owners.md b/owners.md index dcd14483..1b1c7643 100644 --- a/owners.md +++ b/owners.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. 
The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/owners.md). +[here](http://releases.k8s.io/release-1.3/docs/devel/owners.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/profiling.md b/profiling.md index 5e74d25f..dd1c3428 100644 --- a/profiling.md +++ b/profiling.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/profiling.md). +[here](http://releases.k8s.io/release-1.3/docs/devel/profiling.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/pull-requests.md b/pull-requests.md index 6803c464..f45e9b40 100644 --- a/pull-requests.md +++ b/pull-requests.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/pull-requests.md). +[here](http://releases.k8s.io/release-1.3/docs/devel/pull-requests.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/releasing.md b/releasing.md index 5747ed6b..2c8b5d00 100644 --- a/releasing.md +++ b/releasing.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/releasing.md). +[here](http://releases.k8s.io/release-1.3/docs/devel/releasing.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/running-locally.md b/running-locally.md index 6999e588..517b12c8 100644 --- a/running-locally.md +++ b/running-locally.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/running-locally.md). 
+[here](http://releases.k8s.io/release-1.3/docs/devel/running-locally.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/scheduler.md b/scheduler.md index 778fd087..f8359f73 100755 --- a/scheduler.md +++ b/scheduler.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/scheduler.md). +[here](http://releases.k8s.io/release-1.3/docs/devel/scheduler.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/scheduler_algorithm.md b/scheduler_algorithm.md index 807f0600..a84f19bc 100755 --- a/scheduler_algorithm.md +++ b/scheduler_algorithm.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/scheduler_algorithm.md). +[here](http://releases.k8s.io/release-1.3/docs/devel/scheduler_algorithm.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/testing.md b/testing.md index 72f1c328..475adc12 100644 --- a/testing.md +++ b/testing.md @@ -18,6 +18,11 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.3/docs/devel/testing.md). + Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/update-release-docs.md b/update-release-docs.md index 1dbb20a8..82140407 100644 --- a/update-release-docs.md +++ b/update-release-docs.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/update-release-docs.md). +[here](http://releases.k8s.io/release-1.3/docs/devel/update-release-docs.md). 
Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/updating-docs-for-feature-changes.md b/updating-docs-for-feature-changes.md index f0f3197d..295aa5df 100644 --- a/updating-docs-for-feature-changes.md +++ b/updating-docs-for-feature-changes.md @@ -18,6 +18,11 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.3/docs/devel/updating-docs-for-feature-changes.md). + Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/writing-a-getting-started-guide.md b/writing-a-getting-started-guide.md index fbe5aa1b..390c717b 100644 --- a/writing-a-getting-started-guide.md +++ b/writing-a-getting-started-guide.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/writing-a-getting-started-guide.md). +[here](http://releases.k8s.io/release-1.3/docs/devel/writing-a-getting-started-guide.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/writing-good-e2e-tests.md b/writing-good-e2e-tests.md index 2cb0fe47..2e910438 100644 --- a/writing-good-e2e-tests.md +++ b/writing-good-e2e-tests.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.2/docs/devel/writing-good-e2e-tests.md). +[here](http://releases.k8s.io/release-1.3/docs/devel/writing-good-e2e-tests.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). 
-- cgit v1.2.3 From 0e792ecc020ce3882ecd9fc84f406b848169a365 Mon Sep 17 00:00:00 2001 From: Zach Loafman Date: Tue, 14 Jun 2016 09:01:53 -0700 Subject: Revert "Redo v1.4.0-alpha.0" This reverts commit c7f1485e1b3491e98f102c30e7e342cb53dda818, reversing changes made to 939ad4115a2a96f1e18758ec45b7d312bec65aa7. --- README.md | 36 +++++++++++++++++++++------ adding-an-APIGroup.md | 36 +++++++++++++++++++++------ api-conventions.md | 36 +++++++++++++++++++++------ api_changes.md | 36 +++++++++++++++++++++------ automation.md | 36 +++++++++++++++++++++------ cherry-picks.md | 38 ++++++++++++++++++++++------ cli-roadmap.md | 36 +++++++++++++++++++++------ client-libraries.md | 38 ++++++++++++++++++++++------ coding-conventions.md | 36 +++++++++++++++++++++------ collab.md | 36 +++++++++++++++++++++------ developer-guides/vagrant.md | 36 +++++++++++++++++++++------ development.md | 38 ++++++++++++++++++++++------ e2e-node-tests.md | 36 +++++++++++++++++++++------ e2e-tests.md | 36 +++++++++++++++++++++------ faster_reviews.md | 36 +++++++++++++++++++++------ flaky-tests.md | 36 +++++++++++++++++++++------ generating-clientset.md | 36 +++++++++++++++++++++------ getting-builds.md | 38 ++++++++++++++++++++++------ how-to-doc.md | 34 ++++++++++++++++++++++--- instrumentation.md | 36 +++++++++++++++++++++------ issues.md | 36 +++++++++++++++++++++------ kubectl-conventions.md | 36 +++++++++++++++++++++------ kubemark-guide.md | 36 +++++++++++++++++++++------ logging.md | 36 +++++++++++++++++++++------ making-release-notes.md | 36 +++++++++++++++++++++------ mesos-style.md | 36 +++++++++++++++++++++------ node-performance-testing.md | 36 +++++++++++++++++++++------ on-call-build-cop.md | 36 +++++++++++++++++++++------ on-call-rotations.md | 36 +++++++++++++++++++++------ on-call-user-support.md | 36 +++++++++++++++++++++------ owners.md | 36 +++++++++++++++++++++------ profiling.md | 36 +++++++++++++++++++++------ pull-requests.md | 36 +++++++++++++++++++++------ releasing.md 
| 36 +++++++++++++++++++++------ running-locally.md | 36 +++++++++++++++++++++------ scheduler.md | 48 ++++++++++++++++++++++++++---------- scheduler_algorithm.md | 40 +++++++++++++++++++++++------- testing.md | 31 +++++++++++++++++------ update-release-docs.md | 36 +++++++++++++++++++++------ updating-docs-for-feature-changes.md | 31 +++++++++++++++++------ writing-a-getting-started-guide.md | 36 +++++++++++++++++++++------ writing-good-e2e-tests.md | 36 +++++++++++++++++++++------ 42 files changed, 1221 insertions(+), 303 deletions(-) diff --git a/README.md b/README.md index 9a563619..b1af07df 100644 --- a/README.md +++ b/README.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

+PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/README.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -87,13 +116,6 @@ Guide](../admin/README.md). and how the version information gets embedded into the built binaries. - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/README.md?pixel)]() diff --git a/adding-an-APIGroup.md b/adding-an-APIGroup.md index 17fe534a..e0f95fc7 100644 --- a/adding-an-APIGroup.md +++ b/adding-an-APIGroup.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

+PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/adding-an-APIGroup.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -94,13 +123,6 @@ TODO: Add a troubleshooting section. - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/adding-an-APIGroup.md?pixel)]() diff --git a/api-conventions.md b/api-conventions.md index 97ee0d86..757de2dd 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

+PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + API Conventions @@ -1348,13 +1377,6 @@ be less than 256", "must be greater than or equal to 0". Do not use words like "larger than", "bigger than", "more than", "higher than", etc. * When specifying numeric ranges, use inclusive ranges when possible. - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/api-conventions.md?pixel)]() diff --git a/api_changes.md b/api_changes.md index a561d356..4ec383e7 100644 --- a/api_changes.md +++ b/api_changes.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

+PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/api_changes.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -729,13 +758,6 @@ A related issue is how a cluster manager can roll back from a new version with a new feature that is already being used by users. See https://github.com/kubernetes/kubernetes/issues/4855. - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/api_changes.md?pixel)]() diff --git a/automation.md b/automation.md index b0cb4438..6ba74fd0 100644 --- a/automation.md +++ b/automation.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

+PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/automation.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -102,13 +131,6 @@ the issue number you found or filed. Any pushes of new code to the PR will automatically trigger a new test. No human interaction is required. - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/automation.md?pixel)]() diff --git a/cherry-picks.md b/cherry-picks.md index 7fd8be13..d5456a1a 100644 --- a/cherry-picks.md +++ b/cherry-picks.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

+PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/cherry-picks.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -77,18 +106,11 @@ requested - this should not be the norm, but it may happen. See the [cherrypick queue dashboard](http://cherrypick.k8s.io/#/queue) for status of PRs labeled as `cherrypick-candidate`. -[Contributor License Agreements](http://releases.k8s.io/v1.4.0-alpha.0/CONTRIBUTING.md) is +[Contributor License Agreements](http://releases.k8s.io/HEAD/CONTRIBUTING.md) is considered implicit for all code within cherry-pick pull requests, ***unless there is a large conflict***. - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/cherry-picks.md?pixel)]() diff --git a/cli-roadmap.md b/cli-roadmap.md index 8d000ad4..7a7791b8 100644 --- a/cli-roadmap.md +++ b/cli-roadmap.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

+PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/cli-roadmap.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -11,13 +40,6 @@ See github issues with the following labels: * [component/clientlib](https://github.com/kubernetes/kubernetes/labels/component/clientlib) - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/cli-roadmap.md?pixel)]() diff --git a/client-libraries.md b/client-libraries.md index c36b1ad1..95a3dfeb 100644 --- a/client-libraries.md +++ b/client-libraries.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

+PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/client-libraries.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -7,7 +36,7 @@ ### Supported - * [Go](http://releases.k8s.io/v1.4.0-alpha.0/pkg/client/) + * [Go](http://releases.k8s.io/HEAD/pkg/client/) ### User Contributed @@ -26,13 +55,6 @@ the core Kubernetes team* * [Ruby](https://github.com/abonas/kubeclient) * [Scala](https://github.com/doriordan/skuber) - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/client-libraries.md?pixel)]() diff --git a/coding-conventions.md b/coding-conventions.md index 1d7d75ac..3a59cd2a 100644 --- a/coding-conventions.md +++ b/coding-conventions.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

+PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/coding-conventions.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -147,13 +176,6 @@ using the system](../user-guide/config-best-practices.md) - [Go landmines](https://gist.github.com/lavalamp/4bd23295a9f32706a48f) - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/coding-conventions.md?pixel)]() diff --git a/collab.md b/collab.md index 68c13df5..0742b548 100644 --- a/collab.md +++ b/collab.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

+PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/collab.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -87,13 +116,6 @@ hold shall not be merged until the person who requested the hold acks the review, withdraws their hold, or is overruled by a preponderance of maintainers. - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/collab.md?pixel)]() diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md index d87152ef..64bfa13f 100644 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

+PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/developer-guides/vagrant.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -418,13 +447,6 @@ provider, which uses nfs by default. For example: export KUBERNETES_VAGRANT_USE_NFS=true ``` - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/developer-guides/vagrant.md?pixel)]() diff --git a/development.md b/development.md index 550e4878..9e008191 100644 --- a/development.md +++ b/development.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

+PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/development.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -19,7 +48,7 @@ branch, but release branches of Kubernetes should not change. Official releases are built using Docker containers. To build Kubernetes using Docker please follow [these -instructions](http://releases.k8s.io/v1.4.0-alpha.0/build/README.md). +instructions](http://releases.k8s.io/HEAD/build/README.md). ### Go development environment @@ -291,13 +320,6 @@ hack/update-generated-docs.sh ``` - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/development.md?pixel)]() diff --git a/e2e-node-tests.md b/e2e-node-tests.md index 876ad526..f2869134 100644 --- a/e2e-node-tests.md +++ b/e2e-node-tests.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

+PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/e2e-node-tests.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -188,13 +217,6 @@ The PR builder runs tests against the images listed in [jenkins-pull.properties] The post submit tests run against the images listed in [jenkins-ci.properties](../../test/e2e_node/jenkins/jenkins-ci.properties) - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/e2e-node-tests.md?pixel)]() diff --git a/e2e-tests.md b/e2e-tests.md index 00bb2ba2..4a4a2634 100644 --- a/e2e-tests.md +++ b/e2e-tests.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

+PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/e2e-tests.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -639,13 +668,6 @@ You should also know the [testing conventions](coding-conventions.md#testing-con - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/e2e-tests.md?pixel)]() diff --git a/faster_reviews.md b/faster_reviews.md index d6bc43bd..eb7416d6 100644 --- a/faster_reviews.md +++ b/faster_reviews.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

+PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/faster_reviews.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -206,13 +235,6 @@ a bit of thought into how your work can be made easier to review. If you do these things your PRs will flow much more easily. - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/faster_reviews.md?pixel)]() diff --git a/flaky-tests.md b/flaky-tests.md index e6563220..b599f80f 100644 --- a/flaky-tests.md +++ b/flaky-tests.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

+PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/flaky-tests.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -168,13 +197,6 @@ If you do a final check for flakes with `docker ps -a`, ignore tasks that exited Happy flake hunting! - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/flaky-tests.md?pixel)]() diff --git a/generating-clientset.md b/generating-clientset.md index ea411103..42851cc9 100644 --- a/generating-clientset.md +++ b/generating-clientset.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

+PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/generating-clientset.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -35,13 +64,6 @@ At the 1.2 release, we have two released clientsets in the repo: internalclients - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/generating-clientset.md?pixel)]() diff --git a/getting-builds.md b/getting-builds.md index bc3223f9..ab1df171 100644 --- a/getting-builds.md +++ b/getting-builds.md @@ -1,11 +1,40 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

+PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/getting-builds.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + # Getting Kubernetes Builds -You can use [hack/get-build.sh](http://releases.k8s.io/v1.4.0-alpha.0/hack/get-build.sh) to or use as a reference on how to get the most recent builds with curl. With `get-build.sh` you can grab the most recent stable build, the most recent release candidate, or the most recent build to pass our ci and gce e2e tests (essentially a nightly build). +You can use [hack/get-build.sh](http://releases.k8s.io/HEAD/hack/get-build.sh), or use it as a reference on how to get the most recent builds with curl. With `get-build.sh` you can grab the most recent stable build, the most recent release candidate, or the most recent build to pass our CI and GCE e2e tests (essentially a nightly build). Run `./hack/get-build.sh -h` for its usage. @@ -45,13 +74,6 @@ $ curl -sSL https://storage.googleapis.com/pub/gsutil.tar.gz | sudo tar -xz -C / $ sudo ln -s /usr/local/src/gsutil/gsutil /usr/bin/gsutil ``` - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/getting-builds.md?pixel)]() diff --git a/how-to-doc.md b/how-to-doc.md index a978b908..67bffe15 100644 --- a/how-to-doc.md +++ b/how-to-doc.md @@ -1,5 +1,29 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

+PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). +
+-- + + @@ -49,7 +73,7 @@ After running `hack/update-munge-docs.sh`, you'll see a table of contents genera It's important to follow the rules when writing links. It helps us correctly versionize documents for each release. -Use inline links instead of urls at all times. When you add internal links to `docs/` or `examples/`, use relative links; otherwise, use `http://releases.k8s.io/v1.4.0-alpha.0/`. For example, avoid using: +Use inline links instead of urls at all times. When you add internal links to `docs/` or `examples/`, use relative links; otherwise, use `http://releases.k8s.io/HEAD/`. For example, avoid using: ``` [GCE](https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/gce.md) # note that it's under docs/ @@ -61,11 +85,11 @@ Instead, use: ``` [GCE](../getting-started-guides/gce.md) # note that it's under docs/ -[Kubernetes package](http://releases.k8s.io/v1.4.0-alpha.0/pkg/) # note that it's under pkg/ +[Kubernetes package](http://releases.k8s.io/HEAD/pkg/) # note that it's under pkg/ [Kubernetes](http://kubernetes.io/) # external link ``` -The above example generates the following links: [GCE](../getting-started-guides/gce.md), [Kubernetes package](http://releases.k8s.io/v1.4.0-alpha.0/pkg/), and [Kubernetes](http://kubernetes.io/). +The above example generates the following links: [GCE](../getting-started-guides/gce.md), [Kubernetes package](http://releases.k8s.io/HEAD/pkg/), and [Kubernetes](http://kubernetes.io/). ## How to Include an Example @@ -146,7 +170,7 @@ Mungers are like gofmt for md docs which we use to format documents. To use it, ``` -in your md files. Note that xxxx is the placeholder for a specific munger. Appropriate content will be generated and inserted between two brackets after you run `hack/update-munge-docs.sh`. See [munger document](http://releases.k8s.io/v1.4.0-alpha.0/cmd/mungedocs/) for more details. +in your md files. Note that xxxx is the placeholder for a specific munger. 
Appropriate content will be generated and inserted between two brackets after you run `hack/update-munge-docs.sh`. See [munger document](http://releases.k8s.io/HEAD/cmd/mungedocs/) for more details. ## Auto-added Mungers @@ -159,6 +183,8 @@ UNVERSIONED_WARNING munger inserts unversioned warning which warns the users whe ``` + + ``` diff --git a/instrumentation.md b/instrumentation.md index b8f15333..5e195f6b 100644 --- a/instrumentation.md +++ b/instrumentation.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

+PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/instrumentation.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + Instrumenting Kubernetes with a new metric @@ -37,13 +66,6 @@ https://github.com/prometheus/client_golang/blob/master/prometheus/histogram.go https://github.com/prometheus/client_golang/blob/master/prometheus/summary.go - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/instrumentation.md?pixel)]() diff --git a/issues.md b/issues.md index e59025ea..1a068faa 100644 --- a/issues.md +++ b/issues.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

+PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/issues.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -59,13 +88,6 @@ release if it gets done, but we wouldn't block the release on it. A few days before release, we will probably move all P2 and P3 bugs out of that milestone in bulk. - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/issues.md?pixel)]() diff --git a/kubectl-conventions.md b/kubectl-conventions.md index 258e25cb..23a73f11 100644 --- a/kubectl-conventions.md +++ b/kubectl-conventions.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

+PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/kubectl-conventions.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -378,13 +407,6 @@ method which configures the generated namespace that callers of the generator creating it. - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/kubectl-conventions.md?pixel)]() diff --git a/kubemark-guide.md b/kubemark-guide.md index ed9ff7ee..3f93cd36 100644 --- a/kubemark-guide.md +++ b/kubemark-guide.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

+PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/kubemark-guide.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -210,13 +239,6 @@ it’s crucial to make it as simple as possible to allow running a big number of Hollows on a single core. - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/kubemark-guide.md?pixel)]() diff --git a/logging.md b/logging.md index bbeba7c0..f0350dca 100644 --- a/logging.md +++ b/logging.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

+PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/logging.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -36,13 +65,6 @@ environments may wish to run at V(3) or V(4). If you wish to change the log level, you can pass in `-v=X` where X is the desired maximum level to log. - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/logging.md?pixel)]() diff --git a/making-release-notes.md b/making-release-notes.md index 88998312..01ef369e 100644 --- a/making-release-notes.md +++ b/making-release-notes.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

+PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/making-release-notes.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -52,13 +81,6 @@ page. * Press Save. - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/making-release-notes.md?pixel)]() diff --git a/mesos-style.md b/mesos-style.md index 25055dea..fdf9da08 100644 --- a/mesos-style.md +++ b/mesos-style.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/mesos-style.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -218,13 +247,6 @@ Borg is described [here](http://research.google.com/pubs/pub43438.html). - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/mesos-style.md?pixel)]() diff --git a/node-performance-testing.md b/node-performance-testing.md index 4fec2764..54c15dee 100644 --- a/node-performance-testing.md +++ b/node-performance-testing.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/node-performance-testing.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -127,13 +156,6 @@ More details on benchmarking [here](https://golang.org/pkg/testing/). - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/node-performance-testing.md?pixel)]() diff --git a/on-call-build-cop.md b/on-call-build-cop.md index e0bfd240..7a91e5cb 100644 --- a/on-call-build-cop.md +++ b/on-call-build-cop.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/on-call-build-cop.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -152,13 +181,6 @@ the build cop is expected to file issues for any flaky tests they encounter. [@k8s-oncall](https://github.com/k8s-oncall) will reach the current person on call. - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/on-call-build-cop.md?pixel)]() diff --git a/on-call-rotations.md b/on-call-rotations.md index a7ace0c0..6cf8d0bf 100644 --- a/on-call-rotations.md +++ b/on-call-rotations.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/on-call-rotations.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -43,13 +72,6 @@ milestones, for instance). * [Github and Build Cop Rotation](on-call-build-cop.md) * [User Support Rotation](on-call-user-support.md) - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/on-call-rotations.md?pixel)]() diff --git a/on-call-user-support.md b/on-call-user-support.md index f8d38866..1e9f3cb3 100644 --- a/on-call-user-support.md +++ b/on-call-user-support.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/on-call-user-support.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -89,13 +118,6 @@ current person on call. - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/on-call-user-support.md?pixel)]() diff --git a/owners.md b/owners.md index 70acf71d..dcd14483 100644 --- a/owners.md +++ b/owners.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/owners.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -98,13 +127,6 @@ parent's OWNERS file is used instead. There will be a top-level OWNERS file to Obviously changing the OWNERS file requires OWNERS permission. - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/owners.md?pixel)]() diff --git a/profiling.md b/profiling.md index a95840f4..5e74d25f 100644 --- a/profiling.md +++ b/profiling.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/profiling.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -46,13 +75,6 @@ to get 30 sec. CPU profile. To enable contention profiling you need to add line `rt.SetBlockProfileRate(1)` in addition to `m.mux.HandleFunc(...)` added before (`rt` stands for `runtime` in `master.go`). This enables 'debug/pprof/block' subpage, which can be used as an input to `go tool pprof`. - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/profiling.md?pixel)]() diff --git a/pull-requests.md b/pull-requests.md index 4f3c6018..6803c464 100644 --- a/pull-requests.md +++ b/pull-requests.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/pull-requests.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -106,13 +135,6 @@ We use a variety of automation to manage pull requests. This automation is desc [elsewhere.](automation.md) - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/pull-requests.md?pixel)]() diff --git a/releasing.md b/releasing.md index 635a7387..5747ed6b 100644 --- a/releasing.md +++ b/releasing.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/releasing.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -273,13 +302,6 @@ can, for instance, tell it to override `gitVersion` and set it to is the complete SHA1 of the (dirty) tree used at build time. - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/releasing.md?pixel)]() diff --git a/running-locally.md b/running-locally.md index f1f0f192..6999e588 100644 --- a/running-locally.md +++ b/running-locally.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/running-locally.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + Getting started locally @@ -163,13 +192,6 @@ KUBE_DNS_REPLICAS=1 To know more on DNS service you can look [here](http://issue.k8s.io/6667). Related documents can be found [here](../../build/kube-dns/#how-do-i-configure-it) - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/running-locally.md?pixel)]() diff --git a/scheduler.md b/scheduler.md index d9b77e9a..778fd087 100755 --- a/scheduler.md +++ b/scheduler.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/scheduler.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -25,37 +54,30 @@ divided by the node's capacity). Finally, the node with the highest priority is chosen (or, if there are multiple such nodes, then one of them is chosen at random). The code for this main scheduling loop is in the function `Schedule()` in -[plugin/pkg/scheduler/generic_scheduler.go](http://releases.k8s.io/v1.4.0-alpha.0/plugin/pkg/scheduler/generic_scheduler.go) +[plugin/pkg/scheduler/generic_scheduler.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/generic_scheduler.go) ## Scheduler extensibility The scheduler is extensible: the cluster administrator can choose which of the pre-defined scheduling policies to apply, and can add new ones. The built-in predicates and priorities are -defined in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/v1.4.0-alpha.0/plugin/pkg/scheduler/algorithm/predicates/predicates.go) and -[plugin/pkg/scheduler/algorithm/priorities/priorities.go](http://releases.k8s.io/v1.4.0-alpha.0/plugin/pkg/scheduler/algorithm/priorities/priorities.go), respectively. +defined in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/predicates/predicates.go) and +[plugin/pkg/scheduler/algorithm/priorities/priorities.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/priorities/priorities.go), respectively. The policies that are applied when scheduling can be chosen in one of two ways. 
Normally, the policies used are selected by the functions `defaultPredicates()` and `defaultPriorities()` in -[plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/v1.4.0-alpha.0/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). +[plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). However, the choice of policies can be overridden by passing the command-line flag `--policy-config-file` to the scheduler, pointing to a JSON file specifying which scheduling policies to use. See [examples/scheduler-policy-config.json](../../examples/scheduler-policy-config.json) for an example config file. (Note that the config file format is versioned; the API is defined in -[plugin/pkg/scheduler/api](http://releases.k8s.io/v1.4.0-alpha.0/plugin/pkg/scheduler/api/)). +[plugin/pkg/scheduler/api](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/api/)). Thus to add a new scheduling policy, you should modify predicates.go or priorities.go, and either register the policy in `defaultPredicates()` or `defaultPriorities()`, or use a policy config file. ## Exploring the code If you want to get a global picture of how the scheduler works, you can start in -[plugin/cmd/kube-scheduler/app/server.go](http://releases.k8s.io/v1.4.0-alpha.0/plugin/cmd/kube-scheduler/app/server.go) - - - - - - - +[plugin/cmd/kube-scheduler/app/server.go](http://releases.k8s.io/HEAD/plugin/cmd/kube-scheduler/app/server.go) diff --git a/scheduler_algorithm.md b/scheduler_algorithm.md index 2cf0c54d..807f0600 100755 --- a/scheduler_algorithm.md +++ b/scheduler_algorithm.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/scheduler_algorithm.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -21,7 +50,7 @@ The purpose of filtering the nodes is to filter out the nodes that do not meet c - `MaxGCEPDVolumeCount`: Ensure that the number of attached GCE PersistentDisk volumes does not exceed a maximum value (by default, 16, which is the maximum GCE allows -- see [GCE's documentation](https://cloud.google.com/compute/docs/disks/persistent-disks#limits_for_predefined_machine_types)). The maximum value can be controlled by setting the `KUBE_MAX_PD_VOLS` environment variable. - `CheckNodeMemoryPressure`: Check if a pod can be scheduled on a node reporting memory pressure condition. Currently, no ``BestEffort`` should be placed on a node under memory pressure as it gets automatically evicted by kubelet. -The details of the above predicates can be found in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/v1.4.0-alpha.0/plugin/pkg/scheduler/algorithm/predicates/predicates.go). All predicates mentioned above can be used in combination to perform a sophisticated filtering policy. Kubernetes uses some, but not all, of these predicates by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/v1.4.0-alpha.0/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). +The details of the above predicates can be found in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/predicates/predicates.go). All predicates mentioned above can be used in combination to perform a sophisticated filtering policy. 
Kubernetes uses some, but not all, of these predicates by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). ## Ranking the nodes @@ -40,14 +69,7 @@ Currently, Kubernetes scheduler provides some practical priority functions, incl - `ImageLocalityPriority`: Nodes are prioritized based on locality of images requested by a pod. Nodes with larger size of already-installed packages required by the pod will be preferred over nodes with no already-installed packages required by the pod or a small total size of already-installed packages required by the pod. - `NodeAffinityPriority`: (Kubernetes v1.2) Implements `preferredDuringSchedulingIgnoredDuringExecution` node affinity; see [here](../user-guide/node-selection/) for more details. -The details of the above priority functions can be found in [plugin/pkg/scheduler/algorithm/priorities](http://releases.k8s.io/v1.4.0-alpha.0/plugin/pkg/scheduler/algorithm/priorities/). Kubernetes uses some, but not all, of these priority functions by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/v1.4.0-alpha.0/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). Similar as predicates, you can combine the above priority functions and assign weight factors (positive number) to them as you want (check [scheduler.md](scheduler.md) for how to customize). - - - - - - - +The details of the above priority functions can be found in [plugin/pkg/scheduler/algorithm/priorities](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/priorities/). Kubernetes uses some, but not all, of these priority functions by default. 
You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). Similar as predicates, you can combine the above priority functions and assign weight factors (positive number) to them as you want (check [scheduler.md](scheduler.md) for how to customize). diff --git a/testing.md b/testing.md index faa1f66a..72f1c328 100644 --- a/testing.md +++ b/testing.md @@ -1,5 +1,29 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). +
+-- + + @@ -171,13 +195,6 @@ hack/test-integration.sh # Run all integration tests. Please refer to [End-to-End Testing in Kubernetes](e2e-tests.md). - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/testing.md?pixel)]() diff --git a/update-release-docs.md b/update-release-docs.md index 21165997..1dbb20a8 100644 --- a/update-release-docs.md +++ b/update-release-docs.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/update-release-docs.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -115,13 +144,6 @@ docs go away. If the change added or deleted a doc, then update the corresponding `_includes/nav_vX.Y.html` file as well. - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/update-release-docs.md?pixel)]() diff --git a/updating-docs-for-feature-changes.md b/updating-docs-for-feature-changes.md index 6d7ec92b..f0f3197d 100644 --- a/updating-docs-for-feature-changes.md +++ b/updating-docs-for-feature-changes.md @@ -1,5 +1,29 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). +
+-- + + @@ -76,13 +100,6 @@ Anyone making user facing changes to kubernetes. This is especially important f * *TODO once we have a standard widget set we are happy with* - include diagrams to help describe complex ideas (not required yet) - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/updating-docs-for-feature-changes.md?pixel)]() diff --git a/writing-a-getting-started-guide.md b/writing-a-getting-started-guide.md index f4f86b54..fbe5aa1b 100644 --- a/writing-a-getting-started-guide.md +++ b/writing-a-getting-started-guide.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/writing-a-getting-started-guide.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -101,13 +130,6 @@ These guidelines say *what* to do. See the Rationale section for *why*. - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/writing-a-getting-started-guide.md?pixel)]() diff --git a/writing-good-e2e-tests.md b/writing-good-e2e-tests.md index 18c6877f..2cb0fe47 100644 --- a/writing-good-e2e-tests.md +++ b/writing-good-e2e-tests.md @@ -1,5 +1,34 @@ + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.2/docs/devel/writing-good-e2e-tests.md). + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + @@ -235,13 +264,6 @@ Note that opening issues for specific better tooling is welcome, and code implementing that tooling is even more welcome :-). - - - - - - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/writing-good-e2e-tests.md?pixel)]() -- cgit v1.2.3 From 4c6e8c6738911f8e8b2bc59ddd3b1bd7274e1d08 Mon Sep 17 00:00:00 2001 From: Paul Morie Date: Mon, 13 Jun 2016 15:11:02 -0400 Subject: Improve debugging experience for single integration test case --- testing.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/testing.md b/testing.md index c0e622f3..13127609 100644 --- a/testing.md +++ b/testing.md @@ -146,7 +146,7 @@ To run all unit tests and generate an HTML coverage report, run the following: KUBE_COVER=y hack/test-go.sh ``` -At the end of the run, an the HTML report will be generated with the path +At the end of the run, an HTML report will be generated with the path printed to stdout. To run tests and collect coverage in only one package, pass its relative path -- cgit v1.2.3 From bce51fbf2fa559f1634de11494bde333a0e1e960 Mon Sep 17 00:00:00 2001 From: Joe Finney Date: Tue, 21 Jun 2016 15:58:34 -0700 Subject: Remove all traces of travis. 
--- pull-requests.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pull-requests.md b/pull-requests.md index f45e9b40..e31dc947 100644 --- a/pull-requests.md +++ b/pull-requests.md @@ -60,7 +60,7 @@ Either the [on call](on-call-rotations.md) manually or the [github "munger"](htt There are several requirements for the submit-queue to work: * Author must have signed CLA ("cla: yes" label added to PR) * No changes can be made since last lgtm label was applied -* k8s-bot must have reported the GCE E2E build and test steps passed (Travis, Jenkins unit/integration, Jenkins e2e) +* k8s-bot must have reported the GCE E2E build and test steps passed (Jenkins unit/integration, Jenkins e2e) Additionally, for infrequent or new contributors, we require the on call to apply the "ok-to-merge" label manually. This is gated by the [whitelist](https://github.com/kubernetes/contrib/blob/master/mungegithub/whitelist.txt). -- cgit v1.2.3 From 53d6a995ab15e458856a53cd18fa617b2e4ccc05 Mon Sep 17 00:00:00 2001 From: David McMahon Date: Fri, 24 Jun 2016 16:25:15 -0700 Subject: relnotes ready for use. --- pull-requests.md | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/pull-requests.md b/pull-requests.md index f45e9b40..13771c22 100644 --- a/pull-requests.md +++ b/pull-requests.md @@ -95,9 +95,8 @@ label is required for that non-master PR. ### Reviewing pre-release notes -**NOTE: THIS TOOLING IS NOT YET AVAILABLE, BUT COMING SOON!** - At any time, you can see what the release notes will look like on any branch. 
+(NOTE: This only works on Linux for now) ``` $ git pull https://github.com/kubernetes/release @@ -105,7 +104,7 @@ $ RELNOTES=$PWD/release/relnotes $ cd /to/your/kubernetes/repo $ $RELNOTES -man # for details on how to use the tool # Show release notes from the last release on a branch to HEAD -$ $RELNOTES --raw --branch=master +$ $RELNOTES --branch=master ``` ## Visual overview -- cgit v1.2.3 From b601f18a298d833b76c7ba0d1e47c24ff4e02dc8 Mon Sep 17 00:00:00 2001 From: Michael Rubin Date: Tue, 24 May 2016 16:36:17 -0700 Subject: Document usage of dedent for kubectl commands --- kubectl-conventions.md | 16 +++++++++------- 1 file changed, 9 insertions(+), 7 deletions(-) diff --git a/kubectl-conventions.md b/kubectl-conventions.md index 4d11fc02..0beb95a7 100644 --- a/kubectl-conventions.md +++ b/kubectl-conventions.md @@ -279,15 +279,17 @@ type MineConfig struct { mineLatest bool } -const ( - mineLong = `Some long description -for my command.` +var ( + mineLong = dedent.Dedent(` + mine which is described here + with lots of details.`) - mineExample = ` # Run my command's first action - $ %[1]s first + mineExample = dedent.Dedent(` + # Run my command's first action + kubectl mine first_action - # Run my command's second action on latest stuff - $ %[1]s second --latest` + # Run my command's second action on latest stuff + kubectl mine second_action --flag`) ) // NewCmdMine implements the kubectl mine command. -- cgit v1.2.3 From 617a9d7677d6a59c2d90ca334d9cfab165211f86 Mon Sep 17 00:00:00 2001 From: xiangpengzhao Date: Mon, 20 Jun 2016 00:12:42 -0400 Subject: Fix broken links in on-call-user-support.md --- on-call-user-support.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/on-call-user-support.md b/on-call-user-support.md index ca5a6d76..233cf39a 100644 --- a/on-call-user-support.md +++ b/on-call-user-support.md @@ -91,8 +91,8 @@ ensure your questions don't go unanswered. 
Before posting a new question, please search stackoverflow for answers to similar questions, and also familiarize yourself with: - * [user guide](http://kubernetes.io/v1.1/) - * [troubleshooting guide](http://kubernetes.io/v1.1/docs/troubleshooting.html) + * [user guide](http://kubernetes.io/docs/user-guide/) + * [troubleshooting guide](http://kubernetes.io/docs/troubleshooting/) Again, thanks for using Kubernetes. -- cgit v1.2.3 From 8e1eb3f04593cd164f67a01846c4d1309db0819d Mon Sep 17 00:00:00 2001 From: Zach Loafman Date: Tue, 28 Jun 2016 16:42:26 -0700 Subject: Change references to gs://kubernetes-release/ci Change over to gs://kubernetes-release-dev/ci. This should be all the places we reference gs://kubernetes-release/ci or https://storage.googleapis.com/kubernetes-release/ci. I'm happy to be wrong. Follow-on to #28172 --- getting-builds.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/getting-builds.md b/getting-builds.md index bd6143d5..52e9c193 100644 --- a/getting-builds.md +++ b/getting-builds.md @@ -59,9 +59,9 @@ Finally, you can just print the latest or stable version: You can also use the gsutil tool to explore the Google Cloud Storage release buckets. 
Here are some examples: ```sh -gsutil cat gs://kubernetes-release/ci/latest.txt # output the latest ci version number -gsutil cat gs://kubernetes-release/ci/latest-green.txt # output the latest ci version number that passed gce e2e -gsutil ls gs://kubernetes-release/ci/v0.20.0-29-g29a55cc/ # list the contents of a ci release +gsutil cat gs://kubernetes-release-dev/ci/latest.txt # output the latest ci version number +gsutil cat gs://kubernetes-release-dev/ci/latest-green.txt # output the latest ci version number that passed gce e2e +gsutil ls gs://kubernetes-release-dev/ci/v0.20.0-29-g29a55cc/ # list the contents of a ci release gsutil ls gs://kubernetes-release/release # list all official releases and rcs ``` -- cgit v1.2.3 From 91f816494452b3a9a4e39117bdadb7b64279c8e2 Mon Sep 17 00:00:00 2001 From: joe2far Date: Tue, 7 Jun 2016 18:38:04 +0100 Subject: Make kubectl help strings consistent --- kubectl-conventions.md | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/kubectl-conventions.md b/kubectl-conventions.md index 0beb95a7..40cb7e59 100644 --- a/kubectl-conventions.md +++ b/kubectl-conventions.md @@ -125,6 +125,9 @@ flags and separate help that is tailored for the particular usage. 
* Flag names and single-character aliases should have the same meaning across all commands +* Flag descriptions should start with an uppercase letter and not have a +period at the end of a sentence + * Command-line flags corresponding to API fields should accept API enums exactly (e.g., `--restart=Always`) @@ -233,9 +236,16 @@ resources in other commands an exhaustive specification * Short should contain a one-line explanation of what the command does + * Short descriptions should start with an uppercase case letter and not + have a period at the end of a sentence + * Short descriptions should (if possible) start with a first person + (singular present tense) verb * Long may contain multiple lines, including additional information about input, output, commonly used flags, etc. + * Long descriptions should use proper grammar, start with an uppercase + letter and have a period at the end of a sentence + * Example should contain examples * Start commands with `$` -- cgit v1.2.3 From 950814d73c07c6badb5d21b719249d08341bb3cf Mon Sep 17 00:00:00 2001 From: Zach Loafman Date: Wed, 29 Jun 2016 07:55:53 -0700 Subject: Revert "Merge pull request #28193 from zmerlynn/pull-ci-elsewhere" This reverts commit d965b4719cb113f8f607e991755b09a3b0dbb33d, reversing changes made to 08a28e5123d3ef2aac444e8398979fec2cdc74eb. --- getting-builds.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/getting-builds.md b/getting-builds.md index 52e9c193..bd6143d5 100644 --- a/getting-builds.md +++ b/getting-builds.md @@ -59,9 +59,9 @@ Finally, you can just print the latest or stable version: You can also use the gsutil tool to explore the Google Cloud Storage release buckets. 
Here are some examples: ```sh -gsutil cat gs://kubernetes-release-dev/ci/latest.txt # output the latest ci version number -gsutil cat gs://kubernetes-release-dev/ci/latest-green.txt # output the latest ci version number that passed gce e2e -gsutil ls gs://kubernetes-release-dev/ci/v0.20.0-29-g29a55cc/ # list the contents of a ci release +gsutil cat gs://kubernetes-release/ci/latest.txt # output the latest ci version number +gsutil cat gs://kubernetes-release/ci/latest-green.txt # output the latest ci version number that passed gce e2e +gsutil ls gs://kubernetes-release/ci/v0.20.0-29-g29a55cc/ # list the contents of a ci release gsutil ls gs://kubernetes-release/release # list all official releases and rcs ``` -- cgit v1.2.3 From 219d085b1a600aadb64e05125a91f5f7e17b73c6 Mon Sep 17 00:00:00 2001 From: Zach Loafman Date: Wed, 29 Jun 2016 15:02:37 -0700 Subject: Revert "Revert "Merge pull request #28193 from zmerlynn/pull-ci-elsewhere"" Bring back #28193. We caught a break in https://github.com/kubernetes/test-infra/issues/240 and discovered the previous issue, fixed in https://github.com/kubernetes/test-infra/pull/241 and https://github.com/kubernetes/test-infra/pull/244, so I have a pretty good handle on what was causing the previous bringup issues (and it wasn't #28193). By the time this merges, we'll have good signal on GKE in the `kubernetes-e2e-gke-updown` job. This reverts commit ee1d48033366cfbb2e32fc98af6d37c0789e03c2. --- getting-builds.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/getting-builds.md b/getting-builds.md index bd6143d5..52e9c193 100644 --- a/getting-builds.md +++ b/getting-builds.md @@ -59,9 +59,9 @@ Finally, you can just print the latest or stable version: You can also use the gsutil tool to explore the Google Cloud Storage release buckets. 
Here are some examples: ```sh -gsutil cat gs://kubernetes-release/ci/latest.txt # output the latest ci version number -gsutil cat gs://kubernetes-release/ci/latest-green.txt # output the latest ci version number that passed gce e2e -gsutil ls gs://kubernetes-release/ci/v0.20.0-29-g29a55cc/ # list the contents of a ci release +gsutil cat gs://kubernetes-release-dev/ci/latest.txt # output the latest ci version number +gsutil cat gs://kubernetes-release-dev/ci/latest-green.txt # output the latest ci version number that passed gce e2e +gsutil ls gs://kubernetes-release-dev/ci/v0.20.0-29-g29a55cc/ # list the contents of a ci release gsutil ls gs://kubernetes-release/release # list all official releases and rcs ``` -- cgit v1.2.3 From 74429867d824ffd735989de5a8d83f9c840acf61 Mon Sep 17 00:00:00 2001 From: Wojciech Tyczynski Date: Mon, 4 Jul 2016 12:22:07 +0200 Subject: Remove cmd/integration test --- testing.md | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/testing.md b/testing.md index 4ae98859..b0dc82ef 100644 --- a/testing.md +++ b/testing.md @@ -226,8 +226,7 @@ hack/test-integration.sh # Run all integration tests. ``` This script runs the golang tests in package -[`test/integration`](../../test/integration/) -and a special watch cache test in `cmd/integration/integration.go`. +[`test/integration`](../../test/integration/). ### Run a specific integration test -- cgit v1.2.3 From 17596a7ef5cbc326813badeed329327842ef5eda Mon Sep 17 00:00:00 2001 From: dubstack Date: Fri, 1 Jul 2016 17:23:21 -0700 Subject: Fix minor typo --- development.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/development.md b/development.md index 5136ef0c..82014e7c 100644 --- a/development.md +++ b/development.md @@ -230,9 +230,9 @@ separate dependency updates from other changes._ ```sh export KPATH=$HOME/code/kubernetes -mkdir -p $KPATH/src/k8s.io/kubernetes -cd $KPATH/src/k8s.io/kubernetes -git clone https://path/to/your/fork . 
+mkdir -p $KPATH/src/k8s.io +cd $KPATH/src/k8s.io +git clone https://path/to/your/kubernetes/fork # assumes your fork is 'kubernetes' # Or copy your existing local repo here. IMPORTANT: making a symlink doesn't work. ``` -- cgit v1.2.3 From c5324cf45267562e9a0a7331cdba17f6031ba7d8 Mon Sep 17 00:00:00 2001 From: Mike Danese Date: Wed, 8 Jun 2016 14:49:33 -0700 Subject: update docs Signed-off-by: Mike Danese --- coding-conventions.md | 2 +- testing.md | 6 +++--- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/coding-conventions.md b/coding-conventions.md index c3f7d628..dc2825b0 100644 --- a/coding-conventions.md +++ b/coding-conventions.md @@ -109,7 +109,7 @@ following Go conventions - `stateLock`, `mapLock` etc. tests - Table-driven tests are preferred for testing multiple scenarios/inputs; for -example, see [TestNamespaceAuthorization](../../test/integration/auth_test.go) +example, see [TestNamespaceAuthorization](../../test/integration/auth/auth_test.go) - Significant features should come with integration (test/integration) and/or [end-to-end (test/e2e) tests](e2e-tests.md) diff --git a/testing.md b/testing.md index 4ae98859..4e41a1da 100644 --- a/testing.md +++ b/testing.md @@ -73,7 +73,7 @@ passing, so it is often a good idea to make sure the e2e tests work as well. * All packages and any significant files require unit tests. * The preferred method of testing multiple scenarios or input is [table driven testing](https://github.com/golang/go/wiki/TableDrivenTests) - - Example: [TestNamespaceAuthorization](../../test/integration/auth_test.go) + - Example: [TestNamespaceAuthorization](../../test/integration/auth/auth_test.go) * Unit tests must pass on OS X and Windows platforms. - Tests using linux-specific features must be skipped or compiled out. - Skipped is better, compiled out is required when it won't compile. @@ -189,9 +189,9 @@ See `go help test` and `go help testflag` for additional info. 
- This includes kubectl commands * The preferred method of testing multiple scenarios or inputs is [table driven testing](https://github.com/golang/go/wiki/TableDrivenTests) - - Example: [TestNamespaceAuthorization](../../test/integration/auth_test.go) + - Example: [TestNamespaceAuthorization](../../test/integration/auth/auth_test.go) * Each test should create its own master, httpserver and config. - - Example: [TestPodUpdateActiveDeadlineSeconds](../../test/integration/pods_test.go) + - Example: [TestPodUpdateActiveDeadlineSeconds](../../test/integration/pods/pods_test.go) * See [coding conventions](coding-conventions.md). ### Install etcd dependency -- cgit v1.2.3 From a96008ab101df6eb2e24ec5fde23febe2fbe6e63 Mon Sep 17 00:00:00 2001 From: "Tim St. Clair" Date: Thu, 7 Jul 2016 13:31:17 -0700 Subject: Regenerate TOCs with duplicate header fix --- api_changes.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/api_changes.md b/api_changes.md index 4af0bd7c..c95d0736 100644 --- a/api_changes.md +++ b/api_changes.md @@ -48,7 +48,7 @@ found at [API Conventions](api-conventions.md). - [Edit defaults.go](#edit-defaultsgo) - [Edit conversion.go](#edit-conversiongo) - [Changing the internal structures](#changing-the-internal-structures) - - [Edit types.go](#edit-typesgo) + - [Edit types.go](#edit-typesgo-1) - [Edit validation.go](#edit-validationgo) - [Edit version conversions](#edit-version-conversions) - [Generate protobuf objects](#generate-protobuf-objects) -- cgit v1.2.3 From 5d96abaeda9d7944e47c1ce62bb545783f83de96 Mon Sep 17 00:00:00 2001 From: "Tim St. 
Clair" Date: Thu, 7 Jul 2016 16:30:35 -0700 Subject: Add development doc with go tips & tools --- go-code.md | 59 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 59 insertions(+) create mode 100644 go-code.md diff --git a/go-code.md b/go-code.md new file mode 100644 index 00000000..c9e90751 --- /dev/null +++ b/go-code.md @@ -0,0 +1,59 @@ + + + + +WARNING +WARNING +WARNING +WARNING +WARNING + +
PLEASE NOTE: This document applies to the HEAD of the source tree
+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + + + + +# Kubernetes Go Tools and Tips + +Kubernetes is one of the largest open source Go projects, so good tooling a solid understanding of +Go is critical to Kubernetes development. This document provides a collection of resources, tools +and tips that our developers have found useful. + +## Recommended Reading + +- [Kubernetes Go development environment](development.md#go-development-environment) +- [Go Tour](https://tour.golang.org/welcome/2) - Official Go tutorial. +- [Effective Go](https://golang.org/doc/effective_go.html) - A good collection of Go advice. +- [Kubernetes Code conventions](coding-conventions.md) - Style guide for Kubernetes code. +- [Three Go Landmines](https://gist.github.com/lavalamp/4bd23295a9f32706a48f) - Surprising behavior in the Go language. These have caused real bugs! + +## Recommended Tools + +- [godep](https://github.com/tools/godep) - Used for Kubernetes dependency management. See also [Kubernetes godep and dependency management](development.md#godep-and-dependency-management) +- [Go Version Manager](https://github.com/moovweb/gvm) - A handy tool for managing Go versions. +- [godepq](https://github.com/google/godepq) - A tool for analyzing go import trees. + +## Go Tips + +- [Godoc bookmarklet](https://gist.github.com/timstclair/c891fb8aeb24d663026371d91dcdb3fc) - navigate from a github page to the corresponding godoc page. +- Consider making a separate Go tree for each project, which can make overlapping dependency management much easier. Remember to set the `$GOPATH` correctly! Consider [scripting](https://gist.github.com/timstclair/17ca792a20e0d83b06dddef7d77b1ea0) this. 
+- Emacs users - setup [go-mode](https://github.com/dominikh/go-mode.el) + + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/go-code.md?pixel)]() + -- cgit v1.2.3 From 4a69f7cd28fa45c3f34dfa1a2e89ca09bdea5294 Mon Sep 17 00:00:00 2001 From: Tim Hockin Date: Mon, 6 Jun 2016 22:46:24 -0700 Subject: Use file tags to generate deep-copies This drives most of the logic of deep-copy generation from tags like: // +deepcopy-gen=package ..rather than hardcoded lists of packages. This will make it possible to subsequently generate code ONLY for packages that need it *right now*, rather than all of them always. Also remove pkgs that really do not need deep-copies (no symbols used anywhere). --- adding-an-APIGroup.md | 8 +++++--- api_changes.md | 9 ++++----- 2 files changed, 9 insertions(+), 8 deletions(-) diff --git a/adding-an-APIGroup.md b/adding-an-APIGroup.md index c2197761..6026cc2e 100644 --- a/adding-an-APIGroup.md +++ b/adding-an-APIGroup.md @@ -75,12 +75,14 @@ cmd/libs/go2idl/ tool. 1. Generate conversions and deep-copies: 1. Add your "group/" or "group/version" into - cmd/libs/go2idl/{conversion-gen, deep-copy-gen}/main.go; + cmd/libs/go2idl/conversion-gen/main.go; 2. Make sure your pkg/apis/``/`` directory has a doc.go file + with the comment `// +k8s:deepcopy-gen=register`, to catch the attention + of our generation tools. + 3. Make sure your pkg/apis/``/`` directory has a doc.go file with the comment `// +genconversion=true`, to catch the attention of our gen-conversion script. - 3. Run hack/update-all.sh. - + 4. Run hack/update-all.sh. 2. Generate files for Ugorji codec: diff --git a/api_changes.md b/api_changes.md index 4af0bd7c..99aba0d7 100644 --- a/api_changes.md +++ b/api_changes.md @@ -468,12 +468,11 @@ regenerate auto-generated ones. To regenerate them run: hack/update-codegen.sh ``` -update-codegen will also generate code to handle deep copy of your versioned -api objects. 
The deep copy code resides with each versioned API: - - `pkg/api//deep_copy_generated.go` containing auto-generated copy functions - - `pkg/apis/extensions//deep_copy_generated.go` containing auto-generated copy functions +As part of the build, kubernetes will also generate code to handle deep copy of +your versioned api objects. The deep copy code resides with each versioned API: + - `/zz_generated.deep_copy.go` containing auto-generated copy functions -If running the above script is impossible due to compile errors, the easiest +If regeneration is somehow not possible due to compile errors, the easiest workaround is to comment out the code causing errors and let the script to regenerate it. If the auto-generated conversion methods are not used by the manually-written ones, it's fine to just remove the whole file and let the -- cgit v1.2.3 From 19bae12d22d75293cb96702caa3f7dbfe5ae8e0c Mon Sep 17 00:00:00 2001 From: Tim Hockin Date: Wed, 15 Jun 2016 23:43:13 -0700 Subject: Recreate the opt-in/opt-out logic for deepcopy This is the last piece of Clayton's #26179 to be implemented with file tags. All diffs are accounted for. Followup will use this to streamline some packages. Also add some V(5) debugging - it was helpful in diagnosing various issues, it may be helpful again. --- adding-an-APIGroup.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/adding-an-APIGroup.md b/adding-an-APIGroup.md index 6026cc2e..63c4e2a2 100644 --- a/adding-an-APIGroup.md +++ b/adding-an-APIGroup.md @@ -77,8 +77,8 @@ cmd/libs/go2idl/ tool. 1. Add your "group/" or "group/version" into cmd/libs/go2idl/conversion-gen/main.go; 2. Make sure your pkg/apis/``/`` directory has a doc.go file - with the comment `// +k8s:deepcopy-gen=register`, to catch the attention - of our generation tools. + with the comment `// +k8s:deepcopy-gen=package,register`, to catch the + attention of our generation tools. 3. 
Make sure your pkg/apis/``/`` directory has a doc.go file with the comment `// +genconversion=true`, to catch the attention of our gen-conversion script. -- cgit v1.2.3 From b35f3aa8f56de69e8a0c241c28b8e4bd8bedd94e Mon Sep 17 00:00:00 2001 From: Tim Hockin Date: Mon, 6 Jun 2016 23:42:16 -0700 Subject: Use file tags to generate conversions This drives conversion generation from file tags like: // +conversion-gen=k8s.io/my/internal/version .. rather than hardcoded lists of packages. The only net change in generated code can be explained as correct. Previously it didn't know that conversion was available. --- adding-an-APIGroup.md | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/adding-an-APIGroup.md b/adding-an-APIGroup.md index 63c4e2a2..cefa8564 100644 --- a/adding-an-APIGroup.md +++ b/adding-an-APIGroup.md @@ -79,9 +79,10 @@ cmd/libs/go2idl/ tool. 2. Make sure your pkg/apis/``/`` directory has a doc.go file with the comment `// +k8s:deepcopy-gen=package,register`, to catch the attention of our generation tools. - 3. Make sure your pkg/apis/``/`` directory has a doc.go file - with the comment `// +genconversion=true`, to catch the attention of our - gen-conversion script. + 3. Make sure your `pkg/apis//` directory has a doc.go file + with the comment `// +k8s:conversion-gen=`, to catch the + attention of our generation tools. For most APIs the only target you + need is `k8s.io/kubernetes/pkg/apis/` (your internal API). 4. Run hack/update-all.sh. 2. 
Generate files for Ugorji codec: -- cgit v1.2.3 From 3891f09e19b01250290371947375e36578947c87 Mon Sep 17 00:00:00 2001 From: Tim Hockin Date: Mon, 27 Jun 2016 01:33:46 -0700 Subject: small docs update --- api_changes.md | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/api_changes.md b/api_changes.md index d5ead760..35d7a545 100644 --- a/api_changes.md +++ b/api_changes.md @@ -522,9 +522,8 @@ At the moment, you'll have to make a new directory under `pkg/apis/`; copy the directory structure from `pkg/apis/extensions`. Add the new group/version to all of the `hack/{verify,update}-generated-{deep-copy,conversions,swagger}.sh` files in the appropriate places--it should just require adding your new group/version -to a bash array. You will also need to make sure your new types are imported by -the generation commands (`cmd/gendeepcopy/` & `cmd/genconversion`). These -instructions may not be complete and will be updated as we gain experience. +to a bash array. See [docs on adding an API group](adding-an-APIGroup.md) for +more. Adding API groups outside of the `pkg/apis/` directory is not currently supported, but is clearly desirable. 
The deep copy & conversion generators need -- cgit v1.2.3 From 75cfc5730a6092f5d89b6c8f24fe16f9dfb5029c Mon Sep 17 00:00:00 2001 From: Angus Salkeld Date: Fri, 8 Jul 2016 12:33:38 +0200 Subject: Fix some errors in the e2e doc and make it more consistent - "--tests" is not a valid argument - use --ginko-skip to exclude (not focus) - add "--check_node_count=false" to test against local cluster - always use "--" for long args (there was a mix of "-" and "--" and it was a bit confusing) --- e2e-tests.md | 44 ++++++++++++++++++++++---------------------- 1 file changed, 22 insertions(+), 22 deletions(-) diff --git a/e2e-tests.md b/e2e-tests.md index 6a6a5b39..50356385 100644 --- a/e2e-tests.md +++ b/e2e-tests.md @@ -136,16 +136,16 @@ go run hack/e2e.go -v --pushup go run hack/e2e.go -v --test # Run tests matching the regex "\[Feature:Performance\]" -go run hack/e2e.go -v -test --test_args="--ginkgo.focus=\[Feature:Performance\]" +go run hack/e2e.go -v --test --test_args="--ginkgo.focus=\[Feature:Performance\]" # Conversely, exclude tests that match the regex "Pods.*env" -go run hack/e2e.go -v -test --test_args="--ginkgo.focus=Pods.*env" +go run hack/e2e.go -v --test --test_args="--ginkgo.skip=Pods.*env" # Run tests in parallel, skip any that must be run serially GINKGO_PARALLEL=y go run hack/e2e.go --v --test --test_args="--ginkgo.skip=\[Serial\]" # Flags can be combined, and their actions will take place in this order: -# --build, --push|--up|--pushup, --test|--tests=..., --down +# --build, --push|--up|--pushup, --test, --down # # You can also specify an alternative provider, such as 'aws' # @@ -184,38 +184,38 @@ arguments into Ginkgo using `--test_args` (e.g. see above). For the purposes of brevity, we will look at a subset of the options, which are listed below: ``` --ginkgo.dryRun=false: If set, ginkgo will walk the test hierarchy without +--ginkgo.dryRun=false: If set, ginkgo will walk the test hierarchy without actually running anything. Best paired with -v. 
--ginkgo.failFast=false: If set, ginkgo will stop running a test suite after a +--ginkgo.failFast=false: If set, ginkgo will stop running a test suite after a failure occurs. --ginkgo.failOnPending=false: If set, ginkgo will mark the test suite as failed +--ginkgo.failOnPending=false: If set, ginkgo will mark the test suite as failed if any specs are pending. --ginkgo.focus="": If set, ginkgo will only run specs that match this regular +--ginkgo.focus="": If set, ginkgo will only run specs that match this regular expression. --ginkgo.skip="": If set, ginkgo will only run specs that do not match this +--ginkgo.skip="": If set, ginkgo will only run specs that do not match this regular expression. --ginkgo.trace=false: If set, default reporter prints out the full stack trace +--ginkgo.trace=false: If set, default reporter prints out the full stack trace when a failure occurs --ginkgo.v=false: If set, default reporter print out all specs as they begin. +--ginkgo.v=false: If set, default reporter print out all specs as they begin. --host="": The host, or api-server, to connect to +--host="": The host, or api-server, to connect to --kubeconfig="": Path to kubeconfig containing embedded authinfo. +--kubeconfig="": Path to kubeconfig containing embedded authinfo. --prom-push-gateway="": The URL to prometheus gateway, so that metrics can be +--prom-push-gateway="": The URL to prometheus gateway, so that metrics can be pushed during e2es and scraped by prometheus. Typically something like 127.0.0.1:9091. --provider="": The name of the Kubernetes provider (gce, gke, local, vagrant, +--provider="": The name of the Kubernetes provider (gce, gke, local, vagrant, etc.) --repo-root="../../": Root directory of kubernetes repository, for finding test +--repo-root="../../": Root directory of kubernetes repository, for finding test files. 
``` @@ -318,7 +318,7 @@ The following command will create the underlying Kubernetes clusters in each of federation control plane in the cluster occupying the last zone in the `E2E_ZONES` list. ```sh -$ go run hack/e2e.go -v -up +$ go run hack/e2e.go -v --up ``` #### Run the Tests @@ -326,13 +326,13 @@ $ go run hack/e2e.go -v -up This will run only the `Feature:Federation` e2e tests. You can omit the `ginkgo.focus` argument to run the entire e2e suite. ```sh -$ go run hack/e2e.go -v -test --test_args="--ginkgo.focus=\[Feature:Federation\]" +$ go run hack/e2e.go -v --test --test_args="--ginkgo.focus=\[Feature:Federation\]" ``` #### Teardown ```sh -$ go run hack/e2e.go -v -down +$ go run hack/e2e.go -v --down ``` #### Shortcuts for test developers @@ -397,13 +397,13 @@ at a custom host directly: ```sh export KUBECONFIG=/path/to/kubeconfig -go run hack/e2e.go -v --test_args="--host=http://127.0.0.1:8080" +go run hack/e2e.go -v --test --check_node_count=false --test_args="--host=http://127.0.0.1:8080" ``` To control the tests that are run: ```sh -go run hack/e2e.go -v --test_args="--host=http://127.0.0.1:8080" --ginkgo.focus="Secrets" +go run hack/e2e.go -v --test --check_node_count=false --test_args="--host=http://127.0.0.1:8080" --ginkgo.focus="Secrets" ``` ## Kinds of tests @@ -485,10 +485,10 @@ export KUBERNETES_PROVIDER=skeleton go run hack/e2e.go -v --test --test_args="--ginkgo.focus=\[Conformance\]" # run all parallel-safe conformance tests in parallel -GINKGO_PARALLEL=y go run hack/e2e.go --v --test --test_args="--ginkgo.focus=\[Conformance\] --ginkgo.skip=\[Serial\]" +GINKGO_PARALLEL=y go run hack/e2e.go -v --test --test_args="--ginkgo.focus=\[Conformance\] --ginkgo.skip=\[Serial\]" # ... 
and finish up with remaining tests in serial -go run hack/e2e.go --v --test --test_args="--ginkgo.focus=\[Serial\].*\[Conformance\]" +go run hack/e2e.go -v --test --test_args="--ginkgo.focus=\[Serial\].*\[Conformance\]" ``` ### Defining Conformance Subset -- cgit v1.2.3 From 93895639713595781c57ff682a4df2f34d48ae0a Mon Sep 17 00:00:00 2001 From: Jordan Liggitt Date: Tue, 14 Jun 2016 23:41:47 -0400 Subject: Allow specifying base location for test etcd data --- testing.md | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/testing.md b/testing.md index 4ae98859..136d3f55 100644 --- a/testing.md +++ b/testing.md @@ -50,6 +50,7 @@ Updated: 5/21/2016 - [Benchmark unit tests](#benchmark-unit-tests) - [Integration tests](#integration-tests) - [Install etcd dependency](#install-etcd-dependency) + - [Etcd test data](#etcd-test-data) - [Run integration tests](#run-integration-tests) - [Run a specific integration test](#run-a-specific-integration-test) - [End-to-End tests](#end-to-end-tests) @@ -213,6 +214,14 @@ grep -E "image.*etcd" cluster/saltbase/etcd/etcd.manifest # Find version echo export PATH="$PATH:" >> ~/.profile # Add to PATH ``` +### Etcd test data + +Many tests start an etcd server internally, storing test data in the operating system's temporary directory. + +If you see test failures because the temporary directory does not have sufficient space, +or is on a volume with unpredictable write latency, you can override the test data directory +for those internal etcd instances with the `TEST_ETCD_DIR` environment variable. + ### Run integration tests The integration tests are run using the `hack/test-integration.sh` script. 
-- cgit v1.2.3 From 3125d80f3b2b0688e7994d249e85e0fe6c2bc325 Mon Sep 17 00:00:00 2001 From: Ivan Shvedunov Date: Mon, 11 Jul 2016 16:46:09 +0300 Subject: Support custom Fedora repos in vagrant provider --- developer-guides/vagrant.md | 14 ++++++++++++++ 1 file changed, 14 insertions(+) diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md index ff6b98f2..fa8bb48e 100644 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -429,6 +429,20 @@ environment variables. For example, if running `make quick-release`, use: sudo -E make quick-release ``` +#### I have repository access errors during VM provisioning! + +Sometimes VM provisioning may fail with errors that look like this: + +``` +Timeout was reached for https://mirrors.fedoraproject.org/metalink?repo=fedora-23&arch=x86_64 [Connection timed out after 120002 milliseconds] +``` + +You may use a custom Fedora repository URL to fix this: + +```shell +export CUSTOM_FEDORA_REPOSITORY_URL=https://download.fedoraproject.org/pub/fedora/ +``` + #### I ran vagrant suspend and nothing works! `vagrant suspend` seems to mess up the network. It's not supported at this time. -- cgit v1.2.3 From 793488ebab93cd98c436bcf08de0845efe38ec1a Mon Sep 17 00:00:00 2001 From: Tim Hockin Date: Tue, 24 May 2016 08:40:44 -0700 Subject: Use make as the main build tool This allows us to start building real dependencies into Makefile. Leave old hack/* scripts in place but advise to use 'make'. There are a few rules that call things like 'go run' or 'build/*' that I left as-is for now. 
--- adding-an-APIGroup.md | 4 ++-- development.md | 11 ++++++++--- e2e-node-tests.md | 24 ++++++++++++------------ pull-requests.md | 6 +++--- releasing.md | 10 ++++++---- running-locally.md | 2 +- testing.md | 41 +++++++++++++++++++++++------------------ 7 files changed, 55 insertions(+), 43 deletions(-) diff --git a/adding-an-APIGroup.md b/adding-an-APIGroup.md index cefa8564..f05009dd 100644 --- a/adding-an-APIGroup.md +++ b/adding-an-APIGroup.md @@ -119,8 +119,8 @@ pkg/kubectl/cmd/util/factory.go. 1. Add your group in pkg/api/testapi/testapi.go, then you can access the group in tests through testapi.``; -2. Add your "group/version" to `KUBE_API_VERSIONS` and `KUBE_TEST_API_VERSIONS` -in hack/test-go.sh. +2. Add your "group/version" to `KUBE_TEST_API_VERSIONS` in + hack/make-rules/test.sh and hack/make-rules/test-integration.sh TODO: Add a troubleshooting section. diff --git a/development.md b/development.md index 82014e7c..4c00072e 100644 --- a/development.md +++ b/development.md @@ -71,11 +71,16 @@ up a GOPATH. To build Kubernetes using your local Go development environment (generate linux binaries): - hack/build-go.sh +```sh + make +``` + You may pass build options and packages to the script as necessary. 
To build binaries for all platforms: +```sh hack/build-cross.sh +``` ## Workflow @@ -314,8 +319,8 @@ Three basic commands let you run unit, integration and/or e2e tests: ```sh cd kubernetes -hack/test-go.sh # Run unit tests -hack/test-integration.sh # Run integration tests, requires etcd +make test # Run unit tests +make test-integration # Run integration tests, requires etcd go run hack/e2e.go -v --build --up --test --down # Run e2e tests ``` diff --git a/e2e-node-tests.md b/e2e-node-tests.md index 03ee4811..f4713855 100644 --- a/e2e-node-tests.md +++ b/e2e-node-tests.md @@ -57,7 +57,7 @@ Prerequisites: From the Kubernetes base directory, run: ```sh -make test_e2e_node +make test-e2e-node ``` This will: run the *ginkgo* binary against the subdirectory *test/e2e_node*, which will in turn: @@ -87,7 +87,7 @@ Prerequisites: Run: ```sh -make test_e2e_node REMOTE=true +make test-e2e-node REMOTE=true ``` This will: @@ -124,7 +124,7 @@ provided by the default image. List the available test images using gcloud. ```sh -make test_e2e_node LIST_IMAGES=true +make test-e2e-node LIST_IMAGES=true ``` This will output a list of the available images for the default image project. @@ -132,7 +132,7 @@ This will output a list of the available images for the default image project. Then run: ```sh -make test_e2e_node REMOTE=true IMAGES="" +make test-e2e-node REMOTE=true IMAGES="" ``` ## Run tests against a running GCE instance (not an image) @@ -140,7 +140,7 @@ make test_e2e_node REMOTE=true IMAGES="" This is useful if you have an host instance running already and want to run the tests there instead of on a new instance. ```sh -make test_e2e_node REMOTE=true HOSTS="" +make test-e2e-node REMOTE=true HOSTS="" ``` ## Delete instance after tests run @@ -148,7 +148,7 @@ make test_e2e_node REMOTE=true HOSTS="" This is useful if you want recreate the instance for each test run to trigger flakes related to starting the instance. 
```sh -make test_e2e_node REMOTE=true DELETE_INSTANCES=true +make test-e2e-node REMOTE=true DELETE_INSTANCES=true ``` ## Keep instance, test binaries, and *processes* around after tests run @@ -156,7 +156,7 @@ make test_e2e_node REMOTE=true DELETE_INSTANCES=true This is useful if you want to manually inspect or debug the kubelet process run as part of the tests. ```sh -make test_e2e_node REMOTE=true CLEANUP=false +make test-e2e-node REMOTE=true CLEANUP=false ``` ## Run tests using an image in another project @@ -164,7 +164,7 @@ make test_e2e_node REMOTE=true CLEANUP=false This is useful if you want to create your own host image in another project and use it for testing. ```sh -make test_e2e_node REMOTE=true IMAGE_PROJECT="" IMAGES="" +make test-e2e-node REMOTE=true IMAGE_PROJECT="" IMAGES="" ``` Setting up your own host image may require additional steps such as installing etcd or docker. See @@ -176,7 +176,7 @@ This is useful if you want to create instances using a different name so that yo test in parallel against different instances of the same image. ```sh -make test_e2e_node REMOTE=true INSTANCE_PREFIX="my-prefix" +make test-e2e-node REMOTE=true INSTANCE_PREFIX="my-prefix" ``` # Additional Test Options for both Remote and Local execution @@ -186,13 +186,13 @@ make test_e2e_node REMOTE=true INSTANCE_PREFIX="my-prefix" To run tests matching a regex: ```sh -make test_e2e_node REMOTE=true FOCUS="" +make test-e2e-node REMOTE=true FOCUS="" ``` To run tests NOT matching a regex: ```sh -make test_e2e_node REMOTE=true SKIP="" +make test-e2e-node REMOTE=true SKIP="" ``` ## Run tests continually until they fail @@ -202,7 +202,7 @@ run the tests until they fail. 
**Note: this will only perform test setup once ( less useful for catching flakes related creating the instance from an image.** ```sh -make test_e2e_node REMOTE=true RUN_UNTIL_FAILURE=true +make test-e2e-node REMOTE=true RUN_UNTIL_FAILURE=true ``` # Notes on tests run by the Kubernetes project during pre-, post- submit. diff --git a/pull-requests.md b/pull-requests.md index 2037b410..40705971 100644 --- a/pull-requests.md +++ b/pull-requests.md @@ -69,9 +69,9 @@ Additionally, for infrequent or new contributors, we require the on call to appl The following will save time for both you and your reviewer: * Enable [pre-commit hooks](development.md#committing-changes-to-your-fork) and verify they pass. -* Verify `hack/verify-all.sh` passes. -* Verify `hack/test-go.sh` passes. -* Verify `hack/test-integration.sh` passes. +* Verify `make verify` passes. +* Verify `make test` passes. +* Verify `make test-integration.sh` passes. ## Release Notes diff --git a/releasing.md b/releasing.md index 2c8b5d00..eb48f469 100644 --- a/releasing.md +++ b/releasing.md @@ -257,9 +257,11 @@ been automated that need to happen after the branch has been cut: *Please note that this information may be out of date. The scripts are the authoritative source on how version injection works.* -Kubernetes may be built from either a git tree (using `hack/build-go.sh`) or -from a tarball (using either `hack/build-go.sh` or `go install`) or directly by -the Go native build system (using `go get`). +Kubernetes may be built from either a git tree or from a tarball. We use +`make` to encapsulate a number of build steps into a single command. This +includes generating code, which means that tools like `go build` might work +(once files are generated) but might be using stale generated code. `make` is +the supported way to build. When building from git, we want to be able to insert specific information about the build tree at build time. 
In particular, we want to use the output of `git @@ -294,7 +296,7 @@ yield binaries that will identify themselves as `v0.4-dev` and will not be able to provide you with a SHA1. To add the extra versioning information when building from git, the -`hack/build-go.sh` script will gather that information (using `git describe` and +`make` build will gather that information (using `git describe` and `git rev-parse`) and then create a `-ldflags` string to pass to `go install` and tell the Go linker to override the contents of those variables at build time. It can, for instance, tell it to override `gitVersion` and set it to diff --git a/running-locally.md b/running-locally.md index 517b12c8..0e56456e 100644 --- a/running-locally.md +++ b/running-locally.md @@ -170,7 +170,7 @@ You are running a single node setup. This has the limitation of only supporting ```sh cd kubernetes -hack/build-go.sh +make hack/local-up-cluster.sh ``` diff --git a/testing.md b/testing.md index dba01c10..3d7fb452 100644 --- a/testing.md +++ b/testing.md @@ -83,13 +83,13 @@ passing, so it is often a good idea to make sure the e2e tests work as well. ### Run all unit tests -The `hack/test-go.sh` script is the entrypoint for running the unit tests that -ensures that `GOPATH` is set up correctly. If you have `GOPATH` set up -correctly, you can also just use `go test` directly. +`make test` is the entrypoint for running the unit tests that ensures that +`GOPATH` is set up correctly. If you have `GOPATH` set up correctly, you can +also just use `go test` directly. ```sh cd kubernetes -hack/test-go.sh # Run all unit tests. +make test # Run all unit tests. 
``` ### Set go flags during unit tests @@ -99,18 +99,23 @@ You can set [go flags](https://golang.org/cmd/go/) by setting the ### Run unit tests from certain packages -The `hack/test-go.sh` script accepts packages as arguments; the -`k8s.io/kubernetes` prefix is added automatically to these: +`make test` accepts packages as arguments; the `k8s.io/kubernetes` prefix is +added automatically to these: ```sh -hack/test-go.sh pkg/api # run tests for pkg/api -hack/test-go.sh pkg/api pkg/kubelet # run tests for pkg/api and pkg/kubelet +make test WHAT=pkg/api # run tests for pkg/api +``` + +To run multiple targets you need quotes: + +```sh +make test WHAT="pkg/api pkg/kubelet" # run tests for pkg/api and pkg/kubelet ``` In a shell, it's often handy to use brace expansion: ```sh -hack/test-go.sh pkg/{api,kubelet} # run tests for pkg/api and pkg/kubelet +make test WHAT=pkg/{api,kubelet} # run tests for pkg/api and pkg/kubelet ``` ### Run specific unit test cases in a package @@ -121,10 +126,10 @@ regular expression for the name of the test that should be run. ```sh # Runs TestValidatePod in pkg/api/validation with the verbose flag set -KUBE_GOFLAGS="-v" KUBE_TEST_ARGS='-run ^TestValidatePod$' hack/test-go.sh pkg/api/validation +make test WHAT=pkg/api/validation KUBE_GOFLAGS="-v" KUBE_TEST_ARGS='-run ^TestValidatePod$' # Runs tests that match the regex ValidatePod|ValidateConfigMap in pkg/api/validation -KUBE_GOFLAGS="-v" KUBE_TEST_ARGS="-run ValidatePod\|ValidateConfigMap$" hack/test-go.sh pkg/api/validation +make test WHAT=pkg/api/validation KUBE_GOFLAGS="-v" KUBE_TEST_ARGS="-run ValidatePod\|ValidateConfigMap$" ``` For other supported test flags, see the [golang @@ -137,7 +142,7 @@ You can do this efficiently. ```sh # Have 2 workers run all tests 5 times each (10 total iterations). -hack/test-go.sh -p 2 -i 5 +make test PARALLEL=2 ITERATION=5 ``` For more advanced ideas please see [flaky-tests.md](flaky-tests.md). 
@@ -149,7 +154,7 @@ Currently, collecting coverage is only supported for the Go unit tests. To run all unit tests and generate an HTML coverage report, run the following: ```sh -KUBE_COVER=y hack/test-go.sh +make test KUBE_COVER=y ``` At the end of the run, an HTML report will be generated with the path @@ -159,7 +164,7 @@ To run tests and collect coverage in only one package, pass its relative path under the `kubernetes` directory as an argument, for example: ```sh -KUBE_COVER=y hack/test-go.sh pkg/kubectl +make test WHAT=pkg/kubectl KUBE_COVER=y ``` Multiple arguments can be passed, in which case the coverage results will be @@ -224,14 +229,14 @@ for those internal etcd instances with the `TEST_ETCD_DIR` environment variable. ### Run integration tests -The integration tests are run using the `hack/test-integration.sh` script. +The integration tests are run using `make test-integration`. The Kubernetes integration tests are writting using the normal golang testing package but expect to have a running etcd instance to connect to. The `test- -integration.sh` script wraps `hack/test-go.sh` and sets up an etcd instance +integration.sh` script wraps `make test` and sets up an etcd instance for the integration tests to use. ```sh -hack/test-integration.sh # Run all integration tests. +make test-integration # Run all integration tests. ``` This script runs the golang tests in package @@ -244,7 +249,7 @@ You can use also use the `KUBE_TEST_ARGS` environment variable with the `hack ```sh # Run integration test TestPodUpdateActiveDeadlineSeconds with the verbose flag set. 
-KUBE_GOFLAGS="-v" KUBE_TEST_ARGS="-run ^TestPodUpdateActiveDeadlineSeconds$" hack/test-integration.sh +make test-integration KUBE_GOFLAGS="-v" KUBE_TEST_ARGS="-run ^TestPodUpdateActiveDeadlineSeconds$" ``` If you set `KUBE_TEST_ARGS`, the test case will be run with only the `v1` API -- cgit v1.2.3 From 18d6af7c105572d1e043be25e31eae2d59eb51b1 Mon Sep 17 00:00:00 2001 From: Tim Hockin Date: Mon, 30 May 2016 17:22:53 -0700 Subject: Make releases work --- development.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/development.md b/development.md index 4c00072e..571028c2 100644 --- a/development.md +++ b/development.md @@ -79,7 +79,7 @@ You may pass build options and packages to the script as necessary. To build binaries for all platforms: ```sh - hack/build-cross.sh + make cross ``` ## Workflow -- cgit v1.2.3 From 451e9a5a3fd2497851e0b31fc7acbfe75a576968 Mon Sep 17 00:00:00 2001 From: Tim Hockin Date: Sat, 4 Jun 2016 21:53:58 -0700 Subject: s/deep_copy/deepcopy/ Just a naming nit that was too hard to fixup-and-rebase. --- api_changes.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/api_changes.md b/api_changes.md index 35d7a545..57787c72 100644 --- a/api_changes.md +++ b/api_changes.md @@ -470,7 +470,7 @@ hack/update-codegen.sh As part of the build, kubernetes will also generate code to handle deep copy of your versioned api objects. 
The deep copy code resides with each versioned API:

- - `/zz_generated.deep_copy.go` containing auto-generated copy functions
+ - `/zz_generated.deepcopy.go` containing auto-generated copy functions

If regeneration is somehow not possible due to compile errors, the easiest
workaround is to comment out the code causing errors and let the script to
-- cgit v1.2.3

From 41183c16771d864f6b68e8ab402f0e6595e2b394 Mon Sep 17 00:00:00 2001
From: Angus Salkeld
Date: Tue, 12 Jul 2016 09:29:49 +0200
Subject: Add detect-master to local provider to get e2e working

go run hack/e2e.go -v -test --check_node_count=false --test_args="--ginkgo.focus=\[Feature:Volumes\]"
---
 e2e-tests.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/e2e-tests.md b/e2e-tests.md
index 50356385..a33ec83f 100644
--- a/e2e-tests.md
+++ b/e2e-tests.md
@@ -397,13 +397,13 @@ at a custom host directly:

```sh
export KUBECONFIG=/path/to/kubeconfig
-go run hack/e2e.go -v --test --check_node_count=false --test_args="--host=http://127.0.0.1:8080"
+go run hack/e2e.go -v --test --check_node_count=false
```

To control the tests that are run:

```sh
-go run hack/e2e.go -v --test --check_node_count=false --test_args="--host=http://127.0.0.1:8080" --ginkgo.focus="Secrets"
+go run hack/e2e.go -v --test --check_node_count=false --test_args="--ginkgo.focus=Secrets"
```

## Kinds of tests
-- cgit v1.2.3

From 74c9bab39c48cee79c364187322e4b944e6b4667 Mon Sep 17 00:00:00 2001
From: joe2far
Date: Wed, 13 Jul 2016 15:06:24 +0100
Subject: Fixed several typos

---
 generating-clientset.md | 2 +-
 kubemark-guide.md | 2 +-
 releasing.md | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/generating-clientset.md b/generating-clientset.md
index 3142b9ea..6691d00f 100644
--- a/generating-clientset.md
+++ b/generating-clientset.md
@@ -50,7 +50,7 @@ will generate a clientset named "my_release" which includes clients for api/v1 o
- Adding expansion methods: client-gen only generates the common methods, 
such as `Create()` and `Delete()`. You can manually add additional methods through the expansion interface. For example, this [file](../../pkg/client/clientset_generated/release_1_2/typed/core/v1/pod_expansion.go) adds additional methods to Pod's client. As a convention, we put the expansion interface and its methods in file ${TYPE}_expansion.go. - Generating Fake clients for testing purposes: client-gen will generate a fake clientset if the command line argument `--fake-clientset` is set. The fake clientset provides the default implementation, you only need to fake out the methods you care about when writing test cases. -The output of client-gen inlcudes: +The output of client-gen includes: - clientset: the clientset will be generated at `pkg/client/clientset_generated/` by default, and you can change the path via the `--clientset-path` command line argument. - Individual typed clients and client for group: They will be generated at `pkg/client/clientset_generated/${clientset_name}/typed/generated/${GROUP}/${VERSION}/` diff --git a/kubemark-guide.md b/kubemark-guide.md index aa5b3c8d..8c736ec3 100644 --- a/kubemark-guide.md +++ b/kubemark-guide.md @@ -76,7 +76,7 @@ Common workflow for Kubemark is: - monitoring test execution and debugging problems - turning down Kubemark cluster -Included in descrptions there will be comments helpful for anyone who’ll want to +Included in descriptions there will be comments helpful for anyone who’ll want to port Kubemark to different providers. ### Starting a Kubemark cluster diff --git a/releasing.md b/releasing.md index eb48f469..04045cc8 100644 --- a/releasing.md +++ b/releasing.md @@ -153,7 +153,7 @@ Then, run This will do a dry run of the release. It will give you instructions at the end for `pushd`ing into the dry-run directory and having a look around. 
-`pushd` into the directory and make sure everythig looks as you expect: +`pushd` into the directory and make sure everything looks as you expect: ```console git log "${RELEASE_VERSION}" # do you see the commit you expect? -- cgit v1.2.3 From 738bb79c6a26c16f9c4ad48b921b1cc7012a3caa Mon Sep 17 00:00:00 2001 From: Mike Brown Date: Fri, 1 Jul 2016 11:36:47 -0500 Subject: adds source debug build options Signed-off-by: Mike Brown --- development.md | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-) diff --git a/development.md b/development.md index 571028c2..f77e6ed4 100644 --- a/development.md +++ b/development.md @@ -75,8 +75,14 @@ binaries): make ``` -You may pass build options and packages to the script as necessary. To build -binaries for all platforms: +You may pass build options and packages to the script as necessary. For example, +to build with optimizations disabled for enabling use of source debug tools: + +```sh + make GOGCFLAGS="-N -l" +``` + +To build binaries for all platforms: ```sh make cross -- cgit v1.2.3 From c7c8656f2fee1cc86baa73d4a65ae4ea6611e3f2 Mon Sep 17 00:00:00 2001 From: Mike Brown Date: Tue, 3 May 2016 14:31:42 -0500 Subject: devel/ tree 80col updates; and other minor edits Signed-off-by: Mike Brown --- faster_reviews.md | 186 ++++++++++++++++++++++++++---------------------- flaky-tests.md | 41 +++++++---- generating-clientset.md | 57 ++++++++++++--- getting-builds.md | 17 +++-- how-to-doc.md | 73 ++++++++++++++----- instrumentation.md | 31 +++++--- 6 files changed, 265 insertions(+), 140 deletions(-) diff --git a/faster_reviews.md b/faster_reviews.md index 97f4a8de..2d408a81 100644 --- a/faster_reviews.md +++ b/faster_reviews.md @@ -38,18 +38,18 @@ Most of what is written here is not at all specific to Kubernetes, but it bears being written down in the hope that it will occasionally remind people of "best practices" around code reviews. -You've just had a brilliant idea on how to make Kubernetes better. 
Let's call -that idea "FeatureX". Feature X is not even that complicated. You have a -pretty good idea of how to implement it. You jump in and implement it, fixing a -bunch of stuff along the way. You send your PR - this is awesome! And it sits. -And sits. A week goes by and nobody reviews it. Finally someone offers a few -comments, which you fix up and wait for more review. And you wait. Another -week or two goes by. This is horrible. - -What went wrong? One particular problem that comes up frequently is this - your -PR is too big to review. You've touched 39 files and have 8657 insertions. -When your would-be reviewers pull up the diffs they run away - this PR is going -to take 4 hours to review and they don't have 4 hours right now. They'll get to it +You've just had a brilliant idea on how to make Kubernetes better. Let's call +that idea "Feature-X". Feature-X is not even that complicated. You have a pretty +good idea of how to implement it. You jump in and implement it, fixing a bunch +of stuff along the way. You send your PR - this is awesome! And it sits. And +sits. A week goes by and nobody reviews it. Finally someone offers a few +comments, which you fix up and wait for more review. And you wait. Another +week or two goes by. This is horrible. + +What went wrong? One particular problem that comes up frequently is this - your +PR is too big to review. You've touched 39 files and have 8657 insertions. When +your would-be reviewers pull up the diffs they run away - this PR is going to +take 4 hours to review and they don't have 4 hours right now. They'll get to it later, just as soon as they have more free time (ha!). Let's talk about how to avoid this. @@ -63,38 +63,39 @@ Let's talk about how to avoid this. ## 1. Don't build a cathedral in one PR -Are you sure FeatureX is something the Kubernetes team wants or will accept, or -that it is implemented to fit with other changes in flight? Are you willing to -bet a few days or weeks of work on it? 
If you have any doubt at all about the -usefulness of your feature or the design - make a proposal doc (in docs/proposals; -for example [the QoS proposal](http://prs.k8s.io/11713)) or a sketch PR (e.g., just -the API or Go interface) or both. Write or code up just enough to express the idea -and the design and why you made those choices, then get feedback on this. Be clear -about what type of feedback you are asking for. Now, if we ask you to change a -bunch of facets of the design, you won't have to re-write it all. +Are you sure Feature-X is something the Kubernetes team wants or will accept, or +that it is implemented to fit with other changes in flight? Are you willing to +bet a few days or weeks of work on it? If you have any doubt at all about the +usefulness of your feature or the design - make a proposal doc (in +docs/proposals; for example [the QoS proposal](http://prs.k8s.io/11713)) or a +sketch PR (e.g., just the API or Go interface) or both. Write or code up just +enough to express the idea and the design and why you made those choices, then +get feedback on this. Be clear about what type of feedback you are asking for. +Now, if we ask you to change a bunch of facets of the design, you won't have to +re-write it all. ## 2. Smaller diffs are exponentially better Small PRs get reviewed faster and are more likely to be correct than big ones. -Let's face it - attention wanes over time. If your PR takes 60 minutes to -review, I almost guarantee that the reviewer's eye for details is not as keen in -the last 30 minutes as it was in the first. This leads to multiple rounds of -review when one might have sufficed. In some cases the review is delayed in its +Let's face it - attention wanes over time. If your PR takes 60 minutes to +review, I almost guarantee that the reviewer's eye for detail is not as keen in +the last 30 minutes as it was in the first. This leads to multiple rounds of +review when one might have sufficed. 
In some cases the review is delayed in its entirety by the need for a large contiguous block of time to sit and read your code. -Whenever possible, break up your PRs into multiple commits. Making a series of +Whenever possible, break up your PRs into multiple commits. Making a series of discrete commits is a powerful way to express the evolution of an idea or the -different ideas that make up a single feature. There's a balance to be struck, -obviously. If your commits are too small they become more cumbersome to deal -with. Strive to group logically distinct ideas into commits. - -For example, if you found that FeatureX needed some "prefactoring" to fit in, -make a commit that JUST does that prefactoring. Then make a new commit for -FeatureX. Don't lump unrelated things together just because you didn't think -about prefactoring. If you need to, fork a new branch, do the prefactoring -there and send a PR for that. If you can explain why you are doing seemingly -no-op work ("it makes the FeatureX change easier, I promise") we'll probably be +different ideas that make up a single feature. There's a balance to be struck, +obviously. If your commits are too small they become more cumbersome to deal +with. Strive to group logically distinct ideas into separate commits. + +For example, if you found that Feature-X needed some "prefactoring" to fit in, +make a commit that JUST does that prefactoring. Then make a new commit for +Feature-X. Don't lump unrelated things together just because you didn't think +about prefactoring. If you need to, fork a new branch, do the prefactoring +there and send a PR for that. If you can explain why you are doing seemingly +no-op work ("it makes the Feature-X change easier, I promise") we'll probably be OK with it. Obviously, a PR with 25 commits is still very cumbersome to review, so use @@ -103,135 +104,146 @@ common sense. ## 3. 
Multiple small PRs are often better than multiple commits If you can extract whole ideas from your PR and send those as PRs of their own, -you can avoid the painful problem of continually rebasing. Kubernetes is a +you can avoid the painful problem of continually rebasing. Kubernetes is a fast-moving codebase - lock in your changes ASAP, and make merges be someone else's problem. Obviously, we want every PR to be useful on its own, so you'll have to use common sense in deciding what can be a PR vs what should be a commit in a larger -PR. Rule of thumb - if this commit or set of commits is directly related to -FeatureX and nothing else, it should probably be part of the FeatureX PR. If +PR. Rule of thumb - if this commit or set of commits is directly related to +Feature-X and nothing else, it should probably be part of the Feature-X PR. If you can plausibly imagine someone finding value in this commit outside of -FeatureX, try it as a PR. +Feature-X, try it as a PR. -Don't worry about flooding us with PRs. We'd rather have 100 small, obvious PRs +Don't worry about flooding us with PRs. We'd rather have 100 small, obvious PRs than 10 unreviewable monoliths. ## 4. Don't rename, reformat, comment, etc in the same PR -Often, as you are implementing FeatureX, you find things that are just wrong. -Bad comments, poorly named functions, bad structure, weak type-safety. You +Often, as you are implementing Feature-X, you find things that are just wrong. +Bad comments, poorly named functions, bad structure, weak type-safety. You should absolutely fix those things (or at least file issues, please) - but not -in this PR. See the above points - break unrelated changes out into different -PRs or commits. Otherwise your diff will have WAY too many changes, and your +in this PR. See the above points - break unrelated changes out into different +PRs or commits. Otherwise your diff will have WAY too many changes, and your reviewer won't see the forest because of all the trees. ## 5. 
Comments matter -Read up on GoDoc - follow those general rules. If you're writing code and you +Read up on GoDoc - follow those general rules. If you're writing code and you think there is any possible chance that someone might not understand why you did -something (or that you won't remember what you yourself did), comment it. If +something (or that you won't remember what you yourself did), comment it. If you think there's something pretty obvious that we could follow up on, add a -TODO. Many code-review comments are about this exact issue. +TODO. Many code-review comments are about this exact issue. ## 5. Tests are almost always required Nothing is more frustrating than doing a review, only to find that the tests are -inadequate or even entirely absent. Very few PRs can touch code and NOT touch -tests. If you don't know how to test FeatureX - ask! We'll be happy to help +inadequate or even entirely absent. Very few PRs can touch code and NOT touch +tests. If you don't know how to test Feature-X - ask! We'll be happy to help you design things for easy testing or to suggest appropriate test cases. ## 6. Look for opportunities to generify If you find yourself writing something that touches a lot of modules, think hard -about the dependencies you are introducing between packages. Can some of what -you're doing be made more generic and moved up and out of the FeatureX package? -Do you need to use a function or type from an otherwise unrelated package? If -so, promote! We have places specifically for hosting more generic code. +about the dependencies you are introducing between packages. Can some of what +you're doing be made more generic and moved up and out of the Feature-X package? +Do you need to use a function or type from an otherwise unrelated package? If +so, promote! We have places specifically for hosting more generic code. 
-Likewise if FeatureX is similar in form to FeatureW which was checked in last -month and it happens to exactly duplicate some tricky stuff from FeatureW, -consider prefactoring core logic out and using it in both FeatureW and FeatureX. -But do that in a different commit or PR, please. +Likewise if Feature-X is similar in form to Feature-W which was checked in last +month and it happens to exactly duplicate some tricky stuff from Feature-W, +consider prefactoring core logic out and using it in both Feature-W and +Feature-X. But do that in a different commit or PR, please. ## 7. Fix feedback in a new commit -Your reviewer has finally sent you some feedback on FeatureX. You make a bunch +Your reviewer has finally sent you some feedback on Feature-X. You make a bunch of changes and ... what? You could patch those into your commits with git -"squash" or "fixup" logic. But that makes your changes hard to verify. Unless +"squash" or "fixup" logic. But that makes your changes hard to verify. Unless your whole PR is pretty trivial, you should instead put your fixups into a new -commit and re-push. Your reviewer can then look at that commit on its own - so +commit and re-push. Your reviewer can then look at that commit on its own - so much faster to review than starting over. We might still ask you to clean up your commits at the very end, for the sake -of a more readable history, but don't do this until asked, typically at the point -where the PR would otherwise be tagged LGTM. +of a more readable history, but don't do this until asked, typically at the +point where the PR would otherwise be tagged LGTM. General squashing guidelines: * Sausage => squash - When there are several commits to fix bugs in the original commit(s), address reviewer feedback, etc. Really we only want to see the end state and commit message for the whole PR. + When there are several commits to fix bugs in the original commit(s), address +reviewer feedback, etc. 
Really we only want to see the end state and commit +message for the whole PR. * Layers => don't squash - When there are independent changes layered upon each other to achieve a single goal. For instance, writing a code munger could be one commit, applying it could be another, and adding a precommit check could be a third. One could argue they should be separate PRs, but there's really no way to test/review the munger without seeing it applied, and there needs to be a precommit check to ensure the munged output doesn't immediately get out of date. + When there are independent changes layered upon each other to achieve a single +goal. For instance, writing a code munger could be one commit, applying it could +be another, and adding a precommit check could be a third. One could argue they +should be separate PRs, but there's really no way to test/review the munger +without seeing it applied, and there needs to be a precommit check to ensure the +munged output doesn't immediately get out of date. -A commit, as much as possible, should be a single logical change. Each commit should always have a good title line (<70 characters) and include an additional description paragraph describing in more detail the change intended. Do not link pull requests by `#` in a commit description, because GitHub creates lots of spam. Instead, reference other PRs via the PR your commit is in. +A commit, as much as possible, should be a single logical change. Each commit +should always have a good title line (<70 characters) and include an additional +description paragraph describing in more detail the change intended. Do not link +pull requests by `#` in a commit description, because GitHub creates lots of +spam. Instead, reference other PRs via the PR your commit is in. ## 8. KISS, YAGNI, MVP, etc Sometimes we need to remind each other of core tenets of software design - Keep -It Simple, You Aren't Gonna Need It, Minimum Viable Product, and so on. 
Adding +It Simple, You Aren't Gonna Need It, Minimum Viable Product, and so on. Adding features "because we might need it later" is antithetical to software that -ships. Add the things you need NOW and (ideally) leave room for things you +ships. Add the things you need NOW and (ideally) leave room for things you might need later - but don't implement them now. ## 9. Push back -We understand that it is hard to imagine, but sometimes we make mistakes. It's -OK to push back on changes requested during a review. If you have a good reason +We understand that it is hard to imagine, but sometimes we make mistakes. It's +OK to push back on changes requested during a review. If you have a good reason for doing something a certain way, you are absolutely allowed to debate the -merits of a requested change. You might be overruled, but you might also -prevail. We're mostly pretty reasonable people. Mostly. +merits of a requested change. You might be overruled, but you might also +prevail. We're mostly pretty reasonable people. Mostly. ## 10. I'm still getting stalled - help?! -So, you've done all that and you still aren't getting any PR love? Here's some +So, you've done all that and you still aren't getting any PR love? Here's some things you can do that might help kick a stalled process along: - * Make sure that your PR has an assigned reviewer (assignee in GitHub). If - this is not the case, reply to the PR comment stream asking for one to be - assigned. + * Make sure that your PR has an assigned reviewer (assignee in GitHub). If +this is not the case, reply to the PR comment stream asking for one to be +assigned. * Ping the assignee (@username) on the PR comment stream asking for an - estimate of when they can get to it. +estimate of when they can get to it. * Ping the assignee by email (many of us have email addresses that are well - published or are the same as our GitHub handle @google.com or @redhat.com). 
+published or are the same as our GitHub handle @google.com or @redhat.com). * Ping the [team](https://github.com/orgs/kubernetes/teams) (via @team-name) - that works in the area you're submitting code. +that works in the area you're submitting code. If you think you have fixed all the issues in a round of review, and you haven't heard back, you should ping the reviewer (assignee) on the comment stream with a "please take another look" (PTAL) or similar comment indicating you are done and -you think it is ready for re-review. In fact, this is probably a good habit for +you think it is ready for re-review. In fact, this is probably a good habit for all PRs. One phenomenon of open-source projects (where anyone can comment on any issue) is the dog-pile - your PR gets so many comments from so many people it becomes -hard to follow. In this situation you can ask the primary reviewer -(assignee) whether they want you to fork a new PR to clear out all the comments. -Remember: you don't HAVE to fix every issue raised by every person who feels -like commenting, but you should at least answer reasonable comments with an +hard to follow. In this situation you can ask the primary reviewer (assignee) +whether they want you to fork a new PR to clear out all the comments. Remember: +you don't HAVE to fix every issue raised by every person who feels like +commenting, but you should at least answer reasonable comments with an explanation. ## Final: Use common sense -Obviously, none of these points are hard rules. There is no document that can -take the place of common sense and good taste. Use your best judgment, but put -a bit of thought into how your work can be made easier to review. If you do +Obviously, none of these points are hard rules. There is no document that can +take the place of common sense and good taste. Use your best judgment, but put +a bit of thought into how your work can be made easier to review. If you do these things your PRs will flow much more easily. 
diff --git a/flaky-tests.md b/flaky-tests.md index 68fe8a23..c2db9ae2 100644 --- a/flaky-tests.md +++ b/flaky-tests.md @@ -67,7 +67,7 @@ discoverable from the issue. 5. Link to durable storage with the rest of the logs. This means (for all the tests that Google runs) the GCS link is mandatory! The Jenkins test result link is nice but strictly optional: not only does it expire more quickly, - it's not accesible to non-Googlers. + it's not accessible to non-Googlers. ## Expectations when a flaky test is assigned to you @@ -132,15 +132,20 @@ system! # Hunting flaky unit tests in Kubernetes -Sometimes unit tests are flaky. This means that due to (usually) race conditions, they will occasionally fail, even though most of the time they pass. +Sometimes unit tests are flaky. This means that due to (usually) race +conditions, they will occasionally fail, even though most of the time they pass. -We have a goal of 99.9% flake free tests. This means that there is only one flake in one thousand runs of a test. +We have a goal of 99.9% flake free tests. This means that there is only one +flake in one thousand runs of a test. -Running a test 1000 times on your own machine can be tedious and time consuming. Fortunately, there is a better way to achieve this using Kubernetes. +Running a test 1000 times on your own machine can be tedious and time consuming. +Fortunately, there is a better way to achieve this using Kubernetes. -_Note: these instructions are mildly hacky for now, as we get run once semantics and logging they will get better_ +_Note: these instructions are mildly hacky for now, as we get run once semantics +and logging they will get better_ -There is a testing image `brendanburns/flake` up on the docker hub. We will use this image to test our fix. +There is a testing image `brendanburns/flake` up on the docker hub. We will use +this image to test our fix. 
Create a replication controller with the following config: @@ -166,15 +171,25 @@ spec: value: https://github.com/kubernetes/kubernetes ``` -Note that we omit the labels and the selector fields of the replication controller, because they will be populated from the labels field of the pod template by default. +Note that we omit the labels and the selector fields of the replication +controller, because they will be populated from the labels field of the pod +template by default. ```sh kubectl create -f ./controller.yaml ``` -This will spin up 24 instances of the test. They will run to completion, then exit, and the kubelet will restart them, accumulating more and more runs of the test. -You can examine the recent runs of the test by calling `docker ps -a` and looking for tasks that exited with non-zero exit codes. Unfortunately, docker ps -a only keeps around the exit status of the last 15-20 containers with the same image, so you have to check them frequently. -You can use this script to automate checking for failures, assuming your cluster is running on GCE and has four nodes: +This will spin up 24 instances of the test. They will run to completion, then +exit, and the kubelet will restart them, accumulating more and more runs of the +test. + +You can examine the recent runs of the test by calling `docker ps -a` and +looking for tasks that exited with non-zero exit codes. Unfortunately, docker +ps -a only keeps around the exit status of the last 15-20 containers with the +same image, so you have to check them frequently. + +You can use this script to automate checking for failures, assuming your cluster +is running on GCE and has four nodes: ```sh echo "" > output.txt @@ -186,13 +201,15 @@ done grep "Exited ([^0])" output.txt ``` -Eventually you will have sufficient runs for your purposes. At that point you can delete the replication controller by running: +Eventually you will have sufficient runs for your purposes. 
At that point you
+can delete the replication controller by running:

```sh
kubectl delete replicationcontroller flakecontroller
```

-If you do a final check for flakes with `docker ps -a`, ignore tasks that exited -1, since that's what happens when you stop the replication controller.
+If you do a final check for flakes with `docker ps -a`, ignore tasks that
+exited -1, since that's what happens when you stop the replication controller.

Happy flake hunting!

diff --git a/generating-clientset.md b/generating-clientset.md
index 6691d00f..90c8758a 100644
--- a/generating-clientset.md
+++ b/generating-clientset.md
@@ -34,34 +34,69 @@ Documentation for other releases can be found at

# Generation and release cycle of clientset

-Client-gen is an automatic tool that generates [clientset](../../docs/proposals/client-package-structure.md#high-level-client-sets) based on API types. This doc introduces the use the client-gen, and the release cycle of the generated clientsets.
+Client-gen is an automatic tool that generates
+[clientset](../../docs/proposals/client-package-structure.md#high-level-client-sets)
+based on API types. This doc introduces the use of client-gen, and the release
+cycle of the generated clientsets.

## Using client-gen

The workflow includes four steps:

-- Marking API types with tags: in `pkg/apis/${GROUP}/${VERSION}/types.go`, mark the types (e.g., Pods) that you want to generate clients for with the `// +genclient=true` tag. If the resource associated with the type is not namespace scoped (e.g., PersistentVolume), you need to append the `nonNamespaced=true` tag as well.
-- Running the client-gen tool: you need to use the command line argument `--input` to specify the groups and versions of the APIs you want to generate clients for, client-gen will then look into `pkg/apis/${GROUP}/${VERSION}/types.go` and generate clients for the types you have marked with the `genclient` tags. 
For example, run +- Marking API types with tags: in `pkg/apis/${GROUP}/${VERSION}/types.go`, mark +the types (e.g., Pods) that you want to generate clients for with the +`// +genclient=true` tag. If the resource associated with the type is not +namespace scoped (e.g., PersistentVolume), you need to append the +`nonNamespaced=true` tag as well. + +- Running the client-gen tool: you need to use the command line argument +`--input` to specify the groups and versions of the APIs you want to generate +clients for, client-gen will then look into +`pkg/apis/${GROUP}/${VERSION}/types.go` and generate clients for the types you +have marked with the `genclient` tags. For example, running: ``` $ client-gen --input="api/v1,extensions/v1beta1" --clientset-name="my_release" ``` -will generate a clientset named "my_release" which includes clients for api/v1 objects and extensions/v1beta1 objects. You can run `$ client-gen --help` to see other command line arguments. -- Adding expansion methods: client-gen only generates the common methods, such as `Create()` and `Delete()`. You can manually add additional methods through the expansion interface. For example, this [file](../../pkg/client/clientset_generated/release_1_2/typed/core/v1/pod_expansion.go) adds additional methods to Pod's client. As a convention, we put the expansion interface and its methods in file ${TYPE}_expansion.go. -- Generating Fake clients for testing purposes: client-gen will generate a fake clientset if the command line argument `--fake-clientset` is set. The fake clientset provides the default implementation, you only need to fake out the methods you care about when writing test cases. +will generate a clientset named "my_release" which includes clients for api/v1 +objects and extensions/v1beta1 objects. You can run `$ client-gen --help` to see +other command line arguments. + +- Adding expansion methods: client-gen only generates the common methods, such +as `Create()` and `Delete()`. 
You can manually add additional methods through
+the expansion interface. For example, this
+[file](../../pkg/client/clientset_generated/release_1_2/typed/core/v1/pod_expansion.go)
+adds additional methods to Pod's client. As a convention, we put the expansion
+interface and its methods in file ${TYPE}_expansion.go.
+
+- Generating fake clients for testing purposes: client-gen will generate a fake
+clientset if the command line argument `--fake-clientset` is set. The fake
+clientset provides the default implementation; you only need to fake out the
+methods you care about when writing test cases.

The output of client-gen includes:

-- clientset: the clientset will be generated at `pkg/client/clientset_generated/` by default, and you can change the path via the `--clientset-path` command line argument. +
+- clientset: the clientset will be generated at
+`pkg/client/clientset_generated/` by default, and you can change the path via
+the `--clientset-path` command line argument.
+
 - Individual typed clients and client for group: They will be generated at `pkg/client/clientset_generated/${clientset_name}/typed/generated/${GROUP}/${VERSION}/`

## Released clientsets

-At the 1.2 release, we have two released clientsets in the repo: internalclientset and release_1_2. -- internalclientset: because most components in our repo still deal with the internal objects, the internalclientset talks in internal objects to ease the adoption of clientset. We will keep updating it as our API evolves. Eventually it will be replaced by a versioned clientset. -- release_1_2: release_1_2 clientset is a versioned clientset, it includes clients for the core v1 objects, extensions/v1beta1, autoscaling/v1, and batch/v1 objects. We will NOT update it after we cut the 1.2 release. After the 1.2 release, we will create release_1_3 clientset and keep it updated until we cut release 1.3. - +At the 1.2 release, we have two released clientsets in the repo:
+internalclientset and release_1_2.
+- internalclientset: because most components in our repo still deal with the
+internal objects, the internalclientset talks in internal objects to ease the
+adoption of clientset. We will keep updating it as our API evolves. Eventually
+it will be replaced by a versioned clientset.
+- release_1_2: release_1_2 clientset is a versioned clientset; it includes
+clients for the core v1 objects, extensions/v1beta1, autoscaling/v1, and
+batch/v1 objects. We will NOT update it after we cut the 1.2 release. After the
+1.2 release, we will create the release_1_3 clientset and keep it updated until
+we cut release 1.3. diff --git a/getting-builds.md b/getting-builds.md index 52e9c193..b61ceddd 100644 --- a/getting-builds.md +++ b/getting-builds.md @@ -34,29 +34,36 @@ Documentation for other releases can be found at # Getting Kubernetes Builds -You can use [hack/get-build.sh](http://releases.k8s.io/HEAD/hack/get-build.sh) to or use as a reference on how to get the most recent builds with curl. With `get-build.sh` you can grab the most recent stable build, the most recent release candidate, or the most recent build to pass our ci and gce e2e tests (essentially a nightly build). +You can use [hack/get-build.sh](http://releases.k8s.io/HEAD/hack/get-build.sh)
+to get a build, or use it as a reference on how to get the most recent builds
+with curl. With `get-build.sh` you can grab the most recent stable build, the
+most recent release candidate, or the most recent build to pass our ci and gce
+e2e tests (essentially a nightly build).

Run `./hack/get-build.sh -h` for its usage.
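Under the hood, `get-build.sh` resolves a channel marker file (a small text file holding a version string such as `v1.1.1`) into a concrete download URL. The following is a minimal offline sketch of that resolution; the bucket URL and marker-file layout are assumptions inferred from the gsutil examples in this doc, not the script's exact logic:

```sh
# Hypothetical sketch: turn a release channel into a tarball URL.
# The real script fetches the marker file over the network, roughly:
#   VERSION=$(curl -fsSL "${BUCKET}/release/stable.txt")
# We hardcode the version here so the sketch runs offline.
BUCKET="https://storage.googleapis.com/kubernetes-release"
VERSION="v1.1.1"
echo "tarball: ${BUCKET}/release/${VERSION}/kubernetes.tar.gz"
```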
-For example, to get a build at a specific version (v1.1.1): +To get a build at a specific version (v1.1.1), use:

```console
./hack/get-build.sh v1.1.1
```

-Alternatively, to get the latest stable release: +To get the latest stable release:

```console
./hack/get-build.sh release/stable
```

-Finally, you can just print the latest or stable version: +Use the "-v" option to print the version number of a build without retrieving
+it. For example, the following prints the version number for the latest ci
+build:

```console
./hack/get-build.sh -v ci/latest
```

-You can also use the gsutil tool to explore the Google Cloud Storage release buckets. Here are some examples: +You can also use the gsutil tool to explore the Google Cloud Storage release
+buckets. Here are some examples:

```sh
gsutil cat gs://kubernetes-release-dev/ci/latest.txt # output the latest ci version number diff --git a/how-to-doc.md b/how-to-doc.md index 67bffe15..6ec896f4 100644 --- a/how-to-doc.md +++ b/how-to-doc.md @@ -31,7 +31,8 @@ Documentation for other releases can be found at

Updated: 11/3/2015

-*This document is oriented at users and developers who want to write documents for Kubernetes.* +*This document is oriented at users and developers who want to write documents
+for Kubernetes.*

**Table of Contents**

@@ -56,24 +57,34 @@ Updated: 11/3/2015

## General Concepts

-Each document needs to be munged to ensure its format is correct, links are valid, etc. To munge a document, simply run `hack/update-munge-docs.sh`. We verify that all documents have been munged using `hack/verify-munge-docs.sh`. The scripts for munging documents are called mungers, see the [mungers section](#what-are-mungers) below if you're curious about how mungers are implemented or if you want to write one. +Each document needs to be munged to ensure its format is correct, links are
+valid, etc. To munge a document, simply run `hack/update-munge-docs.sh`.
We
+verify that all documents have been munged using `hack/verify-munge-docs.sh`.
+The scripts for munging documents are called mungers; see the
+[mungers section](#what-are-mungers) below if you're curious about how mungers
+are implemented or if you want to write one.

## How to Get a Table of Contents

-Instead of writing table of contents by hand, insert the following code in your md file: +Instead of writing a table of contents by hand, insert the following code in
+your md file:

```
```

-After running `hack/update-munge-docs.sh`, you'll see a table of contents generated for you, layered based on the headings. +After running `hack/update-munge-docs.sh`, you'll see a table of contents
+generated for you, layered based on the headings.

## How to Write Links

-It's important to follow the rules when writing links. It helps us correctly versionize documents for each release. +It's important to follow the rules when writing links. It helps us correctly
+versionize documents for each release.

-Use inline links instead of urls at all times. When you add internal links to `docs/` or `examples/`, use relative links; otherwise, use `http://releases.k8s.io/HEAD/`. For example, avoid using: +Use inline links instead of URLs at all times. When you add internal links to
+`docs/` or `examples/`, use relative links; otherwise, use
+`http://releases.k8s.io/HEAD/`. For example, avoid using:

```
[GCE](https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/gce.md) # note that it's under docs/

@@ -89,18 +100,27 @@ Instead, use:

[Kubernetes](http://kubernetes.io/) # external link
```

-The above example generates the following links: [GCE](../getting-started-guides/gce.md), [Kubernetes package](http://releases.k8s.io/HEAD/pkg/), and [Kubernetes](http://kubernetes.io/). +The above example generates the following links:
+[GCE](../getting-started-guides/gce.md),
+[Kubernetes package](http://releases.k8s.io/HEAD/pkg/), and
+[Kubernetes](http://kubernetes.io/).
## How to Include an Example

-While writing examples, you may want to show the content of certain example files (e.g. [pod.yaml](../user-guide/pod.yaml)). In this case, insert the following code in the md file: +While writing examples, you may want to show the content of certain example
+files (e.g. [pod.yaml](../user-guide/pod.yaml)). In this case, insert the
+following code in the md file:

```
```

-Note that you should replace `path/to/file` with the relative path to the example file. Then `hack/update-munge-docs.sh` will generate a code block with the content of the specified file, and a link to download it. This way, you save the time to do the copy-and-paste; what's better, the content won't become out-of-date every time you update the example file. +Note that you should replace `path/to/file` with the relative path to the
+example file. Then `hack/update-munge-docs.sh` will generate a code block with
+the content of the specified file, and a link to download it. This way, you save
+the time of copy-and-paste; better still, the content won't become
+out-of-date when you update the example file.

For example, the following:

@@ -135,11 +155,17 @@ spec:

### Code formatting

-Wrap a span of code with single backticks (`` ` ``). To format multiple lines of code as its own code block, use triple backticks (```` ``` ````). +Wrap a span of code with single backticks (`` ` ``). To format multiple lines of
+code as its own code block, use triple backticks (```` ``` ````).

### Syntax Highlighting

-Adding syntax highlighting to code blocks improves readability.
To do so, in
+your fenced block, add an optional language identifier. Some useful identifiers
+include `yaml`, `console` (for console output), and `sh` (for shell quote
+format). Note that in a console output, put `$ ` at the beginning of each
+command and put nothing at the beginning of the output. Here's an example of a
+console code block:

```
```console

@@ -159,26 +185,38 @@ pod "foo" created

### Headings

-Add a single `#` before the document title to create a title heading, and add `##` to the next level of section title, and so on. Note that the number of `#` will determine the size of the heading. +Add a single `#` before the document title to create a title heading, and add
+`##` to the next level of section title, and so on. Note that the number of `#`
+will determine the size of the heading.

## What Are Mungers?

-Mungers are like gofmt for md docs which we use to format documents. To use it, simply place +Mungers are like gofmt for md docs which we use to format documents. To use it,
+simply place

```
```

-in your md files. Note that xxxx is the placeholder for a specific munger. Appropriate content will be generated and inserted between two brackets after you run `hack/update-munge-docs.sh`. See [munger document](http://releases.k8s.io/HEAD/cmd/mungedocs/) for more details. +in your md files. Note that xxxx is the placeholder for a specific munger.
+Appropriate content will be generated and inserted between two brackets after
+you run `hack/update-munge-docs.sh`. See
+[munger document](http://releases.k8s.io/HEAD/cmd/mungedocs/) for more details.

## Auto-added Mungers

-After running `hack/update-munge-docs.sh`, you may see some code / mungers in your md file that are auto-added. You don't have to add them manually. It's recommended to just read this section as a reference instead of messing up with the following mungers. +After running `hack/update-munge-docs.sh`, you may see some code / mungers in
+your md file that are auto-added.
You don't have to add them manually. It's
+recommended to just read this section as a reference instead of messing with
+the following mungers.
+

### Unversioned Warning

-UNVERSIONED_WARNING munger inserts unversioned warning which warns the users when they're reading the document from HEAD and informs them where to find the corresponding document for a specific release. +UNVERSIONED_WARNING munger inserts an unversioned warning that warns users
+when they're reading the document from HEAD and informs them where to find the
+corresponding document for a specific release.

```

@@ -191,7 +229,8 @@ UNVERSIONED_WARNING munger inserts unversioned warning which warns the users whe

### Is Versioned

-IS_VERSIONED munger inserts `IS_VERSIONED` tag in documents in each release, which stops UNVERSIONED_WARNING munger from inserting warning messages. +IS_VERSIONED munger inserts `IS_VERSIONED` tag in documents in each release,
+which stops UNVERSIONED_WARNING munger from inserting warning messages.

```
diff --git a/instrumentation.md b/instrumentation.md index ffef0e31..4961e0bb 100644 --- a/instrumentation.md +++ b/instrumentation.md @@ -31,19 +31,30 @@ Documentation for other releases can be found at

-Instrumenting Kubernetes with a new metric -=================== -The following is a step-by-step guide for adding a new metric to the Kubernetes code base. +## Instrumenting Kubernetes with a new metric

-We use the Prometheus monitoring system's golang client library for instrumenting our code. Once you've picked out a file that you want to add a metric to, you should: +The following is a step-by-step guide for adding a new metric to the Kubernetes
+code base.
+
+We use the Prometheus monitoring system's golang client library for
+instrumenting our code. Once you've picked out a file that you want to add a
+metric to, you should:

1. Import "github.com/prometheus/client_golang/prometheus". 2. Create a top-level var to define the metric. For this, you have to: - 1.
Pick the type of metric. Use a Gauge for things you want to set to a particular value, a Counter for things you want to increment, or a Histogram or Summary for histograms/distributions of values (typically for latency). Histograms are better if you're going to aggregate the values across jobs, while summaries are better if you just want the job to give you a useful summary of the values. + + 1. Pick the type of metric. Use a Gauge for things you want to set to a +particular value, a Counter for things you want to increment, or a Histogram or +Summary for histograms/distributions of values (typically for latency). +Histograms are better if you're going to aggregate the values across jobs, while +summaries are better if you just want the job to give you a useful summary of +the values. 2. Give the metric a name and description. - 3. Pick whether you want to distinguish different categories of things using labels on the metric. If so, add "Vec" to the name of the type of metric you want and add a slice of the label names to the definition. + 3. Pick whether you want to distinguish different categories of things using +labels on the metric. If so, add "Vec" to the name of the type of metric you +want and add a slice of the label names to the definition. https://github.com/kubernetes/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/apiserver/apiserver.go#L53 https://github.com/kubernetes/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/kubelet/metrics/metrics.go#L31 @@ -53,13 +64,17 @@ We use the Prometheus monitoring system's golang client library for instrumentin https://github.com/kubernetes/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/kubelet/metrics/metrics.go#L74 https://github.com/kubernetes/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/apiserver/apiserver.go#L78 -4. 
Use the metric by calling the appropriate method for your metric type (Set, Inc/Add, or Observe, respectively for Gauge, Counter, or Histogram/Summary), first calling WithLabelValues if your metric has any labels +4. Use the metric by calling the appropriate method for your metric type (Set, +Inc/Add, or Observe, respectively for Gauge, Counter, or Histogram/Summary), +first calling WithLabelValues if your metric has any labels https://github.com/kubernetes/kubernetes/blob/3ce7fe8310ff081dbbd3d95490193e1d5250d2c9/pkg/kubelet/kubelet.go#L1384 https://github.com/kubernetes/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/apiserver/apiserver.go#L87 -These are the metric type definitions if you're curious to learn about them or need more information: +These are the metric type definitions if you're curious to learn about them or +need more information: + https://github.com/prometheus/client_golang/blob/master/prometheus/gauge.go https://github.com/prometheus/client_golang/blob/master/prometheus/counter.go https://github.com/prometheus/client_golang/blob/master/prometheus/histogram.go -- cgit v1.2.3 From 9298db79ff05521237ad53182a170322bfc91fe6 Mon Sep 17 00:00:00 2001 From: joe2far Date: Fri, 15 Jul 2016 10:44:58 +0100 Subject: Fix broken warning image link in docs --- README.md | 10 +++++----- adding-an-APIGroup.md | 10 +++++----- api-conventions.md | 10 +++++----- api_changes.md | 10 +++++----- automation.md | 10 +++++----- cherry-picks.md | 10 +++++----- cli-roadmap.md | 10 +++++----- client-libraries.md | 10 +++++----- coding-conventions.md | 10 +++++----- collab.md | 10 +++++----- developer-guides/vagrant.md | 10 +++++----- development.md | 10 +++++----- e2e-node-tests.md | 10 +++++----- e2e-tests.md | 10 +++++----- faster_reviews.md | 10 +++++----- flaky-tests.md | 10 +++++----- generating-clientset.md | 10 +++++----- getting-builds.md | 10 +++++----- go-code.md | 10 +++++----- how-to-doc.md | 10 +++++----- instrumentation.md | 10 +++++----- issues.md 
| 10 +++++----- kubectl-conventions.md | 10 +++++----- kubemark-guide.md | 10 +++++----- logging.md | 10 +++++----- making-release-notes.md | 10 +++++----- mesos-style.md | 10 +++++----- node-performance-testing.md | 10 +++++----- on-call-build-cop.md | 10 +++++----- on-call-rotations.md | 10 +++++----- on-call-user-support.md | 10 +++++----- owners.md | 10 +++++----- profiling.md | 10 +++++----- pull-requests.md | 10 +++++----- releasing.md | 10 +++++----- running-locally.md | 10 +++++----- scheduler.md | 10 +++++----- scheduler_algorithm.md | 10 +++++----- testing.md | 10 +++++----- update-release-docs.md | 10 +++++----- updating-docs-for-feature-changes.md | 10 +++++----- writing-a-getting-started-guide.md | 10 +++++----- writing-good-e2e-tests.md | 10 +++++----- 43 files changed, 215 insertions(+), 215 deletions(-) diff --git a/README.md b/README.md index 6051933d..377f957a 100644 --- a/README.md +++ b/README.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/adding-an-APIGroup.md b/adding-an-APIGroup.md index f05009dd..cac04449 100644 --- a/adding-an-APIGroup.md +++ b/adding-an-APIGroup.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/api-conventions.md b/api-conventions.md index a940dab7..8247c726 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/api_changes.md b/api_changes.md index 57787c72..237b71a6 100644 --- a/api_changes.md +++ b/api_changes.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/automation.md b/automation.md index 365bcdd9..c41a1e64 100644 --- a/automation.md +++ b/automation.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/cherry-picks.md b/cherry-picks.md index f923e42f..93bef70c 100644 --- a/cherry-picks.md +++ b/cherry-picks.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/cli-roadmap.md b/cli-roadmap.md index 9d0f9754..015f20f0 100644 --- a/cli-roadmap.md +++ b/cli-roadmap.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/client-libraries.md b/client-libraries.md index 0d859541..7292777c 100644 --- a/client-libraries.md +++ b/client-libraries.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/coding-conventions.md b/coding-conventions.md index dc2825b0..b551c032 100644 --- a/coding-conventions.md +++ b/coding-conventions.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/collab.md b/collab.md index 782997d7..002e9cc5 100644 --- a/collab.md +++ b/collab.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md index ff6b98f2..1c5b10c4 100644 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/development.md b/development.md index 571028c2..594dfcdf 100644 --- a/development.md +++ b/development.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/e2e-node-tests.md b/e2e-node-tests.md index f4713855..a922d627 100644 --- a/e2e-node-tests.md +++ b/e2e-node-tests.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/e2e-tests.md b/e2e-tests.md index 50356385..fec617f0 100644 --- a/e2e-tests.md +++ b/e2e-tests.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/faster_reviews.md b/faster_reviews.md index 2d408a81..984eecde 100644 --- a/faster_reviews.md +++ b/faster_reviews.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/flaky-tests.md b/flaky-tests.md index c2db9ae2..1f742a46 100644 --- a/flaky-tests.md +++ b/flaky-tests.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/generating-clientset.md b/generating-clientset.md index 90c8758a..4fd3044c 100644 --- a/generating-clientset.md +++ b/generating-clientset.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/getting-builds.md b/getting-builds.md index b61ceddd..b1ae845f 100644 --- a/getting-builds.md +++ b/getting-builds.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/go-code.md b/go-code.md index c9e90751..01b9c45c 100644 --- a/go-code.md +++ b/go-code.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/how-to-doc.md b/how-to-doc.md index 6ec896f4..e0659339 100644 --- a/how-to-doc.md +++ b/how-to-doc.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/instrumentation.md b/instrumentation.md index 4961e0bb..a9e85691 100644 --- a/instrumentation.md +++ b/instrumentation.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/issues.md b/issues.md index 54acf508..0cf4730d 100644 --- a/issues.md +++ b/issues.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/kubectl-conventions.md b/kubectl-conventions.md index 40cb7e59..8705d285 100644 --- a/kubectl-conventions.md +++ b/kubectl-conventions.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/kubemark-guide.md b/kubemark-guide.md index 8c736ec3..f9c2dc0b 100644 --- a/kubemark-guide.md +++ b/kubemark-guide.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/logging.md b/logging.md index a941d309..523a4ccf 100644 --- a/logging.md +++ b/logging.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/making-release-notes.md b/making-release-notes.md index 2caee937..4a1a0693 100644 --- a/making-release-notes.md +++ b/making-release-notes.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/mesos-style.md b/mesos-style.md index a2fa1959..f614fea8 100644 --- a/mesos-style.md +++ b/mesos-style.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/node-performance-testing.md b/node-performance-testing.md index 04e7c06d..58dcfaee 100644 --- a/node-performance-testing.md +++ b/node-performance-testing.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/on-call-build-cop.md b/on-call-build-cop.md index b7609cbc..f6479c8e 100644 --- a/on-call-build-cop.md +++ b/on-call-build-cop.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/on-call-rotations.md b/on-call-rotations.md index 0fb2cd9f..649a8853 100644 --- a/on-call-rotations.md +++ b/on-call-rotations.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/on-call-user-support.md b/on-call-user-support.md index ca5a6d76..54c1a5c8 100644 --- a/on-call-user-support.md +++ b/on-call-user-support.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/owners.md b/owners.md index 1b1c7643..3b61766d 100644 --- a/owners.md +++ b/owners.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/profiling.md b/profiling.md index dd1c3428..e61fc0f6 100644 --- a/profiling.md +++ b/profiling.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/pull-requests.md b/pull-requests.md index 40705971..24524d90 100644 --- a/pull-requests.md +++ b/pull-requests.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/releasing.md b/releasing.md index 04045cc8..c195ee8e 100644 --- a/releasing.md +++ b/releasing.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/running-locally.md b/running-locally.md index 0e56456e..2b92bb32 100644 --- a/running-locally.md +++ b/running-locally.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/scheduler.md b/scheduler.md index f8359f73..302ec144 100755 --- a/scheduler.md +++ b/scheduler.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/scheduler_algorithm.md b/scheduler_algorithm.md index a84f19bc..2aaa84df 100755 --- a/scheduler_algorithm.md +++ b/scheduler_algorithm.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/testing.md b/testing.md index 3d7fb452..4995d689 100644 --- a/testing.md +++ b/testing.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/update-release-docs.md b/update-release-docs.md index 82140407..0fed8f22 100644 --- a/update-release-docs.md +++ b/update-release-docs.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/updating-docs-for-feature-changes.md b/updating-docs-for-feature-changes.md index 295aa5df..5975e428 100644 --- a/updating-docs-for-feature-changes.md +++ b/updating-docs-for-feature-changes.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/writing-a-getting-started-guide.md b/writing-a-getting-started-guide.md index 390c717b..05b3d0c2 100644 --- a/writing-a-getting-started-guide.md +++ b/writing-a-getting-started-guide.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

diff --git a/writing-good-e2e-tests.md b/writing-good-e2e-tests.md index 2e910438..70abfe1c 100644 --- a/writing-good-e2e-tests.md +++ b/writing-good-e2e-tests.md @@ -2,15 +2,15 @@ -WARNING -WARNING -WARNING -WARNING -WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

-- cgit v1.2.3 From bfcd7225a5b8ea9eb717c416089d4538e2802ba9 Mon Sep 17 00:00:00 2001 From: Buddha Prakash Date: Mon, 27 Jun 2016 11:46:20 -0700 Subject: Inject top level QoS cgroup creation in the Kubelet --- e2e-node-tests.md | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/e2e-node-tests.md b/e2e-node-tests.md index f4713855..6ba390ed 100644 --- a/e2e-node-tests.md +++ b/e2e-node-tests.md @@ -205,6 +205,14 @@ less useful for catching flakes related creating the instance from an image.** make test-e2e-node REMOTE=true RUN_UNTIL_FAILURE=true ``` +## Additional QoS Cgroups Hierarchy level testing + +For testing with the QoS Cgroup Hierarchy enabled, you can pass --cgroups-per-qos flag as an argument into Ginkgo using TEST_ARGS + +```sh +make test_e2e_node TEST_ARGS="--cgroups-per-qos=true" +``` + # Notes on tests run by the Kubernetes project during pre-, post- submit. The node e2e tests are run by the PR builder for each Pull Request and the results published at -- cgit v1.2.3 From 66a39c4518ad6b1e3390c39fcaedd112179cf6db Mon Sep 17 00:00:00 2001 From: Antoine Pelisse Date: Fri, 15 Jul 2016 10:35:48 -0700 Subject: Document auto-close on stale pull-request --- automation.md | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/automation.md b/automation.md index 6ba74fd0..8a909848 100644 --- a/automation.md +++ b/automation.md @@ -108,6 +108,18 @@ with numerous other functions. See the README in the link above. Please feel free to unleash your creativity on this tool, send us new mungers that you think will help support the Kubernetes development process. +### Closing stale pull-requests + +Github Munger will close pull-requests that don't have human activity in the +last 90 days. It will warn about this process 60 days before closing the +pull-request, and warn again 30 days later. One way to prevent this from +happening is to add the "keep-open" label on the pull-request. 
+
+Feel free to re-open and maybe add the "keep-open" label if this happens to a
+valid pull-request. It may also be a good opportunity to get more attention by
+verifying that it is properly assigned and/or mentioning people who might be
+interested.
+

## PR builder

We also run a robotic PR builder that attempts to run tests for each PR. -- cgit v1.2.3 From c0341e2bfd11e5f9cbca839d5a9d29c6549fbfdb Mon Sep 17 00:00:00 2001 From: lixiaobing10051267 Date: Mon, 18 Jul 2016 16:21:33 +0800 Subject: File "cluster/kube-env.sh" not exist --- developer-guides/vagrant.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) mode change 100644 => 100755 developer-guides/vagrant.md diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md old mode 100644 new mode 100755 index ff6b98f2..2fe8ed93 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -278,7 +278,7 @@ Congratulations!

### Testing

The following will run all of the end-to-end testing scenarios assuming you set -your environment in `cluster/kube-env.sh`: +your environment:

```shell
NUM_NODES=3 go run hack/e2e.go -v --build --up --test --down -- cgit v1.2.3 From 6cb74d15583c0afaddf9df12ac00d8119f251bd6 Mon Sep 17 00:00:00 2001 From: lixiaobing10051267 Date: Mon, 18 Jul 2016 17:24:55 +0800 Subject: The directory of file "api_changes.md" has been changed, need to modify --- api_changes.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) mode change 100644 => 100755 api_changes.md diff --git a/api_changes.md b/api_changes.md old mode 100644 new mode 100755 index 57787c72..20ee745d --- a/api_changes.md +++ b/api_changes.md @@ -399,7 +399,7 @@ have to do more later. The files you want are

Note that the conversion machinery doesn't generically handle conversion of values, such as various kinds of field references and API constants.
[The client -library](../../pkg/client/unversioned/request.go) has custom conversion code for +library](../../pkg/client/restclient/request.go) has custom conversion code for field references. You also need to add a call to api.Scheme.AddFieldLabelConversionFunc with a mapping function that understands supported translations. -- cgit v1.2.3 From 577e973603c0706a190ced493762cdd75af1bdff Mon Sep 17 00:00:00 2001 From: lixiaobing10051267 Date: Mon, 18 Jul 2016 21:46:08 +0800 Subject: Both the file name and directory of fake docker manager are wrong --- kubemark-guide.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) mode change 100644 => 100755 kubemark-guide.md diff --git a/kubemark-guide.md b/kubemark-guide.md old mode 100644 new mode 100755 index 8c736ec3..61e728a4 --- a/kubemark-guide.md +++ b/kubemark-guide.md @@ -229,7 +229,7 @@ other providers you’ll need to delete all this stuff by yourself. Kubemark master uses exactly the same binaries as ordinary Kubernetes does. This means that it will never be out of date. On the other hand HollowNodes use existing fake for Kubelet (called SimpleKubelet), which mocks its runtime -manager with `pkg/kubelet/fake-docker-manager.go`, where most logic sits. +manager with `pkg/kubelet/dockertools/fake_manager.go`, where most logic sits. Because there’s no easy way of mocking other managers (e.g. VolumeManager), they are not supported in Kubemark (e.g. we can’t schedule Pods with volumes in them yet). 
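For readers who want to try Kubemark end to end, the flow described above is typically driven by wrapper scripts in the main repository. The script names, paths, and environment variables below are assumptions based on the tree layout at the time of writing, not a verbatim excerpt — check `test/kubemark/` in your checkout:

```shell
# Illustrative Kubemark session (assumed script names under test/kubemark/):
# start a hollow-node cluster, run the e2e suite against it, tear it down.
NUM_NODES=100 ./test/kubemark/start-kubemark.sh   # size of the simulated cluster
./test/kubemark/run-e2e-tests.sh --ginkgo.focus="\[Feature:Performance\]"
./test/kubemark/stop-kubemark.sh                  # delete master and hollow nodes
```

Because HollowNodes mock the runtime manager, performance-focused suites are the usual target; volume-dependent tests will not work, as noted above.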
-- cgit v1.2.3 From 81d7d53564d56e07b62ba260ee3df03ce76a5817 Mon Sep 17 00:00:00 2001 From: lixiaobing10051267 Date: Tue, 19 Jul 2016 11:08:25 +0800 Subject: Modify "server.go" directory from master to kubelet --- profiling.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/profiling.md b/profiling.md index dd1c3428..fc61b300 100644 --- a/profiling.md +++ b/profiling.md @@ -52,7 +52,7 @@ m.mux.HandleFunc("/debug/pprof/symbol", pprof.Symbol) to the init(c *Config) method in 'pkg/master/master.go' and import 'net/http/pprof' package. -In most use cases to use profiler service it's enough to do 'import _ net/http/pprof', which automatically registers a handler in the default http.Server. Slight inconvenience is that APIserver uses default server for intra-cluster communication, so plugging profiler to it is not really useful. In 'pkg/master/server/server.go' more servers are created and started as separate goroutines. The one that is usually serving external traffic is secureServer. The handler for this traffic is defined in 'pkg/master/master.go' and stored in Handler variable. It is created from HTTP multiplexer, so the only thing that needs to be done is adding profiler handler functions to this multiplexer. This is exactly what lines after TL;DR do. +In most use cases to use profiler service it's enough to do 'import _ net/http/pprof', which automatically registers a handler in the default http.Server. Slight inconvenience is that APIserver uses default server for intra-cluster communication, so plugging profiler to it is not really useful. In 'pkg/kubelet/server/server.go' more servers are created and started as separate goroutines. The one that is usually serving external traffic is secureServer. The handler for this traffic is defined in 'pkg/master/master.go' and stored in Handler variable. It is created from HTTP multiplexer, so the only thing that needs to be done is adding profiler handler functions to this multiplexer. 
This is exactly what the lines after the TL;DR do. ## Connecting to the profiler -- cgit v1.2.3 From faceee8b3a50bd270d33560b864c341ff63989f1 Mon Sep 17 00:00:00 2001 From: Random-Liu Date: Tue, 19 Jul 2016 02:13:10 -0700 Subject: Add document for node e2e --disable-kubenet flag. --- e2e-node-tests.md | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+) diff --git a/e2e-node-tests.md b/e2e-node-tests.md index 6ba390ed..a96681b7 100644 --- a/e2e-node-tests.md +++ b/e2e-node-tests.md @@ -205,6 +205,26 @@ less useful for catching flakes related to creating the instance from an image.** make test-e2e-node REMOTE=true RUN_UNTIL_FAILURE=true ``` +## Run tests with kubenet network plugin + +[kubenet](http://kubernetes.io/docs/admin/network-plugins/#kubenet) is +the default network plugin used by kubelet since Kubernetes 1.3. The +plugin requires [CNI](https://github.com/containernetworking/cni) and +[nsenter](http://man7.org/linux/man-pages/man1/nsenter.1.html). + +Currently, kubenet is enabled by default for Remote execution (`REMOTE=true`), +but disabled for Local execution. **Note: kubenet is not currently supported for +local execution. This may cause network-related test results to differ between +Local and Remote execution, so if you want to run network-related tests, +Remote execution is recommended.** + +To enable or disable kubenet: + +```sh +make test-e2e-node TEST_ARGS="--disable-kubenet=false" # enable kubenet +make test-e2e-node TEST_ARGS="--disable-kubenet=true" # disable kubenet +``` + ## Additional QoS Cgroups Hierarchy level testing For testing with the QoS Cgroup Hierarchy enabled, you can pass the `--cgroups-per-qos` flag to Ginkgo using `TEST_ARGS`: -- cgit v1.2.3 From fbc5fb01fccbfc243995149832cacf933377513f Mon Sep 17 00:00:00 2001 From: Antoine Pelisse Date: Tue, 19 Jul 2016 15:40:53 -0700 Subject: Mention that comments keep pull-requests open A comment in a pull-request will keep it open for another 90 days. 
Let's mention that in the documentation for people who can't add labels. --- automation.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/automation.md b/automation.md index a1f0144f..4ef1479e 100644 --- a/automation.md +++ b/automation.md @@ -118,7 +118,8 @@ happening is to add the "keep-open" label on the pull-request. Feel free to re-open and maybe add the "keep-open" label if this happens to a valid pull-request. It may also be a good opportunity to get more attention by verifying that it is properly assigned and/or mention people that might be -interested. +interested. Commenting on the pull-request will also keep it open for another 90 +days. ## PR builder -- cgit v1.2.3 From 72da38c3f06a43e0e291b5cf3322823118905563 Mon Sep 17 00:00:00 2001 From: lixiaobing10051267 Date: Fri, 22 Jul 2016 14:32:58 +0800 Subject: Modify the provider name in e2e-tests.md --- e2e-tests.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/e2e-tests.md b/e2e-tests.md index fec617f0..f5dc3963 100644 --- a/e2e-tests.md +++ b/e2e-tests.md @@ -296,7 +296,7 @@ Next, specify the docker repository where your ci images will be pushed. * `${FEDERATION_PUSH_REPO_BASE}/federation-controller-manager` These repositories must allow public read access, as the e2e node docker daemons will not have any credentials. If you're using - gce/gke as your provider, the repositories will have read-access by default. + GCE/GKE as your provider, the repositories will have read-access by default. 
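If your CI images live on GCR, one way to satisfy the public-read requirement described above is to open up the backing GCS bucket. The bucket naming convention and commands below are assumptions — verify them against your registry provider's documentation before running:

```shell
# Hypothetical: GCR images for a project are stored in a GCS bucket named
# artifacts.<project-id>.appspot.com. Granting AllUsers read access there
# lets uncredentialed docker daemons pull the images.
PROJECT=my-project   # placeholder project ID
gsutil defacl ch -u AllUsers:R "gs://artifacts.${PROJECT}.appspot.com"
gsutil -m acl ch -r -u AllUsers:R "gs://artifacts.${PROJECT}.appspot.com"
```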
#### Build -- cgit v1.2.3 From 4783107ecf1bd66ba0674913fc7aaa7e75cab1ee Mon Sep 17 00:00:00 2001 From: Davanum Srinivas Date: Thu, 14 Jul 2016 07:48:32 -0400 Subject: Extend all to more resources Added more things from the list here: https://github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/cmd/cmd.go#L159 Update the devel/kubectl-conventions.md with the rules mentioned by a few folks on which resources could be added to the special 'all' alias Related to a suggestion in issue #22337 --- kubectl-conventions.md | 16 ++++++++++++++++ 1 file changed, 16 insertions(+) diff --git a/kubectl-conventions.md b/kubectl-conventions.md index 8705d285..22593025 100644 --- a/kubectl-conventions.md +++ b/kubectl-conventions.md @@ -43,6 +43,7 @@ Updated: 8/27/2015 - [Principles](#principles) - [Command conventions](#command-conventions) - [Create commands](#create-commands) + - [Rules for extending special resource alias - "all"](#rules-for-extending-special-resource-alias---all) - [Flag conventions](#flag-conventions) - [Output conventions](#output-conventions) - [Documentation conventions](#documentation-conventions) @@ -118,6 +119,21 @@ creating tls secrets. You create these as separate commands to get distinct flags and separate help that is tailored for the particular usage. +### Rules for extending special resource alias - "all" + +Here are the rules to add a new resource to the `kubectl get all` output. 
+ +* No cluster scoped resources + +* No namespace admin level resources (limits, quota, policy, authorization +rules) + +* No resources that are potentially unrecoverable (secrets and pvc) + +* Resources that are considered "similar" to #3 should be grouped +the same (configmaps) + + ## Flag conventions * Flags are all lowercase, with words separated by hyphens -- cgit v1.2.3 From 79328b29c6294ca3e12ce6163f125728a62b4ad9 Mon Sep 17 00:00:00 2001 From: Jess Frazelle Date: Tue, 19 Jul 2016 10:35:13 -0400 Subject: Update the devel docs with where and how to change the go version being used to build and test k8s. Signed-off-by: Jess Frazelle --- development.md | 17 +++++++++++++++++ go-code.md | 2 ++ 2 files changed, 19 insertions(+) diff --git a/development.md b/development.md index ac2b3bb3..262ed228 100644 --- a/development.md +++ b/development.md @@ -88,6 +88,21 @@ To build binaries for all platforms: make cross ``` +### How to update the Go version used to test & build k8s + +The kubernetes project tries to stay on the latest version of Go so it can +benefit from the improvements to the language over time and can easily +bump to a minor release version for security updates. + +Since kubernetes is mostly built and tested in containers, there are a few +unique places you need to update the go version. + +- The image for cross compiling in [build/build-image/cross/](../../build/build-image/cross/). The `VERSION` file and `Dockerfile`. +- The jenkins test-image in + [hack/jenkins/test-image/](../../hack/jenkins/test-image/). The `Dockerfile` and `Makefile`. +- The docker image being run in [hack/jenkins/dockerized-e2e-runner.sh](../../hack/jenkins/dockerized-e2e-runner.sh) and [hack/jenkins/gotest-dockerized.sh](../../hack/jenkins/gotest-dockerized.sh). +- The cross tag `KUBE_BUILD_IMAGE_CROSS_TAG` in [build/common.sh](../../build/common.sh) + ## Workflow Below, we outline one of the more common git workflows that core developers use. 
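When bumping Go, it can help to confirm every pin listed above in one pass. The commands below are only an illustrative audit, run from the repository root; paths drift between releases:

```shell
# Show each place the Go version is recorded so they can be updated together.
cat build/build-image/cross/VERSION                  # cross-compile image tag
grep -n KUBE_BUILD_IMAGE_CROSS_TAG build/common.sh   # tag consumed by the build
grep -rn "golang:" build/build-image/ hack/jenkins/  # base-image references
```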
@@ -339,6 +354,8 @@ hack/update-generated-docs.sh ``` + + [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/development.md?pixel)]() diff --git a/go-code.md b/go-code.md index 01b9c45c..e6416bed 100644 --- a/go-code.md +++ b/go-code.md @@ -36,6 +36,8 @@ and tips that our developers have found useful. ## Recommended Reading - [Kubernetes Go development environment](development.md#go-development-environment) +- [The Go Spec](https://golang.org/ref/spec) - The Go Programming Language + Specification. - [Go Tour](https://tour.golang.org/welcome/2) - Official Go tutorial. - [Effective Go](https://golang.org/doc/effective_go.html) - A good collection of Go advice. - [Kubernetes Code conventions](coding-conventions.md) - Style guide for Kubernetes code. -- cgit v1.2.3 From c4cd7ae22101ac96a6b66c368ed7c9c4c6021cb6 Mon Sep 17 00:00:00 2001 From: Vishnu kannan Date: Fri, 22 Jul 2016 17:24:21 -0700 Subject: Make it possible to run node e2e with GCI via make Signed-off-by: Vishnu kannan --- e2e-node-tests.md | 1 + 1 file changed, 1 insertion(+) diff --git a/e2e-node-tests.md b/e2e-node-tests.md index 04b82799..b300cddb 100644 --- a/e2e-node-tests.md +++ b/e2e-node-tests.md @@ -51,6 +51,7 @@ Why run tests *Locally*? Much faster than running tests Remotely. 
Prerequisites: - [Install etcd](https://github.com/coreos/etcd/releases) on your PATH - Verify etcd is installed correctly by running `which etcd` + - Or make the etcd binary available and executable at `/tmp/etcd` - [Install ginkgo](https://github.com/onsi/ginkgo) on your PATH - Verify ginkgo is installed correctly by running `which ginkgo` -- cgit v1.2.3 From 0b5ff92dd20cf18182dd968883934990b7d830d8 Mon Sep 17 00:00:00 2001 From: bradley childs Date: Mon, 25 Jul 2016 20:56:24 -0400 Subject: Update pull-requests.md fix typo Fix the make arg for `make test-integration` --- pull-requests.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pull-requests.md b/pull-requests.md index 24524d90..7bc4d967 100644 --- a/pull-requests.md +++ b/pull-requests.md @@ -71,7 +71,7 @@ The following will save time for both you and your reviewer: * Enable [pre-commit hooks](development.md#committing-changes-to-your-fork) and verify they pass. * Verify `make verify` passes. * Verify `make test` passes. -* Verify `make test-integration.sh` passes. +* Verify `make test-integration` passes. 
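Taken together, the reviewer-time checklist above amounts to one guard command before pushing a branch (assuming a configured Kubernetes working tree):

```shell
# Run the local pre-PR gauntlet, stopping at the first failure.
make verify && make test && make test-integration
```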
## Release Notes -- cgit v1.2.3 From 7d43d8a4fced1514703eccb79d3387bc3d36f3c7 Mon Sep 17 00:00:00 2001 From: derekwaynecarr Date: Fri, 22 Jul 2016 15:23:34 -0400 Subject: Scheduler does not place pods on nodes that have disk pressure --- scheduler_algorithm.md | 1 + 1 file changed, 1 insertion(+) diff --git a/scheduler_algorithm.md b/scheduler_algorithm.md index 2aaa84df..ab9be4a8 100755 --- a/scheduler_algorithm.md +++ b/scheduler_algorithm.md @@ -49,6 +49,7 @@ The purpose of filtering the nodes is to filter out the nodes that do not meet c - `MaxEBSVolumeCount`: Ensure that the number of attached ElasticBlockStore volumes does not exceed a maximum value (by default, 39, since Amazon recommends a maximum of 40 with one of those 40 reserved for the root volume -- see [Amazon's documentation](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/volume_limits.html#linux-specific-volume-limits)). The maximum value can be controlled by setting the `KUBE_MAX_PD_VOLS` environment variable. - `MaxGCEPDVolumeCount`: Ensure that the number of attached GCE PersistentDisk volumes does not exceed a maximum value (by default, 16, which is the maximum GCE allows -- see [GCE's documentation](https://cloud.google.com/compute/docs/disks/persistent-disks#limits_for_predefined_machine_types)). The maximum value can be controlled by setting the `KUBE_MAX_PD_VOLS` environment variable. - `CheckNodeMemoryPressure`: Check if a pod can be scheduled on a node reporting memory pressure condition. Currently, no ``BestEffort`` should be placed on a node under memory pressure as it gets automatically evicted by kubelet. +- `CheckNodeDiskPressure`: Check if a pod can be scheduled on a node reporting disk pressure condition. Currently, no pods should be placed on a node under disk pressure as it gets automatically evicted by kubelet. 
The details of the above predicates can be found in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/predicates/predicates.go). All predicates mentioned above can be used in combination to perform a sophisticated filtering policy. Kubernetes uses some, but not all, of these predicates by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). -- cgit v1.2.3 From d044345431a53d2237998d0d70bbc4ae5a209747 Mon Sep 17 00:00:00 2001 From: Random-Liu Date: Mon, 18 Jul 2016 00:52:39 -0700 Subject: Make the node e2e test run in parallel. --- e2e-node-tests.md | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/e2e-node-tests.md b/e2e-node-tests.md index 04b82799..54b0ac9e 100644 --- a/e2e-node-tests.md +++ b/e2e-node-tests.md @@ -205,6 +205,18 @@ less useful for catching flakes related to creating the instance from an image.** make test-e2e-node REMOTE=true RUN_UNTIL_FAILURE=true ``` +## Run tests in parallel + +Running tests in parallel can usually shorten the test duration. By default the +node e2e test runs with `--nodes=8` (see the Ginkgo flag +[--nodes](https://onsi.github.io/ginkgo/#parallel-specs)). You can use the +`PARALLELISM` option to change the parallelism. 
+ +```sh +make test-e2e-node PARALLELISM=4 # run test with 4 parallel nodes +make test-e2e-node PARALLELISM=1 # run test sequentially +``` + ## Run tests with kubenet network plugin [kubenet](http://kubernetes.io/docs/admin/network-plugins/#kubenet) is -- cgit v1.2.3 From 70f59e3d966f73240973e456acfb2bc35fc7c51a Mon Sep 17 00:00:00 2001 From: Jing Xu Date: Mon, 1 Aug 2016 10:04:48 -0700 Subject: Add instructions for running version-skewed tests Add instructions for running version-skewed tests --- e2e-tests.md | 65 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 65 insertions(+) diff --git a/e2e-tests.md b/e2e-tests.md index f5dc3963..da0b2b3f 100644 --- a/e2e-tests.md +++ b/e2e-tests.md @@ -56,6 +56,7 @@ Updated: 5/3/2016 - [Debugging clusters](#debugging-clusters) - [Local clusters](#local-clusters) - [Testing against local clusters](#testing-against-local-clusters) + - [Version-skewed and upgrade testing](#version-skewed-and-upgrade-testing) - [Kinds of tests](#kinds-of-tests) - [Conformance tests](#conformance-tests) - [Defining Conformance Subset](#defining-conformance-subset) @@ -406,6 +407,70 @@ To control the tests that are run: go run hack/e2e.go -v --test --check_node_count=false --test_args="--host=http://127.0.0.1:8080" --ginkgo.focus="Secrets" ``` +### Version-skewed and upgrade testing + +We run version-skewed tests to check that newer versions of Kubernetes work +similarly enough to older versions. The general strategy is to cover the following cases: + +1. One version of `kubectl` with another version of the cluster and tests (e.g. + that v1.2 and v1.4 `kubectl` doesn't break v1.3 tests running against a v1.3 + cluster). +1. A newer version of the Kubernetes master with older nodes and tests (e.g. + that upgrading a master to v1.3 with nodes at v1.2 still passes v1.2 tests). +1. A newer version of the whole cluster with older tests (e.g. that a cluster + upgraded---master and nodes---to v1.3 still passes v1.2 tests). +1. 
That an upgraded cluster functions the same as a brand-new cluster of the + same version (e.g. a cluster upgraded to v1.3 passes the same v1.3 tests as + a newly-created v1.3 cluster). + +[hack/e2e-runner.sh](http://releases.k8s.io/HEAD/hack/jenkins/e2e-runner.sh) is +the authoritative source on how to run version-skewed tests, but below is a +quick-and-dirty tutorial. + +```sh +# Assume you have two copies of the Kubernetes repository checked out, at +# ./kubernetes and ./kubernetes_old + +# If using GKE: +export KUBERNETES_PROVIDER=gke +export CLUSTER_API_VERSION=${OLD_VERSION} + +# Deploy a cluster at the old version; see above for more details +cd ./kubernetes_old +go run ./hack/e2e.go -v --up + +# Upgrade the cluster to the new version +# +# If using GKE, add --upgrade-target=${NEW_VERSION} +# +# You can target Feature:MasterUpgrade or Feature:ClusterUpgrade +cd ../kubernetes +go run ./hack/e2e.go -v --test --check_version_skew=false --test_args="--ginkgo.focus=\[Feature:MasterUpgrade\]" + +# Run old tests with new kubectl +cd ../kubernetes_old +go run ./hack/e2e.go -v --test --test_args="--kubectl-path=$(pwd)/../kubernetes/cluster/kubectl.sh" +``` + +If you are just testing version-skew, you may want to just deploy at one +version and then test at another version, instead of going through the whole +upgrade process: + +```sh +# With the same setup as above + +# Deploy a cluster at the new version +cd ./kubernetes +go run ./hack/e2e.go -v --up + +# Run new tests with old kubectl +go run ./hack/e2e.go -v --test --test_args="--kubectl-path=$(pwd)/../kubernetes_old/cluster/kubectl.sh" + +# Run old tests with new kubectl +cd ../kubernetes_old +go run ./hack/e2e.go -v --test --test_args="--kubectl-path=$(pwd)/../kubernetes/cluster/kubectl.sh" +``` + ## Kinds of tests We are working on implementing clearer partitioning of our e2e tests to make -- cgit v1.2.3 From 35f1a5d54ce7ed5a2654aad129aecaf4bf3c1e10 Mon Sep 17 00:00:00 2001 From: Daniel Smith Date: Mon, 1 Aug 
2016 21:51:57 -0700 Subject: Revert "Extend all to more resources" --- kubectl-conventions.md | 16 ---------------- 1 file changed, 16 deletions(-) diff --git a/kubectl-conventions.md b/kubectl-conventions.md index 22593025..8705d285 100644 --- a/kubectl-conventions.md +++ b/kubectl-conventions.md @@ -43,7 +43,6 @@ Updated: 8/27/2015 - [Principles](#principles) - [Command conventions](#command-conventions) - [Create commands](#create-commands) - - [Rules for extending special resource alias - "all"](#rules-for-extending-special-resource-alias---all) - [Flag conventions](#flag-conventions) - [Output conventions](#output-conventions) - [Documentation conventions](#documentation-conventions) @@ -119,21 +118,6 @@ creating tls secrets. You create these as separate commands to get distinct flags and separate help that is tailored for the particular usage. -### Rules for extending special resource alias - "all" - -Here are the rules to add a new resource to the `kubectl get all` output. - -* No cluster scoped resources - -* No namespace admin level resources (limits, quota, policy, authorization -rules) - -* No resources that are potentially unrecoverable (secrets and pvc) - -* Resources that are considered "similar" to #3 should be grouped -the same (configmaps) - - ## Flag conventions * Flags are all lowercase, with words separated by hyphens -- cgit v1.2.3 From 2544caac5562ac645026e0a74d77786354083d9c Mon Sep 17 00:00:00 2001 From: Hongchao Deng Date: Tue, 2 Aug 2016 15:13:47 -0700 Subject: automation.md: fix typos --- automation.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/automation.md b/automation.md index 8900dbcc..c4880362 100644 --- a/automation.md +++ b/automation.md @@ -46,7 +46,7 @@ processes. 
In an effort to * reduce load on core developers * maintain e2e stability - * load test githubs label feature + * load test github's label feature We have added an automated [submit-queue] (https://github.com/kubernetes/contrib/blob/master/mungegithub/mungers/submit-queue.go) @@ -99,7 +99,7 @@ green when this PR finishes retesting. ## Github Munger -We run a [github "munger"](https://github.com/kubernetes/contrib/tree/master/mungegithub). +We run [github "mungers"](https://github.com/kubernetes/contrib/tree/master/mungegithub). This runs repeatedly over github pulls and issues and runs modular "mungers" similar to "mungedocs." The mungers include the 'submit-queue' referenced above along -- cgit v1.2.3 From 7f6f947c4bf961b4d54daf447638530fe8aa90a2 Mon Sep 17 00:00:00 2001 From: Chao Xu Date: Mon, 25 Jul 2016 22:03:39 -0700 Subject: add validateListType to pkg/api/meta/schema_test.go --- api-conventions.md | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/api-conventions.md b/api-conventions.md index 8247c726..5bc731be 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -134,8 +134,9 @@ specific actions that create, update, delete, or get. 2. **Lists** are collections of **resources** of one (usually) or more (occasionally) kinds. - Lists have a limited set of common metadata. All lists use the "items" field -to contain the array of objects they return. + The name of a list kind must end with "List". Lists have a limited set of +common metadata. All lists use the required "items" field to contain the array +of objects they return. Any kind that has the "items" field must be a list kind. 
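To make the list convention above concrete, here is a minimal hand-written list object — an illustrative sketch, not captured API server output:

```shell
# Print a minimal object that follows the list conventions described above:
# the kind name ends in "List" and the required "items" field holds the array.
cat <<'EOF'
{
  "kind": "PodList",
  "apiVersion": "v1",
  "metadata": {"resourceVersion": "123"},
  "items": []
}
EOF
```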
Most objects defined in the system should have an endpoint that returns the full set of resources, as well as zero or more endpoints that return subsets of -- cgit v1.2.3 From 058b3c5c4a2f437b80531e365c779968cdd832ef Mon Sep 17 00:00:00 2001 From: Phillip Wittrock Date: Tue, 2 Aug 2016 18:43:15 -0700 Subject: Move non-Minikube local cluster guides from docs repo to kubernetes development repo. --- local-cluster/docker.md | 305 +++++++++++++++++++++++++++++++++ local-cluster/local.md | 167 ++++++++++++++++++ local-cluster/vagrant.md | 438 +++++++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 910 insertions(+) create mode 100644 local-cluster/docker.md create mode 100644 local-cluster/local.md create mode 100644 local-cluster/vagrant.md diff --git a/local-cluster/docker.md b/local-cluster/docker.md new file mode 100644 index 00000000..bccb16d1 --- /dev/null +++ b/local-cluster/docker.md @@ -0,0 +1,305 @@ + + + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

PLEASE NOTE: This document applies to the HEAD of the source tree

+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + + + + +--- +assignees: +- asridharan +- brendandburns +- fgrzadkowski + +--- + +**Stop. This guide has been superseded by [Minikube](https://github.com/kubernetes/minikube) which is the recommended method of running Kubernetes on your local machine.** + + +The following instructions show you how to set up a simple, single node Kubernetes cluster using Docker. + +Here's a diagram of what the final result will look like: + +![Kubernetes Single Node on Docker](../../getting-started-guides/k8s-singlenode-docker.png) + +* TOC +{:toc} + +## Prerequisites + +**Note: These steps have not been tested with the [Docker For Mac or Docker For Windows beta programs](https://blog.docker.com/2016/03/docker-for-mac-windows-beta/).** + +1. You need to have Docker version >= "1.10" installed on the machine. +2. Enable mount propagation. Hyperkube is running in a container which has to mount volumes for other containers, for example in case of persistent storage. The required steps depend on the init system. + + + In case of **systemd**, change MountFlags in the Docker unit file to shared. + + ```shell + DOCKER_CONF=$(systemctl cat docker | head -1 | awk '{print $2}') + sed -i.bak 's/^\(MountFlags=\).*/\1shared/' $DOCKER_CONF + systemctl daemon-reload + systemctl restart docker + ``` + + **Otherwise**, manually set the mount point used by Hyperkube to be shared: + + ```shell + mkdir -p /var/lib/kubelet + mount --bind /var/lib/kubelet /var/lib/kubelet + mount --make-shared /var/lib/kubelet + ``` + + +### Run it + +1. Decide which Kubernetes version to use. Set the `${K8S_VERSION}` variable to a version of Kubernetes >= "v1.2.0". 
+ + + If you'd like to use the current **stable** version of Kubernetes, run the following: + + ```sh + export K8S_VERSION=$(curl -sS https://storage.googleapis.com/kubernetes-release/release/stable.txt) + ``` + + and for the **latest** available version (including unstable releases): + + ```sh + export K8S_VERSION=$(curl -sS https://storage.googleapis.com/kubernetes-release/release/latest.txt) + ``` + +2. Start Hyperkube + + ```shell + export ARCH=amd64 + docker run -d \ + --volume=/sys:/sys:rw \ + --volume=/var/lib/docker/:/var/lib/docker:rw \ + --volume=/var/lib/kubelet/:/var/lib/kubelet:rw,shared \ + --volume=/var/run:/var/run:rw \ + --net=host \ + --pid=host \ + --privileged \ + --name=kubelet \ + gcr.io/google_containers/hyperkube-${ARCH}:${K8S_VERSION} \ + /hyperkube kubelet \ + --hostname-override=127.0.0.1 \ + --api-servers=http://localhost:8080 \ + --config=/etc/kubernetes/manifests \ + --cluster-dns=10.0.0.10 \ + --cluster-domain=cluster.local \ + --allow-privileged --v=2 + ``` + + > Note that `--cluster-dns` and `--cluster-domain` is used to deploy dns, feel free to discard them if dns is not needed. + + > If you would like to mount an external device as a volume, add `--volume=/dev:/dev` to the command above. It may however, cause some problems described in [#18230](https://github.com/kubernetes/kubernetes/issues/18230) + + > Architectures other than `amd64` are experimental and sometimes unstable, but feel free to try them out! Valid values: `arm`, `arm64` and `ppc64le`. ARM is available with Kubernetes version `v1.3.0-alpha.2` and higher. ARM 64-bit and PowerPC 64 little-endian are available with `v1.3.0-alpha.3` and higher. Track progress on multi-arch support [here](https://github.com/kubernetes/kubernetes/issues/17981) + + > If you are behind a proxy, you need to pass the proxy setup to curl in the containers to pull the certificates. 
Create a .curlrc under /root folder (because the containers are running as root) with the following line: + + ``` + proxy = : + ``` + + This actually runs the kubelet, which in turn runs a [pod](http://kubernetes.io/docs/user-guide/pods/) that contains the other master components. + + ** **SECURITY WARNING** ** services exposed via Kubernetes using Hyperkube are available on the host node's public network interface / IP address. Because of this, this guide is not suitable for any host node/server that is directly internet accessible. Refer to [#21735](https://github.com/kubernetes/kubernetes/issues/21735) for addtional info. + +### Download `kubectl` + +At this point you should have a running Kubernetes cluster. You can test it out +by downloading the kubectl binary for `${K8S_VERSION}` (in this example: `{{page.version}}.0`). + + +Downloads: + + - `linux/amd64`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/amd64/kubectl + - `linux/386`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/386/kubectl + - `linux/arm`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/arm/kubectl + - `linux/arm64`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/arm64/kubectl + - `linux/ppc64le`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/ppc64le/kubectl + - `OS X/amd64`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/darwin/amd64/kubectl + - `OS X/386`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/darwin/386/kubectl + - `windows/amd64`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/windows/amd64/kubectl.exe + - `windows/386`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/windows/386/kubectl.exe + +The generic download path is: + +``` 
+http://storage.googleapis.com/kubernetes-release/release/${K8S_VERSION}/bin/${GOOS}/${GOARCH}/${K8S_BINARY} +``` + +An example install with `linux/amd64`: + +``` +curl -sSL "https://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/amd64/kubectl" > /usr/bin/kubectl +chmod +x /usr/bin/kubectl +``` + +On OS X, to make the API server accessible locally, setup a ssh tunnel. + +```shell +docker-machine ssh `docker-machine active` -N -L 8080:localhost:8080 +``` + +Setting up a ssh tunnel is applicable to remote docker hosts as well. + +(Optional) Create kubernetes cluster configuration: + +```shell +kubectl config set-cluster test-doc --server=http://localhost:8080 +kubectl config set-context test-doc --cluster=test-doc +kubectl config use-context test-doc +``` + +### Test it out + +List the nodes in your cluster by running: + +```shell +kubectl get nodes +``` + +This should print: + +```shell +NAME STATUS AGE +127.0.0.1 Ready 1h +``` + +### Run an application + +```shell +kubectl run nginx --image=nginx --port=80 +``` + +Now run `docker ps` you should see nginx running. You may need to wait a few minutes for the image to get pulled. + +### Expose it as a service + +```shell +kubectl expose deployment nginx --port=80 +``` + +Run the following command to obtain the cluster local IP of this service we just created: + +```shell{% raw %} +ip=$(kubectl get svc nginx --template={{.spec.clusterIP}}) +echo $ip +{% endraw %}``` + +Hit the webserver with this IP: + +```shell{% raw %} + +curl $ip +{% endraw %}``` + +On OS X, since docker is running inside a VM, run the following command instead: + +```shell +docker-machine ssh `docker-machine active` curl $ip +``` + +### Turning down your cluster + +1\. Delete the nginx service and deployment: + +If you plan on re-creating your nginx deployment and service you will need to clean it up. + +```shell +kubectl delete service,deployments nginx +``` + +2\. 
Delete all the containers, including the kubelet: + +```shell +docker rm -f kubelet +docker rm -f `docker ps | grep k8s | awk '{print $1}'` +``` + +3\. Clean up the filesystem: + +On OS X, first ssh into the docker VM: + +```shell +docker-machine ssh `docker-machine active` +``` + +```shell +grep /var/lib/kubelet /proc/mounts | awk '{print $2}' | sudo xargs -n1 umount +sudo rm -rf /var/lib/kubelet +``` + +### Troubleshooting + +#### Node is in `NotReady` state + +If you see your node as `NotReady`, it's possible that your OS does not have memcg enabled. + +1. Your kernel should support memory accounting. Ensure that the +following configs are turned on in your Linux kernel: + +```shell +CONFIG_RESOURCE_COUNTERS=y +CONFIG_MEMCG=y +``` + +2. Enable memory accounting in the kernel at boot time, via command line +parameters, as follows: + +```shell +GRUB_CMDLINE_LINUX="cgroup_enable=memory=1" +``` + +NOTE: The above is specifically for GRUB2. +You can check the command line parameters passed to your kernel by looking at the +output of /proc/cmdline: + +```shell +$ cat /proc/cmdline +BOOT_IMAGE=/boot/vmlinuz-3.18.4-aufs root=/dev/sda5 ro cgroup_enable=memory=1 +``` + +## Support Level + + +IaaS Provider | Config. Mgmt | OS | Networking | Conforms | Support Level +-------------------- | ------------ | ------ | ---------- | --------- | ---------------------------- +Docker Single Node | custom | N/A | local | | Project ([@brendandburns](https://github.com/brendandburns)) + + + +## Further reading + +Please see the [Kubernetes docs](http://kubernetes.io/docs) for more details on administering +and using a Kubernetes cluster. + + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/local-cluster/docker.md?pixel)]() + diff --git a/local-cluster/local.md b/local-cluster/local.md new file mode 100644 index 00000000..afff72fa --- /dev/null +++ b/local-cluster/local.md @@ -0,0 +1,167 @@ + + + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

PLEASE NOTE: This document applies to the HEAD of the source tree

+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + + + + +--- +assignees: +- erictune +- mikedanese +- thockin + +--- + + + +* TOC +{:toc} + +**Stop. This guide has been superseded by [Minikube](https://github.com/kubernetes/minikube), which is the recommended method of running Kubernetes on your local machine.** + +### Requirements + +#### Linux + +Not running Linux? Consider running Linux in a local virtual machine with [vagrant](https://www.vagrantup.com/), or on a cloud provider like Google Compute Engine. + +#### Docker + +You need [Docker](https://docs.docker.com/installation/#installation) +version 1.8.3 or later. Ensure the Docker daemon is running and can be contacted (try `docker +ps`). Some of the Kubernetes components need to run as root, which normally +works fine with Docker. + +#### etcd + +You need [etcd](https://github.com/coreos/etcd/releases); please make sure it is installed and in your ``$PATH``. + +#### go + +You need [go](https://golang.org/doc/install) version 1.4 or later; please make sure it is installed and in your ``$PATH``. + +### Starting the cluster + +First, you need to [download Kubernetes](http://kubernetes.io/docs/getting-started-guides/binary_release/). Then open a separate tab of your terminal +and run the following (since one needs sudo access to start/stop Kubernetes daemons, it is easier to run the entire script as root): + +```shell +cd kubernetes +hack/local-up-cluster.sh +``` + +This will build and start a lightweight local cluster, consisting of a master +and a single node. Type Control-C to shut it down. + +You can use the `cluster/kubectl.sh` script to interact with the local cluster. `hack/local-up-cluster.sh` will +print the commands to run to point kubectl at the local cluster.
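The prerequisite checks above (Docker, etcd, and go on your ``$PATH``) can be scripted. The helper below is an illustrative sketch, not part of the Kubernetes scripts; the function name `check_prereq` is invented for this example.

```shell
#!/bin/sh
# Illustrative sketch (not part of the Kubernetes scripts): verify that a
# binary required by hack/local-up-cluster.sh is installed and on $PATH.
# The function name check_prereq is invented for this example.
check_prereq() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: found"
  else
    echo "$1: MISSING - install it and add it to \$PATH" >&2
    return 1
  fi
}

# In a real setup you would check the actual prerequisites, e.g.:
#   for bin in docker etcd go; do check_prereq "$bin"; done
check_prereq sh   # demo on a binary every system has
```

Running a check like this before `hack/local-up-cluster.sh` surfaces a missing prerequisite up front rather than partway through the script.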
+ + +### Running a container + +Your cluster is running, and you want to start running containers! + +You can now use any of the cluster/kubectl.sh commands to interact with your local setup. + +```shell +export KUBERNETES_PROVIDER=local +cluster/kubectl.sh get pods +cluster/kubectl.sh get services +cluster/kubectl.sh get deployments +cluster/kubectl.sh run my-nginx --image=nginx --replicas=2 --port=80 + +## begin wait for provision to complete; you can monitor the docker pull by opening a new terminal + sudo docker images + ## you should see it pulling the nginx image; once the above command returns it + sudo docker ps + ## you should see your container running! + exit +## end wait + +## create a service for nginx, which serves on port 80 +cluster/kubectl.sh expose deployment my-nginx --port=80 --name=my-nginx + +## introspect Kubernetes! +cluster/kubectl.sh get pods +cluster/kubectl.sh get services +cluster/kubectl.sh get deployments + +## Test the nginx service with the IP/port from "get services" command +curl http://10.X.X.X:80/ +``` + +### Running a user-defined pod + +Note the difference between a [container](http://kubernetes.io/docs/user-guide/containers/) +and a [pod](http://kubernetes.io/docs/user-guide/pods/). Since you only asked for the former, Kubernetes will create a wrapper pod for you. +However, you cannot view the nginx start page on localhost. To verify that nginx is running, you need to run `curl` within the Docker container (try `docker exec`). + +You can control the specifications of a pod via a user-defined manifest, and reach nginx through your browser on the port specified therein: + +```shell +cluster/kubectl.sh create -f docs/user-guide/pod.yaml +``` + +Congratulations! + +### FAQs + +#### I cannot reach service IPs on the network. + +Some firewall software that uses iptables may not interact well with +Kubernetes. If you have trouble around networking, try disabling any +firewall or other iptables-using systems first.
Also, you can check +if SELinux is blocking anything by running a command such as `journalctl --since yesterday | grep avc`. + +By default the IP range for service cluster IPs is 10.0.*.*; depending on your +Docker installation, this may conflict with IPs for containers. If you find +containers running with IPs in this range, edit hack/local-up-cluster.sh and +change the service-cluster-ip-range flag to something else. + +#### I changed Kubernetes code, how do I run it? + +```shell +cd kubernetes +hack/build-go.sh +hack/local-up-cluster.sh +``` + +#### kubectl claims to start a container but `get pods` and `docker ps` don't show it. + +One or more of the Kubernetes daemons might have crashed. Tail the [logs](http://kubernetes.io/docs/admin/cluster-troubleshooting/#looking-at-logs) of each in /tmp: + +```shell +$ ls /tmp/kube*.log +$ tail -f /tmp/kube-apiserver.log +``` + +#### The pods fail to connect to the services by host names + +The local-up-cluster.sh script doesn't start a DNS service. A similar situation can be found [here](http://issue.k8s.io/6667). You can start one manually. + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/local-cluster/local.md?pixel)]() + diff --git a/local-cluster/vagrant.md b/local-cluster/vagrant.md new file mode 100644 index 00000000..973574eb --- /dev/null +++ b/local-cluster/vagrant.md @@ -0,0 +1,438 @@ + + + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

PLEASE NOTE: This document applies to the HEAD of the source tree

+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). + +-- + + + + + +--- +assignees: +- brendandburns +- derekwaynecarr +- jbeda +--- + +Running Kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/develop on your local machine (Linux, Mac OS X). + +* TOC +{:toc} + +### Prerequisites + +1. Install the latest version (>= 1.7.4) of [Vagrant](http://www.vagrantup.com/downloads.html) +2. Install one of: + 1. The latest version of [Virtual Box](https://www.virtualbox.org/wiki/Downloads) + 2. [VMWare Fusion](https://www.vmware.com/products/fusion/) version 5 or greater as well as the appropriate [Vagrant VMWare Fusion provider](https://www.vagrantup.com/vmware) + 3. [VMWare Workstation](https://www.vmware.com/products/workstation/) version 9 or greater as well as the [Vagrant VMWare Workstation provider](https://www.vagrantup.com/vmware) + 4. [Parallels Desktop](https://www.parallels.com/products/desktop/) version 9 or greater as well as the [Vagrant Parallels provider](https://parallels.github.io/vagrant-parallels/) + 5. libvirt with KVM, with hardware virtualization support enabled, plus the [Vagrant-libvirt](https://github.com/pradels/vagrant-libvirt) provider. Fedora provides an official RPM, so you can install the provider with `yum install vagrant-libvirt` + +### Setup + +Setting up a cluster is as simple as running: + +```sh +export KUBERNETES_PROVIDER=vagrant +curl -sS https://get.k8s.io | bash +``` + +Alternatively, you can download a [Kubernetes release](https://github.com/kubernetes/kubernetes/releases) and extract the archive. To start your local cluster, open a shell and run: + +```sh +cd kubernetes + +export KUBERNETES_PROVIDER=vagrant +./cluster/kube-up.sh +``` + +The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster management scripts which variant to use.
If you forget to set this, the assumption is you are running on Google Compute Engine. + +By default, the Vagrant setup will create a single master VM (called kubernetes-master) and one node (called kubernetes-node-1). Each VM will take 1 GB, so make sure you have at least 2GB to 4GB of free memory (plus appropriate free disk space). + +If you'd like more than one node, set the `NUM_NODES` environment variable to the number you want: + +```sh +export NUM_NODES=3 +``` + +Vagrant will provision each machine in the cluster with all the necessary components to run Kubernetes. The initial setup can take a few minutes to complete on each machine. + +If you installed more than one Vagrant provider, Kubernetes will usually pick the appropriate one. However, you can override which one Kubernetes will use by setting the [`VAGRANT_DEFAULT_PROVIDER`](https://docs.vagrantup.com/v2/providers/default.html) environment variable: + +```sh +export VAGRANT_DEFAULT_PROVIDER=parallels +export KUBERNETES_PROVIDER=vagrant +./cluster/kube-up.sh +``` + +By default, each VM in the cluster is running Fedora. + +To access the master or any node: + +```sh +vagrant ssh master +vagrant ssh node-1 +``` + +If you are running more than one node, you can access the others by: + +```sh +vagrant ssh node-2 +vagrant ssh node-3 +``` + +Each node in the cluster installs the docker daemon and the kubelet. + +The master node instantiates the Kubernetes master components as pods on the machine. 
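Since each Vagrant VM takes roughly 1 GB of RAM and there is one extra VM for the master, you can estimate memory needs before raising `NUM_NODES`. The snippet below is an illustrative back-of-the-envelope helper, not part of the cluster scripts; `estimate_memory_gb` is an invented name.

```shell
# Illustrative helper (not part of the Kubernetes scripts): estimate the free
# RAM needed for a Vagrant cluster, assuming roughly 1 GB per node VM plus
# one master VM. estimate_memory_gb is an invented name for this sketch.
estimate_memory_gb() {
  num_nodes=$1
  echo $((num_nodes + 1))   # nodes plus one master, at ~1 GB each
}

echo "NUM_NODES=3 needs roughly $(estimate_memory_gb 3) GB of free RAM"
```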
+ +To view the service status and/or logs on the kubernetes-master: + +```console +[vagrant@kubernetes-master ~] $ vagrant ssh master +[vagrant@kubernetes-master ~] $ sudo su + +[root@kubernetes-master ~] $ systemctl status kubelet +[root@kubernetes-master ~] $ journalctl -ru kubelet + +[root@kubernetes-master ~] $ systemctl status docker +[root@kubernetes-master ~] $ journalctl -ru docker + +[root@kubernetes-master ~] $ tail -f /var/log/kube-apiserver.log +[root@kubernetes-master ~] $ tail -f /var/log/kube-controller-manager.log +[root@kubernetes-master ~] $ tail -f /var/log/kube-scheduler.log +``` + +To view the services on any of the nodes: + +```console +[vagrant@kubernetes-master ~] $ vagrant ssh node-1 +[vagrant@kubernetes-master ~] $ sudo su + +[root@kubernetes-master ~] $ systemctl status kubelet +[root@kubernetes-master ~] $ journalctl -ru kubelet + +[root@kubernetes-master ~] $ systemctl status docker +[root@kubernetes-master ~] $ journalctl -ru docker +``` + +### Interacting with your Kubernetes cluster with Vagrant + +With your Kubernetes cluster up, you can manage the nodes in your cluster with the regular Vagrant commands. + +To push updated Kubernetes code after making source changes: + +```sh +./cluster/kube-push.sh +``` + +To stop and then restart the cluster: + +```sh +vagrant halt +./cluster/kube-up.sh +``` + +To destroy the cluster: + +```sh +vagrant destroy +``` + +Once your Vagrant machines are up and provisioned, the first thing to do is to check that you can use the `kubectl.sh` script. + +You may need to build the binaries first; you can do this with `make`: + +```console +$ ./cluster/kubectl.sh get nodes + +NAME LABELS +10.245.1.4 +10.245.1.5 +10.245.1.3 +``` + +### Authenticating with your master + +When using the vagrant provider in Kubernetes, the `cluster/kubectl.sh` script will cache your credentials in a `~/.kubernetes_vagrant_auth` file so you will not be prompted for them in the future.
+ +```sh +cat ~/.kubernetes_vagrant_auth +``` + +```json +{ "User": "vagrant", + "Password": "vagrant", + "CAFile": "/home/k8s_user/.kubernetes.vagrant.ca.crt", + "CertFile": "/home/k8s_user/.kubecfg.vagrant.crt", + "KeyFile": "/home/k8s_user/.kubecfg.vagrant.key" +} +``` + +You should now be set to use the `cluster/kubectl.sh` script. For example, try to list the nodes that you have started with: + +```sh +./cluster/kubectl.sh get nodes +``` + +### Running containers + +Your cluster is running; you can list its nodes: + +```sh +$ ./cluster/kubectl.sh get nodes + +NAME LABELS +10.245.2.4 +10.245.2.3 +10.245.2.2 +``` + +Now start running some containers! + +You can now use any of the `cluster/kube-*.sh` commands to interact with your VMs. +Before you start a container, there will be no pods, services, or replication controllers. + +```sh +$ ./cluster/kubectl.sh get pods +NAME READY STATUS RESTARTS AGE + +$ ./cluster/kubectl.sh get services +NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE + +$ ./cluster/kubectl.sh get replicationcontrollers +CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS +``` + +Start a container running nginx with a replication controller and three replicas: + +```sh +$ ./cluster/kubectl.sh run my-nginx --image=nginx --replicas=3 --port=80 +``` + +When listing the pods, you will see that three containers have been started and are in `Pending` state: + +```sh +$ ./cluster/kubectl.sh get pods +NAME READY STATUS RESTARTS AGE +my-nginx-5kq0g 0/1 Pending 0 10s +my-nginx-gr3hh 0/1 Pending 0 10s +my-nginx-xql4j 0/1 Pending 0 10s +``` + +You need to wait for the provisioning to complete; you can monitor the nodes by running: + +```sh +$ vagrant ssh node-1 -c 'sudo docker images' +kubernetes-node-1: + REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE + 96864a7d2df3 26 hours ago 204.4 MB + google/cadvisor latest e0575e677c50 13 days ago 12.64 MB + kubernetes/pause latest 6c4579af347b 8 weeks ago 239.8 kB +``` + +Once the Docker image for nginx
has been downloaded, the container will start, and you can list it: + +```sh +$ vagrant ssh node-1 -c 'sudo docker ps' +kubernetes-node-1: + CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES + dbe79bf6e25b nginx:latest "nginx" 21 seconds ago Up 19 seconds k8s--mynginx.8c5b8a3a--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--fcfa837f + fa0e29c94501 kubernetes/pause:latest "/pause" 8 minutes ago Up 8 minutes 0.0.0.0:8080->80/tcp k8s--net.a90e7ce4--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--baf5b21b + aa2ee3ed844a google/cadvisor:latest "/usr/bin/cadvisor" 38 minutes ago Up 38 minutes k8s--cadvisor.9e90d182--cadvisor_-_agent.file--4626b3a2 + 65a3a926f357 kubernetes/pause:latest "/pause" 39 minutes ago Up 39 minutes 0.0.0.0:4194->8080/tcp k8s--net.c5ba7f0e--cadvisor_-_agent.file--342fd561 +``` + +Going back to listing the pods, services, and replication controllers, you now have: + +```sh +$ ./cluster/kubectl.sh get pods +NAME READY STATUS RESTARTS AGE +my-nginx-5kq0g 1/1 Running 0 1m +my-nginx-gr3hh 1/1 Running 0 1m +my-nginx-xql4j 1/1 Running 0 1m + +$ ./cluster/kubectl.sh get services +NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE + +$ ./cluster/kubectl.sh get replicationcontrollers +CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE +my-nginx my-nginx nginx run=my-nginx 3 1m +``` + +We did not start any services, hence there are none listed. But we see three replicas displayed properly. + +See [running your first containers](http://kubernetes.io/docs/user-guide/simple-nginx/) to learn how to create a service. + +You can already play with scaling the replicas: + +```sh +$ ./cluster/kubectl.sh scale rc my-nginx --replicas=2 +$ ./cluster/kubectl.sh get pods +NAME READY STATUS RESTARTS AGE +my-nginx-5kq0g 1/1 Running 0 2m +my-nginx-gr3hh 1/1 Running 0 2m +``` + +Congratulations!
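The `scale` command works because a replication controller continuously reconciles the observed pod count against the desired one. The sketch below illustrates that decision in plain shell; it is not Kubernetes code, and `reconcile` is an invented name for this example.

```shell
# Illustration only (not Kubernetes code): the decision a replication
# controller makes when reconciling desired vs. observed replica counts.
reconcile() {
  desired=$1
  observed=$2
  if [ "$desired" -gt "$observed" ]; then
    echo "create $((desired - observed)) pod(s)"
  elif [ "$desired" -lt "$observed" ]; then
    echo "delete $((observed - desired)) pod(s)"
  else
    echo "nothing to do"
  fi
}

reconcile 2 3   # after 'scale rc my-nginx --replicas=2' with 3 pods running
                # prints: delete 1 pod(s)
```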
+ +## Troubleshooting + +#### I keep downloading the same (large) box all the time! + +By default the Vagrantfile will download the box from S3. You can change this (and cache the box locally) by providing a name and an alternate URL when calling `kube-up.sh`: + +```sh +export KUBERNETES_BOX_NAME=choose_your_own_name_for_your_kuber_box +export KUBERNETES_BOX_URL=path_of_your_kuber_box +export KUBERNETES_PROVIDER=vagrant +./cluster/kube-up.sh +``` + +#### I am getting timeouts when trying to curl the master from my host! + +During provisioning of the cluster, you may see the following message: + +```sh +Validating node-1 +............. +Waiting for each node to be registered with cloud provider +error: couldn't read version from server: Get https://10.245.1.2/api: dial tcp 10.245.1.2:443: i/o timeout +``` + +Some users have reported that VPNs may prevent traffic from being routed from the host machine into the virtual machine network. + +To debug, first verify that the master is binding to the proper IP address: + +```sh +$ vagrant ssh master +$ ifconfig | grep eth1 -C 2 +eth1: flags=4163 mtu 1500 inet 10.245.1.2 netmask + 255.255.255.0 broadcast 10.245.1.255 +``` + +Then verify that your host machine has a network connection to a bridge that can serve that address: + +```sh +$ ifconfig | grep 10.245.1 -C 2 + +vboxnet5: flags=4163 mtu 1500 + inet 10.245.1.1 netmask 255.255.255.0 broadcast 10.245.1.255 + inet6 fe80::800:27ff:fe00:5 prefixlen 64 scopeid 0x20 + ether 0a:00:27:00:00:05 txqueuelen 1000 (Ethernet) +``` + +If you do not see a response on your host machine, you will most likely need to connect your host to the virtual network created by the virtualization provider. + +If you do see a network, but are still unable to ping the machine, check if your VPN is blocking the request. + +#### I just created the cluster, but I am getting authorization errors! + +You probably have an incorrect `~/.kubernetes_vagrant_auth` file for the cluster you are attempting to contact.
+ +```sh +rm ~/.kubernetes_vagrant_auth +``` + +After using `kubectl.sh`, make sure that the correct credentials are set: + +```sh +cat ~/.kubernetes_vagrant_auth +``` + +```json +{ + "User": "vagrant", + "Password": "vagrant" +} +``` + +#### I just created the cluster, but I do not see my container running! + +If this is your first time creating the cluster, the kubelet on each node schedules a number of `docker pull` requests to fetch prerequisite images. This can take some time and, as a result, may delay your initial pod getting provisioned. + +#### I have brought Vagrant up but the nodes cannot validate! + +Log on to one of the nodes (`vagrant ssh node-1`) and inspect the salt minion log (`sudo cat /var/log/salt/minion`). + +#### I want to change the number of nodes! + +You can control the number of nodes that are instantiated via the environment variable `NUM_NODES` on your host machine. If you plan to work with replicas, we strongly encourage you to work with enough nodes to satisfy your largest intended replica size. If you do not plan to work with replicas, you can save some system resources by running with a single node. You do this by setting `NUM_NODES` to 1 like so: + +```sh +export NUM_NODES=1 +``` + +#### I want my VMs to have more memory! + +You can control the memory allotted to virtual machines with the `KUBERNETES_MEMORY` environment variable. +Just set it to the number of megabytes you would like the machines to have. For example: + +```sh +export KUBERNETES_MEMORY=2048 +``` + +If you need more granular control, you can set the amount of memory for the master and nodes independently. For example: + +```sh +export KUBERNETES_MASTER_MEMORY=1536 +export KUBERNETES_NODE_MEMORY=2048 +``` + +#### I want to set proxy settings for my Kubernetes cluster bootstrapping!
+ +If you are behind a proxy, you need to install the Vagrant proxy plugin and set the proxy settings: + +```sh +vagrant plugin install vagrant-proxyconf +export VAGRANT_HTTP_PROXY=http://username:password@proxyaddr:proxyport +export VAGRANT_HTTPS_PROXY=https://username:password@proxyaddr:proxyport +``` + +Optionally, you can specify addresses not to proxy, for example: + +```sh +export VAGRANT_NO_PROXY=127.0.0.1 +``` + +If you are using sudo to build Kubernetes (for example, `make quick-release`), you need to run `sudo -E make quick-release` to pass the environment variables through. + +#### I ran vagrant suspend and nothing works! + +`vagrant suspend` seems to mess up the network. This is not supported at this time. + +#### I want vagrant to sync folders via nfs! + +You can ensure that vagrant uses nfs to sync folders with virtual machines by setting the `KUBERNETES_VAGRANT_USE_NFS` environment variable to 'true'. NFS is faster than VirtualBox or VMware's 'shared folders' and does not require guest additions. See the [vagrant docs](http://docs.vagrantup.com/v2/synced-folders/nfs.html) for details on configuring nfs on the host. This setting will have no effect on the libvirt provider, which uses nfs by default.
For example: + +```sh +export KUBERNETES_VAGRANT_USE_NFS=true +``` + + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/local-cluster/vagrant.md?pixel)]() + -- cgit v1.2.3 From 5217b8c376acf2bf4737c4ffea70423db91ec88e Mon Sep 17 00:00:00 2001 From: Hongchao Deng Date: Tue, 2 Aug 2016 15:16:59 -0700 Subject: automation.md: update lgtm point --- automation.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/automation.md b/automation.md index 8900dbcc..53ac0e1b 100644 --- a/automation.md +++ b/automation.md @@ -80,7 +80,8 @@ A PR is considered "ready for merging" if it matches the following: * Jenkins GCE e2e * Jenkins unit/integration * The PR cannot have any prohibited future milestones (such as a v1.5 milestone during v1.4 code freeze) - * The PR must have the "lgtm" label + * The PR must have the "lgtm" label. The "lgtm" label is automatically applied + following a review comment consisting of only "LGTM" (case-insensitive) * The PR must not have been updated since the "lgtm" label was applied * The PR must not have the "do-not-merge" label -- cgit v1.2.3 From d7ea656051e05c17e9a84715377a46af819360b2 Mon Sep 17 00:00:00 2001 From: Phillip Wittrock Date: Wed, 3 Aug 2016 16:35:09 -0700 Subject: Clean up items from moving local cluster setup guides --- local-cluster/docker.md | 15 ++++----------- local-cluster/local.md | 13 ------------- local-cluster/vagrant.md | 12 ------------ 3 files changed, 4 insertions(+), 36 deletions(-) diff --git a/local-cluster/docker.md b/local-cluster/docker.md index bccb16d1..e0586134 100644 --- a/local-cluster/docker.md +++ b/local-cluster/docker.md @@ -27,14 +27,6 @@ Documentation for other releases can be found at ---- -assignees: -- asridharan -- brendandburns -- fgrzadkowski - ---- - **Stop. 
This guide has been superseded by [Minikube](https://github.com/kubernetes/minikube) which is the recommended method of running Kubernetes on your local machine.** @@ -44,9 +36,6 @@ Here's a diagram of what the final result will look like: ![Kubernetes Single Node on Docker](../../getting-started-guides/k8s-singlenode-docker.png) -* TOC -{:toc} - ## Prerequisites **Note: These steps have not been tested with the [Docker For Mac or Docker For Windows beta programs](https://blog.docker.com/2016/03/docker-for-mac-windows-beta/).** @@ -225,6 +214,10 @@ On OS X, since docker is running inside a VM, run the following command instead: docker-machine ssh `docker-machine active` curl $ip ``` +## Deploy a DNS + +Read [documentation for manually deploying a DNS](http://kubernetes.io/docs/getting-started-guides/docker-multinode/#deploy-dns-manually-for-v12x) for instructions. + ### Turning down your cluster 1\. Delete the nginx service and deployment: diff --git a/local-cluster/local.md b/local-cluster/local.md index afff72fa..23e2156f 100644 --- a/local-cluster/local.md +++ b/local-cluster/local.md @@ -27,19 +27,6 @@ Documentation for other releases can be found at ---- -assignees: -- erictune -- mikedanese -- thockin - ---- - - - -* TOC -{:toc} - **Stop. This guide has been superseded by [Minikube](https://github.com/kubernetes/minikube) which is the recommended method of running Kubernetes on your local machine.** ### Requirements diff --git a/local-cluster/vagrant.md b/local-cluster/vagrant.md index 973574eb..47ac65e7 100644 --- a/local-cluster/vagrant.md +++ b/local-cluster/vagrant.md @@ -27,20 +27,8 @@ Documentation for other releases can be found at ---- -assignees: -- brendandburns -- derekwaynecarr -- jbeda ---- - -did no - Running Kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/develop on your local machine (Linux, Mac OS X). -* TOC -{:toc} - ### Prerequisites 1. 
Install latest version >= 1.7.4 of [Vagrant](http://www.vagrantup.com/downloads.html) -- cgit v1.2.3 From 05d13f4a50860aea96ebae2f0e35b732c73a901d Mon Sep 17 00:00:00 2001 From: lixiaobing10051267 Date: Wed, 3 Aug 2016 17:06:50 +0800 Subject: Modify some detail information in contributing workflow --- development.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/development.md b/development.md index ac2b3bb3..3dc1a3c6 100644 --- a/development.md +++ b/development.md @@ -164,7 +164,7 @@ git push -f origin myfeature ### Creating a pull request 1. Visit https://github.com/$YOUR_GITHUB_USERNAME/kubernetes -2. Click the "Compare and pull request" button next to your "myfeature" branch. +2. Click the "Compare & pull request" button next to your "myfeature" branch. 3. Check out the pull request [process](pull-requests.md) for more details ### When to retain commits and when to squash -- cgit v1.2.3 From 2b079d26745b3d477d97813384980fdaa1db32fd Mon Sep 17 00:00:00 2001 From: Tamer Tas Date: Thu, 4 Aug 2016 11:13:26 +0300 Subject: Improve Developer README --- README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index 377f957a..1b140418 100644 --- a/README.md +++ b/README.md @@ -48,7 +48,7 @@ Guide](../admin/README.md). * **Pull Request Process** ([pull-requests.md](pull-requests.md)): When and why pull requests are closed. -* **Kubernetes On-Call Rotations** ([on-call-rotations.md](on-call-rotations.md)): Descriptions of on-call rotations for build and end-user support +* **Kubernetes On-Call Rotations** ([on-call-rotations.md](on-call-rotations.md)): Descriptions of on-call rotations for build and end-user support. * **Faster PR reviews** ([faster_reviews.md](faster_reviews.md)): How to get faster PR reviews. @@ -110,7 +110,7 @@ Guide](../admin/README.md). ## Building releases -* **Making release notes** ([making-release-notes.md](making-release-notes.md)): Generating release nodes for a new release. 
+* **Making release notes** ([making-release-notes.md](making-release-notes.md)): Generating release notes for a new release. * **Releasing Kubernetes** ([releasing.md](releasing.md)): How to create a Kubernetes release (as in version) and how the version information gets embedded into the built binaries. -- cgit v1.2.3 From 668f8a6986a27bfeb3a98c5b52c0fc6be46e1493 Mon Sep 17 00:00:00 2001 From: lixiaobing10051267 Date: Thu, 4 Aug 2016 14:30:40 +0800 Subject: Replace with explicit kubernetes fork path --- development.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/development.md b/development.md index 3dc1a3c6..08b66e8a 100644 --- a/development.md +++ b/development.md @@ -243,7 +243,7 @@ separate dependency updates from other changes._ export KPATH=$HOME/code/kubernetes mkdir -p $KPATH/src/k8s.io cd $KPATH/src/k8s.io -git clone https://path/to/your/kubernetes/fork # assumes your fork is 'kubernetes' +git clone https://github.com/$YOUR_GITHUB_USERNAME/kubernetes.git # assumes your fork is 'kubernetes' # Or copy your existing local repo here. IMPORTANT: making a symlink doesn't work. 
``` -- cgit v1.2.3 From e1e7e3a30b669c3d63d48704fd31217c656dd06f Mon Sep 17 00:00:00 2001 From: Tamer Tas Date: Thu, 4 Aug 2016 11:40:40 +0300 Subject: Detail unit testing workflow Include the information about testing that is found in `Makefile` comments of the testing targets --- development.md | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/development.md b/development.md index 3dc1a3c6..c7f810f4 100644 --- a/development.md +++ b/development.md @@ -325,12 +325,13 @@ Three basic commands let you run unit, integration and/or e2e tests: ```sh cd kubernetes -make test # Run unit tests -make test-integration # Run integration tests, requires etcd -go run hack/e2e.go -v --build --up --test --down # Run e2e tests +make test # Run every unit test +make test WHAT=pkg/util/cache GOFLAGS=-v # Run tests of a package verbosely +make test-integration # Run integration tests, requires etcd +make test-e2e # Run e2e tests ``` -See the [testing guide](testing.md) for additional information and scenarios. +See the [testing guide](testing.md) and [end-to-end tests](e2e-tests.md) for additional information and scenarios. ## Regenerating the CLI documentation -- cgit v1.2.3 From 2db30bd01f2412771f8b5764da2edf37486afa71 Mon Sep 17 00:00:00 2001 From: gmarek Date: Mon, 8 Aug 2016 15:26:37 +0200 Subject: Small update to the kubemark guide --- kubemark-guide.md | 16 +++++++++------- 1 file changed, 9 insertions(+), 7 deletions(-) diff --git a/kubemark-guide.md b/kubemark-guide.md index 7c44c362..79bf5f07 100755 --- a/kubemark-guide.md +++ b/kubemark-guide.md @@ -61,9 +61,10 @@ resources from everything else. ## Requirements -To run Kubemark you need a Kubernetes cluster for running all your HollowNodes -and a dedicated machine for a master. Master machine has to be directly routable -from HollowNodes. You also need an access to some Docker repository. 
+To run Kubemark you need a Kubernetes cluster (called `external cluster`) +for running all your HollowNodes and a dedicated machine for a master. +The master machine has to be directly routable from the HollowNodes. You also need +access to a Docker repository. + +Currently scripts are written to be easily usable by GCE, but it should be relatively straightforward to port them to different providers or bare metal. @@ -81,10 +82,11 @@ port Kubemark to different providers. ### Starting a Kubemark cluster -To start a Kubemark cluster on GCE you need to create an external cluster (it -can be GCE, GKE or any other cluster) by yourself, build a kubernetes release -(e.g. by running `make quick-release`) and run `test/kubemark/start-kubemark.sh` -script. This script will create a VM for master components, Pods for HollowNodes +To start a Kubemark cluster on GCE you need to create an external kubernetes +cluster (it can be GCE, GKE or anything else) by yourself, make sure that kubeconfig +points to it by default, build a kubernetes release (e.g. by running +`make quick-release`) and run `test/kubemark/start-kubemark.sh` script. +This script will create a VM for master components, Pods for HollowNodes +and do all the setup necessary to let them talk to each other.
It will use the configuration stored in `cluster/kubemark/config-default.sh` - you can tweak it however you want, but note that some features may not be implemented yet, as -- cgit v1.2.3 From abef76f633ef9d5b6b75d62958401087d3736d42 Mon Sep 17 00:00:00 2001 From: Mike Brown Date: Tue, 2 Aug 2016 14:57:27 -0500 Subject: re-organize development.md to addresses issue #13876 Signed-off-by: Mike Brown --- development.md | 211 ++++++++++++++++----------------------------------------- godep.md | 152 +++++++++++++++++++++++++++++++++++++++++ 2 files changed, 211 insertions(+), 152 deletions(-) create mode 100644 godep.md diff --git a/development.md b/development.md index 32cb17cb..2cbe4456 100644 --- a/development.md +++ b/development.md @@ -36,27 +36,29 @@ Documentation for other releases can be found at This document is intended to be the canonical source of truth for things like supported toolchain versions for building Kubernetes. If you find a -requirement that this doc does not capture, please file a bug. If you find -other docs with references to requirements that are not simply links to this -doc, please file a bug. +requirement that this doc does not capture, please +[submit an issue](https://github.com/kubernetes/kubernetes/issues) on github. If +you find other docs with references to requirements that are not simply links to +this doc, please [submit an issue](https://github.com/kubernetes/kubernetes/issues). This document is intended to be relative to the branch in which it is found. It is guaranteed that requirements will change over time for the development branch, but release branches of Kubernetes should not change. -## Building Kubernetes +## Building Kubernetes with Docker Official releases are built using Docker containers. To build Kubernetes using -Docker please follow [these -instructions](http://releases.k8s.io/HEAD/build/README.md). +Docker please follow [these instructions] +(http://releases.k8s.io/HEAD/build/README.md). 
-### Local OS/shell environment
+## Building Kubernetes on a local OS/shell environment
 
-Many of the Kubernetes development helper scripts rely on a fairly up-to-date GNU tools
-environment, so most recent Linux distros should work just fine
-out-of-the-box. Note that Mac OS X ships with somewhat outdated
-BSD-based tools, some of which may be incompatible in subtle ways, so we recommend
-[replacing those with modern GNU tools](https://www.topbug.net/blog/2013/04/14/install-and-use-gnu-command-line-tools-in-mac-os-x/).
+Many of the Kubernetes development helper scripts rely on a fairly up-to-date
+GNU tools environment, so most recent Linux distros should work just fine
+out-of-the-box. Note that Mac OS X ships with somewhat outdated BSD-based tools,
+some of which may be incompatible in subtle ways, so we recommend
+[replacing those with modern GNU tools](https://www.topbug.net/blog/2013/04/14/install-and-use-gnu-command-line-tools-in-mac-os-x/).
 
 ### Go development environment
 
@@ -65,8 +67,50 @@ To build Kubernetes without using Docker containers, you'll need a Go
 development environment. Builds for Kubernetes 1.0 - 1.2 require Go version
 1.4.2. Builds for Kubernetes 1.3 and higher require Go version 1.6.0. If you
 haven't set up a Go development environment, please follow [these
-instructions](http://golang.org/doc/code.html) to install the go tools and set
-up a GOPATH.
+instructions](http://golang.org/doc/code.html) to install the go tools.
+
+Set up your GOPATH and add a path entry for go binaries to your PATH. Typically
+added to your ~/.profile:
+
+```sh
+export GOPATH=$HOME/go
+export PATH=$PATH:$GOPATH/bin
+```
+
+### Godep dependency management
+
+Kubernetes build and test scripts use [godep](https://github.com/tools/godep) to
+manage dependencies.
+
+#### Install godep
+
+Ensure that [mercurial](http://mercurial.selenic.com/wiki/Download) is
+installed on your system. (Some of godep's dependencies use the Mercurial
+source control system.)
Use `apt-get install mercurial` or `yum install
+mercurial` on Linux, or [brew.sh](http://brew.sh) on OS X, or download directly
+from mercurial.
+
+Install godep (may require sudo):
+
+```sh
+go get -u github.com/tools/godep
+```
+
+Note:
+At this time, godep version >= v63 is known to work in the Kubernetes project.
+
+To check your version of godep:
+
+```sh
+$ godep version
+godep v74 (linux/amd64/go1.6.2)
+```
+
+Developers planning to manage dependencies in the `vendor/` tree may want to
+explore alternative environment setups. See
+[using godep to manage dependencies](godep.md).
+
+### Local build using make
 
 To build Kubernetes using your local Go development environment (generate linux
 binaries):
@@ -121,7 +165,7 @@ git remote add upstream 'https://github.com/kubernetes/kubernetes.git'
 
 ### Create a branch and make changes
 
 ```sh
-git checkout -b myfeature
+git checkout -b my-feature
 # Make your code changes
 ```
@@ -181,143 +225,6 @@ reviews much easier. See [Faster Reviews](faster_reviews.md) for more
 details.
 
-## godep and dependency management
-
-Kubernetes uses [godep](https://github.com/tools/godep) to manage dependencies.
-It is not strictly required for building Kubernetes but it is required when
-managing dependencies under the vendor/ tree, and is required by a number of the
-build and test scripts. Please make sure that `godep` is installed and in your
-`$PATH`, and that `godep version` says it is at least v63.
-
-### Installing godep
-
-There are many ways to build and host Go binaries. Here is an easy way to get
-utilities like `godep` installed:
-
-1) Ensure that [mercurial](http://mercurial.selenic.com/wiki/Download) is
-installed on your system. (some of godep's dependencies use the mercurial
-source control system). Use `apt-get install mercurial` or `yum install
-mercurial` on Linux, or [brew.sh](http://brew.sh) on OS X, or download directly
-from mercurial.
- -2) Create a new GOPATH for your tools and install godep: - -```sh -export GOPATH=$HOME/go-tools -mkdir -p $GOPATH -go get -u github.com/tools/godep -``` - -3) Add this $GOPATH/bin to your path. Typically you'd add this to your ~/.profile: - -```sh -export GOPATH=$HOME/go-tools -export PATH=$PATH:$GOPATH/bin -``` - -Note: -At this time, godep version >= v63 is known to work in the Kubernetes project - -To check your version of godep: - -```sh -$ godep version -godep v66 (linux/amd64/go1.6.2) -``` - -If it is not a valid version try, make sure you have updated the godep repo -with `go get -u github.com/tools/godep`. - -### Using godep - -Here's a quick walkthrough of one way to use godeps to add or update a -Kubernetes dependency into `vendor/`. For more details, please see the -instructions in [godep's documentation](https://github.com/tools/godep). - -1) Devote a directory to this endeavor: - -_Devoting a separate directory is not strictly required, but it is helpful to -separate dependency updates from other changes._ - -```sh -export KPATH=$HOME/code/kubernetes -mkdir -p $KPATH/src/k8s.io -cd $KPATH/src/k8s.io -git clone https://github.com/$YOUR_GITHUB_USERNAME/kubernetes.git # assumes your fork is 'kubernetes' -# Or copy your existing local repo here. IMPORTANT: making a symlink doesn't work. -``` - -2) Set up your GOPATH. - -```sh -# This will *not* let your local builds see packages that exist elsewhere on your system. -export GOPATH=$KPATH -``` - -3) Populate your new GOPATH. - -```sh -cd $KPATH/src/k8s.io/kubernetes -godep restore -``` - -4) Next, you can either add a new dependency or update an existing one. - -To add a new dependency is simple (if a bit slow): - -```sh -cd $KPATH/src/k8s.io/kubernetes -DEP=example.com/path/to/dependency -godep get $DEP/... -# Now change code in Kubernetes to use the dependency. -./hack/godep-save.sh -``` - -To update an existing dependency is a bit more complicated. 
Godep has an -`update` command, but none of us can figure out how to actually make it work. -Instead, this procedure seems to work reliably: - -```sh -cd $KPATH/src/k8s.io/kubernetes -DEP=example.com/path/to/dependency -# NB: For the next step, $DEP is assumed be the repo root. If it is actually a -# subdir of the repo, use the repo root here. This is required to keep godep -# from getting angry because `godep restore` left the tree in a "detached head" -# state. -rm -rf $KPATH/src/$DEP # repo root -godep get $DEP/... -# Change code in Kubernetes, if necessary. -rm -rf Godeps -rm -rf vendor -./hack/godep-save.sh -git co -- $(git st -s | grep "^ D" | awk '{print $2}' | grep ^Godeps) -``` - -_If `go get -u path/to/dependency` fails with compilation errors, instead try -`go get -d -u path/to/dependency` to fetch the dependencies without compiling -them. This is unusual, but has been observed._ - -After all of this is done, `git status` should show you what files have been -modified and added/removed. Make sure to `git add` and `git rm` them. It is -commonly advised to make one `git commit` which includes just the dependency -update and Godeps files, and another `git commit` that includes changes to -Kubernetes code to use the new/updated dependency. These commits can go into a -single pull request. - -5) Before sending your PR, it's a good idea to sanity check that your -Godeps.json file and the contents of `vendor/ `are ok by running `hack/verify-godeps.sh` - -_If `hack/verify-godeps.sh` fails after a `godep update`, it is possible that a -transitive dependency was added or removed but not updated by godeps. It then -may be necessary to perform a `hack/godep-save.sh` to pick up the transitive -dependency changes._ - -It is sometimes expedient to manually fix the /Godeps/Godeps.json file to -minimize the changes. However without great care this can lead to failures -with `hack/verify-godeps.sh`. This must pass for every PR. 
- -6) If you updated the Godeps, please also update `Godeps/LICENSES` by running -`hack/update-godep-licenses.sh`. ## Testing diff --git a/godep.md b/godep.md new file mode 100644 index 00000000..6d0a2bb8 --- /dev/null +++ b/godep.md @@ -0,0 +1,152 @@ + + + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

PLEASE NOTE: This document applies to the HEAD of the source tree

+
+If you are using a released version of Kubernetes, you should
+refer to the docs that go with that version.
+
+Documentation for other releases can be found at
+[releases.k8s.io](http://releases.k8s.io).
+
+--
+
+
+
+
+
+# Using godep to manage dependencies
+
+This document is intended to show a way for managing `vendor/` tree dependencies
+in Kubernetes. If you are not planning on managing `vendor` dependencies, see
+[Godep dependency management](development.md#godep-dependency-management).
+
+## Alternate GOPATH for installing and using godep
+
+There are many ways to build and host Go binaries. Here is one way to get
+utilities like `godep` installed:
+
+Create a new GOPATH just for your go tools and install godep:
+
+```sh
+export GOPATH=$HOME/go-tools
+mkdir -p $GOPATH
+go get -u github.com/tools/godep
+```
+
+Add `$GOPATH/bin` to your `PATH`. Typically you'd add this to your ~/.profile:
+
+```sh
+export GOPATH=$HOME/go-tools
+export PATH=$PATH:$GOPATH/bin
+```
+
+## Using godep
+
+Here's a quick walkthrough of one way to use godep to add or update a
+Kubernetes dependency into `vendor/`. For more details, please see the
+instructions in [godep's documentation](https://github.com/tools/godep).
+
+1) Devote a directory to this endeavor:
+
+_Devoting a separate directory is not strictly required, but it is helpful to
+separate dependency updates from other changes._
+
+```sh
+export KPATH=$HOME/code/kubernetes
+mkdir -p $KPATH/src/k8s.io
+cd $KPATH/src/k8s.io
+git clone https://github.com/$YOUR_GITHUB_USERNAME/kubernetes.git # assumes your fork is 'kubernetes'
+# Or copy your existing local repo here. IMPORTANT: making a symlink doesn't work.
+```
+
+2) Set up your GOPATH.
+
+```sh
+# This will *not* let your local builds see packages that exist elsewhere on your system.
+export GOPATH=$KPATH
+```
+
+3) Populate your new GOPATH.
+
+```sh
+cd $KPATH/src/k8s.io/kubernetes
+godep restore
+```
+
+4) Next, you can either add a new dependency or update an existing one.
+
+Adding a new dependency is simple (if a bit slow):
+
+```sh
+cd $KPATH/src/k8s.io/kubernetes
+DEP=example.com/path/to/dependency
+godep get $DEP/...
+# Now change code in Kubernetes to use the dependency.
+./hack/godep-save.sh
+```
+
+Updating an existing dependency is a bit more complicated. Godep has an
+`update` command, but none of us can figure out how to actually make it work.
+Instead, this procedure seems to work reliably:
+
+```sh
+cd $KPATH/src/k8s.io/kubernetes
+DEP=example.com/path/to/dependency
+# NB: For the next step, $DEP is assumed to be the repo root. If it is actually a
+# subdir of the repo, use the repo root here. This is required to keep godep
+# from getting angry because `godep restore` left the tree in a "detached head"
+# state.
+rm -rf $KPATH/src/$DEP # repo root
+godep get $DEP/...
+# Change code in Kubernetes, if necessary.
+rm -rf Godeps
+rm -rf vendor
+./hack/godep-save.sh
+git co -- $(git st -s | grep "^ D" | awk '{print $2}' | grep ^Godeps)
+```
+
+_If `go get -u path/to/dependency` fails with compilation errors, instead try
+`go get -d -u path/to/dependency` to fetch the dependencies without compiling
+them. This is unusual, but has been observed._
+
+After all of this is done, `git status` should show you what files have been
+modified and added/removed. Make sure to `git add` and `git rm` them. It is
+commonly advised to make one `git commit` which includes just the dependency
+update and Godeps files, and another `git commit` that includes changes to
+Kubernetes code to use the new/updated dependency. These commits can go into a
+single pull request.
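A note on the last line of the update recipe above: `git co` and `git st` are common aliases for `git checkout` and `git status` (define them, or spell the commands out). The pipeline itself just picks the worktree-deleted paths under `Godeps/` out of short-format status output so that `git checkout --` can restore them. A self-contained illustration, using invented status lines rather than a real repository:

```sh
# Hypothetical `git status -s` output fed through the same filters as above:
# keep lines whose worktree status is " D" (deleted), print the path field,
# and keep only paths under Godeps/.
printf ' D Godeps/Godeps.json\n M pkg/api/types.go\n D Godeps/LICENSES\n?? notes.txt\n' |
  grep "^ D" | awk '{print $2}' | grep ^Godeps
# prints:
# Godeps/Godeps.json
# Godeps/LICENSES
```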
+
+5) Before sending your PR, it's a good idea to sanity check that your
+Godeps.json file and the contents of `vendor/` are ok by running `hack/verify-godeps.sh`.
+
+_If `hack/verify-godeps.sh` fails after a `godep update`, it is possible that a
+transitive dependency was added or removed but not updated by godep. It then
+may be necessary to perform a `hack/godep-save.sh` to pick up the transitive
+dependency changes._
+
+It is sometimes expedient to manually fix the `Godeps/Godeps.json` file to
+minimize the changes. However, without great care this can lead to failures
+with `hack/verify-godeps.sh`. This must pass for every PR.
+
+6) If you updated the Godeps, please also update `Godeps/LICENSES` by running
+`hack/update-godep-licenses.sh`.
+
+
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/godep.md?pixel)]()
+
-- cgit v1.2.3 

From 7af759eeb52f61c965b49495825d72932e915b61 Mon Sep 17 00:00:00 2001
From: gmarek
Date: Wed, 10 Aug 2016 16:04:14 +0200
Subject: Change the name of kubemark config file

---
 kubemark-guide.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kubemark-guide.md b/kubemark-guide.md
index 79bf5f07..8c18b2be 100755
--- a/kubemark-guide.md
+++ b/kubemark-guide.md
@@ -94,7 +94,7 @@ implementation of Hollow components/mocks will probably be lagging behind ‘real’
 one. For performance tests interesting variables are `NUM_NODES` and
 `MASTER_SIZE`. After start-kubemark script is finished you’ll have a ready
 Kubemark cluster, a kubeconfig file for talking to the Kubemark cluster is
-stored in `test/kubemark/kubeconfig.loc`.
+stored in `test/kubemark/kubeconfig.kubemark`.
Currently we're running HollowNode with a limit of 0.05 of a CPU core and ~60MB
of memory, which, taking into account default cluster addons and fluentD running on
@@ -115,7 +115,7 @@ to the Docker repository
 (*GCR for us, using scripts from `cluster/gce/util.sh` - it may get tricky
 outside of GCE*)
 
 - Generates certificates and kubeconfig files, writes a kubeconfig locally to
-`test/kubemark/kubeconfig.loc` and creates a Secret which stores kubeconfig for
+`test/kubemark/kubeconfig.kubemark` and creates a Secret which stores kubeconfig for
 HollowKubelet/HollowProxy use (*used gcloud to transfer files to Master, should
 be easy to do outside of GCE*).
 
@@ -190,7 +190,7 @@ E.g. you want to see the logs of HollowKubelet on which pod `my-pod` is running.
 To do so you can execute:
 
 ```
-$ kubectl kubernetes/test/kubemark/kubeconfig.loc describe pod my-pod
+$ kubectl --kubeconfig=kubernetes/test/kubemark/kubeconfig.kubemark describe pod my-pod
 ```
 
 Which outputs the pod description and, among it, a line:
-- cgit v1.2.3 

From f2756d89ce32b66070c7adeb825c1996a7adf724 Mon Sep 17 00:00:00 2001
From: Xianglin Gao
Date: Thu, 11 Aug 2016 11:14:09 +0800
Subject: fix mistakes in api changes

Signed-off-by: Xianglin Gao
---
 api_changes.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/api_changes.md b/api_changes.md
index c1ce5563..cd19c55d 100755
--- a/api_changes.md
+++ b/api_changes.md
@@ -493,7 +493,7 @@ hack/update-generated-protobuf.sh
 
 The vast majority of objects will not need any consideration when converting
 to protobuf, but be aware that if you depend on a Golang type in the standard
-library there may be additional work requried, although in practice we typically
+library there may be additional work required, although in practice we typically
 use our own equivalents for JSON serialization. The
 `pkg/api/serialization_test.go` will verify that your protobuf serialization
 preserves all fields - be sure to run it several times to ensure there are no
 incompletely calculated fields.
@@ -752,7 +752,7 @@ The latter requires that all objects in the same API group as `Frobber` to be replicated in the new version, `v6alpha2`. This also requires user to use a new client which uses the other version. Therefore, this is not a preferred option. -A releated issue is how a cluster manager can roll back from a new version +A related issue is how a cluster manager can roll back from a new version with a new feature, that is already being used by users. See https://github.com/kubernetes/kubernetes/issues/4855. -- cgit v1.2.3 From 10305290f7b27590662540f8876261cebbab8ead Mon Sep 17 00:00:00 2001 From: Silas Boyd-Wickizer Date: Thu, 11 Aug 2016 22:29:20 -0700 Subject: Add Node.js `kubernetes-client` to client-libraries.md --- client-libraries.md | 1 + 1 file changed, 1 insertion(+) diff --git a/client-libraries.md b/client-libraries.md index 7292777c..7229b7f1 100644 --- a/client-libraries.md +++ b/client-libraries.md @@ -47,6 +47,7 @@ the core Kubernetes team* * [Java (OSGi)](https://bitbucket.org/amdatulabs/amdatu-kubernetes) * [Java (Fabric8, OSGi)](https://github.com/fabric8io/kubernetes-client) * [Node.js](https://github.com/tenxcloud/node-kubernetes-client) + * [Node.js](https://github.com/godaddy/kubernetes-client) * [Perl](https://metacpan.org/pod/Net::Kubernetes) * [PHP](https://github.com/devstub/kubernetes-api-php-client) * [PHP](https://github.com/maclof/kubernetes-client) -- cgit v1.2.3 From 3cc8cb9486d8e64fa19d58a1ff789ae4f51edabd Mon Sep 17 00:00:00 2001 From: Silas Boyd-Wickizer Date: Mon, 15 Aug 2016 08:32:24 -0700 Subject: Add a short `-n` for `kubectl`'s `--namespace` fixes #24078 --namespace is a very common flag for nearly every kubectl command we have. We should claim -n for it. 
--- kubectl-conventions.md | 1 + 1 file changed, 1 insertion(+) diff --git a/kubectl-conventions.md b/kubectl-conventions.md index 8705d285..3e7e8803 100644 --- a/kubectl-conventions.md +++ b/kubectl-conventions.md @@ -142,6 +142,7 @@ list when adding new short flags * `-f`: Resource file * also used for `--follow` in `logs`, but should be deprecated in favor of `-F` + * `-n`: Namespace scope * `-l`: Label selector * also used for `--labels` in `expose`, but should be deprecated * `-L`: Label columns -- cgit v1.2.3 From fd1f40af9b566320fd03242e5a3bd39a06accd3c Mon Sep 17 00:00:00 2001 From: lixiaobing10051267 Date: Wed, 17 Aug 2016 16:49:48 +0800 Subject: Incorrect branch name for git push command in development --- development.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/development.md b/development.md index 809ee5ab..a6b57631 100644 --- a/development.md +++ b/development.md @@ -217,13 +217,13 @@ Then you can commit your changes and push them to your fork: ```sh git commit -git push -f origin myfeature +git push -f origin my-feature ``` ### Creating a pull request 1. Visit https://github.com/$YOUR_GITHUB_USERNAME/kubernetes -2. Click the "Compare & pull request" button next to your "myfeature" branch. +2. Click the "Compare & pull request" button next to your "my-feature" branch. 3. 
Check out the pull request [process](pull-requests.md) for more details ### When to retain commits and when to squash -- cgit v1.2.3 From a98f213eb50c20f984098d4493a51a90d981136a Mon Sep 17 00:00:00 2001 From: lixiaobing10051267 Date: Wed, 17 Aug 2016 17:43:05 +0800 Subject: Optimize order description for turning down cluster --- local-cluster/docker.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/local-cluster/docker.md b/local-cluster/docker.md index e0586134..b69121e8 100644 --- a/local-cluster/docker.md +++ b/local-cluster/docker.md @@ -116,7 +116,7 @@ Here's a diagram of what the final result will look like: This actually runs the kubelet, which in turn runs a [pod](http://kubernetes.io/docs/user-guide/pods/) that contains the other master components. - ** **SECURITY WARNING** ** services exposed via Kubernetes using Hyperkube are available on the host node's public network interface / IP address. Because of this, this guide is not suitable for any host node/server that is directly internet accessible. Refer to [#21735](https://github.com/kubernetes/kubernetes/issues/21735) for addtional info. + ** **SECURITY WARNING** ** services exposed via Kubernetes using Hyperkube are available on the host node's public network interface / IP address. Because of this, this guide is not suitable for any host node/server that is directly internet accessible. Refer to [#21735](https://github.com/kubernetes/kubernetes/issues/21735) for additional info. ### Download `kubectl` @@ -220,7 +220,7 @@ Read [documentation for manually deploying a DNS](http://kubernetes.io/docs/gett ### Turning down your cluster -1\. Delete the nginx service and deployment: +1. Delete the nginx service and deployment: If you plan on re-creating your nginx deployment and service you will need to clean it up. @@ -228,14 +228,14 @@ If you plan on re-creating your nginx deployment and service you will need to cl kubectl delete service,deployments nginx ``` -2\. 
Delete all the containers including the kubelet: +2. Delete all the containers including the kubelet: ```shell docker rm -f kubelet docker rm -f `docker ps | grep k8s | awk '{print $1}'` ``` -3\. Cleanup the filesystem: +3. Cleanup the filesystem: On OS X, first ssh into the docker VM: -- cgit v1.2.3 From 963f4f421e0c5ca5cf08f31f89c9c8b0e0be5ded Mon Sep 17 00:00:00 2001 From: Huamin Chen Date: Wed, 17 Aug 2016 21:14:37 +0000 Subject: more explictly about NoDiskConflicts policy and applicable volume types Signed-off-by: Huamin Chen --- scheduler_algorithm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/scheduler_algorithm.md b/scheduler_algorithm.md index ab9be4a8..26658f3f 100755 --- a/scheduler_algorithm.md +++ b/scheduler_algorithm.md @@ -40,7 +40,7 @@ For each unscheduled Pod, the Kubernetes scheduler tries to find a node across t The purpose of filtering the nodes is to filter out the nodes that do not meet certain requirements of the Pod. For example, if the free resource on a node (measured by the capacity minus the sum of the resource requests of all the Pods that already run on the node) is less than the Pod's required resource, the node should not be considered in the ranking phase so it is filtered out. Currently, there are several "predicates" implementing different filtering policies, including: -- `NoDiskConflict`: Evaluate if a pod can fit due to the volumes it requests, and those that are already mounted. +- `NoDiskConflict`: Evaluate if a pod can fit due to the volumes it requests, and those that are already mounted. Currently supported volumes are: AWS EBS, GCE PD, and Ceph RBD. Only Persistent Volume Claims for those supported types are checked. Persistent Volumes added directly to pods are not evaluated and are not constrained by this policy. - `NoVolumeZoneConflict`: Evaluate if the volumes a pod requests are available on the node, given the Zone restrictions. 
- `PodFitsResources`: Check if the free resource (CPU and Memory) meets the requirement of the Pod. The free resource is measured by the capacity minus the sum of requests of all Pods on the node. To learn more about the resource QoS in Kubernetes, please check [QoS proposal](../design/resource-qos.md). - `PodFitsHostPorts`: Check if any HostPort required by the Pod is already occupied on the node. -- cgit v1.2.3 From 63b0ec4dffd79ce5f0262f25155e402c118372fe Mon Sep 17 00:00:00 2001 From: Xianglin Gao Date: Fri, 19 Aug 2016 10:41:08 +0800 Subject: fix typo Signed-off-by: Xianglin Gao --- api-conventions.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/api-conventions.md b/api-conventions.md index 5bc731be..201f6c03 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -1345,7 +1345,7 @@ encodes the stream before returning it to the client. Clients should use the SPDY protocols if their clients have native support, or WebSockets as a fallback. Note that WebSockets is susceptible to Head-of-Line -blocking and so clients must read and process each message sequentionally. In +blocking and so clients must read and process each message sequentially. In the future, an HTTP/2 implementation will be exposed that deprecates SPDY. 
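The `PodFitsResources` predicate described in the scheduler commit above reduces to simple arithmetic: a node passes the filter if its capacity, minus the sum of the requests of pods already on it, covers the new pod's request. A toy sketch of that check with invented numbers (illustration only, not scheduler code):

```sh
# free = capacity - sum(requests of pods already on the node); the numbers
# below are invented purely for illustration.
capacity=4000            # node CPU capacity, in millicores
existing="1500 1200"     # CPU requests of pods already scheduled here
request=1000             # CPU request of the pod being scheduled
used=0
for r in $existing; do used=$((used + r)); done
free=$((capacity - used))
if [ "$free" -ge "$request" ]; then
  echo "node passes the filter (${free}m free)"
else
  echo "node is filtered out (only ${free}m free)"
fi
# prints: node passes the filter (1300m free)
```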
-- cgit v1.2.3 From dfea332d506cbb3181b490715d1150e518ccf59b Mon Sep 17 00:00:00 2001 From: derekwaynecarr Date: Thu, 11 Aug 2016 17:06:06 -0400 Subject: Disable cgroups-per-qos flag until implementation is stabilized --- e2e-node-tests.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/e2e-node-tests.md b/e2e-node-tests.md index 3feaf761..44bd5f0c 100644 --- a/e2e-node-tests.md +++ b/e2e-node-tests.md @@ -242,6 +242,8 @@ make test_e2e_node TEST_ARGS="--disable-kubenet=false" # disable kubenet For testing with the QoS Cgroup Hierarchy enabled, you can pass --cgroups-per-qos flag as an argument into Ginkgo using TEST_ARGS +*Note: Disabled pending feature stabilization.* + ```sh make test_e2e_node TEST_ARGS="--cgroups-per-qos=true" ``` -- cgit v1.2.3 From e774267211edc2b5e9aa65a981464f24bbccfd76 Mon Sep 17 00:00:00 2001 From: Marie Shaw Date: Tue, 16 Aug 2016 15:53:26 -0700 Subject: Add gubernator.md and image files --- gubernator-images/filterpage.png | Bin 0 -> 408077 bytes gubernator-images/filterpage1.png | Bin 0 -> 375248 bytes gubernator-images/filterpage2.png | Bin 0 -> 372828 bytes gubernator-images/filterpage3.png | Bin 0 -> 362554 bytes gubernator-images/skipping1.png | Bin 0 -> 67007 bytes gubernator-images/skipping2.png | Bin 0 -> 114503 bytes gubernator-images/testfailures.png | Bin 0 -> 189178 bytes gubernator.md | 171 +++++++++++++++++++++++++++++++++++++ 8 files changed, 171 insertions(+) create mode 100644 gubernator-images/filterpage.png create mode 100644 gubernator-images/filterpage1.png create mode 100644 gubernator-images/filterpage2.png create mode 100644 gubernator-images/filterpage3.png create mode 100644 gubernator-images/skipping1.png create mode 100644 gubernator-images/skipping2.png create mode 100644 gubernator-images/testfailures.png create mode 100644 gubernator.md diff --git a/gubernator-images/filterpage.png b/gubernator-images/filterpage.png new file mode 100644 index 00000000..2d08bd8e Binary files /dev/null and 
b/gubernator-images/filterpage.png differ diff --git a/gubernator-images/filterpage1.png b/gubernator-images/filterpage1.png new file mode 100644 index 00000000..838cb0fa Binary files /dev/null and b/gubernator-images/filterpage1.png differ diff --git a/gubernator-images/filterpage2.png b/gubernator-images/filterpage2.png new file mode 100644 index 00000000..63da782e Binary files /dev/null and b/gubernator-images/filterpage2.png differ diff --git a/gubernator-images/filterpage3.png b/gubernator-images/filterpage3.png new file mode 100644 index 00000000..33066d78 Binary files /dev/null and b/gubernator-images/filterpage3.png differ diff --git a/gubernator-images/skipping1.png b/gubernator-images/skipping1.png new file mode 100644 index 00000000..a5dea440 Binary files /dev/null and b/gubernator-images/skipping1.png differ diff --git a/gubernator-images/skipping2.png b/gubernator-images/skipping2.png new file mode 100644 index 00000000..b133347e Binary files /dev/null and b/gubernator-images/skipping2.png differ diff --git a/gubernator-images/testfailures.png b/gubernator-images/testfailures.png new file mode 100644 index 00000000..1b331248 Binary files /dev/null and b/gubernator-images/testfailures.png differ diff --git a/gubernator.md b/gubernator.md new file mode 100644 index 00000000..f7ac0318 --- /dev/null +++ b/gubernator.md @@ -0,0 +1,171 @@ + + + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

PLEASE NOTE: This document applies to the HEAD of the source tree

+
+If you are using a released version of Kubernetes, you should
+refer to the docs that go with that version.
+
+Documentation for other releases can be found at
+[releases.k8s.io](http://releases.k8s.io).
+
+--
+
+
+
+
+
+# Gubernator
+
+*This document is oriented at developers who want to use Gubernator to debug while developing for Kubernetes.*
+
+
+
+- [Gubernator](#gubernator)
+  - [What is Gubernator?](#what-is-gubernator)
+  - [Gubernator Features](#gubernator-features)
+    - [Test Failures list](#test-failures-list)
+    - [Log Filtering](#log-filtering)
+  - [Gubernator for Local Tests](#gubernator-for-local-tests)
+  - [Future Work](#future-work)
+
+
+
+## What is Gubernator?
+
+[Gubernator](https://k8s-gubernator.appspot.com/) is a webpage for viewing and filtering Kubernetes
+test results.
+
+Gubernator simplifies the debugging process and makes it easier to track down failures by automating many
+steps commonly taken in searching through logs, and by offering tools to filter through logs to find relevant lines.
+Gubernator automates the steps of finding the failed tests, displaying relevant logs, and determining the
+failed pods and the corresponding pod UID, namespace, and container ID.
+It also allows for filtering of the log files to display relevant lines based on selected keywords, and
+allows for multiple logs to be woven together by timestamp.
+
+Gubernator runs on Google App Engine and fetches logs stored on Google Cloud Storage.
+
+## Gubernator Features
+
+### Test Failures list
+
+Issues made by k8s-merge-robot will post a link to a page listing the failed tests.
+Each failed test comes with the corresponding error log from a junit file and a link
+to filter logs for that test.
+
+Based on the message logged in the junit file, the pod name may be displayed.
+
+![alt text](https://github.com/kubernetes/kubernetes/docs/devel/gubernator-images/testfailures.png)
+
+[Test Failures List Example](https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/11721)
+
+### Log Filtering
+
+The log filtering page comes with checkboxes and textboxes to aid in filtering. Filtered keywords will be bolded
+and lines including keywords will be highlighted. Up to four lines around the line of interest will also be displayed.
+
+![alt text](https://github.com/kubernetes/kubernetes/docs/devel/gubernator-images/filterpage.png)
+
+If fewer than 100 lines are skipped, the "... skipping xx lines ..." message can be clicked to expand and show
+the hidden lines.
+
+Before expansion:
+![alt text](https://github.com/kubernetes/kubernetes/docs/devel/gubernator-images/skipping1.png)
+After expansion:
+![alt text](https://github.com/kubernetes/kubernetes/docs/devel/gubernator-images/skipping2.png)
+
+If the pod name was displayed in the Test Failures list, it will automatically be included in the filters.
+If it is not found in the error message, it can be manually entered into the textbox. Once a pod name
+is entered, the Pod UID, Namespace, and ContainerID may be automatically filled in as well. These values
+can be edited if needed. To apply a filter, check off the options corresponding to it.
+
+![alt text](https://github.com/kubernetes/kubernetes/docs/devel/gubernator-images/filterpage1.png)
+
+To add a filter, type the term to be filtered into the textbox labeled "Add filter:" and press enter.
+Additional filters will be displayed as checkboxes under the textbox.
+
+![alt text](https://github.com/kubernetes/kubernetes/docs/devel/gubernator-images/filterpage3.png)
+
+To choose which logs to view, check off the checkboxes corresponding to the logs of interest. If multiple logs are
+included, the "Weave by timestamp" option can weave the selected logs together based on the timestamp in each line.
+
+![alt text](https://github.com/kubernetes/kubernetes/docs/devel/gubernator-images/filterpage2.png)
+
+[Log Filtering Example 1](https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubelet-gce-e2e-ci/5535/nodelog?pod=pod-configmaps-b5b876cb-3e1e-11e6-8956-42010af0001d&junit=junit_03.xml&wrap=on&logfiles=%2Fkubernetes-jenkins%2Flogs%2Fkubelet-gce-e2e-ci%2F5535%2Fartifacts%2Ftmp-node-e2e-7a5a3b40-e2e-node-coreos-stable20160622-image%2Fkube-apiserver.log&logfiles=%2Fkubernetes-jenkins%2Flogs%2Fkubelet-gce-e2e-ci%2F5535%2Fartifacts%2Ftmp-node-e2e-7a5a3b40-e2e-node-coreos-stable20160622-image%2Fkubelet.log&UID=on&poduid=b5b8a59e-3e1e-11e6-b358-42010af0001d&ns=e2e-tests-configmap-oi12h&cID=tmp-node-e2e-7a5a3b40-e2e-node-coreos-stable20160622-image)
+
+[Log Filtering Example 2](https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/11721/nodelog?pod=client-containers-a53f813c-503e-11e6-88dd-0242ac110003&junit=junit_19.xml&wrap=on)
+
+
+### Gubernator for Local Tests
+
+*Currently Gubernator can only be used with remote node e2e tests.*
+
+**NOTE: Using Gubernator with local tests will publicly upload your test logs to Google Cloud Storage**
+
+To use Gubernator to view logs from local test runs, set the GUBERNATOR tag to true.
+A URL link to view the test results will be printed to the console.
+Please note that running with the Gubernator tag will bypass the user confirmation for uploading to GCS.
+
+```console
+
+$ make test-e2e-node REMOTE=true GUBERNATOR=true
+...
+================================================================
+Running gubernator.sh
+
+Gubernator linked below:
+k8s-gubernator.appspot.com/build/yourusername-g8r-logs/logs/e2e-node/timestamp
+```
+
+The gubernator.sh script can be run after running a remote node e2e test for the same effect.
+
+```console
+$ ./test/e2e_node/gubernator.sh
+Do you want to run gubernator.sh and upload logs publicly to GCS? [y/n]y
+...
+Gubernator linked below: +k8s-gubernator.appspot.com/build/yourusername-g8r-logs/logs/e2e-node/timestamp +``` + +## Future Work + +Gubernator provides a framework for debugging failures and introduces useful features. +There is still a lot of room for more features and growth to make the debugging process more efficient. + +How to contribute (see https://github.com/kubernetes/test-infra/blob/master/gubernator/README.md) + +* Extend GUBERNATOR flag to all local tests + +* More accurate identification of pod name, container ID, etc. + * Change content of logged strings for failures to include more information + * Better regex in Gubernator + +* Automate discovery of more keywords + * Volume Name + * Disk Name + * Pod IP + +* Clickable API objects in the displayed lines in order to add them as filters + +* Construct story of pod's lifetime + * Have concise view of what a pod went through from when pod was started to failure + +* Improve UI + * Have separate folders of logs in rows instead of in one long column + * Improve interface for adding additional features (maybe instead of textbox and checkbox, have chips) + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/gubernator.md?pixel)]() + -- cgit v1.2.3 From c622a0863d96cb407d1013c85d19803462a24fe2 Mon Sep 17 00:00:00 2001 From: Erick Fejta Date: Thu, 18 Aug 2016 15:22:30 -0700 Subject: Delete deprecated dockerized-e2e-runner.sh --- development.md | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/development.md b/development.md index a6b57631..eea858ff 100644 --- a/development.md +++ b/development.md @@ -142,9 +142,8 @@ Since kubernetes is mostly built and tested in containers, there are a few unique places you need to update the go version. - The image for cross compiling in [build/build-image/cross/](../../build/build-image/cross/). The `VERSION` file and `Dockerfile`. -- The jenkins test-image in - [hack/jenkins/test-image/](../../hack/jenkins/test-image/). 
The `Dockerfile` and `Makefile`. -- The docker image being run in [hack/jenkins/dockerized-e2e-runner.sh](../../hack/jenkins/dockerized-e2e-runner.sh) and [hack/jenkins/gotest-dockerized.sh](../../hack/jenkins/gotest-dockerized.sh). +- Update [dockerized-e2e-runner.sh](https://github.com/kubernetes/test-infra/blob/master/jenkins/dockerized-e2e-runner.sh) to run a kubekins-e2e with the desired go version, which requires pushing [e2e-image](../../hack/jenkins/e2e-image/) and [test-image](../../hack/jenkins/test-image/) images that are `FROM` the desired go version. +- The docker image being run in [hack/jenkins/gotest-dockerized.sh](../../hack/jenkins/gotest-dockerized.sh). - The cross tag `KUBE_BUILD_IMAGE_CROSS_TAG` in [build/common.sh](../../build/common.sh) ## Workflow -- cgit v1.2.3 From dac646732fbd3bdccd90480db85317efe3ef414e Mon Sep 17 00:00:00 2001 From: Marie Shaw Date: Fri, 19 Aug 2016 15:33:14 -0700 Subject: Fix images links --- gubernator.md | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/gubernator.md b/gubernator.md index f7ac0318..03bdaf93 100644 --- a/gubernator.md +++ b/gubernator.md @@ -67,7 +67,7 @@ to filter logs for that test. Based on the message logged in the junit file, the pod name may be displayed. -![alt text](https://github.com/kubernetes/kubernetes/docs/devel/gubernator-images/testfailures.png) +![alt text](gubernator-images/testfailures.png) [Test Failures List Example](https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/11721) @@ -76,32 +76,32 @@ Based on the message logged in the junit file, the pod name may be displayed. The log filtering page comes with checkboxes and textboxes to aid in filtering. Filtered keywords will be bolded and lines including keywords will be highlighted. Up to four lines around the line of interest will also be displayed. 
-![alt text](https://github.com/kubernetes/kubernetes/docs/devel/gubernator-images/filterpage.png)
+![alt text](gubernator-images/filterpage.png)
 
 If fewer than 100 lines are skipped, the "... skipping xx lines ..." message can be clicked to expand and show
 the hidden lines.
 
 Before expansion:
-![alt text](https://github.com/kubernetes/kubernetes/docs/devel/gubernator-images/skipping1.png)
+![alt text](gubernator-images/skipping1.png)
 After expansion:
-![alt text](https://github.com/kubernetes/kubernetes/docs/devel/gubernator-images/skipping2.png)
+![alt text](gubernator-images/skipping2.png)
 
 If the pod name was displayed in the Test Failures list, it will automatically be included in the filters.
 If it is not found in the error message, it can be manually entered into the textbox. Once a pod name
 is entered, the Pod UID, Namespace, and ContainerID may be automatically filled in as well. These values
 can also be edited. To apply a filter, check off its corresponding option.
 
-![alt text](https://github.com/kubernetes/kubernetes/docs/devel/gubernator-images/filterpage1.png)
+![alt text](gubernator-images/filterpage1.png)
 
 To add a filter, type the term to be filtered into the textbox labeled "Add filter:" and press enter.
 Additional filters will be displayed as checkboxes under the textbox.
 
-![alt text](https://github.com/kubernetes/kubernetes/docs/devel/gubernator-images/filterpage3.png)
+![alt text](gubernator-images/filterpage3.png)
 
 To choose which logs to view, check off the checkboxes corresponding to the logs of interest. If multiple logs are
 included, the "Weave by timestamp" option can weave the selected logs together based on the timestamp in each line.
-![alt text](https://github.com/kubernetes/kubernetes/docs/devel/gubernator-images/filterpage2.png) +![alt text](gubernator-images/filterpage2.png) [Log Filtering Example 1](https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubelet-gce-e2e-ci/5535/nodelog?pod=pod-configmaps-b5b876cb-3e1e-11e6-8956-42010af0001d&junit=junit_03.xml&wrap=on&logfiles=%2Fkubernetes-jenkins%2Flogs%2Fkubelet-gce-e2e-ci%2F5535%2Fartifacts%2Ftmp-node-e2e-7a5a3b40-e2e-node-coreos-stable20160622-image%2Fkube-apiserver.log&logfiles=%2Fkubernetes-jenkins%2Flogs%2Fkubelet-gce-e2e-ci%2F5535%2Fartifacts%2Ftmp-node-e2e-7a5a3b40-e2e-node-coreos-stable20160622-image%2Fkubelet.log&UID=on&poduid=b5b8a59e-3e1e-11e6-b358-42010af0001d&ns=e2e-tests-configmap-oi12h&cID=tmp-node-e2e-7a5a3b40-e2e-node-coreos-stable20160622-image) -- cgit v1.2.3 From 988826d8aa213a332387293d3ce7861f1f401453 Mon Sep 17 00:00:00 2001 From: Tamer Tas Date: Sat, 20 Aug 2016 10:11:59 +0300 Subject: docs/devel: document the behavior of github UI for PRs --- development.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/development.md b/development.md index eea858ff..54b94e8a 100644 --- a/development.md +++ b/development.md @@ -225,6 +225,8 @@ git push -f origin my-feature 2. Click the "Compare & pull request" button next to your "my-feature" branch. 3. Check out the pull request [process](pull-requests.md) for more details +**Note:** If you have write access, please refrain from using the GitHub UI for creating PRs, because GitHub will create the PR branch inside the main repository rather than inside your fork. + ### When to retain commits and when to squash Upon merge, all git commits should represent meaningful milestones or units of -- cgit v1.2.3 From c586e0d8089d7260aced318b17771473ce2987b0 Mon Sep 17 00:00:00 2001 From: "Dr. 
Stefan Schimanski" Date: Mon, 22 Aug 2016 13:07:16 +0200 Subject: Add some docs about the missing node e2e scheduling --- e2e-node-tests.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/e2e-node-tests.md b/e2e-node-tests.md index 3feaf761..de993b8c 100644 --- a/e2e-node-tests.md +++ b/e2e-node-tests.md @@ -42,6 +42,8 @@ Node e2e tests are run as both pre- and post- submit tests by the Kubernetes pro *Note: Linux only. Mac and Windows unsupported.* +*Note: There is no scheduler running. The e2e tests have to do manual scheduling, e.g. by using `framework.PodClient`.* + # Running tests ## Locally -- cgit v1.2.3 From d7f26f7b131dab35fd910c51421dcc95760fc14a Mon Sep 17 00:00:00 2001 From: Bart Van Bos Date: Tue, 23 Aug 2016 14:10:43 +0200 Subject: Add go-bindata as development dependency --- development.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/development.md b/development.md index 54b94e8a..c6f9ebcc 100644 --- a/development.md +++ b/development.md @@ -90,10 +90,11 @@ source control system). Use `apt-get install mercurial` or `yum install mercurial` on Linux, or [brew.sh](http://brew.sh) on OS X, or download directly from mercurial. -Install godep (may require sudo): +Install godep and go-bindata (may require sudo): ```sh go get -u github.com/tools/godep +go get -u github.com/jteeuwen/go-bindata/go-bindata ``` Note: -- cgit v1.2.3 From 5d02f4bf1bd40ae447650abaa0af4cb1addac45f Mon Sep 17 00:00:00 2001 From: Erick Fejta Date: Tue, 23 Aug 2016 17:39:41 -0700 Subject: Moved runner to test-infra --- development.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/development.md b/development.md index 54b94e8a..d6e58026 100644 --- a/development.md +++ b/development.md @@ -142,7 +142,7 @@ Since kubernetes is mostly built and tested in containers, there are a few unique places you need to update the go version. - The image for cross compiling in [build/build-image/cross/](../../build/build-image/cross/). 
The `VERSION` file and `Dockerfile`. -- Update [dockerized-e2e-runner.sh](https://github.com/kubernetes/test-infra/blob/master/jenkins/dockerized-e2e-runner.sh) to run a kubekins-e2e with the desired go version, which requires pushing [e2e-image](../../hack/jenkins/e2e-image/) and [test-image](../../hack/jenkins/test-image/) images that are `FROM` the desired go version. +- Update [dockerized-e2e-runner.sh](https://github.com/kubernetes/test-infra/blob/master/jenkins/dockerized-e2e-runner.sh) to run a kubekins-e2e with the desired go version, which requires pushing [e2e-image](https://github.com/kubernetes/test-infra/tree/master/jenkins/e2e-image) and [test-image](https://github.com/kubernetes/test-infra/tree/master/jenkins/test-image) images that are `FROM` the desired go version. - The docker image being run in [hack/jenkins/gotest-dockerized.sh](../../hack/jenkins/gotest-dockerized.sh). - The cross tag `KUBE_BUILD_IMAGE_CROSS_TAG` in [build/common.sh](../../build/common.sh) -- cgit v1.2.3 From 33dbe1a2592659bb650feda4cab4eda2826852df Mon Sep 17 00:00:00 2001 From: Jeff Mendoza Date: Mon, 15 Aug 2016 13:04:34 -0700 Subject: Removed non-md files from docs. Moved doc yamls to test/fixtures. Most of the contents of docs/ has moved to kubernetes.github.io. Development of the docs and accompanying files has continued there, making the copies in this repo stale. I've removed everything but the .md files which remain to redirect old links. The .yaml config files in the docs were used by some tests, these have been moved to test/fixtures/doc-yaml, and can remain there to be used by tests or other purposes. 
--- how-to-doc.md | 16 ++++++++-------- local-cluster/docker.md | 2 +- local-cluster/k8s-singlenode-docker.png | Bin 0 -> 31801 bytes local-cluster/local.md | 2 +- running-locally.md | 2 +- 5 files changed, 11 insertions(+), 11 deletions(-) create mode 100644 local-cluster/k8s-singlenode-docker.png diff --git a/how-to-doc.md b/how-to-doc.md index e0659339..99569426 100644 --- a/how-to-doc.md +++ b/how-to-doc.md @@ -108,7 +108,7 @@ The above example generates the following links: ## How to Include an Example While writing examples, you may want to show the content of certain example -files (e.g. [pod.yaml](../user-guide/pod.yaml)). In this case, insert the +files (e.g. [pod.yaml](../../test/fixtures/doc-yaml/user-guide/pod.yaml)). In this case, insert the following code in the md file: ``` @@ -125,13 +125,13 @@ out-of-date every time you update the example file. For example, the following: ``` - - + + ``` generates the following after `hack/update-munge-docs.sh`: - + ```yaml apiVersion: v1 @@ -148,8 +148,8 @@ spec: - containerPort: 80 ``` -[Download example](../user-guide/pod.yaml?raw=true) - +[Download example](../../test/fixtures/doc-yaml/user-guide/pod.yaml?raw=true) + ## Misc. 
@@ -170,7 +170,7 @@ console code block: ``` ```console -$ kubectl create -f docs/user-guide/pod.yaml +$ kubectl create -f test/fixtures/doc-yaml/user-guide/pod.yaml pod "foo" created ```  @@ -179,7 +179,7 @@ pod "foo" created which renders as: ```console -$ kubectl create -f docs/user-guide/pod.yaml +$ kubectl create -f test/fixtures/doc-yaml/user-guide/pod.yaml pod "foo" created ``` diff --git a/local-cluster/docker.md b/local-cluster/docker.md index b69121e8..38550e9f 100644 --- a/local-cluster/docker.md +++ b/local-cluster/docker.md @@ -34,7 +34,7 @@ The following instructions show you how to set up a simple, single node Kubernet Here's a diagram of what the final result will look like: -![Kubernetes Single Node on Docker](../../getting-started-guides/k8s-singlenode-docker.png) +![Kubernetes Single Node on Docker](k8s-singlenode-docker.png) ## Prerequisites diff --git a/local-cluster/k8s-singlenode-docker.png b/local-cluster/k8s-singlenode-docker.png new file mode 100644 index 00000000..5ebf8126 Binary files /dev/null and b/local-cluster/k8s-singlenode-docker.png differ diff --git a/local-cluster/local.md b/local-cluster/local.md index 23e2156f..8c22a7d1 100644 --- a/local-cluster/local.md +++ b/local-cluster/local.md @@ -109,7 +109,7 @@ However you cannot view the nginx start page on localhost. To verify that nginx You can control the specifications of a pod via a user defined manifest, and reach nginx through your browser on the port specified therein: ```shell -cluster/kubectl.sh create -f docs/user-guide/pod.yaml +cluster/kubectl.sh create -f test/fixtures/doc-yaml/user-guide/pod.yaml ``` Congratulations! diff --git a/running-locally.md b/running-locally.md index 2b92bb32..4bf86c1b 100644 --- a/running-locally.md +++ b/running-locally.md @@ -143,7 +143,7 @@ However you cannot view the nginx start page on localhost. 
To verify that nginx You can control the specifications of a pod via a user defined manifest, and reach nginx through your browser on the port specified therein: ```sh -cluster/kubectl.sh create -f docs/user-guide/pod.yaml +cluster/kubectl.sh create -f test/fixtures/doc-yaml/user-guide/pod.yaml ``` Congratulations! -- cgit v1.2.3 From eac888c221453fadcb9e84b08d4d153e8edd0894 Mon Sep 17 00:00:00 2001 From: Chao Xu Date: Thu, 25 Aug 2016 15:40:03 -0700 Subject: update readme --- generating-clientset.md | 16 ++++++++++------ 1 file changed, 10 insertions(+), 6 deletions(-) diff --git a/generating-clientset.md b/generating-clientset.md index 4fd3044c..35d04d74 100644 --- a/generating-clientset.md +++ b/generating-clientset.md @@ -62,12 +62,16 @@ will generate a clientset named "my_release" which includes clients for api/v1 objects and extensions/v1beta1 objects. You can run `$ client-gen --help` to see other command line arguments. -- Adding expansion methods: client-gen only generates the common methods, such -as `Create()` and `Delete()`. You can manually add additional methods through -the expansion interface. For example, this -[file](../../pkg/client/clientset_generated/release_1_2/typed/core/v1/pod_expansion.go) -adds additional methods to Pod's client. As a convention, we put the expansion -interface and its methods in file ${TYPE}_expansion.go. +- ***Adding expansion methods***: client-gen only generates the common methods, + such as `Create()` and `Delete()`. You can manually add additional methods + through the expansion interface. For example, this + [file](../../pkg/client/clientset_generated/release_1_4/typed/core/v1/pod_expansion.go) + adds additional methods to Pod's client. As a convention, we put the expansion + interface and its methods in file ${TYPE}_expansion.go. In most cases, you + don't want to remove existing expansion files. 
So to make life easier, + instead of creating a new clientset from scratch, ***you can copy and rename an + existing clientset (so that all the expansion files are copied)***, and then run + client-gen. - Generating fake clients for testing purposes: client-gen will generate a fake clientset if the command line argument `--fake-clientset` is set. The fake -- cgit v1.2.3 From 059bfae6141d3c206f7571f7067b66057e9b9e6c Mon Sep 17 00:00:00 2001 From: Erick Fejta Date: Tue, 9 Aug 2016 16:22:47 -0700 Subject: Convert bool to error, helper func for cd to skew --- e2e-tests.md | 3 --- 1 file changed, 3 deletions(-) diff --git a/e2e-tests.md b/e2e-tests.md index da0b2b3f..76becd54 100644 --- a/e2e-tests.md +++ b/e2e-tests.md @@ -124,9 +124,6 @@ go run hack/e2e.go -v --build # Create a fresh cluster. Deletes a cluster first, if it exists go run hack/e2e.go -v --up -# Test if a cluster is up. -go run hack/e2e.go -v --isup - # Push code to an existing cluster go run hack/e2e.go -v --push -- cgit v1.2.3 From b87dcb359c6a3da36ac4d81ebae57ef75c393352 Mon Sep 17 00:00:00 2001 From: Chao Xu Date: Tue, 30 Aug 2016 15:16:41 -0700 Subject: update doc --- client-libraries.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/client-libraries.md b/client-libraries.md index 7229b7f1..354945f6 100644 --- a/client-libraries.md +++ b/client-libraries.md @@ -36,7 +36,7 @@ Documentation for other releases can be found at ### Supported - * [Go](http://releases.k8s.io/HEAD/pkg/client/) + * [Go](https://github.com/kubernetes/client-go) ### User Contributed -- cgit v1.2.3 From 10555cb427c0457ca07f66d43daae970ab321630 Mon Sep 17 00:00:00 2001 From: jay vyas Date: Mon, 29 Aug 2016 10:35:15 -0400 Subject: Ascii-gram for scheduler, more structured doc --- scheduler.md | 68 ++++++++++++++++++++++++++++++++++++++++-------------------- 1 file changed, 45 insertions(+), 23 deletions(-) diff --git a/scheduler.md b/scheduler.md index 302ec144..f699dabc 100755 --- a/scheduler.md +++ 
b/scheduler.md
@@ -41,45 +41,67 @@ indicating where the Pod should be scheduled.
 
 ## The scheduling process
 
-The scheduler tries to find a node for each Pod, one at a time, as it notices
-these Pods via watch. There are three steps. First it applies a set of "predicates" that filter out
-inappropriate nodes. For example, if the PodSpec specifies resource requests, then the scheduler
-will filter out nodes that don't have at least that much resources available (computed
-as the capacity of the node minus the sum of the resource requests of the containers that
-are already running on the node). Second, it applies a set of "priority functions"
-that rank the nodes that weren't filtered out by the predicate check. For example,
-it tries to spread Pods across nodes and zones while at the same time favoring the least-loaded
-nodes (where "load" here is sum of the resource requests of the containers running on the node,
+```
+                               +-------+
+   +-------------------------->+ node 1|
+   |                           +-------+
+   |
+   +-----> | Apply pred. filters
+   |       |
+   |       |                   +-------+
+   |       +------------------>+node 2 |
+   |       |                   +--+----+
+   | watch |                      |
+   |       |                      |     +------+
+   |       +--------------------------->+node 3|
++--+---------------+              |     +--+---+
+| Pods in apiserver|              |        |
++------------------+              |        |
+                                  |        |
+                                  |        |
+                 +----------------V--------v---+
+                 |      Priority function      |
+                 +------------+----------------+
+                              |
+                              |  node 1: p=2
+                              |  node 2: p=5
+                              v
+                 select max{node priority} = node 2
+
+```
+
+The scheduler tries to find a node for each Pod, one at a time; it notices Pods via watch.
+- First it applies a set of "predicates" to filter out inappropriate nodes. For example, if the PodSpec specifies resource requests, then the scheduler will filter out nodes that don't have at least that much resources available (computed as the capacity of the node minus the sum of the resource requests of the containers that are already running on the node).
+- Second, it applies a set of "priority functions"
+that rank the nodes that weren't filtered out by the predicate check. For example, it tries to spread Pods across nodes and zones while at the same time favoring the least-loaded nodes (where "load" here is the sum of the resource requests of the containers running on the node,
 divided by the node's capacity).
-Finally, the node with the highest priority is chosen
-(or, if there are multiple such nodes, then one of them is chosen at random). The code
-for this main scheduling loop is in the function `Schedule()` in
-[plugin/pkg/scheduler/generic_scheduler.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/generic_scheduler.go)
+- Finally, the node with the highest priority is chosen (or, if there are multiple such nodes, then one of them is chosen at random). The code for this main scheduling loop is in the function `Schedule()` in [plugin/pkg/scheduler/generic_scheduler.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/generic_scheduler.go)
 
 ## Scheduler extensibility
 
 The scheduler is extensible: the cluster administrator can choose which of the pre-defined
-scheduling policies to apply, and can add new ones. The built-in predicates and priorities are
+scheduling policies to apply, and can add new ones.
+
+### Policies: Predicates + Priorities
+
+The built-in predicates and priorities are
 defined in
 [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/predicates/predicates.go) and
 [plugin/pkg/scheduler/algorithm/priorities/priorities.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/priorities/priorities.go), respectively.
+
+### Modifying policies
+
 The policies that are applied when scheduling can be chosen in one of two ways.
Normally, the policies used are selected by the functions `defaultPredicates()` and `defaultPriorities()` in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). -However, the choice of policies -can be overridden by passing the command-line flag `--policy-config-file` to the scheduler, pointing to a JSON -file specifying which scheduling policies to use. See -[examples/scheduler-policy-config.json](../../examples/scheduler-policy-config.json) for an example -config file. (Note that the config file format is versioned; the API is defined in -[plugin/pkg/scheduler/api](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/api/)). -Thus to add a new scheduling policy, you should modify predicates.go or priorities.go, -and either register the policy in `defaultPredicates()` or `defaultPriorities()`, or use a policy config file. +However, the choice of policies can be overridden by passing the command-line flag `--policy-config-file` to the scheduler, pointing to a JSON file specifying which scheduling policies to use. See [examples/scheduler-policy-config.json](../../examples/scheduler-policy-config.json) for an example +config file. (Note that the config file format is versioned; the API is defined in [plugin/pkg/scheduler/api](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/api/)). +Thus to add a new scheduling policy, you should modify predicates.go or priorities.go, and either register the policy in `defaultPredicates()` or `defaultPriorities()`, or use a policy config file. 
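For concreteness, a minimal policy config file of the sort passed via `--policy-config-file` might look like the sketch below. The predicate and priority names here are illustrative and must match policies actually registered in predicates.go and priorities.go:

```json
{
  "kind": "Policy",
  "apiVersion": "v1",
  "predicates": [
    {"name": "PodFitsResources"},
    {"name": "NoDiskConflict"},
    {"name": "MatchNodeSelector"},
    {"name": "HostName"}
  ],
  "priorities": [
    {"name": "LeastRequestedPriority", "weight": 1},
    {"name": "ServiceSpreadingPriority", "weight": 1}
  ]
}
```

Omitting an entry from either list disables that predicate or priority function, and the `weight` fields control the relative influence of each priority function on the final ranking.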
## Exploring the code If you want to get a global picture of how the scheduler works, you can start in [plugin/cmd/kube-scheduler/app/server.go](http://releases.k8s.io/HEAD/plugin/cmd/kube-scheduler/app/server.go) - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/scheduler.md?pixel)]() -- cgit v1.2.3 From e9a9f3add1fa4091681f38167423b5ce4b2a8d7b Mon Sep 17 00:00:00 2001 From: Mike Danese Date: Thu, 1 Sep 2016 11:24:46 -0700 Subject: fix broken link in docs --- coding-conventions.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/coding-conventions.md b/coding-conventions.md index b551c032..dbc717e0 100644 --- a/coding-conventions.md +++ b/coding-conventions.md @@ -51,7 +51,7 @@ Updated: 5/3/2016 - Bash - - https://google-styleguide.googlecode.com/svn/trunk/shell.xml + - https://google.github.io/styleguide/shell.xml - Ensure that build, release, test, and cluster-management scripts run on OS X -- cgit v1.2.3 From 9975514540d8e3dbaed5910a2638352e29eaf841 Mon Sep 17 00:00:00 2001 From: David McMahon Date: Thu, 1 Sep 2016 14:40:55 -0700 Subject: Update the latestReleaseBranch to release-1.4 in the munger. 
--- README.md | 2 +- adding-an-APIGroup.md | 2 +- api-conventions.md | 2 +- api_changes.md | 2 +- automation.md | 2 +- cherry-picks.md | 2 +- cli-roadmap.md | 2 +- client-libraries.md | 2 +- coding-conventions.md | 2 +- collab.md | 2 +- developer-guides/vagrant.md | 2 +- development.md | 2 +- e2e-node-tests.md | 2 +- e2e-tests.md | 2 +- faster_reviews.md | 2 +- flaky-tests.md | 2 +- generating-clientset.md | 2 +- getting-builds.md | 2 +- go-code.md | 5 +++++ godep.md | 5 +++++ gubernator.md | 5 +++++ instrumentation.md | 2 +- issues.md | 2 +- kubectl-conventions.md | 2 +- kubemark-guide.md | 2 +- local-cluster/docker.md | 5 +++++ local-cluster/local.md | 5 +++++ local-cluster/vagrant.md | 5 +++++ logging.md | 2 +- making-release-notes.md | 2 +- mesos-style.md | 2 +- node-performance-testing.md | 2 +- on-call-build-cop.md | 2 +- on-call-rotations.md | 2 +- on-call-user-support.md | 2 +- owners.md | 2 +- profiling.md | 2 +- pull-requests.md | 2 +- releasing.md | 2 +- running-locally.md | 2 +- scheduler.md | 2 +- scheduler_algorithm.md | 2 +- testing.md | 2 +- update-release-docs.md | 2 +- updating-docs-for-feature-changes.md | 2 +- writing-a-getting-started-guide.md | 2 +- writing-good-e2e-tests.md | 2 +- 47 files changed, 71 insertions(+), 41 deletions(-) diff --git a/README.md b/README.md index 1b140418..bf28c603 100644 --- a/README.md +++ b/README.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/README.md). +[here](http://releases.k8s.io/release-1.4/docs/devel/README.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/adding-an-APIGroup.md b/adding-an-APIGroup.md index cac04449..4b87ccf3 100644 --- a/adding-an-APIGroup.md +++ b/adding-an-APIGroup.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. 
The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/adding-an-APIGroup.md). +[here](http://releases.k8s.io/release-1.4/docs/devel/adding-an-APIGroup.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/api-conventions.md b/api-conventions.md index 201f6c03..4a9c6fc1 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/api-conventions.md). +[here](http://releases.k8s.io/release-1.4/docs/devel/api-conventions.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/api_changes.md b/api_changes.md index cd19c55d..0b0b7987 100755 --- a/api_changes.md +++ b/api_changes.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/api_changes.md). +[here](http://releases.k8s.io/release-1.4/docs/devel/api_changes.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/automation.md b/automation.md index 4161bc35..580606e4 100644 --- a/automation.md +++ b/automation.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/automation.md). +[here](http://releases.k8s.io/release-1.4/docs/devel/automation.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/cherry-picks.md b/cherry-picks.md index 93bef70c..ef2cee70 100644 --- a/cherry-picks.md +++ b/cherry-picks.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. 
The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/cherry-picks.md). +[here](http://releases.k8s.io/release-1.4/docs/devel/cherry-picks.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/cli-roadmap.md b/cli-roadmap.md index 015f20f0..7fce10ba 100644 --- a/cli-roadmap.md +++ b/cli-roadmap.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/cli-roadmap.md). +[here](http://releases.k8s.io/release-1.4/docs/devel/cli-roadmap.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/client-libraries.md b/client-libraries.md index 354945f6..868f8363 100644 --- a/client-libraries.md +++ b/client-libraries.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/client-libraries.md). +[here](http://releases.k8s.io/release-1.4/docs/devel/client-libraries.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/coding-conventions.md b/coding-conventions.md index dbc717e0..df66a96e 100644 --- a/coding-conventions.md +++ b/coding-conventions.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/coding-conventions.md). +[here](http://releases.k8s.io/release-1.4/docs/devel/coding-conventions.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/collab.md b/collab.md index 002e9cc5..e11b544f 100644 --- a/collab.md +++ b/collab.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. 
The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/collab.md). +[here](http://releases.k8s.io/release-1.4/docs/devel/collab.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md index ef32f3f3..fe5bc6ea 100755 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/developer-guides/vagrant.md). +[here](http://releases.k8s.io/release-1.4/docs/devel/developer-guides/vagrant.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/development.md b/development.md index d6e58026..7150634d 100644 --- a/development.md +++ b/development.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/development.md). +[here](http://releases.k8s.io/release-1.4/docs/devel/development.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/e2e-node-tests.md b/e2e-node-tests.md index 44bd5f0c..25ba672c 100644 --- a/e2e-node-tests.md +++ b/e2e-node-tests.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/e2e-node-tests.md). +[here](http://releases.k8s.io/release-1.4/docs/devel/e2e-node-tests.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/e2e-tests.md b/e2e-tests.md index da0b2b3f..13132919 100644 --- a/e2e-tests.md +++ b/e2e-tests.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. 
The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/e2e-tests.md). +[here](http://releases.k8s.io/release-1.4/docs/devel/e2e-tests.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/faster_reviews.md b/faster_reviews.md index 984eecde..11fcbe72 100644 --- a/faster_reviews.md +++ b/faster_reviews.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/faster_reviews.md). +[here](http://releases.k8s.io/release-1.4/docs/devel/faster_reviews.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/flaky-tests.md b/flaky-tests.md index 1f742a46..22aa4c42 100644 --- a/flaky-tests.md +++ b/flaky-tests.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/flaky-tests.md). +[here](http://releases.k8s.io/release-1.4/docs/devel/flaky-tests.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/generating-clientset.md b/generating-clientset.md index 35d04d74..aa29f54b 100644 --- a/generating-clientset.md +++ b/generating-clientset.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/generating-clientset.md). +[here](http://releases.k8s.io/release-1.4/docs/devel/generating-clientset.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/getting-builds.md b/getting-builds.md index b1ae845f..b9d8c66e 100644 --- a/getting-builds.md +++ b/getting-builds.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. 
The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/getting-builds.md). +[here](http://releases.k8s.io/release-1.4/docs/devel/getting-builds.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/go-code.md b/go-code.md index e6416bed..695102ec 100644 --- a/go-code.md +++ b/go-code.md @@ -18,6 +18,11 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.4/docs/devel/go-code.md). + Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/godep.md b/godep.md index 6d0a2bb8..f746debb 100644 --- a/godep.md +++ b/godep.md @@ -18,6 +18,11 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.4/docs/devel/godep.md). + Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/gubernator.md b/gubernator.md index 03bdaf93..def20b5b 100644 --- a/gubernator.md +++ b/gubernator.md @@ -18,6 +18,11 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.4/docs/devel/gubernator.md). + Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/instrumentation.md b/instrumentation.md index a9e85691..b5677ad7 100644 --- a/instrumentation.md +++ b/instrumentation.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/instrumentation.md). 
+[here](http://releases.k8s.io/release-1.4/docs/devel/instrumentation.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/issues.md b/issues.md index 0cf4730d..4a4e2493 100644 --- a/issues.md +++ b/issues.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/issues.md). +[here](http://releases.k8s.io/release-1.4/docs/devel/issues.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/kubectl-conventions.md b/kubectl-conventions.md index 3e7e8803..29316407 100644 --- a/kubectl-conventions.md +++ b/kubectl-conventions.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/kubectl-conventions.md). +[here](http://releases.k8s.io/release-1.4/docs/devel/kubectl-conventions.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/kubemark-guide.md b/kubemark-guide.md index 8c18b2be..28ca49fd 100755 --- a/kubemark-guide.md +++ b/kubemark-guide.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/kubemark-guide.md). +[here](http://releases.k8s.io/release-1.4/docs/devel/kubemark-guide.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/local-cluster/docker.md b/local-cluster/docker.md index 38550e9f..6cdeb3c6 100644 --- a/local-cluster/docker.md +++ b/local-cluster/docker.md @@ -18,6 +18,11 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. 
+ + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.4/docs/devel/local-cluster/docker.md). + Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/local-cluster/local.md b/local-cluster/local.md index 8c22a7d1..1986346d 100644 --- a/local-cluster/local.md +++ b/local-cluster/local.md @@ -18,6 +18,11 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.4/docs/devel/local-cluster/local.md). + Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/local-cluster/vagrant.md b/local-cluster/vagrant.md index 47ac65e7..ffdbabe0 100644 --- a/local-cluster/vagrant.md +++ b/local-cluster/vagrant.md @@ -18,6 +18,11 @@ If you are using a released version of Kubernetes, you should refer to the docs that go with that version. + + +The latest release of this document can be found +[here](http://releases.k8s.io/release-1.4/docs/devel/local-cluster/vagrant.md). + Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/logging.md b/logging.md index 523a4ccf..71fa6c69 100644 --- a/logging.md +++ b/logging.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/logging.md). +[here](http://releases.k8s.io/release-1.4/docs/devel/logging.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/making-release-notes.md b/making-release-notes.md index 4a1a0693..bc51f22c 100644 --- a/making-release-notes.md +++ b/making-release-notes.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. 
The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/making-release-notes.md). +[here](http://releases.k8s.io/release-1.4/docs/devel/making-release-notes.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/mesos-style.md b/mesos-style.md index f614fea8..89a3e340 100644 --- a/mesos-style.md +++ b/mesos-style.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/mesos-style.md). +[here](http://releases.k8s.io/release-1.4/docs/devel/mesos-style.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/node-performance-testing.md b/node-performance-testing.md index 58dcfaee..9842e443 100644 --- a/node-performance-testing.md +++ b/node-performance-testing.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/node-performance-testing.md). +[here](http://releases.k8s.io/release-1.4/docs/devel/node-performance-testing.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/on-call-build-cop.md b/on-call-build-cop.md index f6479c8e..9cbea294 100644 --- a/on-call-build-cop.md +++ b/on-call-build-cop.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/on-call-build-cop.md). +[here](http://releases.k8s.io/release-1.4/docs/devel/on-call-build-cop.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). 
diff --git a/on-call-rotations.md b/on-call-rotations.md index 649a8853..8ff47dc5 100644 --- a/on-call-rotations.md +++ b/on-call-rotations.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/on-call-rotations.md). +[here](http://releases.k8s.io/release-1.4/docs/devel/on-call-rotations.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/on-call-user-support.md b/on-call-user-support.md index b365b6f0..5efaeec7 100644 --- a/on-call-user-support.md +++ b/on-call-user-support.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/on-call-user-support.md). +[here](http://releases.k8s.io/release-1.4/docs/devel/on-call-user-support.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/owners.md b/owners.md index 3b61766d..2f735e36 100644 --- a/owners.md +++ b/owners.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/owners.md). +[here](http://releases.k8s.io/release-1.4/docs/devel/owners.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/profiling.md b/profiling.md index c130da87..5786e005 100644 --- a/profiling.md +++ b/profiling.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/profiling.md). +[here](http://releases.k8s.io/release-1.4/docs/devel/profiling.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). 
diff --git a/pull-requests.md b/pull-requests.md index 7bc4d967..ae7c039f 100644 --- a/pull-requests.md +++ b/pull-requests.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/pull-requests.md). +[here](http://releases.k8s.io/release-1.4/docs/devel/pull-requests.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/releasing.md b/releasing.md index c195ee8e..72dad8b5 100644 --- a/releasing.md +++ b/releasing.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/releasing.md). +[here](http://releases.k8s.io/release-1.4/docs/devel/releasing.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/running-locally.md b/running-locally.md index 4bf86c1b..dc32a38f 100644 --- a/running-locally.md +++ b/running-locally.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/running-locally.md). +[here](http://releases.k8s.io/release-1.4/docs/devel/running-locally.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/scheduler.md b/scheduler.md index 302ec144..69a7c7a2 100755 --- a/scheduler.md +++ b/scheduler.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/scheduler.md). +[here](http://releases.k8s.io/release-1.4/docs/devel/scheduler.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). 
diff --git a/scheduler_algorithm.md b/scheduler_algorithm.md index 26658f3f..01f5df82 100755 --- a/scheduler_algorithm.md +++ b/scheduler_algorithm.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/scheduler_algorithm.md). +[here](http://releases.k8s.io/release-1.4/docs/devel/scheduler_algorithm.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/testing.md b/testing.md index 4995d689..ed0bdfa0 100644 --- a/testing.md +++ b/testing.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/testing.md). +[here](http://releases.k8s.io/release-1.4/docs/devel/testing.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/update-release-docs.md b/update-release-docs.md index 0fed8f22..215ba3f0 100644 --- a/update-release-docs.md +++ b/update-release-docs.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/update-release-docs.md). +[here](http://releases.k8s.io/release-1.4/docs/devel/update-release-docs.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/updating-docs-for-feature-changes.md b/updating-docs-for-feature-changes.md index 5975e428..5a22d780 100644 --- a/updating-docs-for-feature-changes.md +++ b/updating-docs-for-feature-changes.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/updating-docs-for-feature-changes.md). +[here](http://releases.k8s.io/release-1.4/docs/devel/updating-docs-for-feature-changes.md). 
Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/writing-a-getting-started-guide.md b/writing-a-getting-started-guide.md index 05b3d0c2..a227ae6f 100644 --- a/writing-a-getting-started-guide.md +++ b/writing-a-getting-started-guide.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/writing-a-getting-started-guide.md). +[here](http://releases.k8s.io/release-1.4/docs/devel/writing-a-getting-started-guide.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). diff --git a/writing-good-e2e-tests.md b/writing-good-e2e-tests.md index 70abfe1c..cfe8ff6a 100644 --- a/writing-good-e2e-tests.md +++ b/writing-good-e2e-tests.md @@ -21,7 +21,7 @@ refer to the docs that go with that version. The latest release of this document can be found -[here](http://releases.k8s.io/release-1.3/docs/devel/writing-good-e2e-tests.md). +[here](http://releases.k8s.io/release-1.4/docs/devel/writing-good-e2e-tests.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). -- cgit v1.2.3 From eda48ca12ea162ce27be2758c2fd95af7570aaee Mon Sep 17 00:00:00 2001 From: jayunit100 Date: Thu, 1 Sep 2016 13:59:20 -0400 Subject: Updated theoretical node commit, secondary improvement, separate commit --- scheduler.md | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/scheduler.md b/scheduler.md index f699dabc..f4321e68 100755 --- a/scheduler.md +++ b/scheduler.md @@ -70,11 +70,10 @@ indicating where the Pod should be scheduled. ``` -The Scheduler tries to find a node for each Pod, one at a time. Notices pods via watch. -- First it applies a set of "predicates" to filter out inappropriate nodes inappropriate nodes. 
For example, if the PodSpec specifies resource requests, then the scheduler will filter out nodes that don't have at least that much resources available (computed as the capacity of the node minus the sum of the resource requests of the containers that are already running on the node). +The Scheduler tries to find a node for each Pod, one at a time. +- First it applies a set of "predicates" to filter out inappropriate nodes. For example, if the PodSpec specifies resource requests, then the scheduler will filter out nodes that don't have at least that much resources available (computed as the capacity of the node minus the sum of the resource requests of the containers that are already running on the node). - Second, it applies a set of "priority functions" -that rank the nodes that weren't filtered out by the predicate check. For example, it tries to spread Pods across nodes and zones while at the same time favoring the least-loaded nodes (where "load" here is sum of the resource requests of the containers running on the node, -divided by the node's capacity). +that rank the nodes that weren't filtered out by the predicate check. For example, it tries to spread Pods across nodes and zones while at the same time favoring the least (theoretically) loaded nodes (where "load" - in theory - is measured as the sum of the resource requests of the containers running on the node, divided by the node's capacity). - Finally, the node with the highest priority is chosen (or, if there are multiple such nodes, then one of them is chosen at random). 
The code for this main scheduling loop is in the function `Schedule()` in
[plugin/pkg/scheduler/generic_scheduler.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/generic_scheduler.go)

## Scheduler extensibility
-- 
cgit v1.2.3


From 60b3b6930fd561d3db24580c89ccc610cabfb216 Mon Sep 17 00:00:00 2001
From: Jordan Liggitt
Date: Thu, 8 Sep 2016 16:21:58 -0400
Subject: Doc API group suffix, add test to catch new groups

---
 adding-an-APIGroup.md | 13 ++++++++-----
 api_changes.md        |  2 +-
 2 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/adding-an-APIGroup.md b/adding-an-APIGroup.md
index 4b87ccf3..f1bd182d 100644
--- a/adding-an-APIGroup.md
+++ b/adding-an-APIGroup.md
@@ -49,19 +49,19 @@ We plan on improving the way the types are factored in the future; see
 [#16062](https://github.com/kubernetes/kubernetes/pull/16062) for the directions
 in which this might evolve.
 
-1. Create a folder in pkg/apis to hold you group. Create types.go in
+1. Create a folder in pkg/apis to hold your group. Create types.go in
 pkg/apis/`<group>`/ and pkg/apis/`<group>`/`<version>`/ to define API objects
 in your group;
 
 2. Create pkg/apis/`<group>`/{register.go, `<version>`/register.go} to register
 this group's API objects to the encoding/decoding scheme (e.g.,
-[pkg/apis/extensions/register.go](../../pkg/apis/extensions/register.go) and
-[pkg/apis/extensions/v1beta1/register.go](../../pkg/apis/extensions/v1beta1/register.go);
+[pkg/apis/authentication/register.go](../../pkg/apis/authentication/register.go) and
+[pkg/apis/authentication/v1beta1/register.go](../../pkg/apis/authentication/v1beta1/register.go);
 
 3. Add a pkg/apis/`<group>`/install/install.go, which is responsible for adding
 the group to the `latest` package, so that other packages can access the group's
 meta through `latest.Group`. You probably only need to change the name of group
-and version in the [example](../../pkg/apis/extensions/install/install.go)). You
+and version in the [example](../../pkg/apis/authentication/install/install.go)).
You need to import this `install` package in {pkg/master,
pkg/client/unversioned}/import_known_versions.go, if you want to make your
group accessible to other packages in the kube-apiserver binary, binaries that
uses
@@ -83,7 +83,10 @@ cmd/libs/go2idl/ tool.
     with the comment `// +k8s:conversion-gen=`, to catch the attention of our
     generation tools. For most APIs the only target you need is
     `k8s.io/kubernetes/pkg/apis/<group>` (your internal API).
-  4. Run hack/update-all.sh.
+  3. Make sure your `pkg/apis/<group>` and `pkg/apis/<group>/<version>` directories
+     have a doc.go file with the comment `+groupName=<group>.k8s.io`, to correctly
+     generate the DNS-suffixed group name.
+  5. Run hack/update-all.sh.
 
 2. Generate files for Ugorji codec:
 
diff --git a/api_changes.md b/api_changes.md
index 0b0b7987..afdbaae7 100755
--- a/api_changes.md
+++ b/api_changes.md
@@ -519,7 +519,7 @@ hack/update-codecgen.sh
 This section is under construction, as we make the tooling completely generic.
 At the moment, you'll have to make a new directory under `pkg/apis/`; copy the
-directory structure from `pkg/apis/extensions`. Add the new group/version to all
+directory structure from `pkg/apis/authentication`. Add the new group/version to all
 of the `hack/{verify,update}-generated-{deep-copy,conversions,swagger}.sh`
 files in the appropriate places--it should just require adding your new
 group/version to a bash array. See [docs on adding an API group](adding-an-APIGroup.md) for
-- 
cgit v1.2.3


From 2e227a61d0fdfa9c8890759b0cdfbbc79fea27eb Mon Sep 17 00:00:00 2001
From: Matt Liggett
Date: Fri, 9 Sep 2016 15:56:54 -0700
Subject: clarify weekend responsibilities

---
 on-call-build-cop.md | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/on-call-build-cop.md b/on-call-build-cop.md
index 9cbea294..530d7230 100644
--- a/on-call-build-cop.md
+++ b/on-call-build-cop.md
@@ -139,7 +139,7 @@ or [flaky test builds](https://goto.google.com/k8s-test/view/Flaky/) are not
your responsibility to monitor.
The `Test owner:` in the job description will be automatically emailed if the
job is failing.

-* If you are a weekday oncall, ensure that PRs conforming to the following
+* If you are oncall, ensure that PRs conforming to the following
pre-requisites are being merged at a reasonable rate:

  * [Have been LGTMd](https://github.com/kubernetes/kubernetes/labels/lgtm)

  * Author has signed CLA if applicable.

-* If you are a weekend oncall, [never merge PRs manually](collab.md), instead
-add the label "lgtm" to the PRs once they have been LGTMd and passed Travis;
-this will cause merge-bot to merge them automatically (or make them easy to find
-by the next oncall, who will merge them).
+* Although the shift schedule shows you as being scheduled Monday to Monday,
+  working on the weekend is neither expected nor encouraged. Enjoy your time
+  off.

* When the build is broken, roll back the PRs responsible ASAP
-- 
cgit v1.2.3


From 33264e2765e6670c157578bb2659b08c23213eac Mon Sep 17 00:00:00 2001
From: David McMahon
Date: Thu, 14 Apr 2016 18:30:16 -0700
Subject: Deprecate kubernetes/kubenetes release infrastructure and doc.

---
 README.md               |   6 +-
 making-release-notes.md |  86 --------------
 releasing.md            | 309 ------------------------------------------
 3 files changed, 1 insertion(+), 400 deletions(-)
 delete mode 100644 making-release-notes.md
 delete mode 100644 releasing.md

diff --git a/README.md b/README.md
index bf28c603..c77aa2db 100644
--- a/README.md
+++ b/README.md
@@ -110,11 +110,7 @@ Guide](../admin/README.md).
 
 ## Building releases
 
-* **Making release notes** ([making-release-notes.md](making-release-notes.md)): Generating release notes for a new release.
-
-* **Releasing Kubernetes** ([releasing.md](releasing.md)): How to create a Kubernetes release (as in version)
-  and how the version information gets embedded into the built binaries.
- +See the [kubernetes/release](https://github.com/kubernetes/release) repository for details on creating releases and related tools and helper scripts. [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/README.md?pixel)]() diff --git a/making-release-notes.md b/making-release-notes.md deleted file mode 100644 index bc51f22c..00000000 --- a/making-release-notes.md +++ /dev/null @@ -1,86 +0,0 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

PLEASE NOTE: This document applies to the HEAD of the source tree

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/making-release-notes.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - -## Making release notes - -This documents the process for making release notes for a release. - -### 1) Note the PR number of the previous release - -Find the most-recent PR that was merged with the previous .0 release. Remember -this as $LASTPR. - -- _TODO_: Figure out a way to record this somewhere to save the next -release engineer time. - -Find the most-recent PR that was merged with the current .0 release. Remember -this as $CURRENTPR. - -### 2) Run the release-notes tool - -```bash -${KUBERNETES_ROOT}/build/make-release-notes.sh $LASTPR $CURRENTPR -``` - -### 3) Trim the release notes - -This generates a list of the entire set of PRs merged since the last minor -release. It is likely long and many PRs aren't worth mentioning. If any of the -PRs were cherrypicked into patches on the last minor release, you should exclude -them from the current release's notes. - -Open up `candidate-notes.md` in your favorite editor. - -Remove, regroup, organize to your hearts content. - - -### 4) Update CHANGELOG.md - -With the final markdown all set, cut and paste it to the top of `CHANGELOG.md` - -### 5) Update the Release page - - * Switch to the [releases](https://github.com/kubernetes/kubernetes/releases) -page. - - * Open up the release you are working on. - - * Cut and paste the final markdown from above into the release notes - - * Press Save. 
- - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/making-release-notes.md?pixel)]() - diff --git a/releasing.md b/releasing.md deleted file mode 100644 index 72dad8b5..00000000 --- a/releasing.md +++ /dev/null @@ -1,309 +0,0 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

PLEASE NOTE: This document applies to the HEAD of the source tree

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/releasing.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - -# Releasing Kubernetes - -This document explains how to cut a release, and the theory behind it. If you -just want to cut a release and move on with your life, you can stop reading -after the first section. - -## How to cut a Kubernetes release - -Regardless of whether you are cutting a major or minor version, cutting a -release breaks down into four pieces: - -1. selecting release components; -1. cutting/branching the release; -1. building and pushing the binaries; and -1. publishing binaries and release notes. -1. updating the master branch. - -You should progress in this strict order. - -### Selecting release components - -First, figure out what kind of release you're doing, what branch you're cutting -from, and other prerequisites. - -* Alpha releases (`vX.Y.0-alpha.W`) are cut directly from `master`. - * Alpha releases don't require anything besides green tests, (see below). -* Beta releases (`vX.Y.Z-beta.W`) are cut from their respective release branch, - `release-X.Y`. - * Make sure all necessary cherry picks have been resolved. You should ensure - that all outstanding cherry picks have been reviewed and merged and the - branch validated on Jenkins. See [Cherry Picks](cherry-picks.md) for more - information on how to manage cherry picks prior to cutting the release. - * Beta releases also require green tests, (see below). -* Official releases (`vX.Y.Z`) are cut from their respective release branch, - `release-X.Y`. 
- * Official releases should be similar or identical to their respective beta - releases, so have a look at the cherry picks that have been merged since - the beta release and question everything you find. - * Official releases also require green tests, (see below). -* New release series are also cut directly from `master`. - * **This is a big deal!** If you're reading this doc for the first time, you - probably shouldn't be doing this release, and should talk to someone on the - release team. - * New release series cut a new release branch, `release-X.Y`, off of - `master`, and also release the first beta in the series, `vX.Y.0-beta.0`. - * Every change in the `vX.Y` series from this point on will have to be - cherry picked, so be sure you want to do this before proceeding. - * You should still look for green tests, (see below). - -No matter what you're cutting, you're going to want to look at -[Jenkins](http://kubekins.dls.corp.google.com/) (Google internal only). Figure -out what branch you're cutting from, (see above,) and look at the critical jobs -building from that branch. First glance through builds and look for nice solid -rows of green builds, and then check temporally with the other critical builds -to make sure they're solid around then as well. - -If you're doing an alpha release or cutting a new release series, you can -choose an arbitrary build. If you are doing an official release, you have to -release from HEAD of the branch, (because you have to do some version-rev -commits,) so choose the latest build on the release branch. (Remember, that -branch should be frozen.) - -Once you find some greens, you can find the build hash for a build by looking at -the Full Console Output and searching for `build_version=`. You should see a line: - -```console -build_version=v1.2.0-alpha.2.164+b44c7d79d6c9bb -``` - -Or, if you're cutting from a release branch (i.e. 
doing an official release), - -```console -build_version=v1.1.0-beta.567+d79d6c9bbb44c7 -``` - -Please note that `build_version` was called `githash` versions prior to v1.2. - -Because Jenkins builds frequently, if you're looking between jobs -(e.g. `kubernetes-e2e-gke-ci` and `kubernetes-e2e-gce`), there may be no single -`build_version` that's been run on both jobs. In that case, take the a green -`kubernetes-e2e-gce` build (but please check that it corresponds to a temporally -similar build that's green on `kubernetes-e2e-gke-ci`). Lastly, if you're having -trouble understanding why the GKE continuous integration clusters are failing -and you're trying to cut a release, don't hesitate to contact the GKE -oncall. - -Before proceeding to the next step: - -```sh -export BUILD_VERSION=v1.2.0-alpha.2.164+b44c7d79d6c9bb -``` - -Where `v1.2.0-alpha.2.164+b44c7d79d6c9bb` is the build hash you decided on. This -will become your release point. - -### Cutting/branching the release - -You'll need the latest version of the releasing tools: - -```console -git clone git@github.com:kubernetes/kubernetes.git -cd kubernetes -``` - -or `git fetch upstream && git checkout upstream/master` from an existing repo. - -Decide what version you're cutting and export it: - -- alpha release: `export RELEASE_VERSION="vX.Y.0-alpha.W"`; -- beta release: `export RELEASE_VERSION="vX.Y.Z-beta.W"`; -- official release: `export RELEASE_VERSION="vX.Y.Z"`; -- new release series: `export RELEASE_VERSION="vX.Y"`. - -Then, run - -```console -./release/cut-official-release.sh "${RELEASE_VERSION}" "${BUILD_VERSION}" -``` - -This will do a dry run of the release. It will give you instructions at the -end for `pushd`ing into the dry-run directory and having a look around. -`pushd` into the directory and make sure everything looks as you expect: - -```console -git log "${RELEASE_VERSION}" # do you see the commit you expect? 
-make release
-./cluster/kubectl.sh version -c
-```
-
-If you're satisfied with the result of the script, go back to `upstream/master`
-and run
-
-```console
-./release/cut-official-release.sh "${RELEASE_VERSION}" "${BUILD_VERSION}" --no-dry-run
-```
-
-and follow the instructions.
-
-### Publishing binaries and release notes
-
-Only publish a beta release if it's a standalone pre-release (*not*
-vX.Y.Z-beta.0). We create beta tags after we do official releases to
-maintain proper semantic versioning, but we don't publish these beta releases.
-
-The script you ran above will prompt you to take any remaining steps to push
-tars, and will also give you a template for the release notes. Compose an
-email to the team with the template. Figure out what the PR numbers for this
-release and last release are, and get an api-token from GitHub
-(https://github.com/settings/tokens). From a clone of
-[kubernetes/contrib](https://github.com/kubernetes/contrib),
-
-```
-go run release-notes/release-notes.go --last-release-pr= --current-release-pr= --api-token= --base=
-```
-
-where the `--base` value is `master` for alpha releases and `release-X.Y` for beta and official releases.
-
-**If this is a first official release (vX.Y.0)**, look through the release
-notes for all of the alpha releases since the last cycle, and include anything
-important in release notes.
-
-Feel free to edit the notes, (e.g. cherry picks should generally just have the
-same title as the original PR).
-
-Send the email out, letting people know these are the draft release notes. If
-they want to change anything, they should update the appropriate PRs with the
-`release-note` label.
-
-When you're ready to announce the release, [create a GitHub
-release](https://github.com/kubernetes/kubernetes/releases/new):
-
-1. pick the appropriate tag;
-1. check "This is a pre-release" if it's an alpha or beta release;
-1. fill in the release title from the draft;
-1. 
re-run the appropriate release notes tool(s) to pick up any changes people - have made; -1. find the appropriate `kubernetes.tar.gz` in [GCS bucket](https://console.developers.google.com/storage/browser/kubernetes-release/release/), - download it, double check the hash (compare to what you had in the release - notes draft), and attach it to the release; and -1. publish! - -### Manual tasks for new release series - -*TODO(#20946) Burn this list down.* - -If you are cutting a new release series, there are a few tasks that haven't yet -been automated that need to happen after the branch has been cut: - -1. Update the master branch constant for doc generation: change the - `latestReleaseBranch` in `cmd/mungedocs/mungedocs.go` to the new release - branch (`release-X.Y`), run `hack/update-generated-docs.sh`. This will let - the unversioned warning in docs point to the latest release series. Please - send the changes as a PR titled "Update the latestReleaseBranch to - release-X.Y in the munger". -1. Send a note to the test team (@kubernetes/goog-testing) that a new branch - has been created. - 1. There is currently much work being done on our Jenkins infrastructure - and configs. Eventually we could have a relatively simple interface - to make this change or a way to automatically use the new branch. - See [recent Issue #22672](https://github.com/kubernetes/kubernetes/issues/22672). - 1. You can provide this guidance in the email to aid in the setup: - 1. See [End-2-End Testing in Kubernetes](e2e-tests.md) for the test jobs - that should be running in CI, which are under version control in - `hack/jenkins/e2e.sh` (on the release branch) and - `hack/jenkins/job-configs/kubernetes-jenkins/kubernetes-e2e.yaml` - (in `master`). You'll want to munge these for the release - branch so that, as we cherry-pick fixes onto the branch, we know that - it builds, etc. (Talk with @ihmccreery for more details.) - 1. 
Make sure all features that are supposed to be GA are covered by tests,
-      but remove feature tests on the release branch for features that aren't
-      GA. You can use `hack/list-feature-tests.sh` to see a list of tests
-      labeled as `[Feature:.+]`; make sure that these are all either
-      covered in CI jobs on the release branch or are experimental
-      features. (The answer should already be 'yes', but this is a
-      good time to reconcile.)
-   1. Make a dashboard in Jenkins that contains all of the jobs for this
-      release cycle, and also add them to Critical Builds. (Don't add
-      them to the merge-bot blockers; see kubernetes/contrib#156.)
-
-
-## Injecting Version into Binaries
-
-*Please note that this information may be out of date. The scripts are the
-authoritative source on how version injection works.*
-
-Kubernetes may be built from either a git tree or from a tarball. We use
-`make` to encapsulate a number of build steps into a single command. This
-includes generating code, which means that tools like `go build` might work
-(once files are generated) but might be using stale generated code. `make` is
-the supported way to build.
-
-When building from git, we want to be able to insert specific information about
-the build tree at build time. In particular, we want to use the output of `git
-describe` to generate the version of Kubernetes and the status of the build
-tree (add a `-dirty` suffix if the tree was modified.)
-
-When building from a tarball or using the Go build system, we will not have
-access to the information about the git tree, but we still want to be able to
-tell whether this build corresponds to an exact release (e.g. v0.3) or is
-between releases (e.g. at some point in development between v0.3 and v0.4).
-
-In order to cover the different build cases, we start by providing information
-that can be used when using only Go build tools or when we do not have the git
-version information available. 
- -To be able to provide a meaningful version in those cases, we set the contents -of variables in a Go source file that will be used when no overrides are -present. - -We are using `pkg/version/base.go` as the source of versioning in absence of -information from git. Here is a sample of that file's contents: - -```go -var ( - gitVersion string = "v0.4-dev" // version from git, output of $(git describe) - gitCommit string = "" // sha1 from git, output of $(git rev-parse HEAD) -) -``` - -This means a build with `go install` or `go get` or a build from a tarball will -yield binaries that will identify themselves as `v0.4-dev` and will not be able -to provide you with a SHA1. - -To add the extra versioning information when building from git, the -`make` build will gather that information (using `git describe` and -`git rev-parse`) and then create a `-ldflags` string to pass to `go install` and -tell the Go linker to override the contents of those variables at build time. It -can, for instance, tell it to override `gitVersion` and set it to -`v0.4-13-g4567bcdef6789-dirty` and set `gitCommit` to `4567bcdef6789...` which -is the complete SHA1 of the (dirty) tree used at build time. - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/releasing.md?pixel)]() - -- cgit v1.2.3 From 6969ef64222b00e3b44ebee6129dd7be2eac9f0e Mon Sep 17 00:00:00 2001 From: Ivan Shvedunov Date: Thu, 15 Sep 2016 16:23:40 +0300 Subject: Fix typo in scheduler doc --- scheduler.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/scheduler.md b/scheduler.md index e9e1f29d..5c50340d 100755 --- a/scheduler.md +++ b/scheduler.md @@ -81,7 +81,7 @@ that rank the nodes that weren't filtered out by the predicate check. For exampl The scheduler is extensible: the cluster administrator can choose which of the pre-defined scheduling policies to apply, and can add new ones. 
-### Policies Prediates + Priorities
+### Policies (Predicates and Priorities)
 
 The built-in predicates and priorities are defined in
 [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/predicates/predicates.go) and
-- cgit v1.2.3


From 1a17e61b5a11920e6262f3920e0309ef1e2b5cf7 Mon Sep 17 00:00:00 2001
From: Brandon Philips
Date: Thu, 15 Sep 2016 06:37:53 -0700
Subject: docs: devel: tell people how to find flake tests

This doc talks about flake tests but never links to all of them. Fix
this so people can dive in.

---
 flaky-tests.md | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/flaky-tests.md b/flaky-tests.md
index 22aa4c42..645fd634 100644
--- a/flaky-tests.md
+++ b/flaky-tests.md
@@ -69,6 +69,15 @@ discoverable from the issue.
 link is nice but strictly optional: not only does it expire more quickly,
 it's not accessible to non-Googlers.
 
+## Finding filed flaky test cases
+
+Find flaky test issues on GitHub under the [kind/flake issue label][flake].
+There are significant numbers of flaky tests reported on a regular basis and P2
+flakes are under-investigated. Fixing flakes is a quick way to gain expertise
+and community goodwill. 
+ +[flake]: https://github.com/kubernetes/kubernetes/issues?q=is%3Aopen+is%3Aissue+label%3Akind%2Fflake + ## Expectations when a flaky test is assigned to you Note that we won't randomly assign these issues to you unless you've opted in or -- cgit v1.2.3 From 19d5af9961ab07c650aeabfd0e65a8a73e7fd893 Mon Sep 17 00:00:00 2001 From: Davanum Srinivas Date: Thu, 14 Jul 2016 07:48:32 -0400 Subject: Extend all to more resources Added more things from the list here: https://github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/cmd/cmd.go#L159 Update the devel/kubectl-conventions.md with the rules mentioned by a few folks on which resources could be added to the special 'all' alias --- kubectl-conventions.md | 16 ++++++++++++++++ 1 file changed, 16 insertions(+) diff --git a/kubectl-conventions.md b/kubectl-conventions.md index 29316407..fe2e51a1 100644 --- a/kubectl-conventions.md +++ b/kubectl-conventions.md @@ -43,6 +43,7 @@ Updated: 8/27/2015 - [Principles](#principles) - [Command conventions](#command-conventions) - [Create commands](#create-commands) + - [Rules for extending special resource alias - "all"](#rules-for-extending-special-resource-alias---all) - [Flag conventions](#flag-conventions) - [Output conventions](#output-conventions) - [Documentation conventions](#documentation-conventions) @@ -118,6 +119,21 @@ creating tls secrets. You create these as separate commands to get distinct flags and separate help that is tailored for the particular usage. +### Rules for extending special resource alias - "all" + +Here are the rules to add a new resource to the `kubectl get all` output. 
+
+* No cluster scoped resources
+
+* No namespace admin level resources (limits, quota, policy, authorization
+rules)
+
+* No resources that are potentially unrecoverable (secrets and pvc)
+
+* Resources that are considered "similar" to #3 should be grouped
+the same (configmaps)
+
+
 ## Flag conventions
 
 * Flags are all lowercase, with words separated by hyphens
-- cgit v1.2.3


From eb797e756de41791fcba07ef874ebf8673671b7d Mon Sep 17 00:00:00 2001
From: jayunit100
Date: Tue, 20 Sep 2016 11:23:00 -0400
Subject: viper hierarchies, cadvisor impl

---
 e2e-tests.md | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/e2e-tests.md b/e2e-tests.md
index b0d0860d..6d457f59 100644
--- a/e2e-tests.md
+++ b/e2e-tests.md
@@ -58,6 +58,7 @@ Updated: 5/3/2016
   - [Testing against local clusters](#testing-against-local-clusters)
   - [Version-skewed and upgrade testing](#version-skewed-and-upgrade-testing)
 - [Kinds of tests](#kinds-of-tests)
+  - [Viper configuration and hierarchical test parameters.](#viper-configuration-and-hierarchical-test-parameters)
   - [Conformance tests](#conformance-tests)
     - [Defining Conformance Subset](#defining-conformance-subset)
 - [Continuous Integration](#continuous-integration)
@@ -511,6 +512,20 @@ breaking changes, it does *not* block the merge-queue, and thus should run in
 some separate test suites owned by the feature owner(s) (see
 [Continuous Integration](#continuous-integration) below).
 
+### Viper configuration and hierarchical test parameters.
+
+The future of e2e test configuration idioms will be increasingly defined using viper, and decreasingly via flags.
+
+Flags in general fall apart once tests become sufficiently complicated. So, even if we could use another flag library, it wouldn't be ideal.
+
+To use viper, rather than flags, to configure your tests:
+
+- Just add "e2e.json" to the current directory you are in, and define parameters in it... i.e. `"kubeconfig":"/tmp/x"`. 
+
+Note that advanced testing parameters and hierarchically defined parameters are only defined in viper. To see what they are, you can dive into [TestContextType](../../test/e2e/framework/test_context.go).
+
+In time, it is our intent to add or autogenerate a sample viper configuration that includes all e2e parameters, to ship with Kubernetes.
+
 ### Conformance tests
 
 Finally, `[Conformance]` tests represent a subset of the e2e-tests we expect to
-- cgit v1.2.3


From 3e034b658318da6bfe1677fc82c50f0dde3fff0b Mon Sep 17 00:00:00 2001
From: Joe Finney
Date: Thu, 22 Sep 2016 18:40:53 -0700
Subject: Make e2e.go give us JUnit results.

---
 e2e-tests.md | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/e2e-tests.md b/e2e-tests.md
index 6d457f59..0200afb8 100644
--- a/e2e-tests.md
+++ b/e2e-tests.md
@@ -125,12 +125,6 @@ go run hack/e2e.go -v --build
 # Create a fresh cluster. Deletes a cluster first, if it exists
 go run hack/e2e.go -v --up
 
-# Push code to an existing cluster
-go run hack/e2e.go -v --push
-
-# Push to an existing cluster, or bring up a cluster if it's down.
-go run hack/e2e.go -v --pushup
-
 # Run all tests
 go run hack/e2e.go -v --test
 
@@ -144,12 +138,12 @@ go run hack/e2e.go -v --test --test_args="--ginkgo.skip=Pods.*env"
 
 # Run tests in parallel, skip any that must be run serially
 GINKGO_PARALLEL=y go run hack/e2e.go --v --test --test_args="--ginkgo.skip=\[Serial\]"
 
 # Flags can be combined, and their actions will take place in this order:
-# --build, --push|--up|--pushup, --test, --down
+# --build, --up, --test, --down
 #
 # You can also specify an alternative provider, such as 'aws'
 #
 # e.g.:
-KUBERNETES_PROVIDER=aws go run hack/e2e.go -v --build --pushup --test --down
+KUBERNETES_PROVIDER=aws go run hack/e2e.go -v --build --up --test --down
 
 # -ctl can be used to quickly call kubectl against your e2e cluster. Useful for
 # cleaning up after a failed test or viewing logs. 
Use -v to avoid suppressing -- cgit v1.2.3 From fec46af679a671d8ecf30a3b5dcb62b40c6d7272 Mon Sep 17 00:00:00 2001 From: YuPengZTE Date: Mon, 26 Sep 2016 17:05:53 +0800 Subject: The VS and dot is seprated Signed-off-by: YuPengZTE --- api-conventions.md | 4 ++-- faster_reviews.md | 2 +- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/api-conventions.md b/api-conventions.md index 4a9c6fc1..7fc2bdfc 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -63,7 +63,7 @@ resources](../user-guide/working-with-resources.md).* - [List Operations](#list-operations) - [Map Operations](#map-operations) - [Idempotency](#idempotency) - - [Optional vs Required](#optional-vs-required) + - [Optional vs. Required](#optional-vs-required) - [Defaulting](#defaulting) - [Late Initialization](#late-initialization) - [Concurrency Control and Consistency](#concurrency-control-and-consistency) @@ -658,7 +658,7 @@ exists - instead, it will either return 201 Created or 504 with Reason allotted, and the client should retry (optionally after the time indicated in the Retry-After header). -## Optional vs Required +## Optional vs. Required Fields must be either optional or required. diff --git a/faster_reviews.md b/faster_reviews.md index 11fcbe72..b15d9c52 100644 --- a/faster_reviews.md +++ b/faster_reviews.md @@ -109,7 +109,7 @@ fast-moving codebase - lock in your changes ASAP, and make merges be someone else's problem. Obviously, we want every PR to be useful on its own, so you'll have to use -common sense in deciding what can be a PR vs what should be a commit in a larger +common sense in deciding what can be a PR vs. what should be a commit in a larger PR. Rule of thumb - if this commit or set of commits is directly related to Feature-X and nothing else, it should probably be part of the Feature-X PR. 
If you can plausibly imagine someone finding value in this commit outside of -- cgit v1.2.3 From 8bcaa28e2cd5eddb8b5028ca6d7bbc4cec4b6781 Mon Sep 17 00:00:00 2001 From: "Sean M. Collins" Date: Tue, 27 Sep 2016 08:55:04 -0400 Subject: Delete vagrant.md The link[1] 404's and most of the documentation now suggests[2] using minikube for running a kubernetes cluster locally [1]: http://kubernetes.github.io/docs/getting-started-guides/vagrant/ [2]: http://kubernetes.io/docs/getting-started-guides/#local-machine-solutions --- running-locally.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/running-locally.md b/running-locally.md index dc32a38f..ef80e686 100644 --- a/running-locally.md +++ b/running-locally.md @@ -57,7 +57,7 @@ Getting started locally #### Linux -Not running Linux? Consider running Linux in a local virtual machine with [Vagrant](../getting-started-guides/vagrant.md), or on a cloud provider like [Google Compute Engine](../getting-started-guides/gce.md). +Not running Linux? Consider running [Minikube](http://kubernetes.io/docs/getting-started-guides/minikube/), or on a cloud provider like [Google Compute Engine](../getting-started-guides/gce.md). #### Docker -- cgit v1.2.3 From 7b171dc2bbf3bb8e356af795d903056a041fa24c Mon Sep 17 00:00:00 2001 From: Doug Davis Date: Thu, 5 May 2016 13:41:49 -0700 Subject: Change minion to node Contination of #1111 I tried to keep this PR down to just a simple search-n-replace to keep things simple. I may have gone too far in some spots but its easy to roll those back if needed. I avoided renaming `contrib/mesos/pkg/minion` because there's already a `contrib/mesos/pkg/node` dir and fixing that will require a bit of work due to a circular import chain that pops up. So I'm saving that for a follow-on PR. I rolled back some of this from a previous commit because it just got to big/messy. 
Will follow up with additional PRs Signed-off-by: Doug Davis --- api-conventions.md | 2 +- developer-guides/vagrant.md | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/api-conventions.md b/api-conventions.md index 7fc2bdfc..2742a9f0 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -1182,7 +1182,7 @@ than capitalization of the initial letter, the two should almost always match. No underscores nor dashes in either. * Field and resource names should be declarative, not imperative (DoSomething, SomethingDoer, DoneBy, DoneAt). -* `Minion` has been deprecated in favor of `Node`. Use `Node` where referring to +* Use `Node` where referring to the node resource in the context of the cluster. Use `Host` where referring to properties of the individual physical/virtual system, such as `hostname`, `hostPath`, `hostNetwork`, etc. diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md index fe5bc6ea..53dd0681 100755 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -371,8 +371,8 @@ provisioned. #### I have Vagrant up but the nodes won't validate! -Log on to one of the nodes (`vagrant ssh node-1`) and inspect the salt minion -log (`sudo cat /var/log/salt/minion`). +Log on to one of the nodes (`vagrant ssh node-1`) and inspect the salt node +log (`sudo cat /var/log/salt/node`). #### I want to change the number of nodes! -- cgit v1.2.3 From c94f28aad0e10cee605e278c900ba193310d2d19 Mon Sep 17 00:00:00 2001 From: mbohlool Date: Tue, 27 Sep 2016 23:55:45 -0700 Subject: Generate and verify openapi specs in source tree at api/openapi-spec --- api_changes.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/api_changes.md b/api_changes.md index afdbaae7..6488d231 100755 --- a/api_changes.md +++ b/api_changes.md @@ -590,10 +590,11 @@ out. Put `grep` or `ack` to good use. If you added functionality, you should consider documenting it and/or writing an example to illustrate your change. 
-Make sure you update the swagger API spec by running: +Make sure you update the swagger and OpenAPI spec by running: ```sh hack/update-swagger-spec.sh +hack/update-openapi-spec.sh ``` The API spec changes should be in a commit separate from your other changes. -- cgit v1.2.3 From 0bb9abbaaa14df4c5fc1337ace470d88cbc5c8a5 Mon Sep 17 00:00:00 2001 From: Brendan Burns Date: Mon, 5 Sep 2016 20:30:29 -0700 Subject: Add community expectations about conduct and reviewing. --- community-expectations.md | 116 ++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 116 insertions(+) create mode 100644 community-expectations.md diff --git a/community-expectations.md b/community-expectations.md new file mode 100644 index 00000000..4dabb68b --- /dev/null +++ b/community-expectations.md @@ -0,0 +1,116 @@ + + + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

PLEASE NOTE: This document applies to the HEAD of the source tree

+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). +
+--
+
+
+
+
+
+## Community Expectations
+
+Kubernetes is a community project. Consequently, it is wholly dependent on
+its community to provide a productive, friendly and collaborative environment.
+
+The first and foremost goal of the Kubernetes community is to develop orchestration
+technology that radically simplifies the process of creating reliable
+distributed systems. However, a second, equally important goal is the creation
+of a community that fosters easy, agile development of such orchestration
+systems.
+
+We therefore describe the expectations for
+members of the Kubernetes community. This document is intended to be a living one
+that evolves as the community evolves via the same PR and code review process
+that shapes the rest of the project. It currently covers the expectations
+of conduct that govern all members of the community as well as the expectations
+around code review that govern all active contributors to Kubernetes.
+
+### Code of Conduct
+
+The most important expectation of the Kubernetes community is that all members
+abide by the Kubernetes [community code of conduct](../../code-of-conduct.md).
+Only by respecting each other can we develop a productive, collaborative
+community.
+
+### Code review
+
+As a community we believe in the [value of code review for all contributions](collab.md).
+Code review increases both the quality and readability of our codebase, which
+in turn produces high quality software.
+
+However, the code review process can also introduce latency for contributors
+and additional work for reviewers that can frustrate both parties.
+
+Consequently, as a community we expect that all active participants in the
+community will also be active reviewers.
+
+We ask that active contributors to the project participate in the code review process
+in areas where that contributor has expertise. 
Active
+contributors are considered to be anyone who meets any of the following criteria:
+  * Sent more than two pull requests (PRs) in the previous one month, or more
+    than 20 PRs in the previous year.
+  * Filed more than three issues in the previous month, or more than 30 issues in
+    the previous 12 months.
+  * Commented on more than pull requests in the previous month, or
+    more than 50 pull requests in the previous 12 months.
+  * Marked any PR as LGTM in the previous month.
+  * Have *collaborator* permissions in the Kubernetes GitHub project.
+
+In addition to these community expectations, any community member who wants to
+be an active reviewer can also add their name to an *active reviewer* file
+(location tbd) which will make them an active reviewer for as long as they
+are included in the file.
+
+#### Expectations of reviewers: Review comments
+
+Because reviewers are often the first point of contact for new members of
+the community and can significantly impact the first impression of the
+Kubernetes community, reviewers are especially important in shaping the
+Kubernetes community. Reviewers are highly encouraged to review the
+[code of conduct](../../code-of-conduct.md) and are strongly encouraged to go above
+and beyond the code of conduct to promote a collaborative, respectful
+Kubernetes community.
+
+#### Expectations of reviewers: Review latency
+
+Reviewers are expected to respond in a timely fashion to PRs that are assigned
+to them. Reviewers are expected to respond to *active* PRs with reasonable
+latency, and if reviewers fail to respond, those PRs may be assigned to other
+reviewers.
+
+*Active* PRs are considered those which have a proper CLA (`cla:yes`) label
+and do not need a rebase to be merged. PRs that do not have a proper CLA, or
+require a rebase, are not considered active PRs. 
+
+## Thanks
+
+Many thanks in advance to everyone who contributes their time and effort to
+making Kubernetes both a successful system and a successful community.
+The strength of our software shines in the strengths of each individual
+community member. Thanks!
+
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/community-expectations.md?pixel)]()
+
-- cgit v1.2.3


From 7f9358461560ee78d26b6d541e91f71bf4db69dc Mon Sep 17 00:00:00 2001
From: Minhan Xia
Date: Mon, 3 Oct 2016 16:39:55 -0700
Subject: add delete-namespace-on-failure flag

---
 e2e-tests.md | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/e2e-tests.md b/e2e-tests.md
index 0200afb8..372cc683 100644
--- a/e2e-tests.md
+++ b/e2e-tests.md
@@ -137,6 +137,9 @@ go run hack/e2e.go -v --test --test_args="--ginkgo.skip=Pods.*env"
 # Run tests in parallel, skip any that must be run serially
 GINKGO_PARALLEL=y go run hack/e2e.go --v --test --test_args="--ginkgo.skip=\[Serial\]"
 
+# Run tests in parallel, skip any that must be run serially, and keep the test namespace if the test failed
+GINKGO_PARALLEL=y go run hack/e2e.go --v --test --test_args="--ginkgo.skip=\[Serial\] --delete-namespace-on-failure=false"
+
 # Flags can be combined, and their actions will take place in this order:
 # --build, --up, --test, --down
 #
-- cgit v1.2.3


From 2c0e8827a110dcff277b45d37b070fc77f8c8a27 Mon Sep 17 00:00:00 2001
From: Hemant Kumar
Date: Thu, 6 Oct 2016 10:51:49 -0400
Subject: Update documentation for running e2e tests locally

The docs for running e2e tests locally need to be updated. check_node_count
option has been removed and developers usually need to perform
additional steps to get it going. 
---
 e2e-tests.md | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/e2e-tests.md b/e2e-tests.md
index 0200afb8..b1aadb81 100644
--- a/e2e-tests.md
+++ b/e2e-tests.md
@@ -383,6 +383,9 @@ sudo PATH=$PATH hack/local-up-cluster.sh
 This will start a single-node Kubernetes cluster that runs pods using the local
 docker daemon. Press Control-C to stop the cluster.
 
+You can generate a valid kubeconfig file by following instructions printed at the
+end of the aforementioned script.
+
 #### Testing against local clusters
 
 In order to run an E2E test against a locally running cluster, point the tests
@@ -390,7 +393,9 @@ at a custom host directly:
 
 ```sh
 export KUBECONFIG=/path/to/kubeconfig
-go run hack/e2e.go -v --test --check_node_count=false
+export KUBE_MASTER_IP="http://127.0.0.1:"
+export KUBE_MASTER=local
+go run hack/e2e.go -v --test
 ```
 
 To control the tests that are run:
-- cgit v1.2.3


From 04122e99c65fbd34b6cfbfd374160961324c62db Mon Sep 17 00:00:00 2001
From: Fabiano Franz
Date: Mon, 10 Oct 2016 19:07:07 -0300
Subject: Use our own normalizers in the conventions doc

---
 kubectl-conventions.md | 27 ++++++++++++++-------------
 1 file changed, 14 insertions(+), 13 deletions(-)

diff --git a/kubectl-conventions.md b/kubectl-conventions.md
index fe2e51a1..dd388a61 100644
--- a/kubectl-conventions.md
+++ b/kubectl-conventions.md
@@ -301,24 +301,25 @@ Sample command skeleton:
 // MineRecommendedName is the recommended command name for kubectl mine.
 const MineRecommendedName = "mine"
 
-// MineConfig contains all the options for running the mine cli command.
-type MineConfig struct {
-	mineLatest bool
-}
-
+// Long command description and examples. 
var ( - mineLong = dedent.Dedent(` - mine which is described here - with lots of details.`) + mineLong = templates.LongDesc(` + mine which is described here + with lots of details.`) - mineExample = dedent.Dedent(` - # Run my command's first action - kubectl mine first_action + mineExample = templates.Examples(` + # Run my command's first action + kubectl mine first_action - # Run my command's second action on latest stuff - kubectl mine second_action --flag`) + # Run my command's second action on latest stuff + kubectl mine second_action --flag`) ) +// MineConfig contains all the options for running the mine cli command. +type MineConfig struct { + mineLatest bool +} + // NewCmdMine implements the kubectl mine command. func NewCmdMine(parent, name string, f *cmdutil.Factory, out io.Writer) *cobra.Command { opts := &MineConfig{} -- cgit v1.2.3 From d86f445b5e219873a92a306ceef52772211d6626 Mon Sep 17 00:00:00 2001 From: Euan Kemp Date: Mon, 17 Oct 2016 09:33:33 -0700 Subject: local-up: Add option to guess binary path --- running-locally.md | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/running-locally.md b/running-locally.md index ef80e686..810740fa 100644 --- a/running-locally.md +++ b/running-locally.md @@ -102,6 +102,12 @@ hack/local-up-cluster.sh This will build and start a lightweight local cluster, consisting of a master and a single node. Type Control-C to shut it down. +If you've already compiled the Kubernetes components, then you can avoid rebuilding them with this script by using the `-O` flag. + +```sh +./hack/local-up-cluster.sh -O +``` + You can use the cluster/kubectl.sh script to interact with the local cluster. hack/local-up-cluster.sh will print the commands to run to point kubectl at the local cluster. 
-- cgit v1.2.3 From bc658b754671ed699a987e17fa2d6be80c0ddc19 Mon Sep 17 00:00:00 2001 From: deads2k Date: Thu, 15 Sep 2016 07:36:11 -0400 Subject: recommendations for writing controllers --- controllers.md | 215 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 215 insertions(+) create mode 100644 controllers.md diff --git a/controllers.md b/controllers.md new file mode 100644 index 00000000..82eb5c08 --- /dev/null +++ b/controllers.md @@ -0,0 +1,215 @@ + + + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

PLEASE NOTE: This document applies to the HEAD of the source tree

+ +If you are using a released version of Kubernetes, you should +refer to the docs that go with that version. + +Documentation for other releases can be found at +[releases.k8s.io](http://releases.k8s.io). +
+--
+
+
+
+
+
+# Writing Controllers
+
+A Kubernetes controller is an active reconciliation process. That is, it watches some object for the world's desired
+state, and it watches the world's actual state, too. Then, it sends instructions to try and make the world's current
+state be more like the desired state.
+
+The simplest implementation of this is a loop:
+
+```go
+for {
+  desired := getDesiredState()
+  current := getCurrentState()
+  makeChanges(desired, current)
+}
+```
+
+Watches, etc., are all merely optimizations of this logic.
+
+## Guidelines
+
+When you’re writing controllers, there are a few guidelines that will help make sure you get the results and performance
+you’re looking for.
+
+1. Operate on one item at a time. If you use a `workqueue.Interface`, you’ll be able to queue changes for a
+   particular resource and later pop them in multiple “worker” gofuncs with a guarantee that no two gofuncs will
+   work on the same item at the same time.
+
+   Many controllers must trigger off multiple resources (I need to "check X if Y changes"), but nearly all controllers
+   can collapse those into a queue of “check this X” based on relationships. For instance, a ReplicaSetController needs
+   to react to a pod being deleted, but it does that by finding the related ReplicaSets and queuing those.
+
+
+1. Random ordering between resources. When controllers queue off multiple types of resources, there is no guarantee
+   of ordering amongst those resources.
+
+   Distinct watches are updated independently. Even with an objective ordering of “created resourceA/X” and “created
+   resourceB/Y”, your controller could observe “created resourceB/Y” and “created resourceA/X”.
+
+
+1. Level driven, not edge driven. Just like having a shell script that isn’t running all the time, your controller
+   may be off for an indeterminate amount of time before running again. 
+
+   If an API object appears with a marker value of `true`, you can’t count on having seen it turn from `false` to `true`,
+   only that you now observe it being `true`. Even an API watch suffers from this problem, so be sure that you’re not
+   counting on seeing a change unless your controller also records, in the object's status, the information it last
+   made its decision on.
+
+
+1. Use `SharedInformers`. `SharedInformers` provide hooks to receive notifications of adds, updates, and deletes for
+   a particular resource. They also provide convenience functions for accessing shared caches and determining when a
+   cache is primed.
+
+   Use the factory methods down in https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/framework/informers/factory.go
+   to ensure that you are sharing the same instance of the cache as everyone else.
+
+   This saves us connections against the API server, duplicate serialization costs server-side, duplicate deserialization
+   costs controller-side, and duplicate caching costs controller-side.
+
+   You may see other mechanisms, like reflectors and deltafifos, driving controllers. Those were older mechanisms that we
+   later used to build the `SharedInformers`. You should avoid using them in new controllers.
+
+
+1. Never mutate original objects! Caches are shared across controllers; this means that if you mutate your "copy"
+   (actually a reference or shallow copy) of an object, you’ll mess up other controllers (not just your own).
+
+   The most common point of failure is making a shallow copy, then mutating a map, like `Annotations`. Use
+   `api.Scheme.Copy` to make a deep copy.
+
+
+1. Wait for your secondary caches. Many controllers have primary and secondary resources. Primary resources are the
+   resources that you’ll be updating `Status` for. Secondary resources are resources that you’ll be managing
+   (creating/deleting) or using for lookups.
+
+   Use the `framework.WaitForCacheSync` function to wait for your secondary caches before starting your primary sync
+   functions. This will make sure that things like the Pod count for a ReplicaSet aren’t working off of known
+   out-of-date information, which results in thrashing.
+
+
+1. There are other actors in the system. Just because you haven't changed an object doesn't mean that somebody else
+   hasn't.
+
+   Don't forget that the current state may change at any moment--it's not sufficient to just watch the desired state.
+   If you use the absence of objects in the desired state to indicate that things in the current state should be deleted,
+   make sure you don't have a bug in your observation code (e.g., acting before your cache has filled).
+
+
+1. Percolate errors to the top level for consistent re-queuing. We have a `workqueue.RateLimitingInterface` to allow
+   simple requeuing with reasonable backoffs.
+
+   Your main controller func should return an error when requeuing is necessary. When it isn’t, it should use
+   `utilruntime.HandleError` and return nil instead. This makes it very easy for reviewers to inspect error handling
+   cases and to be confident that your controller doesn’t accidentally lose things it should retry for.
+
+
+1. Watches and Informers will “sync”. Periodically, they will deliver every matching object in the cluster to your
+   `Update` method. This is good for cases where you may need to take additional action on the object, but sometimes you
+   know there won’t be more work to do.
+
+   In cases where you are *certain* that you don't need to requeue items when there are no new changes, you can compare the
+   resource version of the old and new objects. If they are the same, you skip requeuing the work. Be careful when you
+   do this. If you ever skip requeuing your item on failures, you could fail, not requeue, and then never retry that
+   item again.
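The "never mutate original objects" guideline above is worth a concrete illustration. The standalone sketch below (plain Go with a hypothetical `ObjectMeta` stand-in, not the real Kubernetes API types) shows why a struct assignment is not enough: the copy shares the same underlying `Annotations` map, so mutating the copy's map corrupts the cached original, while copying the map first does not:

```go
package main

import "fmt"

// ObjectMeta is a stand-in for an API object's metadata; like the real
// Kubernetes types, it carries a map field.
type ObjectMeta struct {
	Name        string
	Annotations map[string]string
}

func main() {
	original := ObjectMeta{
		Name:        "my-pod",
		Annotations: map[string]string{"team": "a"},
	}

	// A struct assignment is a shallow copy: the map header is copied, but
	// both values point at the same underlying map storage.
	shallow := original
	shallow.Annotations["owner"] = "controller-x"

	// The "original" (think: the shared cache's object) was mutated too.
	fmt.Println(original.Annotations["owner"]) // controller-x

	// A safe pattern: replace the map with a fresh deep copy before mutating.
	deep := original
	deep.Annotations = make(map[string]string, len(original.Annotations))
	for k, v := range original.Annotations {
		deep.Annotations[k] = v
	}
	deep.Annotations["owner"] = "controller-y"

	// The original is untouched this time.
	fmt.Println(original.Annotations["owner"]) // still controller-x
}
```

In a real controller this is exactly the failure mode `api.Scheme.Copy` protects against: it deep-copies the whole object, maps included, before you mutate it.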
+
+
+## Rough Structure
+
+Overall, your controller should look something like this:
+
+```go
+type Controller struct {
+	// podLister is a secondary cache of pods which is used for object lookups
+	podLister cache.StoreToPodLister
+
+	// podStoreSynced returns true once the pod cache has completed an initial
+	// sync (typically wired to the pod informer's HasSynced method)
+	podStoreSynced func() bool
+
+	// queue is where incoming work is placed to de-dup and to allow "easy" rate limited requeues on errors
+	queue workqueue.RateLimitingInterface
+}
+
+func (c *Controller) Run(threadiness int, stopCh chan struct{}) {
+	// don't let panics crash the process
+	defer utilruntime.HandleCrash()
+	// make sure the work queue is shut down, which will trigger workers to end
+	defer c.queue.ShutDown()
+
+	glog.Infof("Starting controller")
+
+	// wait for your secondary caches to fill before starting your work
+	if !framework.WaitForCacheSync(stopCh, c.podStoreSynced) {
+		return
+	}
+
+	// start up your worker threads based on threadiness. Some controllers have multiple kinds of workers
+	for i := 0; i < threadiness; i++ {
+		// runWorker will loop until "something bad" happens. The .Until will then rekick the worker
+		// after one second
+		go wait.Until(c.runWorker, time.Second, stopCh)
+	}
+
+	// wait until we're told to stop
+	<-stopCh
+	glog.Infof("Shutting down controller")
+}
+
+func (c *Controller) runWorker() {
+	// hot loop until we're told to stop. processNextWorkItem will automatically wait until there's work
+	// available, so we don't worry about secondary waits
+	for c.processNextWorkItem() {
+	}
+}
+
+// processNextWorkItem deals with one key off the queue. It returns false when it's time to quit.
+func (c *Controller) processNextWorkItem() bool {
+	// pull the next work item from queue. It should be a key we use to look up something in a cache
+	key, quit := c.queue.Get()
+	if quit {
+		return false
+	}
+	// you always have to indicate to the queue that you've completed a piece of work
+	defer c.queue.Done(key)
+
+	// do your work on the key.
This method contains your "do stuff" logic.
+	err := c.syncHandler(key.(string))
+	if err == nil {
+		// if you had no error, tell the queue to stop tracking history for your key. This will
+		// reset things like failure counts for per-item rate limiting
+		c.queue.Forget(key)
+		return true
+	}
+
+	// there was a failure so be sure to report it. This method allows for pluggable error handling
+	// which can be used for things like cluster-monitoring
+	utilruntime.HandleError(fmt.Errorf("%v failed with: %v", key, err))
+	// since we failed, we should requeue the item to work on later. This method adds a backoff
+	// to avoid hot looping on particular items (they're probably still not going to work right away)
+	// and to protect the controller overall (if everything it does is failing, it needs to calm
+	// down or it can starve other useful work)
+	c.queue.AddRateLimited(key)
+
+	return true
+}
+
+```
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/controllers.md?pixel)]()
+
-- 
cgit v1.2.3


From 7d98197e2e520f51e73dfa2f7591054112142eab Mon Sep 17 00:00:00 2001
From: xilabao
Date: Wed, 19 Oct 2016 11:04:36 +0800
Subject: check_node_count falls out of use, clear from docs

---
 e2e-tests.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/e2e-tests.md b/e2e-tests.md
index 417b7207..4db76f89 100644
--- a/e2e-tests.md
+++ b/e2e-tests.md
@@ -404,7 +404,7 @@ go run hack/e2e.go -v --test
 To control the tests that are run:
 
 ```sh
-go run hack/e2e.go -v --test --check_node_count=false --test_args="--ginkgo.focus="Secrets"
+go run hack/e2e.go -v --test --test_args="--ginkgo.focus=\"Secrets\""
 ```
 
 ### Version-skewed and upgrade testing
-- 
cgit v1.2.3


From e7c5a242ee56f2bc2b8e2ba9ea03bc8ed618a432 Mon Sep 17 00:00:00 2001
From: Mike Danese
Date: Wed, 19 Oct 2016 18:32:33 -0700
Subject: add some docs about building with bazel

---
 bazel.md | 72
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 72 insertions(+) create mode 100644 bazel.md diff --git a/bazel.md b/bazel.md new file mode 100644 index 00000000..2c321a90 --- /dev/null +++ b/bazel.md @@ -0,0 +1,72 @@ + + + + +WARNING +WARNING +WARNING +WARNING +WARNING + +

PLEASE NOTE: This document applies to the HEAD of the source tree

+
+If you are using a released version of Kubernetes, you should
+refer to the docs that go with that version.
+
+Documentation for other releases can be found at
+[releases.k8s.io](http://releases.k8s.io).
+
+--
+
+
+
+
+
+# Build with Bazel
+
+Building with bazel is currently experimental. Automanaged BUILD rules have the
+tag "automanaged" and are maintained by
+[gazel](https://github.com/mikedanese/gazel). Instructions for installing bazel
+can be found [here](https://www.bazel.io/versions/master/docs/install.html).
+
+To build docker images for the components, run:
+
+```
+$ bazel build //build/...
+```
+
+To run many of the unit tests, run:
+
+```
+$ bazel test //cmd/... //build/... //pkg/... //federation/... //plugin/...
+```
+
+To update automanaged build files, run:
+
+```
+$ ./hack/update-bazel.sh
+```
+
+
+To update a single build file, run:
+
+```
+$ # get gazel
+$ go get -u github.com/mikedanese/gazel
+$ # e.g. ./pkg/kubectl/BUILD
+$ gazel ./pkg/kubectl
+```
+
+Updating the BUILD file for a package will be required when:
+* Files are added to or removed from a package
+* Import dependencies change for a package
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/bazel.md?pixel)]()
+
-- 
cgit v1.2.3


From 1b3c1c71fcec7ba6d9a41e8a6913263413fd2e30 Mon Sep 17 00:00:00 2001
From: Clayton Coleman
Date: Mon, 24 Oct 2016 12:04:10 -0400
Subject: Clarify backwards and forwards compatibility in docs

We weren't necessarily clear that we consider both required.

---
 api_changes.md | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/api_changes.md b/api_changes.md
index 6488d231..23316016 100755
--- a/api_changes.md
+++ b/api_changes.md
@@ -129,8 +129,11 @@ backward-compatibly.
 ## On compatibility
 
 Before talking about how to make API changes, it is worthwhile to clarify what
-we mean by API compatibility. An API change is considered backward-compatible
-if it:
+we mean by API compatibility.
Kubernetes considers forwards and backwards +compatibility of its APIs a top priority. + +An API change is considered forward and backward-compatible if it: + * adds new functionality that is not required for correct behavior (e.g., does not add a new required field) * does not change existing semantics, including: @@ -150,7 +153,8 @@ versions and back) with no loss of information. continue to function as they did previously, even when your change is utilized. If your change does not meet these criteria, it is not considered strictly -compatible. +compatible, and may break older clients, or result in newer clients causing +undefined behavior. Let's consider some examples. In a hypothetical API (assume we're at version v6), the `Frobber` struct looks something like this: -- cgit v1.2.3 From e7c50a8122a5a77edccb7b89e7fc5c16a3dd8315 Mon Sep 17 00:00:00 2001 From: Mike Danese Date: Mon, 24 Oct 2016 10:28:07 -0700 Subject: rename build/ to build-tools/ --- bazel.md | 4 ++-- cherry-picks.md | 4 ++-- development.md | 6 +++--- e2e-tests.md | 2 +- running-locally.md | 2 +- 5 files changed, 9 insertions(+), 9 deletions(-) diff --git a/bazel.md b/bazel.md index 2c321a90..80915f15 100644 --- a/bazel.md +++ b/bazel.md @@ -37,13 +37,13 @@ can be found [here](https://www.bazel.io/versions/master/docs/install.html). To build docker images for the components, run: ``` -$ bazel build //build/... +$ bazel build //build-tools/... ``` To run many of the unit tests, run: ``` -$ bazel test //cmd/... //build/... //pkg/... //federation/... //plugin/... +$ bazel test //cmd/... //build-tools/... //pkg/... //federation/... //plugin/... ``` To update automanaged build files, run: diff --git a/cherry-picks.md b/cherry-picks.md index ef2cee70..4283ee3b 100644 --- a/cherry-picks.md +++ b/cherry-picks.md @@ -79,7 +79,7 @@ tracking the tool to automate the batching procedure. 
#### Cherrypicking a doc change If you are cherrypicking a change which adds a doc, then you also need to run -`build/versionize-docs.sh` in the release branch to versionize that doc. +`build-tools/versionize-docs.sh` in the release branch to versionize that doc. Ideally, just running `hack/cherry_pick_pull.sh` should be enough, but we are not there yet: [#18861](https://github.com/kubernetes/kubernetes/issues/18861) @@ -89,7 +89,7 @@ running `hack/cherry_pick_pull.sh` and before merging the PR: ``` $ git checkout -b automated-cherry-pick-of-#123456-upstream-release-3.14 origin/automated-cherry-pick-of-#123456-upstream-release-3.14 -$ ./build/versionize-docs.sh release-3.14 +$ ./build-tools/versionize-docs.sh release-3.14 $ git commit -a -m "Running versionize docs" $ git push origin automated-cherry-pick-of-#123456-upstream-release-3.14 ``` diff --git a/development.md b/development.md index 2a0e0410..d36f7ec3 100644 --- a/development.md +++ b/development.md @@ -49,7 +49,7 @@ branch, but release branches of Kubernetes should not change. Official releases are built using Docker containers. To build Kubernetes using Docker please follow [these instructions] -(http://releases.k8s.io/HEAD/build/README.md). +(http://releases.k8s.io/HEAD/build-tools/README.md). ## Building Kubernetes on a local OS/shell environment @@ -142,10 +142,10 @@ bump to a minor release version for security updates. Since kubernetes is mostly built and tested in containers, there are a few unique places you need to update the go version. -- The image for cross compiling in [build/build-image/cross/](../../build/build-image/cross/). The `VERSION` file and `Dockerfile`. +- The image for cross compiling in [build-tools/build-image/cross/](../../build-tools/build-image/cross/). The `VERSION` file and `Dockerfile`. 
- Update [dockerized-e2e-runner.sh](https://github.com/kubernetes/test-infra/blob/master/jenkins/dockerized-e2e-runner.sh) to run a kubekins-e2e with the desired go version, which requires pushing [e2e-image](https://github.com/kubernetes/test-infra/tree/master/jenkins/e2e-image) and [test-image](https://github.com/kubernetes/test-infra/tree/master/jenkins/test-image) images that are `FROM` the desired go version. - The docker image being run in [hack/jenkins/gotest-dockerized.sh](../../hack/jenkins/gotest-dockerized.sh). -- The cross tag `KUBE_BUILD_IMAGE_CROSS_TAG` in [build/common.sh](../../build/common.sh) +- The cross tag `KUBE_BUILD_IMAGE_CROSS_TAG` in [build-tools/common.sh](../../build-tools/common.sh) ## Workflow diff --git a/e2e-tests.md b/e2e-tests.md index 4db76f89..431fa9a2 100644 --- a/e2e-tests.md +++ b/e2e-tests.md @@ -305,7 +305,7 @@ Next, specify the docker repository where your ci images will be pushed. * Push the federation container images ```sh - $ build/push-federation-images.sh + $ build-tools/push-federation-images.sh ``` #### Deploy federation control plane diff --git a/running-locally.md b/running-locally.md index 810740fa..a6332d71 100644 --- a/running-locally.md +++ b/running-locally.md @@ -195,7 +195,7 @@ KUBE_DNS_DOMAIN="cluster.local" KUBE_DNS_REPLICAS=1 ``` -To know more on DNS service you can look [here](http://issue.k8s.io/6667). Related documents can be found [here](../../build/kube-dns/#how-do-i-configure-it) +To know more on DNS service you can look [here](http://issue.k8s.io/6667). 
Related documents can be found [here](../../build-tools/kube-dns/#how-do-i-configure-it) -- cgit v1.2.3 From 685bbf54c2a5cadb2940c276d4229470dc180a03 Mon Sep 17 00:00:00 2001 From: Tim Hockin Date: Tue, 25 Oct 2016 22:24:50 +0200 Subject: Remove 'this is HEAD' warning on docs --- README.md | 34 --------------------------- adding-an-APIGroup.md | 34 --------------------------- api-conventions.md | 33 -------------------------- api_changes.md | 34 --------------------------- automation.md | 34 --------------------------- bazel.md | 29 ----------------------- cherry-picks.md | 34 --------------------------- cli-roadmap.md | 34 --------------------------- client-libraries.md | 34 --------------------------- coding-conventions.md | 34 --------------------------- collab.md | 34 --------------------------- community-expectations.md | 29 ----------------------- controllers.md | 29 ----------------------- developer-guides/vagrant.md | 34 --------------------------- development.md | 34 --------------------------- e2e-node-tests.md | 34 --------------------------- e2e-tests.md | 34 --------------------------- faster_reviews.md | 34 --------------------------- flaky-tests.md | 34 --------------------------- generating-clientset.md | 34 --------------------------- getting-builds.md | 34 --------------------------- go-code.md | 34 --------------------------- godep.md | 34 --------------------------- gubernator.md | 34 --------------------------- how-to-doc.md | 45 ------------------------------------ instrumentation.md | 34 --------------------------- issues.md | 34 --------------------------- kubectl-conventions.md | 34 --------------------------- kubemark-guide.md | 34 --------------------------- local-cluster/docker.md | 34 --------------------------- local-cluster/local.md | 34 --------------------------- local-cluster/vagrant.md | 34 --------------------------- logging.md | 34 --------------------------- mesos-style.md | 34 --------------------------- 
node-performance-testing.md | 34 --------------------------- on-call-build-cop.md | 34 --------------------------- on-call-rotations.md | 34 --------------------------- on-call-user-support.md | 34 --------------------------- owners.md | 34 --------------------------- profiling.md | 34 --------------------------- pull-requests.md | 34 --------------------------- running-locally.md | 33 -------------------------- scheduler.md | 34 --------------------------- scheduler_algorithm.md | 34 --------------------------- testing.md | 34 --------------------------- update-release-docs.md | 34 --------------------------- updating-docs-for-feature-changes.md | 34 --------------------------- writing-a-getting-started-guide.md | 34 --------------------------- writing-good-e2e-tests.md | 34 --------------------------- 49 files changed, 1660 deletions(-) diff --git a/README.md b/README.md index c77aa2db..cf29f3b4 100644 --- a/README.md +++ b/README.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

PLEASE NOTE: This document applies to the HEAD of the source tree

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/README.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - # Kubernetes Developer Guide The developer guide is for anyone wanting to either write code which directly accesses the diff --git a/adding-an-APIGroup.md b/adding-an-APIGroup.md index f1bd182d..5832be23 100644 --- a/adding-an-APIGroup.md +++ b/adding-an-APIGroup.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

PLEASE NOTE: This document applies to the HEAD of the source tree

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/adding-an-APIGroup.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - Adding an API Group =============== diff --git a/api-conventions.md b/api-conventions.md index 2742a9f0..0be45182 100644 --- a/api-conventions.md +++ b/api-conventions.md @@ -1,36 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

PLEASE NOTE: This document applies to the HEAD of the source tree

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/api-conventions.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - API Conventions =============== diff --git a/api_changes.md b/api_changes.md index 23316016..963deb7c 100755 --- a/api_changes.md +++ b/api_changes.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

PLEASE NOTE: This document applies to the HEAD of the source tree

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/api_changes.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - *This document is oriented at developers who want to change existing APIs. A set of API conventions, which applies to new APIs and to changes, can be found at [API Conventions](api-conventions.md). diff --git a/automation.md b/automation.md index 580606e4..3a9f1754 100644 --- a/automation.md +++ b/automation.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

PLEASE NOTE: This document applies to the HEAD of the source tree

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/automation.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - # Kubernetes Development Automation ## Overview diff --git a/bazel.md b/bazel.md index 80915f15..d1230dce 100644 --- a/bazel.md +++ b/bazel.md @@ -1,32 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

PLEASE NOTE: This document applies to the HEAD of the source tree

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - # Build with Bazel Building with bazel is currently experimental. Automanaged BUILD rules have the diff --git a/cherry-picks.md b/cherry-picks.md index 4283ee3b..40a4b264 100644 --- a/cherry-picks.md +++ b/cherry-picks.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

PLEASE NOTE: This document applies to the HEAD of the source tree

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/cherry-picks.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - # Overview This document explains cherry picks are managed on release branches within the diff --git a/cli-roadmap.md b/cli-roadmap.md index 7fce10ba..cd21da08 100644 --- a/cli-roadmap.md +++ b/cli-roadmap.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

PLEASE NOTE: This document applies to the HEAD of the source tree

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/cli-roadmap.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - # Kubernetes CLI/Configuration Roadmap See github issues with the following labels: diff --git a/client-libraries.md b/client-libraries.md index 868f8363..d38f9fd7 100644 --- a/client-libraries.md +++ b/client-libraries.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

PLEASE NOTE: This document applies to the HEAD of the source tree

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/client-libraries.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - ## Kubernetes API client libraries ### Supported diff --git a/coding-conventions.md b/coding-conventions.md index df66a96e..bcfab41d 100644 --- a/coding-conventions.md +++ b/coding-conventions.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

PLEASE NOTE: This document applies to the HEAD of the source tree

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/coding-conventions.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - # Coding Conventions Updated: 5/3/2016 diff --git a/collab.md b/collab.md index e11b544f..b4a6281d 100644 --- a/collab.md +++ b/collab.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

PLEASE NOTE: This document applies to the HEAD of the source tree

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/collab.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - # On Collaborative Development Kubernetes is open source, but many of the people working on it do so as their diff --git a/community-expectations.md b/community-expectations.md index 4dabb68b..ff2487fd 100644 --- a/community-expectations.md +++ b/community-expectations.md @@ -1,32 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

PLEASE NOTE: This document applies to the HEAD of the source tree

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - ## Community Expectations Kubernetes is a community project. Consequently, it is wholly dependent on diff --git a/controllers.md b/controllers.md index 82eb5c08..daedc236 100644 --- a/controllers.md +++ b/controllers.md @@ -1,32 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

PLEASE NOTE: This document applies to the HEAD of the source tree

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - # Writing Controllers A Kubernetes controller is an active reconciliation process. That is, it watches some object for the world's desired diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md index 53dd0681..b53b0002 100755 --- a/developer-guides/vagrant.md +++ b/developer-guides/vagrant.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

PLEASE NOTE: This document applies to the HEAD of the source tree

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/developer-guides/vagrant.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - ## Getting started with Vagrant Running Kubernetes with Vagrant is an easy way to run/test/develop on your diff --git a/development.md b/development.md index d36f7ec3..18870b28 100644 --- a/development.md +++ b/development.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

PLEASE NOTE: This document applies to the HEAD of the source tree

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/development.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - # Development Guide This document is intended to be the canonical source of truth for things like diff --git a/e2e-node-tests.md b/e2e-node-tests.md index 00735ed7..ce23497e 100644 --- a/e2e-node-tests.md +++ b/e2e-node-tests.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

PLEASE NOTE: This document applies to the HEAD of the source tree

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/e2e-node-tests.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - # Node End-To-End tests Node e2e tests are component tests meant for testing the Kubelet code on a custom host environment. diff --git a/e2e-tests.md b/e2e-tests.md index 431fa9a2..03efcb66 100644 --- a/e2e-tests.md +++ b/e2e-tests.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

PLEASE NOTE: This document applies to the HEAD of the source tree

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/e2e-tests.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - # End-to-End Testing in Kubernetes Updated: 5/3/2016 diff --git a/faster_reviews.md b/faster_reviews.md index b15d9c52..85568d3f 100644 --- a/faster_reviews.md +++ b/faster_reviews.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

PLEASE NOTE: This document applies to the HEAD of the source tree

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/faster_reviews.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - # How to get faster PR reviews Most of what is written here is not at all specific to Kubernetes, but it bears diff --git a/flaky-tests.md b/flaky-tests.md index 645fd634..9656bd5f 100644 --- a/flaky-tests.md +++ b/flaky-tests.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

PLEASE NOTE: This document applies to the HEAD of the source tree

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/flaky-tests.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - # Flaky tests Any test that fails occasionally is "flaky". Since our merges only proceed when diff --git a/generating-clientset.md b/generating-clientset.md index aa29f54b..c5c8d698 100644 --- a/generating-clientset.md +++ b/generating-clientset.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

PLEASE NOTE: This document applies to the HEAD of the source tree

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/generating-clientset.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - # Generation and release cycle of clientset Client-gen is an automatic tool that generates diff --git a/getting-builds.md b/getting-builds.md index b9d8c66e..86563390 100644 --- a/getting-builds.md +++ b/getting-builds.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/getting-builds.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - # Getting Kubernetes Builds You can use [hack/get-build.sh](http://releases.k8s.io/HEAD/hack/get-build.sh) diff --git a/go-code.md b/go-code.md index 695102ec..2af055f4 100644 --- a/go-code.md +++ b/go-code.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/go-code.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - # Kubernetes Go Tools and Tips Kubernetes is one of the largest open source Go projects, so good tooling a solid understanding of diff --git a/godep.md b/godep.md index f746debb..c19157c6 100644 --- a/godep.md +++ b/godep.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/godep.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - # Using godep to manage dependencies This document is intended to show a way for managing `vendor/` tree dependencies diff --git a/gubernator.md b/gubernator.md index def20b5b..3fd2e445 100644 --- a/gubernator.md +++ b/gubernator.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/gubernator.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - # Gubernator *This document is oriented at developers who want to use Gubernator to debug while developing for Kubernetes.* diff --git a/how-to-doc.md b/how-to-doc.md index 99569426..2b32a066 100644 --- a/how-to-doc.md +++ b/how-to-doc.md @@ -1,32 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - # Document Conventions Updated: 11/3/2015 @@ -48,7 +19,6 @@ for Kubernetes.* - [Headings](#headings) - [What Are Mungers?](#what-are-mungers) - [Auto-added Mungers](#auto-added-mungers) - - [Unversioned Warning](#unversioned-warning) - [Is Versioned](#is-versioned) - [Generate Analytics](#generate-analytics) - [Generated documentation](#generated-documentation) @@ -212,21 +182,6 @@ recommended to just read this section as a reference instead of messing up with the following mungers. -### Unversioned Warning - -UNVERSIONED_WARNING munger inserts unversioned warning which warns the users -when they're reading the document from HEAD and informs them where to find the -corresponding document for a specific release. - -``` - - - - - - -``` - ### Is Versioned IS_VERSIONED munger inserts `IS_VERSIONED` tag in documents in each release, diff --git a/instrumentation.md b/instrumentation.md index b5677ad7..b73221a9 100644 --- a/instrumentation.md +++ b/instrumentation.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/instrumentation.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - ## Instrumenting Kubernetes with a new metric The following is a step-by-step guide for adding a new metric to the Kubernetes diff --git a/issues.md b/issues.md index 4a4e2493..fe9e94d9 100644 --- a/issues.md +++ b/issues.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/issues.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - ## GitHub Issues for the Kubernetes Project A quick overview of how we will review and prioritize incoming issues at diff --git a/kubectl-conventions.md b/kubectl-conventions.md index dd388a61..af964285 100644 --- a/kubectl-conventions.md +++ b/kubectl-conventions.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/kubectl-conventions.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - # Kubectl Conventions Updated: 8/27/2015 diff --git a/kubemark-guide.md b/kubemark-guide.md index 28ca49fd..e914226d 100755 --- a/kubemark-guide.md +++ b/kubemark-guide.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/kubemark-guide.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - # Kubemark User Guide ## Introduction diff --git a/local-cluster/docker.md b/local-cluster/docker.md index 6cdeb3c6..78768f80 100644 --- a/local-cluster/docker.md +++ b/local-cluster/docker.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/local-cluster/docker.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - **Stop. This guide has been superseded by [Minikube](https://github.com/kubernetes/minikube) which is the recommended method of running Kubernetes on your local machine.** diff --git a/local-cluster/local.md b/local-cluster/local.md index 1986346d..60bd5a8f 100644 --- a/local-cluster/local.md +++ b/local-cluster/local.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/local-cluster/local.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - **Stop. This guide has been superseded by [Minikube](https://github.com/kubernetes/minikube) which is the recommended method of running Kubernetes on your local machine.** ### Requirements diff --git a/local-cluster/vagrant.md b/local-cluster/vagrant.md index ffdbabe0..0f0fe91c 100644 --- a/local-cluster/vagrant.md +++ b/local-cluster/vagrant.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/local-cluster/vagrant.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - Running Kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/develop on your local machine (Linux, Mac OS X). ### Prerequisites diff --git a/logging.md b/logging.md index 71fa6c69..1241ee7f 100644 --- a/logging.md +++ b/logging.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/logging.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - ## Logging Conventions The following conventions for the glog levels to use. diff --git a/mesos-style.md b/mesos-style.md index 89a3e340..81554ce8 100644 --- a/mesos-style.md +++ b/mesos-style.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/mesos-style.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - # Building Mesos/Omega-style frameworks on Kubernetes ## Introduction diff --git a/node-performance-testing.md b/node-performance-testing.md index 9842e443..d6bb657f 100644 --- a/node-performance-testing.md +++ b/node-performance-testing.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/node-performance-testing.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - # Measuring Node Performance This document outlines the issues and pitfalls of measuring Node performance, as diff --git a/on-call-build-cop.md b/on-call-build-cop.md index 530d7230..15c71e5d 100644 --- a/on-call-build-cop.md +++ b/on-call-build-cop.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/on-call-build-cop.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - ## Kubernetes "Github and Build-cop" Rotation ### Preqrequisites diff --git a/on-call-rotations.md b/on-call-rotations.md index 8ff47dc5..a6535e82 100644 --- a/on-call-rotations.md +++ b/on-call-rotations.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/on-call-rotations.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - ## Kubernetes On-Call Rotations ### Kubernetes "first responder" rotations diff --git a/on-call-user-support.md b/on-call-user-support.md index 5efaeec7..c79d7e0e 100644 --- a/on-call-user-support.md +++ b/on-call-user-support.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/on-call-user-support.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - ## Kubernetes "User Support" Rotation ### Traffic sources and responsibilities diff --git a/owners.md b/owners.md index 2f735e36..db0f3202 100644 --- a/owners.md +++ b/owners.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/owners.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - # Owners files _Note_: This is a design for a feature that is not yet implemented. diff --git a/profiling.md b/profiling.md index 5786e005..f50537f1 100644 --- a/profiling.md +++ b/profiling.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/profiling.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - # Profiling Kubernetes This document explain how to plug in profiler and how to profile Kubernetes services. diff --git a/pull-requests.md b/pull-requests.md index ae7c039f..888d7320 100644 --- a/pull-requests.md +++ b/pull-requests.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/pull-requests.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - - [Pull Request Process](#pull-request-process) diff --git a/running-locally.md b/running-locally.md index a6332d71..327d685e 100644 --- a/running-locally.md +++ b/running-locally.md @@ -1,36 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/running-locally.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - Getting started locally ----------------------- diff --git a/scheduler.md b/scheduler.md index 5c50340d..b1cfea7a 100755 --- a/scheduler.md +++ b/scheduler.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/scheduler.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - # The Kubernetes Scheduler The Kubernetes scheduler runs as a process alongside the other master diff --git a/scheduler_algorithm.md b/scheduler_algorithm.md index 01f5df82..28c6c2bc 100755 --- a/scheduler_algorithm.md +++ b/scheduler_algorithm.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/scheduler_algorithm.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - # Scheduler Algorithm in Kubernetes For each unscheduled Pod, the Kubernetes scheduler tries to find a node across the cluster according to a set of rules. A general introduction to the Kubernetes scheduler can be found at [scheduler.md](scheduler.md). In this document, the algorithm of how to select a node for the Pod is explained. There are two steps before a destination node of a Pod is chosen. The first step is filtering all the nodes and the second is ranking the remaining nodes to find a best fit for the Pod. diff --git a/testing.md b/testing.md index ed0bdfa0..09293f00 100644 --- a/testing.md +++ b/testing.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/testing.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - # Testing guide Updated: 5/21/2016 diff --git a/update-release-docs.md b/update-release-docs.md index 215ba3f0..1e0988db 100644 --- a/update-release-docs.md +++ b/update-release-docs.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/update-release-docs.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - # Table of Contents diff --git a/updating-docs-for-feature-changes.md b/updating-docs-for-feature-changes.md index 5a22d780..6e85c48d 100644 --- a/updating-docs-for-feature-changes.md +++ b/updating-docs-for-feature-changes.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/updating-docs-for-feature-changes.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - # How to update docs for new kubernetes features This document describes things to consider when updating Kubernetes docs for new features or changes to existing features (including removing features). diff --git a/writing-a-getting-started-guide.md b/writing-a-getting-started-guide.md index a227ae6f..b50e556c 100644 --- a/writing-a-getting-started-guide.md +++ b/writing-a-getting-started-guide.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/writing-a-getting-started-guide.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - # Writing a Getting Started Guide This page gives some advice for anyone planning to write or update a Getting Started Guide for Kubernetes. diff --git a/writing-good-e2e-tests.md b/writing-good-e2e-tests.md index cfe8ff6a..ab13aff2 100644 --- a/writing-good-e2e-tests.md +++ b/writing-good-e2e-tests.md @@ -1,37 +1,3 @@ - - - - -WARNING -WARNING -WARNING -WARNING -WARNING - -

- -If you are using a released version of Kubernetes, you should -refer to the docs that go with that version. - - - -The latest release of this document can be found -[here](http://releases.k8s.io/release-1.4/docs/devel/writing-good-e2e-tests.md). - -Documentation for other releases can be found at -[releases.k8s.io](http://releases.k8s.io). - --- - - - - - # Writing good e2e tests for Kubernetes # ## Patterns and Anti-Patterns ## -- cgit v1.2.3 From 4b33176486dbc401173a001811404382b493f408 Mon Sep 17 00:00:00 2001 From: Tim Hockin Date: Tue, 25 Oct 2016 22:30:03 +0200 Subject: versionize-docs is dead --- cherry-picks.md | 18 ------------------ how-to-doc.md | 13 ------------- 2 files changed, 31 deletions(-) diff --git a/cherry-picks.md b/cherry-picks.md index 40a4b264..ad8df62d 100644 --- a/cherry-picks.md +++ b/cherry-picks.md @@ -42,24 +42,6 @@ label. There is an [issue](https://github.com/kubernetes/kubernetes/issues/23347) open tracking the tool to automate the batching procedure. -#### Cherrypicking a doc change - -If you are cherrypicking a change which adds a doc, then you also need to run -`build-tools/versionize-docs.sh` in the release branch to versionize that doc. -Ideally, just running `hack/cherry_pick_pull.sh` should be enough, but we are -not there yet: [#18861](https://github.com/kubernetes/kubernetes/issues/18861) - -To cherrypick PR 123456 to release-3.14, run the following commands after -running `hack/cherry_pick_pull.sh` and before merging the PR: - -``` -$ git checkout -b automated-cherry-pick-of-#123456-upstream-release-3.14 -origin/automated-cherry-pick-of-#123456-upstream-release-3.14 -$ ./build-tools/versionize-docs.sh release-3.14 -$ git commit -a -m "Running versionize docs" -$ git push origin automated-cherry-pick-of-#123456-upstream-release-3.14 -``` - ## Cherry Pick Review Cherry pick pull requests are reviewed differently than normal pull requests. 
In diff --git a/how-to-doc.md b/how-to-doc.md index 2b32a066..891969d7 100644 --- a/how-to-doc.md +++ b/how-to-doc.md @@ -19,7 +19,6 @@ for Kubernetes.* - [Headings](#headings) - [What Are Mungers?](#what-are-mungers) - [Auto-added Mungers](#auto-added-mungers) - - [Is Versioned](#is-versioned) - [Generate Analytics](#generate-analytics) - [Generated documentation](#generated-documentation) @@ -181,18 +180,6 @@ your md file that are auto-added. You don't have to add them manually. It's recommended to just read this section as a reference instead of messing up with the following mungers. - -### Is Versioned - -IS_VERSIONED munger inserts `IS_VERSIONED` tag in documents in each release, -which stops UNVERSIONED_WARNING munger from inserting warning messages. - -``` - - - -``` - ### Generate Analytics ANALYTICS munger inserts a Google Anaylytics link for this page. -- cgit v1.2.3 From 8a3536fe9bcb6d2636a2e1998d0e07ecc3427dc1 Mon Sep 17 00:00:00 2001 From: Chao Xu Date: Mon, 31 Oct 2016 11:59:48 -0700 Subject: remove release_1_4 remove archived federation clientsets update README --- generating-clientset.md | 67 +++++++++++++------------------------------------ 1 file changed, 17 insertions(+), 50 deletions(-) diff --git a/generating-clientset.md b/generating-clientset.md index c5c8d698..cbb6141c 100644 --- a/generating-clientset.md +++ b/generating-clientset.md @@ -1,72 +1,39 @@ # Generation and release cycle of clientset -Client-gen is an automatic tool that generates -[clientset](../../docs/proposals/client-package-structure.md#high-level-client-sets) -based on API types. This doc introduces the use the client-gen, and the release -cycle of the generated clientsets. +Client-gen is an automatic tool that generates [clientset](../../docs/proposals/client-package-structure.md#high-level-client-sets) based on API types. This doc introduces the use the client-gen, and the release cycle of the generated clientsets. 
## Using client-gen -The workflow includes four steps: -- Marking API types with tags: in `pkg/apis/${GROUP}/${VERSION}/types.go`, mark -the types (e.g., Pods) that you want to generate clients for with the -`// +genclient=true` tag. If the resource associated with the type is not -namespace scoped (e.g., PersistentVolume), you need to append the -`nonNamespaced=true` tag as well. +The workflow includes three steps: -- Running the client-gen tool: you need to use the command line argument -`--input` to specify the groups and versions of the APIs you want to generate -clients for, client-gen will then look into -`pkg/apis/${GROUP}/${VERSION}/types.go` and generate clients for the types you -have marked with the `genclient` tags. For example, running: +1. Marking API types with tags: in `pkg/apis/${GROUP}/${VERSION}/types.go`, mark the types (e.g., Pods) that you want to generate clients for with the `// +genclient=true` tag. If the resource associated with the type is not namespace scoped (e.g., PersistentVolume), you need to append the `nonNamespaced=true` tag as well. -``` -$ client-gen --input="api/v1,extensions/v1beta1" --clientset-name="my_release" -``` +2. + - a. If you are developing in the k8s.io/kubernetes repository, you just need to run hack/update-codegen.sh. -will generate a clientset named "my_release" which includes clients for api/v1 -objects and extensions/v1beta1 objects. You can run `$ client-gen --help` to see -other command line arguments. + - b. If you are running client-gen outside of k8s.io/kubernetes, you need to use the command line argument `--input` to specify the groups and versions of the APIs you want to generate clients for, client-gen will then look into `pkg/apis/${GROUP}/${VERSION}/types.go` and generate clients for the types you have marked with the `genclient` tags. 
For example, to generate a clientset named "my_release" including clients for api/v1 objects and extensions/v1beta1 objects, you need to run:

```
$ client-gen --input="api/v1,extensions/v1beta1" --clientset-name="my_release"
```

3. ***Adding expansion methods***: client-gen only generates the common methods, such as CRUD. You can manually add additional methods through the expansion interface. For example, this [file](../../pkg/client/clientset_generated/release_1_5/typed/core/v1/pod_expansion.go) adds additional methods to Pod's client. As a convention, we put the expansion interface and its methods in file ${TYPE}_expansion.go. In most cases, you don't want to remove existing expansion files. So to make life easier, instead of creating a new clientset from scratch, ***you can copy and rename an existing clientset (so that all the expansion files are copied)***, and then run client-gen.
-The output of client-gen includes: +## Output of client-gen -- clientset: the clientset will be generated at -`pkg/client/clientset_generated/` by default, and you can change the path via -the `--clientset-path` command line argument. +- clientset: the clientset will be generated at `pkg/client/clientset_generated/` by default, and you can change the path via the `--clientset-path` command line argument. - Individual typed clients and client for group: They will be generated at `pkg/client/clientset_generated/${clientset_name}/typed/generated/${GROUP}/${VERSION}/` ## Released clientsets -At the 1.2 release, we have two released clientsets in the repo: -internalclientset and release_1_2. +If you are contributing code to k8s.io/kubernetes, try to use the release_X_Y clientset in this [directory](../../pkg/client/clientset_generated/). + +If you need a stable Go client to build your own project, please refer to the [client-go repository](https://github.com/kubernetes/client-go). -- internalclientset: because most components in our repo still deal with the -internal objects, the internalclientset talks in internal objects to ease the -adoption of clientset. We will keep updating it as our API evolves. Eventually -it will be replaced by a versioned clientset. +We are migrating k8s.io/kubernetes to use client-go as well, see issue [#35159](https://github.com/kubernetes/kubernetes/issues/35159). -- release_1_2: release_1_2 clientset is a versioned clientset, it includes -clients for the core v1 objects, extensions/v1beta1, autoscaling/v1, and -batch/v1 objects. We will NOT update it after we cut the 1.2 release. After the -1.2 release, we will create release_1_3 clientset and keep it updated until we -cut release 1.3. 
+ [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/generating-clientset.md?pixel)]() -- cgit v1.2.3 From 8ba36234240fac36804bfe4323327c2322d2e1df Mon Sep 17 00:00:00 2001 From: derekwaynecarr Date: Mon, 17 Oct 2016 13:23:48 -0400 Subject: pod and qos level cgroup support --- e2e-node-tests.md | 2 -- 1 file changed, 2 deletions(-) diff --git a/e2e-node-tests.md b/e2e-node-tests.md index ce23497e..78113440 100644 --- a/e2e-node-tests.md +++ b/e2e-node-tests.md @@ -210,8 +210,6 @@ make test_e2e_node TEST_ARGS="--disable-kubenet=false" # disable kubenet For testing with the QoS Cgroup Hierarchy enabled, you can pass --cgroups-per-qos flag as an argument into Ginkgo using TEST_ARGS -*Note: Disabled pending feature stabilization.* - ```sh make test_e2e_node TEST_ARGS="--cgroups-per-qos=true" ``` -- cgit v1.2.3 From 972cbd71dc83476fa5079f02fccbda1e5bbdf5dd Mon Sep 17 00:00:00 2001 From: Saad Ali Date: Wed, 2 Nov 2016 19:07:35 -0700 Subject: Fix typo in docs/devel/godep.md --- godep.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/godep.md b/godep.md index c19157c6..ddd6c5b1 100644 --- a/godep.md +++ b/godep.md @@ -86,7 +86,7 @@ godep get $DEP/... rm -rf Godeps rm -rf vendor ./hack/godep-save.sh -git co -- $(git st -s | grep "^ D" | awk '{print $2}' | grep ^Godeps) +git checkout -- $(git status -s | grep "^ D" | awk '{print $2}' | grep ^Godeps) ``` _If `go get -u path/to/dependency` fails with compilation errors, instead try -- cgit v1.2.3 From 76480357f53da80b60595ea3365c692e90953e0e Mon Sep 17 00:00:00 2001 From: Jimmy Cuadra Date: Thu, 27 Oct 2016 23:16:31 -1000 Subject: Rename PetSet to StatefulSet in docs and examples. 
--- updating-docs-for-feature-changes.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/updating-docs-for-feature-changes.md b/updating-docs-for-feature-changes.md index 6e85c48d..309b809d 100644 --- a/updating-docs-for-feature-changes.md +++ b/updating-docs-for-feature-changes.md @@ -11,7 +11,7 @@ Anyone making user facing changes to kubernetes. This is especially important f ### When making Api changes *e.g. adding Deployments* -* Always make sure docs for downstream effects are updated *(PetSet -> PVC, Deployment -> ReplicationController)* +* Always make sure docs for downstream effects are updated *(StatefulSet -> PVC, Deployment -> ReplicationController)* * Add or update the corresponding *[Glossary](http://kubernetes.io/docs/reference/)* item * Verify the guides / walkthroughs do not require any changes: * **If your change will be recommended over the approaches shown in these guides, then they must be updated to reflect your change** -- cgit v1.2.3 From 4462ce959956209a6446cd586459b66615488d90 Mon Sep 17 00:00:00 2001 From: Brandon Philips Date: Mon, 24 Oct 2016 11:38:06 -0700 Subject: kubectl: add less verbose version The kubectl version output is very complex and makes it hard for users and vendors to give actionable information. 
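A runnable sketch of the kind of parsing this complexity forced on users: extracting the `GitVersion` field from one line of `kubectl version` output with `sed`. The input line below is fabricated for illustration, not real cluster output.

```shell
# Simulate one line of verbose `kubectl version` output and pull out
# the GitVersion field with sed. The input line is made up.
line='Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.3"}'
echo "$line" | sed -n 's%.*GitVersion:"\([^"]*\)".*%\1%p'
# → v1.4.3
```

This is exactly the sort of brittle scraping a `--short` flag makes unnecessary.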
For example, during the recent Kubernetes 1.4.3 TLS security scramble I had to write a one-liner for users to extract the version number and figure out whether they were vulnerable: $ kubectl version | grep -i Server | sed -n 's%.*GitVersion:"\([^"]*\).*%\1%p' Instead, this patch adds a simpler output via `--short`: ./kubectl version --short Client Version: v1.4.3 Server Version: v1.4.3 --- kubectl-conventions.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/kubectl-conventions.md b/kubectl-conventions.md index af964285..1e94b3ba 100644 --- a/kubectl-conventions.md +++ b/kubectl-conventions.md @@ -151,6 +151,9 @@ generation, etc., and display the output * `--output-version=...`: Convert the output to a different API group/version +* `--short`: Output a compact summary of normal output; the format is subject +to change and is optimized for reading, not parsing. + * `--validate`: Validate the resource schema ## Output conventions -- cgit v1.2.3 From 8dc9c4ddcc56f51ad9781b6250d950939e8d010a Mon Sep 17 00:00:00 2001 From: Tim Hockin Date: Thu, 10 Nov 2016 23:47:13 -0800 Subject: Add reviewable notes to CONTRIBUTING --- development.md | 20 +++++++++++++++++--- 1 file changed, 17 insertions(+), 3 deletions(-) diff --git a/development.md b/development.md index d36f7ec3..88fc8098 100644 --- a/development.md +++ b/development.md @@ -228,6 +228,23 @@ git push -f origin my-feature **Note:** If you have write access, please refrain from using the GitHub UI for creating PRs, because GitHub will create the PR branch inside the main repository rather than inside your fork. +### Getting a code review + +Once your pull request has been opened it will be assigned to one or more +reviewers. Those reviewers will do a thorough code review, looking for +correctness, bugs, opportunities for improvement, documentation and comments, +and style. + +Very small PRs are easy to review. Very large PRs are very difficult to +review.
GitHub has a built-in code review tool, which is what most people use. +At the assigned reviewer's discretion, a PR may be switched to use +[Reviewable](https://reviewable.k8s.io) instead. Once a PR is switched to +Reviewable, please ONLY send or reply to comments through Reviewable. Mixing +code review tools can be very confusing. + +See [Faster Reviews](faster_reviews.md) for some thoughts on how to streamline +the review process. + ### When to retain commits and when to squash Upon merge, all git commits should represent meaningful milestones or units of @@ -240,9 +257,6 @@ fixups (e.g. automated doc formatting), use one or more commits for the changes to tooling and a final commit to apply the fixup en masse. This makes reviews much easier. -See [Faster Reviews](faster_reviews.md) for more details. - - ## Testing Three basic commands let you run unit, integration and/or e2e tests: -- cgit v1.2.3 From 6272791a0413966dcbda2b16500c1a0094dfeadc Mon Sep 17 00:00:00 2001 From: "xialong.lee" Date: Sun, 13 Nov 2016 18:27:41 +0800 Subject: update gazel usage in bazel.md --- bazel.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/bazel.md b/bazel.md index d1230dce..258bc8ad 100644 --- a/bazel.md +++ b/bazel.md @@ -30,7 +30,7 @@ To update a single build file, run: $ # get gazel $ go get -u github.com/mikedanese/gazel $ # e.g.
./pkg/kubectl/BUILD -$ gazel ./pkg/kubectl +$ gazel -root="${YOUR_KUBE_ROOT_PATH}" ./pkg/kubectl ``` Updating BUILD file for a package will be required when: -- cgit v1.2.3 From 676cabf17bab0b127113f1e35e359b73e9bb7d95 Mon Sep 17 00:00:00 2001 From: mdshuai Date: Tue, 15 Nov 2016 15:03:56 +0800 Subject: [kubelet]update --cgroups-per-qos to --experimental-cgroups-per-qos --- e2e-node-tests.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/e2e-node-tests.md b/e2e-node-tests.md index 78113440..5e5f5b49 100644 --- a/e2e-node-tests.md +++ b/e2e-node-tests.md @@ -208,10 +208,10 @@ make test_e2e_node TEST_ARGS="--disable-kubenet=false" # disable kubenet ## Additional QoS Cgroups Hierarchy level testing -For testing with the QoS Cgroup Hierarchy enabled, you can pass --cgroups-per-qos flag as an argument into Ginkgo using TEST_ARGS +For testing with the QoS Cgroup Hierarchy enabled, you can pass --experimental-cgroups-per-qos flag as an argument into Ginkgo using TEST_ARGS ```sh -make test_e2e_node TEST_ARGS="--cgroups-per-qos=true" +make test_e2e_node TEST_ARGS="--experimental-cgroups-per-qos=true" ``` # Notes on tests run by the Kubernetes project during pre-, post- submit. -- cgit v1.2.3 From 07c8d2dc406d1854ea791a2ad2fc842d25a93c9a Mon Sep 17 00:00:00 2001 From: Erick Fejta Date: Tue, 15 Nov 2016 11:59:58 -0800 Subject: Delete gotest-dockerized --- development.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/development.md b/development.md index 18870b28..050419e2 100644 --- a/development.md +++ b/development.md @@ -110,7 +110,7 @@ unique places you need to update the go version. - The image for cross compiling in [build-tools/build-image/cross/](../../build-tools/build-image/cross/). The `VERSION` file and `Dockerfile`. 
- Update [dockerized-e2e-runner.sh](https://github.com/kubernetes/test-infra/blob/master/jenkins/dockerized-e2e-runner.sh) to run a kubekins-e2e with the desired go version, which requires pushing [e2e-image](https://github.com/kubernetes/test-infra/tree/master/jenkins/e2e-image) and [test-image](https://github.com/kubernetes/test-infra/tree/master/jenkins/test-image) images that are `FROM` the desired go version. -- The docker image being run in [hack/jenkins/gotest-dockerized.sh](../../hack/jenkins/gotest-dockerized.sh). +- The docker image being run in [gotest-dockerized.sh](https://github.com/kubernetes/test-infra/tree/master/jenkins/gotest-dockerized.sh). - The cross tag `KUBE_BUILD_IMAGE_CROSS_TAG` in [build-tools/common.sh](../../build-tools/common.sh) ## Workflow -- cgit v1.2.3 From b55f78fbe87ed837bd4d839a295fe7ac5bb9f4da Mon Sep 17 00:00:00 2001 From: Klaus Ma Date: Tue, 15 Nov 2016 15:29:13 +0800 Subject: Added comments on running update-bazel.sh in /Users/klaus/Workspace/go-tools/src/k8s.io/kubernetes. --- bazel.md | 1 + 1 file changed, 1 insertion(+) diff --git a/bazel.md b/bazel.md index d1230dce..8704b05a 100644 --- a/bazel.md +++ b/bazel.md @@ -23,6 +23,7 @@ To update automanaged build files, run: $ ./hack/update-bazel.sh ``` +**NOTE**: `update-bazel.sh` only works if the Kubernetes checkout directory is `$GOPATH/src/k8s.io/kubernetes`. To update a single build file, run: -- cgit v1.2.3 From 062ade7009141e772f79ef298ee5d75f43a3033c Mon Sep 17 00:00:00 2001 From: sebgoa Date: Thu, 17 Nov 2016 16:49:00 +0100 Subject: fix munge-docs build errors --- on-call-user-support.md | 2 +- writing-a-getting-started-guide.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/on-call-user-support.md b/on-call-user-support.md index c79d7e0e..a111c6fe 100644 --- a/on-call-user-support.md +++ b/on-call-user-support.md @@ -30,7 +30,7 @@ redirect users to Slack. Also check out the In general, try to direct support questions to: 1.
Documentation, such as the [user guide](../user-guide/README.md) and -[troubleshooting guide](../troubleshooting.md) +[troubleshooting guide](http://kubernetes.io/docs/troubleshooting/) 2. Stackoverflow diff --git a/writing-a-getting-started-guide.md b/writing-a-getting-started-guide.md index b50e556c..b1d65d60 100644 --- a/writing-a-getting-started-guide.md +++ b/writing-a-getting-started-guide.md @@ -43,7 +43,7 @@ These guidelines say *what* to do. See the Rationale section for *why*. If you have a cluster partially working, but doing all the above steps seems like too much work, we still want to hear from you. We suggest you write a blog post or a Gist, and we will link to it on our wiki page. -Just file an issue or chat us on [Slack](../troubleshooting.md#slack) and one of the committers will link to it from the wiki. +Just file an issue or chat us on [Slack](http://slack.kubernetes.io) and one of the committers will link to it from the wiki. ## Development Distro Guidelines -- cgit v1.2.3 From f9cb189988848abe9cc05f0c0d9c67226b3c08e3 Mon Sep 17 00:00:00 2001 From: Marcin Owsiany Date: Fri, 18 Nov 2016 09:21:28 +0100 Subject: Fix quoting of $PATH in instructions. --- testing.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/testing.md b/testing.md index 09293f00..45848f3b 100644 --- a/testing.md +++ b/testing.md @@ -177,12 +177,12 @@ includes a script to help install etcd on your machine. 
# Option a) install inside kubernetes root hack/install-etcd.sh # Installs in ./third_party/etcd -echo export PATH="$PATH:$(pwd)/third_party/etcd" >> ~/.profile # Add to PATH +echo export PATH="\$PATH:$(pwd)/third_party/etcd" >> ~/.profile # Add to PATH # Option b) install manually grep -E "image.*etcd" cluster/saltbase/etcd/etcd.manifest # Find version # Install that version using yum/apt-get/etc -echo export PATH="$PATH:" >> ~/.profile # Add to PATH +echo export PATH="\$PATH:" >> ~/.profile # Add to PATH ``` ### Etcd test data -- cgit v1.2.3 From b682c74db52752d666e01d3b13f5f7635f58bb74 Mon Sep 17 00:00:00 2001 From: Yu-Ju Hong Date: Tue, 22 Nov 2016 12:16:19 -0800 Subject: Add a CRI doc for developers This doc includes basic instructions to use CRI and the current status. It does not include the formal requirements for CRI, which should be documented separately. --- container-runtime-interface.md | 123 +++++++++++++++++++++++++++++++++++++++++ 1 file changed, 123 insertions(+) create mode 100644 container-runtime-interface.md diff --git a/container-runtime-interface.md b/container-runtime-interface.md new file mode 100644 index 00000000..596fc808 --- /dev/null +++ b/container-runtime-interface.md @@ -0,0 +1,123 @@ +# CRI: the Container Runtime Interface + +## What is CRI? + +CRI (_Container Runtime Interface_) consists of a +[protobuf API](../../pkg/kubelet/api/v1alpha1/runtime/api.proto), +specifications/requirements (to-be-added), +and [libraries](https://github.com/kubernetes/kubernetes/tree/master/pkg/kubelet/server/streaming) +for container runtimes to integrate with kubelet on a node. CRI is currently in Alpha. + +In the future, we plan to add more developer tools such as the CRI validation +tests. + + +## Why develop CRI? + +Prior to the existence of CRI, container runtimes (e.g., `docker`, `rkt`) were +integrated with kubelet through implementing an internal, high-level interface +in kubelet.
The entrance barrier for runtimes was high because the integration +required understanding the internals of kubelet and contributing to the main +Kubernetes repository. More importantly, this would not scale because every new +addition incurs a significant maintenance overhead in the main kubernetes +repository. + +Kubernetes aims to be extensible. CRI is one small, yet important step to enable +pluggable container runtimes and build a healthier ecosystem. + +## How to use CRI? + +1. Start the image and runtime services on your node. You can have a single + service acting as both image and runtime services. +2. Set the kubelet flags + - Pass the unix socket(s) to which your services listen to kubelet: + `--container-runtime-endpoint` and `--image-service-endpoint`. + - Enable CRI in kubelet by`--experimental-cri=true`). + - Use the "remote" runtime by `--container-runtime=remote`. + +Please see the [Status Update](#status-update) section for known issues for +each release. + +Note that CRI is still in its early stages. We are actively incorporating +feedback from early developers to improve the API. Developers should expect +occasional API breaking changes. + +## Does Kubelet use CRI today? + +No, but we are working on it. + +The first step is to switch kubelet to integrate with Docker via CRI by +default. The current [Docker CRI implementation](https://github.com/kubernetes/kubernetes/blob/release-1.5/pkg/kubelet/dockershim) +already passes most end-to-end tests, and has mandatory PR builders to prevent +regressions. While we are expanding the test coverage gradually, it is +difficult to test on all combinations of OS distributions, platforms, and +plugins. There are also many experimental or even undocumented features relied +upon by some users. We would like to **encourage the community to help test +this Docker-CRI integration and report bugs and/or missing features** to +smooth the transition in the near future. 
Please file a Github issue and +include @kubernetes/sig-node for any CRI problem. + +### How to test the new Docker CRI integration? + +Start kubelet with the following flags: + - Use the Docker container runtime by `--container-runtime=docker`(the default). + - Enable CRI in kubelet by`--experimental-cri=true`. + +Please also see the [known issues](#docker-cri-1.5-known-issues) before trying +out. + + +## Design docs and proposals + +We plan to add CRI specifications/requirements in the near future. For now, +these proposals and design docs are the best sources to understand CRI +besides discussions on Github issues. + + - [Original proposal](https://github.com/kubernetes/kubernetes/blob/release-1.5/docs/proposals/container-runtime-interface-v1.md) + - [Exec/attach/port-forward streaming requests](https://docs.google.com/document/d/1OE_QoInPlVCK9rMAx9aybRmgFiVjHpJCHI9LrfdNM_s/edit?usp=sharing) + - [Container stdout/stderr logs](https://github.com/kubernetes/kubernetes/blob/release-1.5/docs/proposals/kubelet-cri-logging.md) + - Networking: The CRI runtime handles network plugins and the + setup/teardown of the pod sandbox. + + +## Work-In-Progress CRI runtimes + + - [cri-o](https://github.com/kubernetes-incubator/cri-o) + - [rktlet](https://github.com/kubernetes-incubator/rktlet) + - [frakti](https://github.com/kubernetes/frakti) + + +## [Status update](#status-update) + +### Kubernetes v1.5 release (CRI v1alpha1) + + - [v1alpha1 version](https://github.com/kubernetes/kubernetes/blob/release-1.5/pkg/kubelet/api/v1alpha1/runtime/api.proto) of CRI is released. + + +#### [CRI known issues](#cri-1.5-known-issues): + + - Container metrics are not defined yet in CRI ([#27097](https://github.com/kubernetes/kubernetes/issues/27097)). 
+ - CRI may not be compatible with other experimental features (e.g., Seccomp) + - Streaming server needs to be further productionized: + - Authentication: [#36666](https://github.com/kubernetes/kubernetes/issues/36666) + - Avoid including user data in the redirect URL: [#36187](https://github.com/kubernetes/kubernetes/issues/36187) + + +#### [Docker CRI integration known issues](#docker-cri-1.5-known-issues) + + - Docker compatibility: Support only Docker v1.11 and v1.12. + - Network: Does not support host port and bandwidth shaping + [#35457](https://github.com/kubernetes/kubernetes/issues/35457) + - Exec/attach/port-forward (streaming requests): Does not support `nsenter` + as the exec handler (`--exec-handler=nsenter`). Also see + (#cri-1.5-known-issues) for limitations on CRI streaming. + +## Contacts + + - Email: sig-node (kubernetes-sig-node@googlegroups.com) + - Slack: https://kubernetes.slack.com/messages/sig-node + + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/container-runtime-interface.md?pixel)]() + -- cgit v1.2.3 From b7279dbb2e0adcbcb0341102c8dd4452f334177e Mon Sep 17 00:00:00 2001 From: Yu-Ju Hong Date: Tue, 22 Nov 2016 17:26:01 -0800 Subject: Add a known issue in the CRI doc --- container-runtime-interface.md | 38 +++++++++++++++++++++----------------- 1 file changed, 21 insertions(+), 17 deletions(-) diff --git a/container-runtime-interface.md b/container-runtime-interface.md index 596fc808..7ab085f7 100644 --- a/container-runtime-interface.md +++ b/container-runtime-interface.md @@ -11,7 +11,6 @@ for container runtimes to integrate with kubelet on a node. CRI is currently in In the future, we plan to add more developer tools such as the CRI validation tests. - ## Why develop CRI? Prior to the existence of CRI, container runtimes (e.g., `docker`, `rkt`) were @@ -32,7 +31,7 @@ pluggable container runtimes and build a healthier ecosystem. 2. 
Set the kubelet flags - Pass the unix socket(s) to which your services listen to kubelet: `--container-runtime-endpoint` and `--image-service-endpoint`. - - Enable CRI in kubelet by`--experimental-cri=true`). + - Enable CRI in kubelet by`--experimental-cri=true`. - Use the "remote" runtime by `--container-runtime=remote`. Please see the [Status Update](#status-update) section for known issues for @@ -66,7 +65,6 @@ Start kubelet with the following flags: Please also see the [known issues](#docker-cri-1.5-known-issues) before trying out. - ## Design docs and proposals We plan to add CRI specifications/requirements in the near future. For now, @@ -79,38 +77,44 @@ besides discussions on Github issues. - Networking: The CRI runtime handles network plugins and the setup/teardown of the pod sandbox. - ## Work-In-Progress CRI runtimes - [cri-o](https://github.com/kubernetes-incubator/cri-o) - [rktlet](https://github.com/kubernetes-incubator/rktlet) - [frakti](https://github.com/kubernetes/frakti) - ## [Status update](#status-update) ### Kubernetes v1.5 release (CRI v1alpha1) - [v1alpha1 version](https://github.com/kubernetes/kubernetes/blob/release-1.5/pkg/kubelet/api/v1alpha1/runtime/api.proto) of CRI is released. - #### [CRI known issues](#cri-1.5-known-issues): - - Container metrics are not defined yet in CRI ([#27097](https://github.com/kubernetes/kubernetes/issues/27097)). - - CRI may not be compatible with other experimental features (e.g., Seccomp) - - Streaming server needs to be further productionized: - - Authentication: [#36666](https://github.com/kubernetes/kubernetes/issues/36666) - - Avoid including user data in the redirect URL: [#36187](https://github.com/kubernetes/kubernetes/issues/36187) - + - [#27097](https://github.com/kubernetes/kubernetes/issues/27097): Container + metrics are not yet defined in CRI. 
+ - [#36401](https://github.com/kubernetes/kubernetes/issues/36401): The new + container log path/format is not yet supported by the logging pipeline + (e.g., fluentd, GCL). + - CRI may not be compatible with other experimental features (e.g., Seccomp). + - Streaming server needs to be hardened. + - [#36666](https://github.com/kubernetes/kubernetes/issues/36666): + Authentication. + - [#36187](https://github.com/kubernetes/kubernetes/issues/36187): Avoid + including user data in the redirect URL. #### [Docker CRI integration known issues](#docker-cri-1.5-known-issues) - Docker compatibility: Support only Docker v1.11 and v1.12. - - Network: Does not support host port and bandwidth shaping - [#35457](https://github.com/kubernetes/kubernetes/issues/35457) - - Exec/attach/port-forward (streaming requests): Does not support `nsenter` - as the exec handler (`--exec-handler=nsenter`). Also see - (#cri-1.5-known-issues) for limitations on CRI streaming. + - Network: + - [#35457](https://github.com/kubernetes/kubernetes/issues/35457): Does + not support host ports. + - [#37315](https://github.com/kubernetes/kubernetes/issues/37315): Does + not support bandwidth shaping. + - Exec/attach/port-forward (streaming requests): + - [#35747](https://github.com/kubernetes/kubernetes/issues/35747): Does + not support `nsenter` as the exec handler (`--exec-handler=nsenter`). + - Also see (#cri-1.5-known-issues) for limitations on CRI streaming. 
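The CRI document above describes two kubelet flag combinations: the Docker integration (`--container-runtime=docker --experimental-cri=true`) and an out-of-process runtime (`--container-runtime=remote` plus the two endpoint flags). A small helper sketch that assembles them (the function name and socket paths are hypothetical placeholders, not kubelet defaults):

```shell
# Assemble the kubelet argument strings for the two CRI modes described
# in the doc above. cri_flags is an illustrative helper, not a real
# tool; socket paths are placeholders.
cri_flags() {
  case "$1" in
    docker)
      echo "--container-runtime=docker --experimental-cri=true" ;;
    remote)
      echo "--container-runtime=remote --experimental-cri=true --container-runtime-endpoint=$2 --image-service-endpoint=$3" ;;
    *)
      echo "unknown runtime: $1" >&2
      return 1 ;;
  esac
}

cri_flags docker
cri_flags remote /var/run/myruntime.sock /var/run/myimages.sock
```

A single service may serve both the runtime and image endpoints, in which case the same socket path would be passed twice.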
## Contacts -- cgit v1.2.3 From a08221e389ee026a516c01a4d0e6a275a996a68c Mon Sep 17 00:00:00 2001 From: Maciej Kwiek Date: Wed, 23 Nov 2016 10:52:04 +0100 Subject: Fix typo in e2e tests doc --- e2e-tests.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/e2e-tests.md b/e2e-tests.md index 03efcb66..fc8f1995 100644 --- a/e2e-tests.md +++ b/e2e-tests.md @@ -104,7 +104,7 @@ go run hack/e2e.go -v --test --test_args="--ginkgo.skip=Pods.*env" GINKGO_PARALLEL=y go run hack/e2e.go --v --test --test_args="--ginkgo.skip=\[Serial\]" # Run tests in parallel, skip any that must be run serially and keep the test namespace if test failed -GINKGO_PARALLEL=y go run hack/e2e.go --v --test --test_args="--ginkgo.skip=\[Serial\] --delete-namespace-on-falure=false" +GINKGO_PARALLEL=y go run hack/e2e.go --v --test --test_args="--ginkgo.skip=\[Serial\] --delete-namespace-on-failure=false" # Flags can be combined, and their actions will take place in this order: # --build, --up, --test, --down -- cgit v1.2.3 From 593222e8c03e9d4ea072a7c2cdcacbd4e9c351d4 Mon Sep 17 00:00:00 2001 From: Brandon Philips Date: Tue, 29 Nov 2016 10:29:19 -0800 Subject: docs: devel: describe the current state of adding approvers Document that we are currently holding off on adding new approvers until the reviewers process is in place. And set a target deadline. --- owners.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/owners.md b/owners.md index db0f3202..9f2cd8c2 100644 --- a/owners.md +++ b/owners.md @@ -9,6 +9,8 @@ will serve as the approvers for code to be submitted to these parts of the repos are not necessarily expected to do the first code review for all commits to these areas, but they are required to approve changes before they can be merged. +**Note** The Kubernetes project has a hiatus on adding new approvers to OWNERS files. 
At this time we are [adding more reviewers](https://github.com/kubernetes/kubernetes/pulls?utf8=%E2%9C%93&q=is%3Apr%20%22Curating%20owners%3A%22%20) to take the load off of the current set of approvers, and once we have had a chance to flesh this out for a release we will begin adding new approvers again. Adding new approvers is planned for after the Kubernetes 1.6.0 release. + ## High Level flow ### Step One: A PR is submitted -- cgit v1.2.3 From fdd5fb4c168407b984472d9928d8abd9a67b02ea Mon Sep 17 00:00:00 2001 From: Brandon Philips Date: Tue, 29 Nov 2016 10:30:33 -0800 Subject: docs: devel: point people at place for OWNERS status All of the tracking is happening here: https://github.com/kubernetes/contrib/issues/1389; point people at it. --- owners.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/owners.md b/owners.md index 9f2cd8c2..217585ce 100644 --- a/owners.md +++ b/owners.md @@ -1,6 +1,6 @@ # Owners files -_Note_: This is a design for a feature that is not yet implemented. +_Note_: This is a design for a feature that is not yet implemented. See the [contrib PR](https://github.com/kubernetes/contrib/issues/1389) for the current progress.
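Since the note above stresses that OWNERS files are a design not yet implemented, any concrete file is speculative. As a purely hypothetical sketch of the shape such a file might take (the field name and usernames are assumptions for illustration, not the implemented schema):

```shell
# Write a hypothetical OWNERS file. The 'assignees' field and the
# usernames are illustrative only -- the design above is not yet
# implemented, so no schema is authoritative.
cat > OWNERS <<'EOF'
assignees:
  - alice
  - bob
EOF
cat OWNERS
```

The intent described in the design is that such a file, placed in a directory, names the people who must approve changes under that directory before merge.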
## Overview -- cgit v1.2.3 From 7b116eb6113bff36074cc2d06c10a39973b2610f Mon Sep 17 00:00:00 2001 From: Michelle Noorali Date: Wed, 30 Nov 2016 14:41:44 -0500 Subject: refactor: isolate docs/devel for move --- README.md | 83 -- adding-an-APIGroup.md | 100 -- api-conventions.md | 1350 ------------------------- api_changes.md | 732 -------------- automation.md | 116 --- bazel.md | 44 - cherry-picks.md | 64 -- cli-roadmap.md | 11 - client-libraries.md | 27 - coding-conventions.md | 147 --- collab.md | 87 -- community-expectations.md | 87 -- container-runtime-interface.md | 127 --- controllers.md | 186 ---- devel/README.md | 83 ++ devel/adding-an-APIGroup.md | 100 ++ devel/api-conventions.md | 1350 +++++++++++++++++++++++++ devel/api_changes.md | 732 ++++++++++++++ devel/automation.md | 116 +++ devel/bazel.md | 44 + devel/cherry-picks.md | 64 ++ devel/cli-roadmap.md | 11 + devel/client-libraries.md | 27 + devel/coding-conventions.md | 147 +++ devel/collab.md | 87 ++ devel/community-expectations.md | 87 ++ devel/container-runtime-interface.md | 127 +++ devel/controllers.md | 186 ++++ devel/developer-guides/vagrant.md | 432 ++++++++ devel/development.md | 251 +++++ devel/e2e-node-tests.md | 231 +++++ devel/e2e-tests.md | 719 +++++++++++++ devel/faster_reviews.md | 218 ++++ devel/flaky-tests.md | 194 ++++ devel/generating-clientset.md | 41 + devel/getting-builds.md | 52 + devel/git_workflow.png | Bin 0 -> 114745 bytes devel/go-code.md | 32 + devel/godep.md | 123 +++ devel/gubernator-images/filterpage.png | Bin 0 -> 408077 bytes devel/gubernator-images/filterpage1.png | Bin 0 -> 375248 bytes devel/gubernator-images/filterpage2.png | Bin 0 -> 372828 bytes devel/gubernator-images/filterpage3.png | Bin 0 -> 362554 bytes devel/gubernator-images/skipping1.png | Bin 0 -> 67007 bytes devel/gubernator-images/skipping2.png | Bin 0 -> 114503 bytes devel/gubernator-images/testfailures.png | Bin 0 -> 189178 bytes devel/gubernator.md | 142 +++ devel/how-to-doc.md | 205 ++++ 
devel/instrumentation.md | 52 + devel/issues.md | 59 ++ devel/kubectl-conventions.md | 411 ++++++++ devel/kubemark-guide.md | 212 ++++ devel/local-cluster/docker.md | 269 +++++ devel/local-cluster/k8s-singlenode-docker.png | Bin 0 -> 31801 bytes devel/local-cluster/local.md | 125 +++ devel/local-cluster/vagrant.md | 397 ++++++++ devel/logging.md | 36 + devel/mesos-style.md | 218 ++++ devel/node-performance-testing.md | 127 +++ devel/on-call-build-cop.md | 151 +++ devel/on-call-rotations.md | 43 + devel/on-call-user-support.md | 89 ++ devel/owners.md | 100 ++ devel/pr_workflow.dia | Bin 0 -> 3189 bytes devel/pr_workflow.png | Bin 0 -> 80793 bytes devel/profiling.md | 46 + devel/pull-requests.md | 105 ++ devel/running-locally.md | 170 ++++ devel/scheduler.md | 72 ++ devel/scheduler_algorithm.md | 44 + devel/testing.md | 230 +++++ devel/update-release-docs.md | 115 +++ devel/updating-docs-for-feature-changes.md | 76 ++ devel/writing-a-getting-started-guide.md | 101 ++ devel/writing-good-e2e-tests.md | 235 +++++ developer-guides/vagrant.md | 432 -------- development.md | 251 ----- e2e-node-tests.md | 231 ----- e2e-tests.md | 719 ------------- faster_reviews.md | 218 ---- flaky-tests.md | 194 ---- generating-clientset.md | 41 - getting-builds.md | 52 - git_workflow.png | Bin 114745 -> 0 bytes go-code.md | 32 - godep.md | 123 --- gubernator-images/filterpage.png | Bin 408077 -> 0 bytes gubernator-images/filterpage1.png | Bin 375248 -> 0 bytes gubernator-images/filterpage2.png | Bin 372828 -> 0 bytes gubernator-images/filterpage3.png | Bin 362554 -> 0 bytes gubernator-images/skipping1.png | Bin 67007 -> 0 bytes gubernator-images/skipping2.png | Bin 114503 -> 0 bytes gubernator-images/testfailures.png | Bin 189178 -> 0 bytes gubernator.md | 142 --- how-to-doc.md | 205 ---- instrumentation.md | 52 - issues.md | 59 -- kubectl-conventions.md | 411 -------- kubemark-guide.md | 212 ---- local-cluster/docker.md | 269 ----- local-cluster/k8s-singlenode-docker.png | Bin 31801 -> 0 
bytes local-cluster/local.md | 125 --- local-cluster/vagrant.md | 397 -------- logging.md | 36 - mesos-style.md | 218 ---- node-performance-testing.md | 127 --- on-call-build-cop.md | 151 --- on-call-rotations.md | 43 - on-call-user-support.md | 89 -- owners.md | 100 -- pr_workflow.dia | Bin 3189 -> 0 bytes pr_workflow.png | Bin 80793 -> 0 bytes profiling.md | 46 - pull-requests.md | 105 -- running-locally.md | 170 ---- scheduler.md | 72 -- scheduler_algorithm.md | 44 - testing.md | 230 ----- update-release-docs.md | 115 --- updating-docs-for-feature-changes.md | 76 -- writing-a-getting-started-guide.md | 101 -- writing-good-e2e-tests.md | 235 ----- 122 files changed, 9284 insertions(+), 9284 deletions(-) delete mode 100644 README.md delete mode 100644 adding-an-APIGroup.md delete mode 100644 api-conventions.md delete mode 100755 api_changes.md delete mode 100644 automation.md delete mode 100644 bazel.md delete mode 100644 cherry-picks.md delete mode 100644 cli-roadmap.md delete mode 100644 client-libraries.md delete mode 100644 coding-conventions.md delete mode 100644 collab.md delete mode 100644 community-expectations.md delete mode 100644 container-runtime-interface.md delete mode 100644 controllers.md create mode 100644 devel/README.md create mode 100644 devel/adding-an-APIGroup.md create mode 100644 devel/api-conventions.md create mode 100755 devel/api_changes.md create mode 100644 devel/automation.md create mode 100644 devel/bazel.md create mode 100644 devel/cherry-picks.md create mode 100644 devel/cli-roadmap.md create mode 100644 devel/client-libraries.md create mode 100644 devel/coding-conventions.md create mode 100644 devel/collab.md create mode 100644 devel/community-expectations.md create mode 100644 devel/container-runtime-interface.md create mode 100644 devel/controllers.md create mode 100755 devel/developer-guides/vagrant.md create mode 100644 devel/development.md create mode 100644 devel/e2e-node-tests.md create mode 100644 devel/e2e-tests.md create 
mode 100644 devel/faster_reviews.md create mode 100644 devel/flaky-tests.md create mode 100644 devel/generating-clientset.md create mode 100644 devel/getting-builds.md create mode 100644 devel/git_workflow.png create mode 100644 devel/go-code.md create mode 100644 devel/godep.md create mode 100644 devel/gubernator-images/filterpage.png create mode 100644 devel/gubernator-images/filterpage1.png create mode 100644 devel/gubernator-images/filterpage2.png create mode 100644 devel/gubernator-images/filterpage3.png create mode 100644 devel/gubernator-images/skipping1.png create mode 100644 devel/gubernator-images/skipping2.png create mode 100644 devel/gubernator-images/testfailures.png create mode 100644 devel/gubernator.md create mode 100644 devel/how-to-doc.md create mode 100644 devel/instrumentation.md create mode 100644 devel/issues.md create mode 100644 devel/kubectl-conventions.md create mode 100755 devel/kubemark-guide.md create mode 100644 devel/local-cluster/docker.md create mode 100644 devel/local-cluster/k8s-singlenode-docker.png create mode 100644 devel/local-cluster/local.md create mode 100644 devel/local-cluster/vagrant.md create mode 100644 devel/logging.md create mode 100644 devel/mesos-style.md create mode 100644 devel/node-performance-testing.md create mode 100644 devel/on-call-build-cop.md create mode 100644 devel/on-call-rotations.md create mode 100644 devel/on-call-user-support.md create mode 100644 devel/owners.md create mode 100644 devel/pr_workflow.dia create mode 100644 devel/pr_workflow.png create mode 100644 devel/profiling.md create mode 100644 devel/pull-requests.md create mode 100644 devel/running-locally.md create mode 100755 devel/scheduler.md create mode 100755 devel/scheduler_algorithm.md create mode 100644 devel/testing.md create mode 100644 devel/update-release-docs.md create mode 100644 devel/updating-docs-for-feature-changes.md create mode 100644 devel/writing-a-getting-started-guide.md create mode 100644 
devel/writing-good-e2e-tests.md delete mode 100755 developer-guides/vagrant.md delete mode 100644 development.md delete mode 100644 e2e-node-tests.md delete mode 100644 e2e-tests.md delete mode 100644 faster_reviews.md delete mode 100644 flaky-tests.md delete mode 100644 generating-clientset.md delete mode 100644 getting-builds.md delete mode 100644 git_workflow.png delete mode 100644 go-code.md delete mode 100644 godep.md delete mode 100644 gubernator-images/filterpage.png delete mode 100644 gubernator-images/filterpage1.png delete mode 100644 gubernator-images/filterpage2.png delete mode 100644 gubernator-images/filterpage3.png delete mode 100644 gubernator-images/skipping1.png delete mode 100644 gubernator-images/skipping2.png delete mode 100644 gubernator-images/testfailures.png delete mode 100644 gubernator.md delete mode 100644 how-to-doc.md delete mode 100644 instrumentation.md delete mode 100644 issues.md delete mode 100644 kubectl-conventions.md delete mode 100755 kubemark-guide.md delete mode 100644 local-cluster/docker.md delete mode 100644 local-cluster/k8s-singlenode-docker.png delete mode 100644 local-cluster/local.md delete mode 100644 local-cluster/vagrant.md delete mode 100644 logging.md delete mode 100644 mesos-style.md delete mode 100644 node-performance-testing.md delete mode 100644 on-call-build-cop.md delete mode 100644 on-call-rotations.md delete mode 100644 on-call-user-support.md delete mode 100644 owners.md delete mode 100644 pr_workflow.dia delete mode 100644 pr_workflow.png delete mode 100644 profiling.md delete mode 100644 pull-requests.md delete mode 100644 running-locally.md delete mode 100755 scheduler.md delete mode 100755 scheduler_algorithm.md delete mode 100644 testing.md delete mode 100644 update-release-docs.md delete mode 100644 updating-docs-for-feature-changes.md delete mode 100644 writing-a-getting-started-guide.md delete mode 100644 writing-good-e2e-tests.md diff --git a/README.md b/README.md deleted file mode 100644 index 
cf29f3b4..00000000 --- a/README.md +++ /dev/null @@ -1,83 +0,0 @@ -# Kubernetes Developer Guide - -The developer guide is for anyone wanting to either write code which directly accesses the -Kubernetes API, or contribute directly to the Kubernetes project. -It assumes some familiarity with concepts in the [User Guide](../user-guide/README.md) and the [Cluster Admin -Guide](../admin/README.md). - - -## The process of developing and contributing code to the Kubernetes project - -* **On Collaborative Development** ([collab.md](collab.md)): Info on pull requests and code reviews. - -* **GitHub Issues** ([issues.md](issues.md)): How incoming issues are reviewed and prioritized. - -* **Pull Request Process** ([pull-requests.md](pull-requests.md)): When and why pull requests are closed. - -* **Kubernetes On-Call Rotations** ([on-call-rotations.md](on-call-rotations.md)): Descriptions of on-call rotations for build and end-user support. - -* **Faster PR reviews** ([faster_reviews.md](faster_reviews.md)): How to get faster PR reviews. - -* **Getting Recent Builds** ([getting-builds.md](getting-builds.md)): How to get recent builds, including the latest builds that pass CI. - -* **Automated Tools** ([automation.md](automation.md)): Descriptions of the automation that is running on our GitHub repository. - - -## Setting up your dev environment, coding, and debugging - -* **Development Guide** ([development.md](development.md)): Setting up your development environment. - -* **Hunting flaky tests** ([flaky-tests.md](flaky-tests.md)): We have a goal of 99.9% flake-free tests. - Here's how to run your tests many times. - -* **Logging Conventions** ([logging.md](logging.md)): Glog levels. - -* **Profiling Kubernetes** ([profiling.md](profiling.md)): How to plug the Go pprof profiler into Kubernetes. - -* **Instrumenting Kubernetes with a new metric** - ([instrumentation.md](instrumentation.md)): How to add a new metric to the - Kubernetes code base. 
- -* **Coding Conventions** ([coding-conventions.md](coding-conventions.md)): - Coding style advice for contributors. - -* **Document Conventions** ([how-to-doc.md](how-to-doc.md)) - Document style advice for contributors. - -* **Running a cluster locally** ([running-locally.md](running-locally.md)): - A fast and lightweight local cluster deployment for development. - -## Developing against the Kubernetes API - -* The [REST API documentation](../api-reference/README.md) explains the REST - API exposed by apiserver. - -* **Annotations** ([docs/user-guide/annotations.md](../user-guide/annotations.md)): are for attaching arbitrary non-identifying metadata to objects. - Programs that automate Kubernetes objects may use annotations to store small amounts of their state. - -* **API Conventions** ([api-conventions.md](api-conventions.md)): - Defining the verbs and resources used in the Kubernetes API. - -* **API Client Libraries** ([client-libraries.md](client-libraries.md)): - A list of existing client libraries, both supported and user-contributed. - - -## Writing plugins - -* **Authentication Plugins** ([docs/admin/authentication.md](../admin/authentication.md)): - The current and planned states of authentication tokens. - -* **Authorization Plugins** ([docs/admin/authorization.md](../admin/authorization.md)): - Authorization applies to all HTTP requests on the main apiserver port. - This doc explains the available authorization implementations. - -* **Admission Control Plugins** ([admission_control](../design/admission_control.md)) - - -## Building releases - -See the [kubernetes/release](https://github.com/kubernetes/release) repository for details on creating releases and related tools and helper scripts. 
- - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/README.md?pixel)]() - diff --git a/adding-an-APIGroup.md b/adding-an-APIGroup.md deleted file mode 100644 index 5832be23..00000000 --- a/adding-an-APIGroup.md +++ /dev/null @@ -1,100 +0,0 @@ -Adding an API Group -=============== - -This document includes the steps to add an API group. You may also want to take -a look at PR [#16621](https://github.com/kubernetes/kubernetes/pull/16621) and -PR [#13146](https://github.com/kubernetes/kubernetes/pull/13146), which add API -groups. - -Please also read about [API conventions](api-conventions.md) and -[API changes](api_changes.md) before adding an API group. - -### Your core group package: - -We plan on improving the way the types are factored in the future; see -[#16062](https://github.com/kubernetes/kubernetes/pull/16062) for the directions -in which this might evolve. - -1. Create a folder in pkg/apis to hold your group. Create types.go in -pkg/apis/``/ and pkg/apis/``/``/ to define API objects -in your group; - -2. Create pkg/apis/``/{register.go, ``/register.go} to register -this group's API objects with the encoding/decoding scheme (e.g., -[pkg/apis/authentication/register.go](../../pkg/apis/authentication/register.go) and -[pkg/apis/authentication/v1beta1/register.go](../../pkg/apis/authentication/v1beta1/register.go)); - -3. Add a pkg/apis/``/install/install.go, which is responsible for adding -the group to the `latest` package, so that other packages can access the group's -meta through `latest.Group`. You probably only need to change the group name -and version in the [example](../../pkg/apis/authentication/install/install.go). You -need to import this `install` package in {pkg/master, -pkg/client/unversioned}/import_known_versions.go if you want to make your group -accessible to other packages in the kube-apiserver binary and to binaries that use -the client package. 
- -Steps 2 and 3 are mechanical; we plan to autogenerate them using the -cmd/libs/go2idl/ tool. - -### Scripts changes and auto-generated code: - -1. Generate conversions and deep-copies: - - 1. Add your "group/" or "group/version" into - cmd/libs/go2idl/conversion-gen/main.go; - 2. Make sure your pkg/apis/``/`` directory has a doc.go file - with the comment `// +k8s:deepcopy-gen=package,register`, to catch the - attention of our generation tools. - 3. Make sure your `pkg/apis//` directory has a doc.go file - with the comment `// +k8s:conversion-gen=`, to catch the - attention of our generation tools. For most APIs the only target you - need is `k8s.io/kubernetes/pkg/apis/` (your internal API). - 4. Make sure your `pkg/apis/` and `pkg/apis//` directories - have a doc.go file with the comment `+groupName=.k8s.io`, to correctly - generate the DNS-suffixed group name. - 5. Run hack/update-all.sh. - -2. Generate files for the Ugorji codec: - - 1. Touch types.generated.go in pkg/apis/``{/, ``}; - 2. Run hack/update-codecgen.sh. - -3. Generate protobuf objects: - - 1. Add your group to the `Packages` field of `New()` in - `cmd/libs/go2idl/go-to-protobuf/protobuf/cmd.go`; - 2. Run hack/update-generated-protobuf.sh. - -### Client (optional): - -We are overhauling pkg/client, so this section might be outdated; see -[#15730](https://github.com/kubernetes/kubernetes/pull/15730) for how the client -package might evolve. Currently, to add your group to the client package, you -need to: - -1. Create pkg/client/unversioned/``.go, define a group client interface, -and implement the client. You can take pkg/client/unversioned/extensions.go as a -reference. - -2. Add the group client interface to the `Interface` in -pkg/client/unversioned/client.go and add a method to fetch the interface. Again, -you can take how we add the Extensions group there as an example. - -3. If you need to support the group in kubectl, you'll also need to modify -pkg/kubectl/cmd/util/factory.go. 
- -### Make the group/version selectable in unit tests (optional): - -1. Add your group in pkg/api/testapi/testapi.go, then you can access the group -in tests through testapi.``; - -2. Add your "group/version" to `KUBE_TEST_API_VERSIONS` in - hack/make-rules/test.sh and hack/make-rules/test-integration.sh - -TODO: Add a troubleshooting section. - - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/adding-an-APIGroup.md?pixel)]() - diff --git a/api-conventions.md b/api-conventions.md deleted file mode 100644 index 0be45182..00000000 --- a/api-conventions.md +++ /dev/null @@ -1,1350 +0,0 @@ -API Conventions -=============== - -Updated: 4/22/2016 - -*This document is oriented at users who want a deeper understanding of the -Kubernetes API structure, and developers wanting to extend the Kubernetes API. -An introduction to using resources with kubectl can be found in [Working with -resources](../user-guide/working-with-resources.md).* - -**Table of Contents** - - - - [Types (Kinds)](#types-kinds) - - [Resources](#resources) - - [Objects](#objects) - - [Metadata](#metadata) - - [Spec and Status](#spec-and-status) - - [Typical status properties](#typical-status-properties) - - [References to related objects](#references-to-related-objects) - - [Lists of named subobjects preferred over maps](#lists-of-named-subobjects-preferred-over-maps) - - [Primitive types](#primitive-types) - - [Constants](#constants) - - [Unions](#unions) - - [Lists and Simple kinds](#lists-and-simple-kinds) - - [Differing Representations](#differing-representations) - - [Verbs on Resources](#verbs-on-resources) - - [PATCH operations](#patch-operations) - - [Strategic Merge Patch](#strategic-merge-patch) - - [List Operations](#list-operations) - - [Map Operations](#map-operations) - - [Idempotency](#idempotency) - - [Optional vs. 
Required](#optional-vs-required) - - [Defaulting](#defaulting) - - [Late Initialization](#late-initialization) - - [Concurrency Control and Consistency](#concurrency-control-and-consistency) - - [Serialization Format](#serialization-format) - - [Units](#units) - - [Selecting Fields](#selecting-fields) - - [Object references](#object-references) - - [HTTP Status codes](#http-status-codes) - - [Success codes](#success-codes) - - [Error codes](#error-codes) - - [Response Status Kind](#response-status-kind) - - [Events](#events) - - [Naming conventions](#naming-conventions) - - [Label, selector, and annotation conventions](#label-selector-and-annotation-conventions) - - [WebSockets and SPDY](#websockets-and-spdy) - - [Validation](#validation) - - - -The conventions of the [Kubernetes API](../api.md) (and related APIs in the -ecosystem) are intended to ease client development and ensure that configuration -mechanisms can be implemented that work across a diverse set of use cases -consistently. - -The general style of the Kubernetes API is RESTful - clients create, update, -delete, or retrieve a description of an object via the standard HTTP verbs -(POST, PUT, DELETE, and GET) - and those APIs preferentially accept and return -JSON. Kubernetes also exposes additional endpoints for non-standard verbs and -allows alternative content types. All of the JSON accepted and returned by the -server has a schema, identified by the "kind" and "apiVersion" fields. Where -relevant HTTP header fields exist, they should mirror the content of JSON -fields, but the information should not be represented only in the HTTP header. - -The following terms are defined: - -* **Kind** the name of a particular object schema (e.g. the "Cat" and "Dog" -kinds would have different attributes and properties) -* **Resource** a representation of a system entity, sent or retrieved as JSON -via HTTP to the server. 
Resources are exposed via: - * Collections - a list of resources of the same type, which may be queryable - * Elements - an individual resource, addressable via a URL - -Each resource typically accepts and returns data of a single kind. A kind may be -accepted or returned by multiple resources that reflect specific use cases. For -instance, the kind "Pod" is exposed as a "pods" resource that allows end users -to create, update, and delete pods, while a separate "pod status" resource (that -acts on "Pod" kind) allows automated processes to update a subset of the fields -in that resource. - -Resource collections should be all lowercase and plural, whereas kinds are -CamelCase and singular. - - -## Types (Kinds) - -Kinds are grouped into three categories: - -1. **Objects** represent a persistent entity in the system. - - Creating an API object is a record of intent - once created, the system will -work to ensure that resource exists. All API objects have common metadata. - - An object may have multiple resources that clients can use to perform -specific actions that create, update, delete, or get. - - Examples: `Pod`, `ReplicationController`, `Service`, `Namespace`, `Node`. - -2. **Lists** are collections of **resources** of one (usually) or more -(occasionally) kinds. - - The name of a list kind must end with "List". Lists have a limited set of -common metadata. All lists use the required "items" field to contain the array -of objects they return. Any kind that has the "items" field must be a list kind. - - Most objects defined in the system should have an endpoint that returns the -full set of resources, as well as zero or more endpoints that return subsets of -the full list. Some objects may be singletons (the current user, the system -defaults) and may not have lists. 
- - In addition, all lists that return objects with labels should support label -filtering (see [docs/user-guide/labels.md](../user-guide/labels.md)), and most -lists should support filtering by fields. - - Examples: PodList, ServiceList, NodeList - - TODO: Describe field filtering below or in a separate doc. - -3. **Simple** kinds are used for specific actions on objects and for -non-persistent entities. - - Given their limited scope, they have the same set of limited common metadata -as lists. - - For instance, the "Status" kind is returned when errors occur and is not -persisted in the system. - - Many simple resources are "subresources", which are rooted at API paths of -specific resources. When resources wish to expose alternative actions or views -that are closely coupled to a single resource, they should do so using new -subresources. Common subresources include: - - * `/binding`: Used to bind a resource representing a user request (e.g., Pod, -PersistentVolumeClaim) to a cluster infrastructure resource (e.g., Node, -PersistentVolume). - * `/status`: Used to write just the status portion of a resource. For -example, the `/pods` endpoint only allows updates to `metadata` and `spec`, -since those reflect end-user intent. An automated process should be able to -modify status for users to see by sending an updated Pod kind to the server to -the "/pods/<name>/status" endpoint - the alternate endpoint allows -different rules to be applied to the update, and access to be appropriately -restricted. - * `/scale`: Used to read and write the count of a resource in a manner that -is independent of the specific resource schema. - - Two additional subresources, `proxy` and `portforward`, provide access to -cluster resources as described in -[docs/user-guide/accessing-the-cluster.md](../user-guide/accessing-the-cluster.md). - -The standard REST verbs (defined below) MUST return singular JSON objects. 
Some -API endpoints may deviate from the strict REST pattern and return resources that -are not singular JSON objects, such as streams of JSON objects or unstructured -text log data. - -The term "kind" is reserved for these "top-level" API types. The term "type" -should be used for distinguishing sub-categories within objects or subobjects. - -### Resources - -All JSON objects returned by an API MUST have the following fields: - -* kind: a string that identifies the schema this object should have -* apiVersion: a string that identifies the version of the schema the object -should have - -These fields are required for proper decoding of the object. They may be -populated by the server by default from the specified URL path, but the client -likely needs to know the values in order to construct the URL path. - -### Objects - -#### Metadata - -Every object kind MUST have the following metadata in a nested object field -called "metadata": - -* namespace: a namespace is a DNS compatible label that objects are subdivided -into. The default namespace is 'default'. See -[docs/user-guide/namespaces.md](../user-guide/namespaces.md) for more. -* name: a string that uniquely identifies this object within the current -namespace (see [docs/user-guide/identifiers.md](../user-guide/identifiers.md)). -This value is used in the path when retrieving an individual object. -* uid: a unique in time and space value (typically an RFC 4122 generated -identifier, see [docs/user-guide/identifiers.md](../user-guide/identifiers.md)) -used to distinguish between objects with the same name that have been deleted -and recreated - -Every object SHOULD have the following metadata in a nested object field called -"metadata": - -* resourceVersion: a string that identifies the internal version of this object -that can be used by clients to determine when objects have changed. This value -MUST be treated as opaque by clients and passed unmodified back to the server. 
-Clients should not assume that the resource version has meaning across -namespaces, different kinds of resources, or different servers. (See -[concurrency control](#concurrency-control-and-consistency), below, for more -details.) -* generation: a sequence number representing a specific generation of the -desired state. Set by the system and monotonically increasing, per-resource. May -be compared, such as for RAW and WAW consistency. -* creationTimestamp: a string representing an RFC 3339 date of the date and time -an object was created -* deletionTimestamp: a string representing an RFC 3339 date of the date and time -after which this resource will be deleted. This field is set by the server when -a graceful deletion is requested by the user, and is not directly settable by a -client. The resource will be deleted (no longer visible from resource lists, and -not reachable by name) after the time in this field. Once set, this value may -not be unset or be set further into the future, although it may be shortened or -the resource may be deleted prior to this time. -* labels: a map of string keys and values that can be used to organize and -categorize objects (see [docs/user-guide/labels.md](../user-guide/labels.md)) -* annotations: a map of string keys and values that can be used by external -tooling to store and retrieve arbitrary metadata about this object (see -[docs/user-guide/annotations.md](../user-guide/annotations.md)) - -Labels are intended for organizational purposes by end users (select the pods -that match this label query). Annotations enable third-party automation and -tooling to decorate objects with additional metadata for their own use. - -#### Spec and Status - -By convention, the Kubernetes API makes a distinction between the specification -of the desired state of an object (a nested object field called "spec") and the -status of the object at the current time (a nested object field called -"status"). 
The specification is a complete description of the desired state, -including configuration settings provided by the user, -[default values](#defaulting) expanded by the system, and properties initialized -or otherwise changed after creation by other ecosystem components (e.g., -schedulers, auto-scalers), and is persisted in stable storage with the API -object. If the specification is deleted, the object will be purged from the -system. The status summarizes the current state of the object in the system, and -is usually persisted with the object by an automated process but may be -generated on the fly. At some cost and perhaps some temporary degradation in -behavior, the status could be reconstructed by observation if it were lost. - -When a new version of an object is POSTed or PUT, the "spec" is updated and -available immediately. Over time the system will work to bring the "status" into -line with the "spec". The system will drive toward the most recent "spec" -regardless of previous versions of that stanza. For example, if a value is -changed from 2 to 5 in one PUT and then back down to 3 in another PUT, the system -is not required to 'touch base' at 5 before changing the "status" to 3. In other -words, the system's behavior is *level-based* rather than *edge-based*. This -enables robust behavior in the presence of missed intermediate state changes. - -The Kubernetes API also serves as the foundation for the declarative -configuration schema for the system. In order to facilitate level-based -operation and expression of declarative configuration, fields in the -specification should have declarative rather than imperative names and -semantics -- they represent the desired state, not actions intended to yield the -desired state. - -The PUT and POST verbs on objects MUST ignore the "status" values, to avoid -accidentally overwriting the status in read-modify-write scenarios. 
A `/status` -subresource MUST be provided to enable system components to update statuses of -resources they manage. - -Otherwise, PUT expects the whole object to be specified. Therefore, if a field -is omitted it is assumed that the client wants to clear that field's value. The -PUT verb does not accept partial updates. Modification of just part of an object -may be achieved by GETting the resource, modifying part of the spec, labels, or -annotations, and then PUTting it back. See -[concurrency control](#concurrency-control-and-consistency), below, regarding -read-modify-write consistency when using this pattern. Some objects may expose -alternative resource representations that allow mutation of the status, or -performing custom actions on the object. - -All objects that represent a physical resource whose state may vary from the -user's desired intent SHOULD have a "spec" and a "status". Objects whose state -cannot vary from the user's desired intent MAY have only "spec", and MAY rename -"spec" to a more appropriate name. - -Objects that contain both spec and status should not contain additional -top-level fields other than the standard metadata fields. - -##### Typical status properties - -**Conditions** represent the latest available observations of an object's -current state. Objects may report multiple conditions, and new types of -conditions may be added in the future. Therefore, conditions are represented -using a list/slice, where all have similar structure. 
- -The `FooCondition` type for some resource type `Foo` may include a subset of the -following fields, but must contain at least `type` and `status` fields: - -```go - Type FooConditionType `json:"type" description:"type of Foo condition"` - Status ConditionStatus `json:"status" description:"status of the condition, one of True, False, Unknown"` - LastHeartbeatTime unversioned.Time `json:"lastHeartbeatTime,omitempty" description:"last time we got an update on a given condition"` - LastTransitionTime unversioned.Time `json:"lastTransitionTime,omitempty" description:"last time the condition transitioned from one status to another"` - Reason string `json:"reason,omitempty" description:"one-word CamelCase reason for the condition's last transition"` - Message string `json:"message,omitempty" description:"human-readable message indicating details about last transition"` -``` - -Additional fields may be added in the future. - -Conditions should be added to explicitly convey properties that users and -components care about rather than requiring those properties to be inferred from -other observations. - -Condition status values may be `True`, `False`, or `Unknown`. The absence of a -condition should be interpreted the same as `Unknown`. - -In general, condition values may change back and forth, but some condition -transitions may be monotonic, depending on the resource and condition type. -However, conditions are observations and not, themselves, state machines, nor do -we define comprehensive state machines for objects, nor behaviors associated -with state transitions. The system is level-based rather than edge-triggered, -and should assume an Open World. - -A typical oscillating condition type is `Ready`, which indicates the object was -believed to be fully operational at the time it was last probed. A possible -monotonic condition could be `Succeeded`. A `False` status for `Succeeded` would -imply failure. 
An object that was still active would not have a `Succeeded` -condition, or its status would be `Unknown`. - -Some resources in the v1 API contain fields called **`phase`**, and associated -`message`, `reason`, and other status fields. The pattern of using `phase` is -deprecated. Newer API types should use conditions instead. Phase was essentially -a state-machine enumeration field that contradicted -[system-design principles](../design/principles.md#control-logic) and hampered -evolution, since [adding new enum values breaks backward -compatibility](api_changes.md). Rather than encouraging clients to infer -implicit properties from phases, we intend to explicitly expose the conditions -that clients need to monitor. Conditions also have the benefit that it is -possible to create some conditions with uniform meaning across all resource -types, while still exposing others that are unique to specific resource types. -See [#7856](http://issues.k8s.io/7856) for more details and discussion. - -In condition types, and everywhere else they appear in the API, **`Reason`** is -intended to be a one-word, CamelCase representation of the category of cause of -the current status, and **`Message`** is intended to be a human-readable phrase -or sentence, which may contain specific details of the individual occurrence. -`Reason` is intended to be used in concise output, such as one-line -`kubectl get` output, and in summarizing occurrences of causes, whereas -`Message` is intended to be presented to users in detailed status explanations, -such as `kubectl describe` output. - -Historical status information (e.g., last transition time, failure counts) is -provided only on a best-effort basis and is not guaranteed to be retained. 
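To make the condition convention above concrete, here is an illustrative (not prescriptive) status stanza for a hypothetical `Foo` object; the field names follow the `FooCondition` structure described earlier, and all values are invented for the example:

```yaml
# Hypothetical status for a "Foo" resource, following the conventions above.
status:
  # Conditions are a list of named subobjects, not a map.
  conditions:
  - type: Ready                # an oscillating condition type
    status: "False"            # one of True, False, Unknown
    reason: ProbeFailure       # one-word CamelCase category of cause
    message: "liveness probe has failed 3 consecutive times"
    lastTransitionTime: "2016-04-22T14:03:05Z"
  observedGeneration: 4        # generation most recently acted upon
```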
- -Status information that may be large (especially proportional in size to -collections of other resources, such as lists of references to other objects -- -see below) and/or rapidly changing, such as -[resource usage](../design/resources.md#usage-data), should be put into separate -objects, with possibly a reference from the original object. This helps to -ensure that GETs and watch remain reasonably efficient for the majority of -clients, which may not need that data. - -Some resources report the `observedGeneration`, which is the `generation` most -recently observed by the component responsible for acting upon changes to the -desired state of the resource. This can be used, for instance, to ensure that -the reported status reflects the most recent desired status. - -#### References to related objects - -References to loosely coupled sets of objects, such as -[pods](../user-guide/pods.md) overseen by a -[replication controller](../user-guide/replication-controller.md), are usually -best referred to using a [label selector](../user-guide/labels.md). In order to -ensure that GETs of individual objects remain bounded in time and space, these -sets may be queried via separate API queries, but will not be expanded in the -referring object's status. - -References to specific objects, especially specific resource versions and/or -specific fields of those objects, are specified using the `ObjectReference` type -(or other types representing strict subsets of it). Unlike partial URLs, the -ObjectReference type facilitates flexible defaulting of fields from the -referring object or other contextual information. - -References in the status of the referee to the referrer may be permitted, when -the references are one-to-one and do not need to be frequently updated, -particularly in an edge-based manner. - -#### Lists of named subobjects preferred over maps - -Discussed in [#2004](http://issue.k8s.io/2004) and elsewhere. There are no maps -of subobjects in any API objects. 
Instead, the convention is to use a list of -subobjects containing name fields. - -For example: - -```yaml -ports: - - name: www - containerPort: 80 -``` - -vs. - -```yaml -ports: - www: - containerPort: 80 -``` - -This rule maintains the invariant that all JSON/YAML keys are fields in API -objects. The only exceptions are pure maps in the API (currently, labels, -selectors, annotations, data), as opposed to sets of subobjects. - -#### Primitive types - -* Avoid floating-point values as much as possible, and never use them in spec. -Floating-point values cannot be reliably round-tripped (encoded and re-decoded) -without changing, and have varying precision and representations across -languages and architectures. -* All numbers (e.g., uint32, int64) are converted to float64 by JavaScript and -some other languages, so any field which is expected to exceed that either in -magnitude or in precision (specifically integer values > 53 bits) should be -serialized and accepted as strings. -* Do not use unsigned integers, due to inconsistent support across languages and -libraries. Just validate that the integer is non-negative if that's the case. -* Do not use enums. Use aliases for string instead (e.g., `NodeConditionType`). -* Look at similar fields in the API (e.g., ports, durations) and follow the -conventions of existing fields. -* All public integer fields MUST use the Go `(u)int32` or Go `(u)int64` types, -not `(u)int` (which is ambiguous depending on target platform). Internal types -may use `(u)int`. - -#### Constants - -Some fields will have a list of allowed values (enumerations). These values will -be strings, and they will be in CamelCase, with an initial uppercase letter. -Examples: "ClusterFirst", "Pending", "ClientIP". - -#### Unions - -Sometimes, at most one of a set of fields can be set. For example, the -`volumes` field of a PodSpec has 17 different volume type-specific fields, such -as `nfs` and `iscsi`. 
All fields in the set should be -[Optional](#optional-vs-required). - -Sometimes, when a new type is created, the API designer may anticipate that a -union will be needed in the future, even if only one field is allowed initially. -In this case, be sure to make the field -[Optional](#optional-vs-required). In the validation, you may still return an error if the sole field is -unset. Do not set a default value for that field. - -### Lists and Simple kinds - -Every list or simple kind SHOULD have the following metadata in a nested object -field called "metadata": - -* resourceVersion: a string that identifies the common version of the objects -returned in a list. This value MUST be treated as opaque by clients and -passed unmodified back to the server. A resource version is only valid within a -single namespace on a single kind of resource. - -Every simple kind returned by the server, and any simple kind sent to the server -that must support idempotency or optimistic concurrency, should return this -value. Since simple resources are often used as input to alternate actions that -modify objects, the resource version of the simple resource should correspond to -the resource version of the object. - - -## Differing Representations - -An API may represent a single entity in different ways for different clients, or -transform an object after certain transitions in the system occur. In these -cases, one request object may have two representations available as different -resources, or different kinds. - -An example is a Service, which represents the intent of the user to group a set -of pods with common behavior on common ports. When Kubernetes detects a pod -matches the service selector, the IP address and port of the pod are added to an -Endpoints resource for that Service. The Endpoints resource exists only if the -Service exists, but exposes only the IPs and ports of the selected pods. 
The -full service is represented by two distinct resources - under the original -Service resource the user created, as well as in the Endpoints resource. - -As another example, a "pod status" resource may accept a PUT with the "pod" -kind, with different rules about what fields may be changed. - -Future versions of Kubernetes may allow alternative encodings of objects beyond -JSON. - - -## Verbs on Resources - -API resources should use the traditional REST pattern: - -* GET /<resourceNamePlural> - Retrieve a list of type -<resourceName>, e.g. GET /pods returns a list of Pods. -* POST /<resourceNamePlural> - Create a new resource from the JSON object -provided by the client. -* GET /<resourceNamePlural>/<name> - Retrieve a single resource -with the given name, e.g. GET /pods/first returns a Pod named 'first'. Should be -constant time, and the resource should be bounded in size. -* DELETE /<resourceNamePlural>/<name> - Delete the single resource -with the given name. DeleteOptions may specify gracePeriodSeconds, the optional -duration in seconds before the object should be deleted. Individual kinds may -declare fields which provide a default grace period, and different kinds may -have differing kind-wide default grace periods. A user-provided grace period -overrides a default grace period, including the zero grace period ("now"). -* PUT /<resourceNamePlural>/<name> - Update or create the resource -with the given name with the JSON object provided by the client. -* PATCH /<resourceNamePlural>/<name> - Selectively modify the -specified fields of the resource. See more information [below](#patch). -* GET /<resourceNamePlural>?watch=true - Receive a stream of JSON -objects corresponding to changes made to any resource of the given kind over -time. 
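The grace-period behavior of DELETE described above can be exercised by sending a `DeleteOptions` object as the request body; a minimal sketch (the pod name is hypothetical):

```yaml
# Body of DELETE /api/v1/namespaces/default/pods/nginx
apiVersion: v1
kind: DeleteOptions
gracePeriodSeconds: 30   # overrides any kind-wide default; 0 means delete immediately
```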
- -### PATCH operations - -The API supports three different PATCH operations, determined by their -corresponding Content-Type header: - -* JSON Patch, `Content-Type: application/json-patch+json` - * As defined in [RFC6902](https://tools.ietf.org/html/rfc6902), a JSON Patch is -a sequence of operations that are executed on the resource, e.g. `{"op": "add", -"path": "/a/b/c", "value": [ "foo", "bar" ]}`. For more details on how to use -JSON Patch, see the RFC. -* Merge Patch, `Content-Type: application/merge-patch+json` - * As defined in [RFC7386](https://tools.ietf.org/html/rfc7386), a Merge Patch -is essentially a partial representation of the resource. The submitted JSON is -"merged" with the current resource to create a new one, then the new one is -saved. For more details on how to use Merge Patch, see the RFC. -* Strategic Merge Patch, `Content-Type: application/strategic-merge-patch+json` - * Strategic Merge Patch is a custom implementation of Merge Patch. For a -detailed explanation of how it works and why it needed to be introduced, see -below. - -#### Strategic Merge Patch - -In the standard JSON merge patch, JSON objects are always merged but lists are -always replaced. Often that isn't what we want. Let's say we start with the -following Pod: - -```yaml -spec: - containers: - - name: nginx - image: nginx-1.0 -``` - -...and we POST that to the server (as JSON). Then let's say we want to *add* a -container to this Pod. - -```yaml -PATCH /api/v1/namespaces/default/pods/pod-name -spec: - containers: - - name: log-tailer - image: log-tailer-1.0 -``` - -If we were to use standard Merge Patch, the entire container list would be -replaced with the single log-tailer container. However, our intent is for the -container lists to merge together based on the `name` field. - -To solve this problem, Strategic Merge Patch uses metadata attached to the API -objects to determine what lists should be merged and which ones should not. 
Currently the metadata is available as struct tags on the API objects
themselves, but will become available to clients as Swagger annotations in the
future. In the above example, the `patchStrategy` metadata for the `containers`
field would be `merge` and the `patchMergeKey` would be `name`.

Note: If the patch results in merging two lists of scalars, the scalars are
first deduplicated and then merged.

Strategic Merge Patch also supports special operations as listed below.

#### List Operations

To override the container list to be strictly replaced, regardless of the
default:

```yaml
containers:
  - name: nginx
    image: nginx-1.0
  - $patch: replace # any further $patch operations nested in this list will be ignored
```

To delete an element of a list that should be merged:

```yaml
containers:
  - name: nginx
    image: nginx-1.0
  - $patch: delete
    name: log-tailer # merge key and value go here
```

#### Map Operations

To indicate that a map should not be merged and instead should be taken
literally:

```yaml
$patch: replace # recursive and applies to all fields of the map it's in
containers:
- name: nginx
  image: nginx-1.0
```

To delete a field of a map:

```yaml
name: nginx
image: nginx-1.0
labels:
  live: null # set the value of the map key to null
```


## Idempotency

All compatible Kubernetes APIs MUST support "name idempotency" and respond with
an HTTP status code 409 when a request is made to POST an object that has the
same name as an existing object in the system. See
[docs/user-guide/identifiers.md](../user-guide/identifiers.md) for details.

Names generated by the system may be requested using `metadata.generateName`.
GenerateName indicates that the name should be made unique by the server prior
to persisting it. A non-empty value for the field indicates the name will be
made unique (and the name returned to the client will be different from the name
passed).
The value of this field will be combined with a unique suffix on the
server if the Name field has not been provided. The provided value must be valid
within the rules for Name, and may be truncated by the length of the suffix
required to make the value unique on the server. If this field is specified, and
Name is not present, the server will NOT return a 409 if the generated name
exists - instead, it will either return 201 Created or 504 with Reason
`ServerTimeout` indicating a unique name could not be found in the time
allotted, and the client should retry (optionally after the time indicated in
the Retry-After header).

## Optional vs. Required

Fields must be either optional or required.

Optional fields have the following properties:

- They have the `omitempty` struct tag in Go.
- They are a pointer type in the Go definition (e.g. `awesomeFlag *bool`) or
have a built-in `nil` value (e.g. maps and slices).
- The API server should allow POSTing and PUTing a resource with this field
unset.

Required fields have the opposite properties, namely:

- They do not have an `omitempty` struct tag.
- They are not a pointer type in the Go definition (e.g. `otherFlag bool`).
- The API server should not allow POSTing or PUTing a resource with this field
unset.

Using the `omitempty` tag causes swagger documentation to reflect that the field
is optional.

Using a pointer allows distinguishing unset from the zero value for that type.
There are some cases where, in principle, a pointer is not needed for an
optional field since the zero value is forbidden, and thus implies unset. There
are examples of this in the codebase.
However:

- it can be difficult for implementors to anticipate all cases where an empty
value might need to be distinguished from a zero value;
- structs are not omitted from encoder output even where omitempty is specified,
which is messy;
- having a pointer consistently imply optional is clearer for users of the Go
language client, and any other clients that use corresponding types.

Therefore, we ask that pointers always be used with optional fields that do not
have a built-in `nil` value.


## Defaulting

Default resource values are API version-specific, and they are applied during
the conversion from API-versioned declarative configuration to internal objects
representing the desired state (`Spec`) of the resource. Subsequent GETs of the
resource will include the default values explicitly.

Incorporating the default values into the `Spec` ensures that `Spec` depicts the
full desired state, so that it is easier for the system to determine how to
achieve the state, and for the user to know what to anticipate.

API version-specific default values are set by the API server.

## Late Initialization

Late initialization is when resource fields are set by a system controller
after an object is created/updated.

For example, the scheduler sets the `pod.spec.nodeName` field after the pod is
created.

Late-initializers should only make the following types of modifications:
 - Setting previously unset fields
 - Adding keys to maps
 - Adding values to arrays which have mergeable semantics
(`patchStrategy:"merge"` attribute in the type definition).

These conventions:
 1. allow a user (with sufficient privilege) to override any system-default
 behaviors by setting the fields that would otherwise have been defaulted.
 1. enable updates from users to be merged with changes made during late
initialization, using strategic merge patch, as opposed to clobbering the
change.
 1.
allow the component which does the late-initialization to use strategic
merge patch, which facilitates composition and concurrency of such components.

Although the apiserver Admission Control stage acts prior to object creation,
Admission Control plugins should follow the Late Initialization conventions
too, to allow their implementation to be later moved to a 'controller', or to
client libraries.

## Concurrency Control and Consistency

Kubernetes leverages the concept of *resource versions* to achieve optimistic
concurrency. All Kubernetes resources have a "resourceVersion" field as part of
their metadata. This resourceVersion is a string that identifies the internal
version of an object that can be used by clients to determine when objects have
changed. When a record is about to be updated, its version is checked against a
pre-saved value, and if it doesn't match, the update fails with a StatusConflict
(HTTP status code 409).

The resourceVersion is changed by the server every time an object is modified.
If resourceVersion is included with the PUT operation, the system will verify
that there have not been other successful mutations to the resource during a
read/modify/write cycle, by verifying that the current value of resourceVersion
matches the specified value.

The resourceVersion is currently backed by [etcd's
modifiedIndex](https://coreos.com/docs/distributed-configuration/etcd-api/).
However, it's important to note that the application should *not* rely on the
implementation details of the versioning system maintained by Kubernetes. We may
change the implementation of resourceVersion in the future, such as to change it
to a timestamp or per-object counter.

The only way for a client to know the expected value of resourceVersion is to
have received it from the server in response to a prior operation, typically a
GET. This value MUST be treated as opaque by clients and passed unmodified back
to the server.
Clients should not assume that the resource version has meaning -across namespaces, different kinds of resources, or different servers. -Currently, the value of resourceVersion is set to match etcd's sequencer. You -could think of it as a logical clock the API server can use to order requests. -However, we expect the implementation of resourceVersion to change in the -future, such as in the case we shard the state by kind and/or namespace, or port -to another storage system. - -In the case of a conflict, the correct client action at this point is to GET the -resource again, apply the changes afresh, and try submitting again. This -mechanism can be used to prevent races like the following: - -``` -Client #1 Client #2 -GET Foo GET Foo -Set Foo.Bar = "one" Set Foo.Baz = "two" -PUT Foo PUT Foo -``` - -When these sequences occur in parallel, either the change to Foo.Bar or the -change to Foo.Baz can be lost. - -On the other hand, when specifying the resourceVersion, one of the PUTs will -fail, since whichever write succeeds changes the resourceVersion for Foo. - -resourceVersion may be used as a precondition for other operations (e.g., GET, -DELETE) in the future, such as for read-after-write consistency in the presence -of caching. - -"Watch" operations specify resourceVersion using a query parameter. It is used -to specify the point at which to begin watching the specified resources. This -may be used to ensure that no mutations are missed between a GET of a resource -(or list of resources) and a subsequent Watch, even if the current version of -the resource is more recent. This is currently the main reason that list -operations (GET on a collection) return resourceVersion. - - -## Serialization Format - -APIs may return alternative representations of any resource in response to an -Accept header or under alternative endpoints, but the default serialization for -input and output of API responses MUST be JSON. 

Protobuf serialization of API objects is currently **EXPERIMENTAL** and will
change without notice.

All dates should be serialized as RFC3339 strings.

## Units

Units must either be explicit in the field name (e.g., `timeoutSeconds`), or
must be specified as part of the value (e.g., `resource.Quantity`). Which
approach is preferred is TBD, though currently we use the `fooSeconds`
convention for durations.


## Selecting Fields

Some APIs may need to identify which field in a JSON object is invalid, or to
reference a value to extract from a separate resource. The current
recommendation is to use standard JavaScript syntax for accessing that field,
assuming the JSON object was transformed into a JavaScript object, without the
leading dot, such as `metadata.name`.

Examples:

* Find the field "current" in the object "state" in the second item in the array
"fields": `fields[1].state.current`

## Object references

Object references should either be called `fooName` if referring to an object of
kind `Foo` by just the name (within the current namespace, if a namespaced
resource), or should be called `fooRef`, and should contain a subset of the
fields of the `ObjectReference` type.


TODO: Plugins, extensions, nested kinds, headers


## HTTP Status codes

The server will respond with HTTP status codes that match the HTTP spec. See the
section below for a breakdown of the types of status codes the server will send.

The following HTTP status codes may be returned by the API.

#### Success codes

* `200 StatusOK`
  * Indicates that the request completed successfully.
* `201 StatusCreated`
  * Indicates that the request to create a kind completed successfully.
* `204 StatusNoContent`
  * Indicates that the request completed successfully, and the response contains
no body.
  * Returned in response to HTTP OPTIONS requests.

#### Error codes

* `307 StatusTemporaryRedirect`
  * Indicates that the address for the requested resource has changed.
  * Suggested client recovery behavior:
    * Follow the redirect.


* `400 StatusBadRequest`
  * Indicates that the request is invalid.
  * Suggested client recovery behavior:
    * Do not retry. Fix the request.


* `401 StatusUnauthorized`
  * Indicates that the server can be reached and understood the request, but
refuses to take any further action, because the client must provide
authorization. If the client has provided authorization, the server is
indicating the provided authorization is unsuitable or invalid.
  * Suggested client recovery behavior:
    * If the user has not supplied authorization information, prompt them for
the appropriate credentials. If the user has supplied authorization information,
inform them their credentials were rejected and optionally prompt them again.


* `403 StatusForbidden`
  * Indicates that the server can be reached and understood the request, but
refuses to take any further action, because it is configured to deny the client
access to the requested resource for some reason.
  * Suggested client recovery behavior:
    * Do not retry. Fix the request.


* `404 StatusNotFound`
  * Indicates that the requested resource does not exist.
  * Suggested client recovery behavior:
    * Do not retry. Fix the request.


* `405 StatusMethodNotAllowed`
  * Indicates that the action the client attempted to perform on the resource
was not supported by the code.
  * Suggested client recovery behavior:
    * Do not retry. Fix the request.


* `409 StatusConflict`
  * Indicates that either the resource the client attempted to create already
exists or the requested update operation cannot be completed due to a conflict.
  * Suggested client recovery behavior:
    * If creating a new resource:
      * Either change the identifier and try again, or GET and compare the
fields in the pre-existing object and issue a PUT/update to modify the existing
object.
    * If updating an existing resource:
      * See `Conflict` from the `status` response section below on how to
retrieve more information about the nature of the conflict.
      * GET and compare the fields in the pre-existing object, merge changes (if
still valid according to preconditions), and retry with the updated request
(including `ResourceVersion`).


* `410 StatusGone`
  * Indicates that the item is no longer available at the server and no
forwarding address is known.
  * Suggested client recovery behavior:
    * Do not retry. Fix the request.


* `422 StatusUnprocessableEntity`
  * Indicates that the requested create or update operation cannot be completed
due to invalid data provided as part of the request.
  * Suggested client recovery behavior:
    * Do not retry. Fix the request.


* `429 StatusTooManyRequests`
  * Indicates that either the client rate limit has been exceeded or the
server has received more requests than it can process.
  * Suggested client recovery behavior:
    * Read the `Retry-After` HTTP header from the response, and wait at least
that long before retrying.


* `500 StatusInternalServerError`
  * Indicates that the server can be reached and understood the request, but
either an unexpected internal error occurred and the outcome of the call is
unknown, or the server cannot complete the action in a reasonable time (this may
be due to temporary server load or a transient communication issue with another
server).
  * Suggested client recovery behavior:
    * Retry with exponential backoff.


* `503 StatusServiceUnavailable`
  * Indicates that the required service is unavailable.
  * Suggested client recovery behavior:
    * Retry with exponential backoff.


* `504 StatusServerTimeout`
  * Indicates that the request could not be completed within the given time.
Clients can get this response ONLY when they specified a timeout param in the
request.
  * Suggested client recovery behavior:
    * Increase the value of the timeout param and retry with exponential
backoff.

## Response Status Kind

Kubernetes will always return the `Status` kind from any API endpoint when an
error occurs. Clients SHOULD handle these types of objects when appropriate.

A `Status` kind will be returned by the API in two cases:
  * When an operation is not successful (i.e. when the server would return a non
2xx HTTP status code).
  * When an HTTP `DELETE` call is successful.

The status object is encoded as JSON and provided as the body of the response.
The status object contains fields for human and machine consumers of the API to
get more detailed information about the cause of the failure. The information in
the status object supplements, but does not override, the HTTP status code's
meaning. When fields in the status object have the same meaning as generally
defined HTTP headers and that header is returned with the response, the header
should be considered as having higher priority.
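
A minimal client-side sketch of consuming such a `Status` body follows. Only
the field names (`kind`, `status`, `reason`, `code`, `message`) come from the
convention above; the helper itself is hypothetical and not part of any
Kubernetes client library:

```python
import json

def summarize_status(body):
    """Hypothetical helper: extract the machine-readable parts of a
    Status object returned in an error response body."""
    obj = json.loads(body)
    if obj.get("kind") != "Status":
        return None  # not a Status object; handle as a normal resource
    return {
        "status": obj.get("status"),    # "Success" or "Failure"
        "reason": obj.get("reason"),    # machine-readable CamelCase word
        "code": obj.get("code"),        # suggested HTTP status code
        "message": obj.get("message"),  # human-readable description
    }
```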

**Example:**

```console
$ curl -v -k -H "Authorization: Bearer WhCDvq4VPpYhrcfmF6ei7V9qlbqTubUc" https://10.240.122.184:443/api/v1/namespaces/default/pods/grafana

> GET /api/v1/namespaces/default/pods/grafana HTTP/1.1
> User-Agent: curl/7.26.0
> Host: 10.240.122.184
> Accept: */*
> Authorization: Bearer WhCDvq4VPpYhrcfmF6ei7V9qlbqTubUc
>

< HTTP/1.1 404 Not Found
< Content-Type: application/json
< Date: Wed, 20 May 2015 18:10:42 GMT
< Content-Length: 232
<
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "pods \"grafana\" not found",
  "reason": "NotFound",
  "details": {
    "name": "grafana",
    "kind": "pods"
  },
  "code": 404
}
```

The `status` field contains one of two possible values:
* `Success`
* `Failure`

`message` may contain a human-readable description of the error.

`reason` may contain a machine-readable, one-word, CamelCase description of why
this operation is in the `Failure` status. If this value is empty there is no
information available. The `reason` clarifies an HTTP status code but does not
override it.

`details` may contain extended data associated with the reason. Each reason may
define its own extended details. This field is optional and the data returned is
not guaranteed to conform to any schema except that defined by the reason type.

Possible values for the `reason` and `details` fields:
* `BadRequest`
  * Indicates that the request itself was invalid, because the request doesn't
make any sense, for example deleting a read-only object.
  * This is different from the `Invalid` status reason below, which indicates
that the API call could possibly succeed, but the data was invalid.
  * API calls that return BadRequest can never succeed.
- * Http status code: `400 StatusBadRequest` - - -* `Unauthorized` - * Indicates that the server can be reached and understood the request, but -refuses to take any further action without the client providing appropriate -authorization. If the client has provided authorization, this error indicates -the provided credentials are insufficient or invalid. - * Details (optional): - * `kind string` - * The kind attribute of the unauthorized resource (on some operations may -differ from the requested resource). - * `name string` - * The identifier of the unauthorized resource. - * HTTP status code: `401 StatusUnauthorized` - - -* `Forbidden` - * Indicates that the server can be reached and understood the request, but -refuses to take any further action, because it is configured to deny access for -some reason to the requested resource by the client. - * Details (optional): - * `kind string` - * The kind attribute of the forbidden resource (on some operations may -differ from the requested resource). - * `name string` - * The identifier of the forbidden resource. - * HTTP status code: `403 StatusForbidden` - - -* `NotFound` - * Indicates that one or more resources required for this operation could not -be found. - * Details (optional): - * `kind string` - * The kind attribute of the missing resource (on some operations may -differ from the requested resource). - * `name string` - * The identifier of the missing resource. - * HTTP status code: `404 StatusNotFound` - - -* `AlreadyExists` - * Indicates that the resource you are creating already exists. - * Details (optional): - * `kind string` - * The kind attribute of the conflicting resource. - * `name string` - * The identifier of the conflicting resource. - * HTTP status code: `409 StatusConflict` - -* `Conflict` - * Indicates that the requested update operation cannot be completed due to a -conflict. The client may need to alter the request. 
Each resource may define
custom details that indicate the nature of the conflict.
  * HTTP status code: `409 StatusConflict`


* `Invalid`
  * Indicates that the requested create or update operation cannot be completed
due to invalid data provided as part of the request.
  * Details (optional):
    * `kind string`
      * The kind attribute of the invalid resource.
    * `name string`
      * The identifier of the invalid resource.
    * `causes`
      * One or more `StatusCause` entries indicating the data in the provided
resource that was invalid. The `reason`, `message`, and `field` attributes will
be set.
  * HTTP status code: `422 StatusUnprocessableEntity`


* `Timeout`
  * Indicates that the request could not be completed within the given time.
Clients may receive this response if the server has decided to rate limit the
client, or if the server is overloaded and cannot process the request at this
time.
  * HTTP status code: `429 TooManyRequests`
  * The server should set the `Retry-After` HTTP header and return
`retryAfterSeconds` in the details field of the object. A value of `0` is the
default.


* `ServerTimeout`
  * Indicates that the server can be reached and understood the request, but
cannot complete the action in a reasonable time. This may be due to temporary
server load or a transient communication issue with another server.
  * Details (optional):
    * `kind string`
      * The kind attribute of the resource being acted on.
    * `name string`
      * The operation that is being attempted.
  * The server should set the `Retry-After` HTTP header and return
`retryAfterSeconds` in the details field of the object. A value of `0` is the
default.
  * HTTP status code: `504 StatusServerTimeout`


* `MethodNotAllowed`
  * Indicates that the action the client attempted to perform on the resource
was not supported by the code.
  * For instance, attempting to delete a resource that can only be created.
  * API calls that return MethodNotAllowed can never succeed.
  * HTTP status code: `405 StatusMethodNotAllowed`


* `InternalError`
  * Indicates that an internal error occurred; it is unexpected and the outcome
of the call is unknown.
  * Details (optional):
    * `causes`
      * The original error.
  * HTTP status code: `500 StatusInternalServerError`

`code` may contain the suggested HTTP return code for this status.


## Events

Events are complementary to status information, since they can provide some
historical information about status and occurrences in addition to current or
previous status. Generate events for situations users or administrators should
be alerted about.

Choose a unique, specific, short, CamelCase reason for each event category. For
example, `FreeDiskSpaceInvalid` is a good event reason because it is likely to
refer to just one situation, but `Started` is not a good reason because it
doesn't sufficiently indicate what started, even when combined with other event
fields.

`Error creating foo` or `Error creating foo %s` would be appropriate for an
event message, with the latter being preferable, since it is more informational.

Accumulate repeated events in the client, especially for frequent events, to
reduce data volume, load on the system, and noise exposed to users.

## Naming conventions

* Go field names must be CamelCase. JSON field names must be camelCase. Other
than capitalization of the initial letter, the two should almost always match.
No underscores or dashes in either.
* Field and resource names should be declarative, not imperative (DoSomething,
SomethingDoer, DoneBy, DoneAt).
* Use `Node` where referring to
the node resource in the context of the cluster. Use `Host` where referring to
properties of the individual physical/virtual system, such as `hostname`,
`hostPath`, `hostNetwork`, etc.
* `FooController` is a deprecated kind naming convention. Name the kind after
the thing being controlled instead (e.g., `Job` rather than `JobController`).
-* The name of a field that specifies the time at which `something` occurs should -be called `somethingTime`. Do not use `stamp` (e.g., `creationTimestamp`). -* We use the `fooSeconds` convention for durations, as discussed in the [units -subsection](#units). - * `fooPeriodSeconds` is preferred for periodic intervals and other waiting -periods (e.g., over `fooIntervalSeconds`). - * `fooTimeoutSeconds` is preferred for inactivity/unresponsiveness deadlines. - * `fooDeadlineSeconds` is preferred for activity completion deadlines. -* Do not use abbreviations in the API, except where they are extremely commonly -used, such as "id", "args", or "stdin". -* Acronyms should similarly only be used when extremely commonly known. All -letters in the acronym should have the same case, using the appropriate case for -the situation. For example, at the beginning of a field name, the acronym should -be all lowercase, such as "httpGet". Where used as a constant, all letters -should be uppercase, such as "TCP" or "UDP". -* The name of a field referring to another resource of kind `Foo` by name should -be called `fooName`. The name of a field referring to another resource of kind -`Foo` by ObjectReference (or subset thereof) should be called `fooRef`. -* More generally, include the units and/or type in the field name if they could -be ambiguous and they are not specified by the value or value type. - -## Label, selector, and annotation conventions - -Labels are the domain of users. They are intended to facilitate organization and -management of API resources using attributes that are meaningful to users, as -opposed to meaningful to the system. Think of them as user-created mp3 or email -inbox labels, as opposed to the directory structure used by a program to store -its data. The former enables the user to apply an arbitrary ontology, whereas -the latter is implementation-centric and inflexible. 
Users will use labels to
select resources to operate on, display label values in CLI/UI columns, etc.
Users should always retain full power and flexibility over the label schemas
they apply to resources in their namespaces.

However, we should support conveniences for common cases by default. For
example, what we now do in ReplicationController is automatically set the RC's
selector and labels to the labels in the pod template by default, if they are
not already set. That ensures that the selector will match the template, and
that the RC can be managed using the same labels as the pods it creates. Note
that once we generalize selectors, it won't necessarily be possible to
unambiguously generate labels that match an arbitrary selector.

If the user wants to apply additional labels to the pods that they don't select
upon, such as to facilitate adoption of pods or in the expectation that some
label values will change, they can set the selector to a subset of the pod
labels. Similarly, the RC's labels could be initialized to a subset of the pod
template's labels, or could include additional/different labels.

For disciplined users managing resources within their own namespaces, it's not
that hard to consistently apply schemas that ensure uniqueness. One just needs
to ensure that the value of at least one label key differs from that of all
other comparable resources. We could/should provide a verification tool
to check that. However, development of conventions similar to the examples in
[Labels](../user-guide/labels.md) makes uniqueness straightforward. Furthermore,
relatively narrowly used namespaces (e.g., per environment, per application) can
be used to reduce the set of resources that could potentially cause overlap.

In cases where users could be running misc.
examples with inconsistent schemas,
or where tooling or components need to programmatically generate new objects to
be selected, there needs to be a straightforward way to generate unique label
sets. A simple way to ensure uniqueness of the set is to ensure uniqueness of a
single label value, such as by using a resource name, uid, resource hash, or
generation number.

Problems with uids and hashes, however, include that they have no semantic
meaning to the user, are neither memorable nor readily recognizable, and are not
predictable. Lack of predictability obstructs use cases such as creation of a
replication controller from a pod, as people want to do when exploring the
system, bootstrapping a self-hosted cluster, or deletion and re-creation of a
new RC that adopts the pods of the previous one, such as to rename it.
Generation numbers are more predictable and much clearer, assuming there is a
logical sequence. Fortunately, for deployments that's the case. For jobs, use of
creation timestamps is common internally. Users should always be able to turn
off auto-generation, in order to permit some of the scenarios described above.
Note that auto-generated labels will also become one more field that needs to be
stripped out when cloning a resource, within a namespace, in a new namespace, in
a new cluster, etc., and will need to be ignored when updating a resource
via patch or read-modify-write sequence.

Inclusion of a system prefix in a label key is fairly hostile to UX. A prefix is
only necessary in the case that the user cannot choose the label key, in order
to avoid collisions with user-defined labels. However, I firmly believe that the
user should always be allowed to select the label keys to use on their
resources, so it should always be possible to override default label keys.
- -Therefore, resources supporting auto-generation of unique labels should have a -`uniqueLabelKey` field, so that the user could specify the key if they wanted -to, but if unspecified, it could be set by default, such as to the resource -type, like job, deployment, or replicationController. The value would need to be -at least spatially unique, and perhaps temporally unique in the case of job. - -Annotations have very different intended usage from labels. We expect them to be -primarily generated and consumed by tooling and system extensions. I'm inclined -to generalize annotations to permit them to directly store arbitrary json. Rigid -names and name prefixes make sense, since they are analogous to API fields. - -In fact, in-development API fields, including those used to represent fields of -newer alpha/beta API versions in the older stable storage version, may be -represented as annotations with the form `something.alpha.kubernetes.io/name` or -`something.beta.kubernetes.io/name` (depending on our confidence in it). For -example `net.alpha.kubernetes.io/policy` might represent an experimental network -policy field. The "name" portion of the annotation should follow the below -conventions for annotations. When an annotation gets promoted to a field, the -name transformation should then be mechanical: `foo-bar` becomes `fooBar`. 
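
The mechanical rename described above can be sketched as follows; the function
name is hypothetical, only the `foo-bar` → `fooBar` rule comes from the text:

```python
def annotation_name_to_field_name(name):
    """Illustrative sketch of the promotion rename: a dashed annotation
    name such as 'foo-bar' becomes the camelCase field name 'fooBar'."""
    first, *rest = name.split("-")
    return first + "".join(word.capitalize() for word in rest)

print(annotation_name_to_field_name("foo-bar"))  # → fooBar
```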
- -Other advice regarding use of labels, annotations, and other generic map keys by -Kubernetes components and tools: - - Key names should be all lowercase, with words separated by dashes, such as -`desired-replicas` - - Prefix the key with `kubernetes.io/` or `foo.kubernetes.io/`, preferably the -latter if the label/annotation is specific to `foo` - - For instance, prefer `service-account.kubernetes.io/name` over -`kubernetes.io/service-account.name` - - Use annotations to store API extensions that the controller responsible for -the resource doesn't need to know about, experimental fields that aren't -intended to be generally used API fields, etc. Beware that annotations aren't -automatically handled by the API conversion machinery. - - -## WebSockets and SPDY - -Some of the API operations exposed by Kubernetes involve transfer of binary -streams between the client and a container, including attach, exec, portforward, -and logging. The API therefore exposes certain operations over upgradeable HTTP -connections ([described in RFC 2817](https://tools.ietf.org/html/rfc2817)) via -the WebSocket and SPDY protocols. These actions are exposed as subresources with -their associated verbs (exec, log, attach, and portforward) and are requested -via a GET (to support JavaScript in a browser) and POST (semantically accurate). - -There are two primary protocols in use today: - -1. Streamed channels - - When dealing with multiple independent binary streams of data such as the -remote execution of a shell command (writing to STDIN, reading from STDOUT and -STDERR) or forwarding multiple ports the streams can be multiplexed onto a -single TCP connection. Kubernetes supports a SPDY based framing protocol that -leverages SPDY channels and a WebSocket framing protocol that multiplexes -multiple channels onto the same stream by prefixing each binary chunk with a -byte indicating its channel. 
The WebSocket protocol supports an optional
-subprotocol that handles base64-encoded bytes from the client and returns
-base64-encoded bytes from the server and character-based channel prefixes ('0',
-'1', '2') for ease of use from JavaScript in a browser.
-
-2. Streaming response
-
-  The default log output for a channel of streaming data is an HTTP Chunked
-Transfer-Encoding, which can return an arbitrary stream of binary data from the
-server. Browser-based JavaScript is limited in its ability to access the raw
-data from a chunked response, especially when very large amounts of logs are
-returned, and in future API calls it may be desirable to transfer large files.
-The streaming API endpoints support an optional WebSocket upgrade that provides
-a unidirectional channel from the server to the client and chunks data as binary
-WebSocket frames. An optional WebSocket subprotocol is exposed that base64
-encodes the stream before returning it to the client.
-
-Clients should use the SPDY protocols if they have native support, or
-WebSockets as a fallback. Note that WebSockets is susceptible to Head-of-Line
-blocking and so clients must read and process each message sequentially. In
-the future, an HTTP/2 implementation will be exposed that deprecates SPDY.
-
-
-## Validation
-
-API objects are validated upon receipt by the apiserver. Validation errors are
-flagged and returned to the caller in a `Failure` status with `reason` set to
-`Invalid`. In order to facilitate consistent error messages, we ask that
-validation logic adhere to the following guidelines whenever possible (though
-exceptional cases will exist).
-
-* Be as precise as possible.
-* Telling users what they CAN do is more useful than telling them what they
-CANNOT do.
-* When asserting a requirement in the positive, use "must". Examples: "must be
-greater than 0", "must match regex '[a-z]+'". Words like "should" imply that
-the assertion is optional, and must be avoided.
-
-* When asserting a formatting requirement in the negative, use "must not".
-Example: "must not contain '..'". Words like "should not" imply that the
-assertion is optional, and must be avoided.
-* When asserting a behavioral requirement in the negative, use "may not".
-Examples: "may not be specified when otherField is empty", "only `name` may be
-specified".
-* When referencing a literal string value, indicate the literal in
-single-quotes. Example: "must not contain '..'".
-* When referencing another field name, indicate the name in back-quotes.
-Example: "must be greater than `request`".
-* When specifying inequalities, use words rather than symbols. Examples: "must
-be less than 256", "must be greater than or equal to 0". Do not use words
-like "larger than", "bigger than", "more than", "higher than", etc.
-* When specifying numeric ranges, use inclusive ranges when possible.
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/api-conventions.md?pixel)]()
-
diff --git a/api_changes.md b/api_changes.md
deleted file mode 100755
index 963deb7c..00000000
--- a/api_changes.md
+++ /dev/null
@@ -1,732 +0,0 @@
-*This document is oriented at developers who want to change existing APIs.
-A set of API conventions, which applies to new APIs and to changes, can be
-found at [API Conventions](api-conventions.md).*
- -**Table of Contents** - - -- [So you want to change the API?](#so-you-want-to-change-the-api) - - [Operational overview](#operational-overview) - - [On compatibility](#on-compatibility) - - [Incompatible API changes](#incompatible-api-changes) - - [Changing versioned APIs](#changing-versioned-apis) - - [Edit types.go](#edit-typesgo) - - [Edit defaults.go](#edit-defaultsgo) - - [Edit conversion.go](#edit-conversiongo) - - [Changing the internal structures](#changing-the-internal-structures) - - [Edit types.go](#edit-typesgo-1) - - [Edit validation.go](#edit-validationgo) - - [Edit version conversions](#edit-version-conversions) - - [Generate protobuf objects](#generate-protobuf-objects) - - [Edit json (un)marshaling code](#edit-json-unmarshaling-code) - - [Making a new API Group](#making-a-new-api-group) - - [Update the fuzzer](#update-the-fuzzer) - - [Update the semantic comparisons](#update-the-semantic-comparisons) - - [Implement your change](#implement-your-change) - - [Write end-to-end tests](#write-end-to-end-tests) - - [Examples and docs](#examples-and-docs) - - [Alpha, Beta, and Stable Versions](#alpha-beta-and-stable-versions) - - [Adding Unstable Features to Stable Versions](#adding-unstable-features-to-stable-versions) - - - -# So you want to change the API? - -Before attempting a change to the API, you should familiarize yourself with a -number of existing API types and with the [API conventions](api-conventions.md). -If creating a new API type/resource, we also recommend that you first send a PR -containing just a proposal for the new API types, and that you initially target -the extensions API (pkg/apis/extensions). - -The Kubernetes API has two major components - the internal structures and -the versioned APIs. The versioned APIs are intended to be stable, while the -internal structures are implemented to best reflect the needs of the Kubernetes -code itself. 
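The split between versioned APIs and internal structures is conventionally implemented as a "star": each versioned form converts to and from a single internal hub, never directly to another version. A minimal sketch, with all type and function names hypothetical:

```go
package main

import "fmt"

// Internal (hub) representation, decoupled from any one API version.
type internalPod struct{ Name string }

// Two hypothetical versioned representations (spokes).
type v5Pod struct{ PodName string }
type v6Pod struct{ Name string }

// Conversions only go between a spoke and the hub, never spoke to spoke.
func v5ToInternal(p v5Pod) internalPod { return internalPod{Name: p.PodName} }
func internalToV5(p internalPod) v5Pod { return v5Pod{PodName: p.Name} }
func v6ToInternal(p v6Pod) internalPod { return internalPod{Name: p.Name} }
func internalToV6(p internalPod) v6Pod { return v6Pod{Name: p.Name} }

func main() {
	// A v6 object read from storage, served to a v5 client: two hops
	// through the hub rather than a direct v6-to-v5 converter.
	stored := v6Pod{Name: "mypod"}
	served := internalToV5(v6ToInternal(stored))
	fmt.Println(served.PodName) // mypod
}
```

With N versions this needs only 2N conversion functions instead of N*(N-1) pairwise ones, which is why the star shape stays manageable.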
- -What this means for API changes is that you have to be somewhat thoughtful in -how you approach changes, and that you have to touch a number of pieces to make -a complete change. This document aims to guide you through the process, though -not all API changes will need all of these steps. - -## Operational overview - -It is important to have a high level understanding of the API system used in -Kubernetes in order to navigate the rest of this document. - -As mentioned above, the internal representation of an API object is decoupled -from any one API version. This provides a lot of freedom to evolve the code, -but it requires robust infrastructure to convert between representations. There -are multiple steps in processing an API operation - even something as simple as -a GET involves a great deal of machinery. - -The conversion process is logically a "star" with the internal form at the -center. Every versioned API can be converted to the internal form (and -vice-versa), but versioned APIs do not convert to other versioned APIs directly. -This sounds like a heavy process, but in reality we do not intend to keep more -than a small number of versions alive at once. While all of the Kubernetes code -operates on the internal structures, they are always converted to a versioned -form before being written to storage (disk or etcd) or being sent over a wire. -Clients should consume and operate on the versioned APIs exclusively. - -To demonstrate the general process, here is a (hypothetical) example: - - 1. A user POSTs a `Pod` object to `/api/v7beta1/...` - 2. The JSON is unmarshalled into a `v7beta1.Pod` structure - 3. Default values are applied to the `v7beta1.Pod` - 4. The `v7beta1.Pod` is converted to an `api.Pod` structure - 5. The `api.Pod` is validated, and any errors are returned to the user - 6. The `api.Pod` is converted to a `v6.Pod` (because v6 is the latest stable -version) - 7. 
The `v6.Pod` is marshalled into JSON and written to etcd - -Now that we have the `Pod` object stored, a user can GET that object in any -supported api version. For example: - - 1. A user GETs the `Pod` from `/api/v5/...` - 2. The JSON is read from etcd and unmarshalled into a `v6.Pod` structure - 3. Default values are applied to the `v6.Pod` - 4. The `v6.Pod` is converted to an `api.Pod` structure - 5. The `api.Pod` is converted to a `v5.Pod` structure - 6. The `v5.Pod` is marshalled into JSON and sent to the user - -The implication of this process is that API changes must be done carefully and -backward-compatibly. - -## On compatibility - -Before talking about how to make API changes, it is worthwhile to clarify what -we mean by API compatibility. Kubernetes considers forwards and backwards -compatibility of its APIs a top priority. - -An API change is considered forward and backward-compatible if it: - - * adds new functionality that is not required for correct behavior (e.g., -does not add a new required field) - * does not change existing semantics, including: - * default values and behavior - * interpretation of existing API types, fields, and values - * which fields are required and which are not - -Put another way: - -1. Any API call (e.g. a structure POSTed to a REST endpoint) that worked before -your change must work the same after your change. -2. Any API call that uses your change must not cause problems (e.g. crash or -degrade behavior) when issued against servers that do not include your change. -3. It must be possible to round-trip your change (convert to different API -versions and back) with no loss of information. -4. Existing clients need not be aware of your change in order for them to -continue to function as they did previously, even when your change is utilized. - -If your change does not meet these criteria, it is not considered strictly -compatible, and may break older clients, or result in newer clients causing -undefined behavior. 
- -Let's consider some examples. In a hypothetical API (assume we're at version -v6), the `Frobber` struct looks something like this: - -```go -// API v6. -type Frobber struct { - Height int `json:"height"` - Param string `json:"param"` -} -``` - -You want to add a new `Width` field. It is generally safe to add new fields -without changing the API version, so you can simply change it to: - -```go -// Still API v6. -type Frobber struct { - Height int `json:"height"` - Width int `json:"width"` - Param string `json:"param"` -} -``` - -The onus is on you to define a sane default value for `Width` such that rule #1 -above is true - API calls and stored objects that used to work must continue to -work. - -For your next change you want to allow multiple `Param` values. You can not -simply change `Param string` to `Params []string` (without creating a whole new -API version) - that fails rules #1 and #2. You can instead do something like: - -```go -// Still API v6, but kind of clumsy. -type Frobber struct { - Height int `json:"height"` - Width int `json:"width"` - Param string `json:"param"` // the first param - ExtraParams []string `json:"extraParams"` // additional params -} -``` - -Now you can satisfy the rules: API calls that provide the old style `Param` -will still work, while servers that don't understand `ExtraParams` can ignore -it. This is somewhat unsatisfying as an API, but it is strictly compatible. - -Part of the reason for versioning APIs and for using internal structs that are -distinct from any one version is to handle growth like this. The internal -representation can be implemented as: - -```go -// Internal, soon to be v7beta1. -type Frobber struct { - Height int - Width int - Params []string -} -``` - -The code that converts to/from versioned APIs can decode this into the somewhat -uglier (but compatible!) structures. Eventually, a new API version, let's call -it v7beta1, will be forked and it can use the clean internal structure. 
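Conversion between the clumsy-but-compatible v6 shape and the clean internal shape might look like the following sketch. These are hypothetical hand-written converters, not the real generated ones:

```go
package main

import "fmt"

// Versioned (v6) form: compatible but clumsy.
type v6Frobber struct {
	Param       string   // the first param
	ExtraParams []string // additional params
}

// Internal form: clean.
type internalFrobber struct {
	Params []string
}

// v6ToInternal folds Param and ExtraParams into a single slice.
func v6ToInternal(f v6Frobber) internalFrobber {
	var params []string
	if f.Param != "" {
		params = append(params, f.Param)
	}
	return internalFrobber{Params: append(params, f.ExtraParams...)}
}

// internalToV6 splits the slice back out, so no information is lost.
func internalToV6(f internalFrobber) v6Frobber {
	out := v6Frobber{}
	if len(f.Params) > 0 {
		out.Param = f.Params[0]
		out.ExtraParams = f.Params[1:]
	}
	return out
}

func main() {
	in := v6Frobber{Param: "a", ExtraParams: []string{"b", "c"}}
	fmt.Printf("%+v\n", v6ToInternal(in)) // all three params survive
}
```

The round trip is lossless in both directions, which is exactly what rule #3 above demands of the real converters.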
- -We've seen how to satisfy rules #1 and #2. Rule #3 means that you can not -extend one versioned API without also extending the others. For example, an -API call might POST an object in API v7beta1 format, which uses the cleaner -`Params` field, but the API server might store that object in trusty old v6 -form (since v7beta1 is "beta"). When the user reads the object back in the -v7beta1 API it would be unacceptable to have lost all but `Params[0]`. This -means that, even though it is ugly, a compatible change must be made to the v6 -API. - -However, this is very challenging to do correctly. It often requires multiple -representations of the same information in the same API resource, which need to -be kept in sync in the event that either is changed. For example, let's say you -decide to rename a field within the same API version. In this case, you add -units to `height` and `width`. You implement this by adding duplicate fields: - -```go -type Frobber struct { - Height *int `json:"height"` - Width *int `json:"width"` - HeightInInches *int `json:"heightInInches"` - WidthInInches *int `json:"widthInInches"` -} -``` - -You convert all of the fields to pointers in order to distinguish between unset -and set to 0, and then set each corresponding field from the other in the -defaulting pass (e.g., `heightInInches` from `height`, and vice versa), which -runs just prior to conversion. That works fine when the user creates a resource -from a hand-written configuration -- clients can write either field and read -either field, but what about creation or update from the output of GET, or -update via PATCH (see -[In-place updates](../user-guide/managing-deployments.md#in-place-updates-of-resources))? -In this case, the two fields will conflict, because only one field would be -updated in the case of an old client that was only aware of the old field (e.g., -`height`). 
-
-Say the client creates:
-
-```json
-{
-  "height": 10,
-  "width": 5
-}
-```
-
-and GETs:
-
-```json
-{
-  "height": 10,
-  "heightInInches": 10,
-  "width": 5,
-  "widthInInches": 5
-}
-```
-
-then PUTs back:
-
-```json
-{
-  "height": 13,
-  "heightInInches": 10,
-  "width": 5,
-  "widthInInches": 5
-}
-```
-
-The update should not fail, because it would have worked before `heightInInches`
-was added.
-
-Therefore, when there are duplicate fields, the old field MUST take precedence
-over the new, and the new field should be set to match by the server upon write.
-A new client would be aware of the old field as well as the new, and so can
-ensure that the old field is either unset or is set consistently with the new
-field. However, older clients would be unaware of the new field. Please avoid
-introducing duplicate fields due to the complexity they incur in the API.
-
-A new representation, even in a new API version, that is more expressive than an
-old one breaks backward compatibility, since clients that only understood the
-old representation would not be aware of the new representation nor its
-semantics. Examples of proposals that have run into this challenge include
-[generalized label selectors](http://issues.k8s.io/341) and [pod-level security
-context](http://prs.k8s.io/12823).
-
-As another interesting example, enumerated values cause similar challenges.
-Adding a new value to an enumerated set is *not* a compatible change. Clients
-which assume they know how to handle all possible values of a given field will
-not be able to handle the new values. However, removing a value from an enumerated
-set *can* be a compatible change, if handled properly (treat the removed value
-as deprecated but allowed). This is actually a special case of a new
-representation, discussed above.
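The old-field-takes-precedence rule for duplicate fields can be sketched as a defaulting-pass helper. This is illustrative Go with made-up names, not the real defaulting code:

```go
package main

import "fmt"

// Both fields are pointers so "unset" can be distinguished from zero.
type frobber struct {
	Height         *int
	HeightInInches *int
}

// syncHeightFields sketches the defaulting pass: when both duplicate fields
// are present, the old field (Height) wins, because an old client may have
// updated only the old field; an unset field is filled in from its twin.
func syncHeightFields(f *frobber) {
	switch {
	case f.Height != nil:
		v := *f.Height
		f.HeightInInches = &v // old field takes precedence
	case f.HeightInInches != nil:
		v := *f.HeightInInches
		f.Height = &v
	}
}

func main() {
	// The PUT from the example above: an old client bumped only `height`.
	oldField, newField := 13, 10
	f := frobber{Height: &oldField, HeightInInches: &newField}
	syncHeightFields(&f)
	fmt.Println(*f.Height, *f.HeightInInches) // both read 13 after the sync
}
```

Running this on the conflicting PUT resolves both fields to the old client's value, so the update succeeds instead of failing.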
- -For [Unions](api-conventions.md#unions), sets of fields where at most one should -be set, it is acceptable to add a new option to the union if the [appropriate -conventions](api-conventions.md#objects) were followed in the original object. -Removing an option requires following the deprecation process. - -## Incompatible API changes - -There are times when this might be OK, but mostly we want changes that meet this -definition. If you think you need to break compatibility, you should talk to the -Kubernetes team first. - -Breaking compatibility of a beta or stable API version, such as v1, is -unacceptable. Compatibility for experimental or alpha APIs is not strictly -required, but breaking compatibility should not be done lightly, as it disrupts -all users of the feature. Experimental APIs may be removed. Alpha and beta API -versions may be deprecated and eventually removed wholesale, as described in the -[versioning document](../design/versioning.md). Document incompatible changes -across API versions under the appropriate -[{v? conversion tips tag in the api.md doc](../api.md). - -If your change is going to be backward incompatible or might be a breaking -change for API consumers, please send an announcement to -`kubernetes-dev@googlegroups.com` before the change gets in. If you are unsure, -ask. Also make sure that the change gets documented in the release notes for the -next release by labeling the PR with the "release-note" github label. - -If you found that your change accidentally broke clients, it should be reverted. - -In short, the expected API evolution is as follows: - -* `extensions/v1alpha1` -> -* `newapigroup/v1alpha1` -> ... -> `newapigroup/v1alphaN` -> -* `newapigroup/v1beta1` -> ... -> `newapigroup/v1betaN` -> -* `newapigroup/v1` -> -* `newapigroup/v2alpha1` -> ... - -While in extensions we have no obligation to move forward with the API at all -and may delete or break it at any time. 
-
-While in alpha we expect to move forward with it, but may break it.
-
-Once in beta we will preserve forward compatibility, but may introduce new
-versions and delete old ones.
-
-v1 must be backward-compatible for an extended length of time.
-
-## Changing versioned APIs
-
-For most changes, you will probably find it easiest to change the versioned
-APIs first. This forces you to think about how to make your change in a
-compatible way. Rather than doing each step in every version, it's usually
-easier to do each versioned API one at a time, or to do all of one version
-before starting "all the rest".
-
-### Edit types.go
-
-The struct definitions for each API are in `pkg/api//types.go`. Edit
-those files to reflect the change you want to make. Note that all types and
-non-inline fields in versioned APIs must be preceded by descriptive comments -
-these are used to generate documentation. Comments for types should not contain
-the type name; API documentation is generated from these comments and end-users
-should not be exposed to golang type names.
-
-Optional fields should have the `,omitempty` json tag; fields are interpreted as
-being required otherwise.
-
-### Edit defaults.go
-
-If your change includes new fields for which you will need default values, you
-need to add cases to `pkg/api//defaults.go`. Of course, since you
-have added code, you have to add a test: `pkg/api//defaults_test.go`.
-
-Do use pointers to scalars when you need to distinguish between an unset value
-and an automatic zero value. For example,
-`PodSpec.TerminationGracePeriodSeconds` is defined as `*int64` in the go type
-definition. A zero value means 0 seconds, and a nil value asks the system to
-pick a default.
-
-Don't forget to run the tests!
-
-### Edit conversion.go
-
-Given that you have not yet changed the internal structs, this might feel
-premature, and that's because it is. You don't yet have anything to convert to
-or from. We will revisit this in the "internal" section.
If you're doing this
-all in a different order (i.e. you started with the internal structs), then you
-should jump to that topic below. In the very rare case that you are making an
-incompatible change you might or might not want to do this now, but you will
-have to do more later. The files you want are
-`pkg/api//conversion.go` and `pkg/api//conversion_test.go`.
-
-Note that the conversion machinery doesn't generically handle conversion of
-values, such as various kinds of field references and API constants. [The client
-library](../../pkg/client/restclient/request.go) has custom conversion code for
-field references. You also need to add a call to
-api.Scheme.AddFieldLabelConversionFunc with a mapping function that understands
-supported translations.
-
-## Changing the internal structures
-
-Now it is time to change the internal structs so your versioned changes can be
-used.
-
-### Edit types.go
-
-Similar to the versioned APIs, the definitions for the internal structs are in
-`pkg/api/types.go`. Edit those files to reflect the change you want to make.
-Keep in mind that the internal structs must be able to express *all* of the
-versioned APIs.
-
-### Edit validation.go
-
-Most changes made to the internal structs need some form of input validation.
-Validation is currently done on internal objects in
-`pkg/api/validation/validation.go`. This validation is one of the first
-opportunities we have to make a great user experience - good error messages and
-thorough validation help ensure that users are giving you what you expect and,
-when they don't, that they know why and how to fix it. Think hard about the
-contents of `string` fields, the bounds of `int` fields and the
-requiredness/optionalness of fields.
-
-Of course, code needs tests - `pkg/api/validation/validation_test.go`.
-
-### Edit version conversions
-
-At this point you have both the versioned API changes and the internal
-structure changes done.
If there are any notable differences - field names,
-types, structural change in particular - you must add some logic to convert
-versioned APIs to and from the internal representation. If you see errors from
-the `serialization_test`, it may indicate the need for explicit conversions.
-
-The performance of conversions very heavily influences the performance of the
-apiserver. Thus, we are auto-generating conversion functions that are much more
-efficient than the generic ones (which are based on reflection and thus are
-highly inefficient).
-
-The conversion code resides with each versioned API. There are two files for
-each:
-
- - `pkg/api//conversion.go` containing manually written conversion
-functions
- - `pkg/api//conversion_generated.go` containing auto-generated
-conversion functions
- - `pkg/apis/extensions//conversion.go` containing manually written
-conversion functions
- - `pkg/apis/extensions//conversion_generated.go` containing
-auto-generated conversion functions
-
-Since the auto-generated conversion functions use the manually written ones,
-the manually written ones should be named according to a defined convention:
-a function converting type X in pkg a to type Y in pkg b should be named
-`convert_a_X_To_b_Y`.
-
-Also note that you can (and for efficiency reasons should) use auto-generated
-conversion functions when writing your conversion functions.
-
-Once all the necessary manually written conversions are added, you need to
-regenerate the auto-generated ones. To regenerate them run:
-
-```sh
-hack/update-codegen.sh
-```
-
-As part of the build, kubernetes will also generate code to handle deep copy of
-your versioned api objects. The deep copy code resides with each versioned API:
- - `/zz_generated.deepcopy.go` containing auto-generated copy functions
-
-If regeneration is somehow not possible due to compile errors, the easiest
-workaround is to comment out the code causing errors and let the script
-regenerate it.
If the auto-generated conversion methods are not used by the
-manually-written ones, it's fine to just remove the whole file and let the
-generator create it from scratch.
-
-Unsurprisingly, adding manually written conversions also requires you to add
-tests to `pkg/api//conversion_test.go`.
-
-
-## Generate protobuf objects
-
-For any core API object, we also need to generate the Protobuf IDL and marshallers.
-That generation is done with
-
-```sh
-hack/update-generated-protobuf.sh
-```
-
-The vast majority of objects will not need any consideration when converting
-to protobuf, but be aware that if you depend on a Golang type in the standard
-library there may be additional work required, although in practice we typically
-use our own equivalents for JSON serialization. The `pkg/api/serialization_test.go`
-will verify that your protobuf serialization preserves all fields - be sure to
-run it several times to ensure there are no incompletely calculated fields.
-
-## Edit json (un)marshaling code
-
-We are auto-generating code for marshaling and unmarshaling json representation
-of api objects - this is to improve the overall system performance.
-
-The auto-generated code resides with each versioned API:
-
- - `pkg/api//types.generated.go`
- - `pkg/apis/extensions//types.generated.go`
-
-To regenerate them run:
-
-```sh
-hack/update-codecgen.sh
-```
-
-## Making a new API Group
-
-This section is under construction, as we make the tooling completely generic.
-
-At the moment, you'll have to make a new directory under `pkg/apis/`; copy the
-directory structure from `pkg/apis/authentication`. Add the new group/version to all
-of the `hack/{verify,update}-generated-{deep-copy,conversions,swagger}.sh` files
-in the appropriate places--it should just require adding your new group/version
-to a bash array. See [docs on adding an API group](adding-an-APIGroup.md) for
-more.
- -Adding API groups outside of the `pkg/apis/` directory is not currently -supported, but is clearly desirable. The deep copy & conversion generators need -to work by parsing go files instead of by reflection; then they will be easy to -point at arbitrary directories: see issue [#13775](http://issue.k8s.io/13775). - -## Update the fuzzer - -Part of our testing regimen for APIs is to "fuzz" (fill with random values) API -objects and then convert them to and from the different API versions. This is -a great way of exposing places where you lost information or made bad -assumptions. If you have added any fields which need very careful formatting -(the test does not run validation) or if you have made assumptions such as -"this slice will always have at least 1 element", you may get an error or even -a panic from the `serialization_test`. If so, look at the diff it produces (or -the backtrace in case of a panic) and figure out what you forgot. Encode that -into the fuzzer's custom fuzz functions. Hint: if you added defaults for a -field, that field will need to have a custom fuzz function that ensures that the -field is fuzzed to a non-empty value. - -The fuzzer can be found in `pkg/api/testing/fuzzer.go`. - -## Update the semantic comparisons - -VERY VERY rarely is this needed, but when it hits, it hurts. In some rare cases -we end up with objects (e.g. resource quantities) that have morally equivalent -values with different bitwise representations (e.g. value 10 with a base-2 -formatter is the same as value 0 with a base-10 formatter). The only way Go -knows how to do deep-equality is through field-by-field bitwise comparisons. -This is a problem for us. - -The first thing you should do is try not to do that. If you really can't avoid -this, I'd like to introduce you to our `semantic DeepEqual` routine. It supports -custom overrides for specific types - you can find that in `pkg/api/helpers.go`. 
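To illustrate why bitwise deep-equality fails here, and what a custom override looks like, here is a self-contained sketch with a made-up `quantity` type (not the real `resource.Quantity`) and a simplified stand-in for the semantic DeepEqual helper:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// quantity mimics a type whose equal values can have different bitwise
// representations, e.g. "1" vs "1000m". Hypothetical, for illustration only.
type quantity struct{ s string }

// milli normalizes a quantity to thousandths for comparison.
func (q quantity) milli() int {
	if strings.HasSuffix(q.s, "m") {
		n, _ := strconv.Atoi(strings.TrimSuffix(q.s, "m"))
		return n
	}
	n, _ := strconv.Atoi(q.s)
	return n * 1000
}

// semanticEqual consults a custom override for quantity before falling back
// to plain equality, in the spirit of the semantic DeepEqual routine.
func semanticEqual(a, b interface{}) bool {
	if qa, ok := a.(quantity); ok {
		if qb, ok := b.(quantity); ok {
			return qa.milli() == qb.milli()
		}
	}
	return a == b
}

func main() {
	one, thousandMilli := quantity{"1"}, quantity{"1000m"}
	fmt.Println(one == thousandMilli)              // false: bitwise comparison
	fmt.Println(semanticEqual(one, thousandMilli)) // true: semantic comparison
}
```

The real helper registers such overrides per type; the point of the sketch is only that morally equal values need a comparison smarter than field-by-field equality.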
- -There's one other time when you might have to touch this: `unexported fields`. -You see, while Go's `reflect` package is allowed to touch `unexported fields`, -us mere mortals are not - this includes `semantic DeepEqual`. Fortunately, most -of our API objects are "dumb structs" all the way down - all fields are exported -(start with a capital letter) and there are no unexported fields. But sometimes -you want to include an object in our API that does have unexported fields -somewhere in it (for example, `time.Time` has unexported fields). If this hits -you, you may have to touch the `semantic DeepEqual` customization functions. - -## Implement your change - -Now you have the API all changed - go implement whatever it is that you're -doing! - -## Write end-to-end tests - -Check out the [E2E docs](e2e-tests.md) for detailed information about how to -write end-to-end tests for your feature. - -## Examples and docs - -At last, your change is done, all unit tests pass, e2e passes, you're done, -right? Actually, no. You just changed the API. If you are touching an existing -facet of the API, you have to try *really* hard to make sure that *all* the -examples and docs are updated. There's no easy way to do this, due in part to -JSON and YAML silently dropping unknown fields. You're clever - you'll figure it -out. Put `grep` or `ack` to good use. - -If you added functionality, you should consider documenting it and/or writing -an example to illustrate your change. - -Make sure you update the swagger and OpenAPI spec by running: - -```sh -hack/update-swagger-spec.sh -hack/update-openapi-spec.sh -``` - -The API spec changes should be in a commit separate from your other changes. 
-
-## Alpha, Beta, and Stable Versions
-
-New feature development proceeds through a series of stages of increasing
-maturity:
-
-- Development level
- - Object Versioning: no convention
- - Availability: not committed to main kubernetes repo, and thus not available
-in official releases
- - Audience: other developers closely collaborating on a feature or
-proof-of-concept
- - Upgradeability, Reliability, Completeness, and Support: no requirements or
-guarantees
-- Alpha level
- - Object Versioning: API version name contains `alpha` (e.g. `v1alpha1`)
- - Availability: committed to main kubernetes repo; appears in an official
-release; feature is disabled by default, but may be enabled by flag
- - Audience: developers and expert users interested in giving early feedback on
-features
- - Completeness: some API operations, CLI commands, or UI support may not be
-implemented; the API need not have had an *API review* (an intensive and
-targeted review of the API, on top of a normal code review)
- - Upgradeability: the object schema and semantics may change in a later
-software release, without any provision for preserving objects in an existing
-cluster; removing the upgradability concern allows developers to make rapid
-progress; in particular, API versions can increment faster than the minor
-release cadence and the developer need not maintain multiple versions;
-developers should still increment the API version when object schema or
-semantics change in an [incompatible way](#on-compatibility)
- - Cluster Reliability: because the feature is relatively new, and may lack
-complete end-to-end tests, enabling the feature via a flag might expose bugs
-that destabilize the cluster (e.g. a bug in a control loop might rapidly create
-excessive numbers of objects, exhausting API storage).
- - Support: there is *no commitment* from the project to complete the feature;
-the feature may be dropped entirely in a later software release
- - Recommended Use Cases: only in short-lived testing clusters, due to the lack
-of upgradeability and of long-term support.
-- Beta level:
- - Object Versioning: API version name contains `beta` (e.g. `v2beta3`)
- - Availability: in official Kubernetes releases, and enabled by default
- - Audience: users interested in providing feedback on features
- - Completeness: all API operations, CLI commands, and UI support should be
-implemented; end-to-end tests complete; the API has had a thorough API review
-and is thought to be complete, though use during beta may frequently turn up API
-issues not thought of during review
- - Upgradeability: the object schema and semantics may change in a later
-software release; when this happens, an upgrade path will be documented; in some
-cases, objects will be automatically converted to the new version; in other
-cases, a manual upgrade may be necessary; a manual upgrade may require downtime
-for anything relying on the new feature, and may require manual conversion of
-objects to the new version; when manual conversion is necessary, the project
-will provide documentation on the process (for an example, see [v1 conversion
-tips](../api.md#v1-conversion-tips))
- - Cluster Reliability: since the feature has e2e tests, enabling the feature
-via a flag should not create new bugs in unrelated features; because the feature
-is new, it may have minor bugs
- - Support: the project commits to complete the feature, in some form, in a
-subsequent Stable version; typically this will happen within 3 months, but
-sometimes longer; releases should simultaneously support two consecutive
-versions (e.g.
`v1beta1` and `v1beta2`; or `v1beta2` and `v1`) for at least one -minor release cycle (typically 3 months) so that users have enough time to -upgrade and migrate objects - - Recommended Use Cases: in short-lived testing clusters; in production -clusters as part of a short-lived evaluation of the feature in order to provide -feedback -- Stable level: - - Object Versioning: API version `vX` where `X` is an integer (e.g. `v1`) - - Availability: in official Kubernetes releases, and enabled by default - - Audience: all users - - Completeness: same as beta - - Upgradeability: only [strictly compatible](#on-compatibility) changes -allowed in subsequent software releases - - Cluster Reliability: high - - Support: API version will continue to be present for many subsequent -software releases - - Recommended Use Cases: any - -### Adding Unstable Features to Stable Versions - -When adding a feature to an object which is already Stable, the new fields and -new behaviors need to meet the Stable level requirements. If these cannot be -met, then the new field cannot be added to the object. - -For example, consider the following object: - -```go -// API v6. -type Frobber struct { - Height int `json:"height"` - Param string `json:"param"` -} -``` - -A developer is considering adding a new `Width` parameter, like this: - -```go -// API v6. -type Frobber struct { - Height int `json:"height"` - Width int `json:"width"` - Param string `json:"param"` -} -``` - -However, the new feature is not stable enough to be used in a stable version -(`v6`). Some reasons for this might include: - -- the final representation is undecided (e.g. should it be called `Width` or -`Breadth`?) -- the implementation is not stable enough for general use (e.g. the `Area()` -routine sometimes overflows). - -The developer cannot add the new field until stability is met.
However, -sometimes stability cannot be met until some users try the new feature, and some -users are only able or willing to accept a released version of Kubernetes. In -that case, the developer has two options, both of which require staging work -over several releases. - - -A preferred option is to first make a release where the new value (`Width` in -this example) is specified via an annotation, like this: - -```yaml -kind: frobber -version: v6 -metadata: - name: myfrobber - annotations: - frobbing.alpha.kubernetes.io/width: 2 -height: 4 -param: "green and blue" -``` - -This format allows users to specify the new field, but makes it clear that they -are using an Alpha feature when they do, since the word `alpha` is in the -annotation key. - -Another option is to introduce a new type with a new `alpha` or `beta` version -designator, like this: - -```go -// API v6alpha2 -type Frobber struct { - Height int `json:"height"` - Width int `json:"width"` - Param string `json:"param"` -} -``` - -The latter requires all objects in the same API group as `Frobber` to be -replicated in the new version, `v6alpha2`. This also requires users to use a new -client which uses the other version. Therefore, this is not a preferred option. - -A related issue is how a cluster manager can roll back from a new version -with a new feature that is already being used by users. See -https://github.com/kubernetes/kubernetes/issues/4855. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/api_changes.md?pixel)]() - diff --git a/automation.md b/automation.md deleted file mode 100644 index 3a9f1754..00000000 --- a/automation.md +++ /dev/null @@ -1,116 +0,0 @@ -# Kubernetes Development Automation - -## Overview - -Kubernetes uses a variety of automated tools in an attempt to relieve developers -of repetitive, low brain power work. This document attempts to describe these -processes.
- - -## Submit Queue - -In an effort to - * reduce load on core developers - * maintain e2e stability - * load test github's label feature - -We have added an automated -[submit-queue](https://github.com/kubernetes/contrib/blob/master/mungegithub/mungers/submit-queue.go) -to the -[github "munger"](https://github.com/kubernetes/contrib/tree/master/mungegithub) -for kubernetes. - -The submit-queue does the following: - -```go -for _, pr := range readyToMergePRs() { - if testsAreStable() { - if retestPR(pr) == success { - mergePR(pr) - } - } -} -``` - -The status of the submit-queue is [online](http://submit-queue.k8s.io/). - -### Ready to merge status - -The submit-queue lists the requirements it enforces on the [merge requirements tab](http://submit-queue.k8s.io/#/info) of the info page; that list may be more up to date than this document. - -A PR is considered "ready for merging" if it matches the following: - * The PR must have the label "cla: yes" or "cla: human-approved" - * The PR must be mergeable, i.e. it must not need a rebase - * All of the following github statuses must be green - * Jenkins GCE Node e2e - * Jenkins GCE e2e - * Jenkins unit/integration - * The PR cannot have any prohibited future milestones (such as a v1.5 milestone during v1.4 code freeze) - * The PR must have the "lgtm" label. The "lgtm" label is automatically applied -following a review comment consisting of only "LGTM" (case-insensitive) - * The PR must not have been updated since the "lgtm" label was applied - * The PR must not have the "do-not-merge" label - -### Merge process - -Merges _only_ occur when the [critical builds](http://submit-queue.k8s.io/#/e2e) -are passing. We're open to including more builds here; let us know... - -Merges are serialized, so only a single PR is merged at a time, to guard -against races. - -If the PR has the `retest-not-required` label, it is simply merged. If the PR does -not have this label, the e2e, unit/integration, and node tests are re-run.
If these -tests pass a second time, the PR will be merged as long as the `critical builds` are -green when this PR finishes retesting. - -## Github Munger - -We run [github "mungers"](https://github.com/kubernetes/contrib/tree/master/mungegithub). - -This runs repeatedly over github pulls and issues and runs modular "mungers" -similar to "mungedocs." The mungers include the 'submit-queue' referenced above along -with numerous other functions. See the README in the link above. - -Please feel free to unleash your creativity on this tool; send us new mungers -that you think will help support the Kubernetes development process. - -### Closing stale pull-requests - -Github Munger will close pull-requests that don't have human activity in the -last 90 days. It will warn about this process 60 days before closing the -pull-request, and warn again 30 days later. One way to prevent this from -happening is to add the "keep-open" label on the pull-request. - -Feel free to re-open and maybe add the "keep-open" label if this happens to a -valid pull-request. It may also be a good opportunity to get more attention by -verifying that it is properly assigned and/or mention people that might be -interested. Commenting on the pull-request will also keep it open for another 90 -days. - -## PR builder - -We also run a robotic PR builder that attempts to run tests for each PR. - -Before a PR from an unknown user is run, the PR builder bot (`k8s-bot`) asks for -a message from a contributor confirming that the PR is "ok to test"; the -contributor replies with that message. ("please" is optional, but remember to -treat your robots with kindness...) - -## FAQ: - -#### How can I ask my PR to be tested again for Jenkins failures? - -PRs should only need to be manually re-tested if you believe there was a flake -during the original test. All flakes should be filed as an -[issue](https://github.com/kubernetes/kubernetes/issues?q=is%3Aopen+is%3Aissue+label%3Akind%2Fflake).
-Once you find or file a flake, a contributor (this may be you!) should request -a retest with "@k8s-bot test this issue: #NNNNN", where NNNNN is replaced with -the issue number you found or filed. - -Any pushes of new code to the PR will automatically trigger a new test. No human -interaction is required. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/automation.md?pixel)]() - diff --git a/bazel.md b/bazel.md deleted file mode 100644 index e6a4e9c5..00000000 --- a/bazel.md +++ /dev/null @@ -1,44 +0,0 @@ -# Build with Bazel - -Building with Bazel is currently experimental. Automanaged BUILD rules have the -tag "automanaged" and are maintained by -[gazel](https://github.com/mikedanese/gazel). Instructions for installing Bazel -can be found [here](https://www.bazel.io/versions/master/docs/install.html). - -To build docker images for the components, run: - -``` -$ bazel build //build-tools/... -``` - -To run many of the unit tests, run: - -``` -$ bazel test //cmd/... //build-tools/... //pkg/... //federation/... //plugin/... -``` - -To update automanaged build files, run: - -``` -$ ./hack/update-bazel.sh -``` - -**NOTE:** `update-bazel.sh` only works if the Kubernetes checkout directory is `$GOPATH/src/k8s.io/kubernetes`. - -To update a single build file, run: - -``` -$ # get gazel -$ go get -u github.com/mikedanese/gazel -$ # e.g.
./pkg/kubectl/BUILD -$ gazel -root="${YOUR_KUBE_ROOT_PATH}" ./pkg/kubectl -``` - -Updating the BUILD file for a package is required when: -* Files are added to or removed from a package -* Import dependencies change for a package - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/bazel.md?pixel)]() - diff --git a/cherry-picks.md b/cherry-picks.md deleted file mode 100644 index ad8df62d..00000000 --- a/cherry-picks.md +++ /dev/null @@ -1,64 +0,0 @@ -# Overview - -This document explains how cherry picks are managed on release branches within the -Kubernetes projects. Patches are either applied in batches or individually -depending on the point in the release cycle. - -## Propose a Cherry Pick - -1. Cherrypicks are -[managed with labels and milestones](pull-requests.md#release-notes) -1. To get a PR merged to the release branch, first ensure the following labels - are on the original **master** branch PR: - * An appropriate milestone (e.g. v1.3) - * The `cherrypick-candidate` label -1. If `release-note-none` is set on the master PR, the cherrypick PR will need - to set the same label to confirm that no release note is needed. -1. `release-note` labeled PRs generate a release note using the PR title by - default OR the release-note block in the PR template if filled in. - * See the [PR template](../../.github/PULL_REQUEST_TEMPLATE.md) for more - details. - * PR titles and body comments are mutable and can be modified at any time - prior to the release to reflect a release note friendly message. - -### How do cherrypick-candidates make it to the release branch? - -1. **BATCHING:** After a branch is first created and before the X.Y.0 release - * Branch owners review the list of `cherrypick-candidate` labeled PRs. - * PRs batched up and merged to the release branch get a `cherrypick-approved` -label and lose the `cherrypick-candidate` label. - * PRs that won't be merged to the release branch lose the -`cherrypick-candidate` label.
- -1. **INDIVIDUAL CHERRYPICKS:** After the first X.Y.0 on a branch - * Run the cherry pick script. This example applies a master branch PR #98765 -to the remote branch `upstream/release-3.14`: -`hack/cherry_pick_pull.sh upstream/release-3.14 98765` - * Your cherrypick PR (targeted to the branch) will immediately get the -`do-not-merge` label. The branch owner will triage PRs targeted to -the branch and label the ones to be merged by applying the `lgtm` -label. - -There is an [issue](https://github.com/kubernetes/kubernetes/issues/23347) open -tracking the tool to automate the batching procedure. - -## Cherry Pick Review - -Cherry pick pull requests are reviewed differently than normal pull requests. In -particular, they may be self-merged by the release branch owner without fanfare, -in the case that the release branch owner knows the cherry pick was already -requested - this should not be the norm, but it may happen. - -## Searching for Cherry Picks - -See the [cherrypick queue dashboard](http://cherrypick.k8s.io/#/queue) for the -status of PRs labeled as `cherrypick-candidate`. - -[Contributor License Agreements](http://releases.k8s.io/HEAD/CONTRIBUTING.md) are -considered implicit for all code within cherry-pick pull requests, ***unless -there is a large conflict***.
- - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/cherry-picks.md?pixel)]() - diff --git a/cli-roadmap.md b/cli-roadmap.md deleted file mode 100644 index cd21da08..00000000 --- a/cli-roadmap.md +++ /dev/null @@ -1,11 +0,0 @@ -# Kubernetes CLI/Configuration Roadmap - -See github issues with the following labels: -* [area/app-config-deployment](https://github.com/kubernetes/kubernetes/labels/area/app-config-deployment) -* [component/kubectl](https://github.com/kubernetes/kubernetes/labels/component/kubectl) -* [component/clientlib](https://github.com/kubernetes/kubernetes/labels/component/clientlib) - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/cli-roadmap.md?pixel)]() - diff --git a/client-libraries.md b/client-libraries.md deleted file mode 100644 index d38f9fd7..00000000 --- a/client-libraries.md +++ /dev/null @@ -1,27 +0,0 @@ -## Kubernetes API client libraries - -### Supported - - * [Go](https://github.com/kubernetes/client-go) - -### User Contributed - -*Note: Libraries provided by outside parties are supported by their authors, not -the core Kubernetes team* - - * [Clojure](https://github.com/yanatan16/clj-kubernetes-api) - * [Java (OSGi)](https://bitbucket.org/amdatulabs/amdatu-kubernetes) - * [Java (Fabric8, OSGi)](https://github.com/fabric8io/kubernetes-client) - * [Node.js](https://github.com/tenxcloud/node-kubernetes-client) - * [Node.js](https://github.com/godaddy/kubernetes-client) - * [Perl](https://metacpan.org/pod/Net::Kubernetes) - * [PHP](https://github.com/devstub/kubernetes-api-php-client) - * [PHP](https://github.com/maclof/kubernetes-client) - * [Python](https://github.com/eldarion-gondor/pykube) - * [Ruby](https://github.com/Ch00k/kuber) - * [Ruby](https://github.com/abonas/kubeclient) - * [Scala](https://github.com/doriordan/skuber) - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/client-libraries.md?pixel)]() - diff 
--git a/coding-conventions.md b/coding-conventions.md deleted file mode 100644 index bcfab41d..00000000 --- a/coding-conventions.md +++ /dev/null @@ -1,147 +0,0 @@ -# Coding Conventions - -Updated: 5/3/2016 - -**Table of Contents** - - -- [Coding Conventions](#coding-conventions) - - [Code conventions](#code-conventions) - - [Testing conventions](#testing-conventions) - - [Directory and file conventions](#directory-and-file-conventions) - - [Coding advice](#coding-advice) - - - -## Code conventions - - - Bash - - - https://google.github.io/styleguide/shell.xml - - - Ensure that build, release, test, and cluster-management scripts run on -OS X - - - Go - - - Ensure your code passes the [presubmit checks](development.md#hooks) - - - [Go Code Review -Comments](https://github.com/golang/go/wiki/CodeReviewComments) - - - [Effective Go](https://golang.org/doc/effective_go.html) - - - Comment your code. - - [Go's commenting -conventions](http://blog.golang.org/godoc-documenting-go-code) - - If reviewers ask questions about why the code is the way it is, that's a -sign that comments might be helpful. - - - - Command-line flags should use dashes, not underscores - - - - Naming - - Please consider package name when selecting an interface name, and avoid -redundancy. - - - e.g.: `storage.Interface` is better than `storage.StorageInterface`. - - - Do not use uppercase characters, underscores, or dashes in package -names. - - Please consider parent directory name when choosing a package name. - - - so pkg/controllers/autoscaler/foo.go should say `package autoscaler` -not `package autoscalercontroller`. - - Unless there's a good reason, the `package foo` line should match -the name of the directory in which the .go file exists. - - Importers can use a different name if they need to disambiguate. - - - Locks should be called `lock` and should never be embedded (always `lock -sync.Mutex`). 
When multiple locks are present, give each lock a distinct name -following Go conventions - `stateLock`, `mapLock` etc. - - - [API changes](api_changes.md) - - - [API conventions](api-conventions.md) - - - [Kubectl conventions](kubectl-conventions.md) - - - [Logging conventions](logging.md) - -## Testing conventions - - - All new packages and most new significant functionality must come with unit -tests - - - Table-driven tests are preferred for testing multiple scenarios/inputs; for -example, see [TestNamespaceAuthorization](../../test/integration/auth/auth_test.go) - - - Significant features should come with integration (test/integration) and/or -[end-to-end (test/e2e) tests](e2e-tests.md) - - Including new kubectl commands and major features of existing commands - - - Unit tests must pass on OS X and Windows platforms - if you use Linux -specific features, your test case must either be skipped on Windows or compiled -out (skipped is better when running Linux specific commands, compiled out is -required when your code does not compile on Windows). - - - Avoid relying on Docker Hub (e.g. pulling images from Docker Hub). Use gcr.io instead. - - - Avoid waiting for a short amount of time (or not waiting at all) and expecting an -asynchronous thing to have happened (e.g. waiting 1 second and expecting a Pod to be -running). Wait and retry instead. - - - See the [testing guide](testing.md) for additional testing advice. - -## Directory and file conventions - - - Avoid package sprawl. Find an appropriate subdirectory for new packages. -(See [#4851](http://issues.k8s.io/4851) for discussion.) - - Libraries with no more appropriate home belong in new package -subdirectories of pkg/util - - - Avoid general utility packages. Packages called "util" are suspect. Instead, -derive a name that describes your desired function. For example, the utility -functions dealing with waiting for operations are in the "wait" package and -include functionality like Poll.
So the full name is wait.Poll - - - All filenames should be lowercase - - - Go source files and directories use underscores, not dashes - - Package directories should generally avoid using separators as much as -possible (when packages are multiple words, they usually should be in nested -subdirectories). - - - Document directories and filenames should use dashes rather than underscores - - - Contrived examples that illustrate system features belong in -/docs/user-guide or /docs/admin, depending on whether it is a feature primarily -intended for users that deploy applications or cluster administrators, -respectively. Actual application examples belong in /examples. - - Examples should also illustrate [best practices for configuration and -using the system](../user-guide/config-best-practices.md) - - - Third-party code - - - Go code for normal third-party dependencies is managed using -[Godeps](https://github.com/tools/godep) - - - Other third-party code belongs in `/third_party` - - forked third party Go code goes in `/third_party/forked` - - forked _golang stdlib_ code goes in `/third_party/golang` - - - Third-party code must include licenses - - - This includes modified third-party code and excerpts, as well - -## Coding advice - - - Go - - - [Go landmines](https://gist.github.com/lavalamp/4bd23295a9f32706a48f) - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/coding-conventions.md?pixel)]() - diff --git a/collab.md b/collab.md deleted file mode 100644 index b4a6281d..00000000 --- a/collab.md +++ /dev/null @@ -1,87 +0,0 @@ -# On Collaborative Development - -Kubernetes is open source, but many of the people working on it do so as their -day job. In order to avoid forcing people to be "at work" effectively 24/7, we -want to establish some semi-formal protocols around development. Hopefully these -rules make things go more smoothly. If you find that this is not the case, -please complain loudly. 
- -## Patches welcome - -First and foremost: as a potential contributor, your changes and ideas are -welcome at any hour of the day or night, weekdays, weekends, and holidays. -Please do not ever hesitate to ask a question or send a PR. - -## Code reviews - -All changes must be code reviewed. For non-maintainers this is obvious, since -you can't commit anyway. But even for maintainers, we want all changes to get at -least one review, preferably (and for non-trivial changes, obligatorily) from -someone who knows the areas the change touches. For non-trivial changes we may -want two reviewers. The primary reviewer will make this decision and nominate a -second reviewer, if needed. Except for trivial changes, PRs should not be -committed until relevant parties (e.g. owners of the subsystem affected by the -PR) have had a reasonable chance to look at the PR in their local business -hours. - -Most PRs will find reviewers organically. If a maintainer intends to be the -primary reviewer of a PR they should set themselves as the assignee on GitHub -and say so in a reply to the PR. Only the primary reviewer of a change should -actually do the merge, except in rare cases (e.g. they are unavailable in a -reasonable timeframe). - -If a PR has gone 2 work days without an owner emerging, please poke the PR -thread and ask for a reviewer to be assigned. - -Except for rare cases, such as trivial changes (e.g. typos, comments) or -emergencies (e.g. broken builds), maintainers should not merge their own -changes. - -Expect reviewers to request that you avoid [common go style -mistakes](https://github.com/golang/go/wiki/CodeReviewComments) in your PRs. - -## Assigned reviews - -Maintainers can assign reviews to other maintainers, when appropriate. The -assignee becomes the shepherd for that PR and is responsible for merging the PR -once they are satisfied with it or else closing it. The assignee might request -reviews from non-maintainers.
- -## Merge hours - -Maintainers will do merges of appropriately reviewed-and-approved changes during -their local "business hours" (typically 7:00 am Monday to 5:00 pm (17:00h) -Friday). PRs that arrive over the weekend or on holidays will only be merged if -there is a very good reason for it and if the code review requirements have been -met. Concretely this means that nobody should merge changes immediately before -going to bed for the night. - -There may be discussion and even approvals granted outside of the above hours, -but merges will generally be deferred. - -If a PR is considered complex or controversial, the merge of that PR should be -delayed to give all interested parties in all timezones the opportunity to -provide feedback. Concretely, this means that such PRs should be held for 24 -hours before merging. Of course "complex" and "controversial" are left to the -judgment of the people involved, but we trust that part of being a committer is -the judgment required to evaluate such things honestly, and not be motivated by -your desire (or your cube-mate's desire) to get code merged. Also see -"Holds" below: any reviewer can issue a "hold" to indicate that the PR is in -fact complicated or complex and deserves further review. - -PRs that are incorrectly judged to be merge-able may be reverted and subject to -re-review, if subsequent reviewers believe that they in fact are controversial -or complex. - - -## Holds - -Any maintainer or core contributor who wants to review a PR but does not have -time immediately may put a hold on a PR simply by saying so on the PR discussion -and offering an ETA measured in single-digit days at most. Any PR that has a -hold shall not be merged until the person who requested the hold acks the -review, withdraws their hold, or is overruled by a preponderance of maintainers.
- - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/collab.md?pixel)]() - diff --git a/community-expectations.md b/community-expectations.md deleted file mode 100644 index ff2487fd..00000000 --- a/community-expectations.md +++ /dev/null @@ -1,87 +0,0 @@ -## Community Expectations - -Kubernetes is a community project. Consequently, it is wholly dependent on -its community to provide a productive, friendly and collaborative environment. - -The first and foremost goal of the Kubernetes community is to develop orchestration -technology that radically simplifies the process of creating reliable -distributed systems. However, a second, equally important goal is the creation -of a community that fosters easy, agile development of such orchestration -systems. - -We therefore describe the expectations for -members of the Kubernetes community. This document is intended to be a living one -that evolves as the community evolves via the same PR and code review process -that shapes the rest of the project. It currently covers the expectations -of conduct that govern all members of the community as well as the expectations -around code review that govern all active contributors to Kubernetes. - -### Code of Conduct - -The most important expectation of the Kubernetes community is that all members -abide by the Kubernetes [community code of conduct](../../code-of-conduct.md). -Only by respecting each other can we develop a productive, collaborative -community. - -### Code review - -As a community we believe in the [value of code review for all contributions](collab.md). -Code review increases both the quality and readability of our codebase, which -in turn produces high quality software. - -However, the code review process can also introduce latency for contributors -and additional work for reviewers that can frustrate both parties.
- -Consequently, as a community we expect that all active participants in the -community will also be active reviewers. - -We ask that active contributors to the project participate in the code review process -in areas where those contributors have expertise. Active -contributors are considered to be anyone who meets any of the following criteria: - * Sent more than two pull requests (PRs) in the previous one month, or more - than 20 PRs in the previous year. - * Filed more than three issues in the previous month, or more than 30 issues in - the previous 12 months. - * Commented on more than pull requests in the previous month, or - more than 50 pull requests in the previous 12 months. - * Marked any PR as LGTM in the previous month. - * Have *collaborator* permissions in the Kubernetes github project. - -In addition to these community expectations, any community member who wants to -be an active reviewer can also add their name to an *active reviewer* file -(location tbd) which will make them an active reviewer for as long as they -are included in the file. - -#### Expectations of reviewers: Review comments - -Because reviewers are often the first points of contact for new members of -the community and can significantly impact the first impression of the -Kubernetes community, reviewers are especially important in shaping the -Kubernetes community. Reviewers are highly encouraged to review the -[code of conduct](../../code-of-conduct.md) and are strongly encouraged to go above -and beyond the code of conduct to promote a collaborative, respectful -Kubernetes community. - -#### Expectations of reviewers: Review latency - -Reviewers are expected to respond in a timely fashion to PRs that are assigned -to them. Reviewers are expected to respond to *active* PRs with reasonable -latency, and if reviewers fail to respond, those PRs may be assigned to other -reviewers.
- -*Active* PRs are considered those which have a proper CLA (`cla:yes`) label -and do not need a rebase to be merged. PRs that do not have a proper CLA, or -require a rebase, are not considered active PRs. - -## Thanks - -Many thanks in advance to everyone who contributes their time and effort to -making Kubernetes both a successful system as well as a successful community. -The strength of our software shines in the strengths of each individual -community member. Thanks! - - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/community-expectations.md?pixel)]() - diff --git a/container-runtime-interface.md b/container-runtime-interface.md deleted file mode 100644 index 7ab085f7..00000000 --- a/container-runtime-interface.md +++ /dev/null @@ -1,127 +0,0 @@ -# CRI: the Container Runtime Interface - -## What is CRI? - -CRI (_Container Runtime Interface_) consists of a -[protobuf API](../../pkg/kubelet/api/v1alpha1/runtime/api.proto), -specifications/requirements (to-be-added), -and [libraries](https://github.com/kubernetes/kubernetes/tree/master/pkg/kubelet/server/streaming) -for container runtimes to integrate with kubelet on a node. CRI is currently in Alpha. - -In the future, we plan to add more developer tools such as the CRI validation -tests. - -## Why develop CRI? - -Prior to the existence of CRI, container runtimes (e.g., `docker`, `rkt`) were -integrated with kubelet through implementing an internal, high-level interface -in kubelet. The barrier to entry for runtimes was high because the integration -required understanding the internals of kubelet and contributing to the main -Kubernetes repository. More importantly, this would not scale because every new -addition incurs a significant maintenance overhead in the main kubernetes -repository. - -Kubernetes aims to be extensible. CRI is one small, yet important step to enable -pluggable container runtimes and build a healthier ecosystem. - -## How to use CRI? - -1.
Start the image and runtime services on your node. You can have a single - service acting as both the image and runtime service. -2. Set the kubelet flags - - Pass kubelet the unix socket(s) on which your services listen: - `--container-runtime-endpoint` and `--image-service-endpoint`. - - Enable CRI in kubelet by `--experimental-cri=true`. - - Use the "remote" runtime by `--container-runtime=remote`. - -Please see the [Status Update](#status-update) section for known issues for -each release. - -Note that CRI is still in its early stages. We are actively incorporating -feedback from early developers to improve the API. Developers should expect -occasional API breaking changes. - -## Does Kubelet use CRI today? - -No, but we are working on it. - -The first step is to switch kubelet to integrate with Docker via CRI by -default. The current [Docker CRI implementation](https://github.com/kubernetes/kubernetes/blob/release-1.5/pkg/kubelet/dockershim) -already passes most end-to-end tests, and has mandatory PR builders to prevent -regressions. While we are expanding the test coverage gradually, it is -difficult to test on all combinations of OS distributions, platforms, and -plugins. There are also many experimental or even undocumented features relied -upon by some users. We would like to **encourage the community to help test -this Docker-CRI integration and report bugs and/or missing features** to -smooth the transition in the near future. Please file a Github issue and -include @kubernetes/sig-node for any CRI problem. - -### How to test the new Docker CRI integration? - -Start kubelet with the following flags: - - Use the Docker container runtime by `--container-runtime=docker` (the default). - - Enable CRI in kubelet by `--experimental-cri=true`. - -Please also see the [known issues](#docker-cri-1.5-known-issues) before trying -it out. - -## Design docs and proposals - -We plan to add CRI specifications/requirements in the near future.
For now, -these proposals and design docs are the best sources to understand CRI -besides discussions on Github issues. - - - [Original proposal](https://github.com/kubernetes/kubernetes/blob/release-1.5/docs/proposals/container-runtime-interface-v1.md) - - [Exec/attach/port-forward streaming requests](https://docs.google.com/document/d/1OE_QoInPlVCK9rMAx9aybRmgFiVjHpJCHI9LrfdNM_s/edit?usp=sharing) - - [Container stdout/stderr logs](https://github.com/kubernetes/kubernetes/blob/release-1.5/docs/proposals/kubelet-cri-logging.md) - - Networking: The CRI runtime handles network plugins and the - setup/teardown of the pod sandbox. - -## Work-In-Progress CRI runtimes - - - [cri-o](https://github.com/kubernetes-incubator/cri-o) - - [rktlet](https://github.com/kubernetes-incubator/rktlet) - - [frakti](https://github.com/kubernetes/frakti) - -## [Status update](#status-update) - -### Kubernetes v1.5 release (CRI v1alpha1) - - - [v1alpha1 version](https://github.com/kubernetes/kubernetes/blob/release-1.5/pkg/kubelet/api/v1alpha1/runtime/api.proto) of CRI is released. - -#### [CRI known issues](#cri-1.5-known-issues): - - - [#27097](https://github.com/kubernetes/kubernetes/issues/27097): Container - metrics are not yet defined in CRI. - - [#36401](https://github.com/kubernetes/kubernetes/issues/36401): The new - container log path/format is not yet supported by the logging pipeline - (e.g., fluentd, GCL). - - CRI may not be compatible with other experimental features (e.g., Seccomp). - - Streaming server needs to be hardened. - - [#36666](https://github.com/kubernetes/kubernetes/issues/36666): - Authentication. - - [#36187](https://github.com/kubernetes/kubernetes/issues/36187): Avoid - including user data in the redirect URL. - -#### [Docker CRI integration known issues](#docker-cri-1.5-known-issues) - - - Docker compatibility: Support only Docker v1.11 and v1.12. 
- - Network:
-   - [#35457](https://github.com/kubernetes/kubernetes/issues/35457): Does
-     not support host ports.
-   - [#37315](https://github.com/kubernetes/kubernetes/issues/37315): Does
-     not support bandwidth shaping.
- - Exec/attach/port-forward (streaming requests):
-   - [#35747](https://github.com/kubernetes/kubernetes/issues/35747): Does
-     not support `nsenter` as the exec handler (`--exec-handler=nsenter`).
-   - Also see the [CRI known issues](#cri-1.5-known-issues) for limitations on
-     CRI streaming.
-
-## Contacts
-
- - Email: sig-node (kubernetes-sig-node@googlegroups.com)
- - Slack: https://kubernetes.slack.com/messages/sig-node
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/container-runtime-interface.md?pixel)]()
-
diff --git a/controllers.md b/controllers.md
deleted file mode 100644
index daedc236..00000000
--- a/controllers.md
+++ /dev/null
@@ -1,186 +0,0 @@
-# Writing Controllers
-
-A Kubernetes controller is an active reconciliation process. That is, it watches some object for the world's desired
-state, and it watches the world's actual state, too. Then, it sends instructions to try to make the world's current
-state more like the desired state.
-
-The simplest implementation of this is a loop:
-
-```go
-for {
-  desired := getDesiredState()
-  current := getCurrentState()
-  makeChanges(desired, current)
-}
-```
-
-Watches, etc., are merely optimizations of this logic.
-
-## Guidelines
-
-When you're writing controllers, there are a few guidelines that will help you get the results and performance
-you're looking for.
-
-1. Operate on one item at a time. If you use a `workqueue.Interface`, you'll be able to queue changes for a
-   particular resource and later pop them in multiple "worker" gofuncs with a guarantee that no two gofuncs will
-   work on the same item at the same time.
- - Many controllers must trigger off multiple resources (I need to "check X if Y changes"), but nearly all controllers - can collapse those into a queue of “check this X” based on relationships. For instance, a ReplicaSetController needs - to react to a pod being deleted, but it does that by finding the related ReplicaSets and queuing those. - - -1. Random ordering between resources. When controllers queue off multiple types of resources, there is no guarantee - of ordering amongst those resources. - - Distinct watches are updated independently. Even with an objective ordering of “created resourceA/X” and “created - resourceB/Y”, your controller could observe “created resourceB/Y” and “created resourceA/X”. - - -1. Level driven, not edge driven. Just like having a shell script that isn’t running all the time, your controller - may be off for an indeterminate amount of time before running again. - - If an API object appears with a marker value of `true`, you can’t count on having seen it turn from `false` to `true`, - only that you now observe it being `true`. Even an API watch suffers from this problem, so be sure that you’re not - counting on seeing a change unless your controller is also marking the information it last made the decision on in - the object's status. - - -1. Use `SharedInformers`. `SharedInformers` provide hooks to receive notifications of adds, updates, and deletes for - a particular resource. They also provide convenience functions for accessing shared caches and determining when a - cache is primed. - - Use the factory methods down in https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/framework/informers/factory.go - to ensure that you are sharing the same instance of the cache as everyone else. - - This saves us connections against the API server, duplicate serialization costs server-side, duplicate deserialization - costs controller-side, and duplicate caching costs controller-side. 
-
-   You may see other mechanisms, like reflectors and DeltaFIFOs, driving controllers. Those are older mechanisms that we
-   later used to build the `SharedInformers`. You should avoid using them in new controllers.
-
-
-1. Never mutate original objects! Caches are shared across controllers; this means that if you mutate your "copy"
-   (actually a reference or shallow copy) of an object, you'll mess up other controllers (not just your own).
-
-   The most common point of failure is making a shallow copy and then mutating a map, like `Annotations`. Use
-   `api.Scheme.Copy` to make a deep copy.
-
-
-1. Wait for your secondary caches. Many controllers have primary and secondary resources. Primary resources are the
-   resources whose `Status` you'll be updating. Secondary resources are resources that you'll be managing
-   (creating/deleting) or using for lookups.
-
-   Use the `framework.WaitForCacheSync` function to wait for your secondary caches before starting your primary sync
-   functions. This will make sure that things like the Pod count for a ReplicaSet aren't working off of known out-of-date
-   information that results in thrashing.
-
-
-1. There are other actors in the system. Just because you haven't changed an object doesn't mean that somebody else
-   hasn't.
-
-   Don't forget that the current state may change at any moment--it's not sufficient to just watch the desired state.
-   If you use the absence of objects in the desired state to indicate that things in the current state should be deleted,
-   make sure your observation code has no bugs (e.g., don't act before your cache has filled).
-
-
-1. Percolate errors to the top level for consistent re-queuing. We have a `workqueue.RateLimitingInterface` to allow
-   simple requeuing with reasonable backoffs.
-
-   Your main controller func should return an error when requeuing is necessary. When it isn't, it should use
-   `utilruntime.HandleError` and return nil instead.
This makes it very easy for reviewers to inspect error-handling
-   cases and to be confident that your controller doesn't accidentally lose things it should retry.
-
-
-1. Watches and Informers will "sync". Periodically, they will deliver every matching object in the cluster to your
-   `Update` method. This is good for cases where you may need to take additional action on the object, but sometimes you
-   know there won't be more work to do.
-
-   In cases where you are *certain* that you don't need to requeue items when there are no new changes, you can compare the
-   resource versions of the old and new objects. If they are the same, you skip requeuing the work. Be careful when you
-   do this. If you ever skip requeuing your item on failures, you could fail, not requeue, and then never retry that
-   item again.
-
-
-## Rough Structure
-
-Overall, your controller should look something like this:
-
-```go
-type Controller struct{
-	// podLister is a secondary cache of pods used for object lookups
-	podLister cache.StoreToPodLister
-
-	// queue is where incoming work is placed to de-dup and to allow "easy" rate-limited requeues on errors
-	queue workqueue.RateLimitingInterface
-}
-
-func (c *Controller) Run(threadiness int, stopCh chan struct{}) {
-	// don't let panics crash the process
-	defer utilruntime.HandleCrash()
-	// make sure the work queue is shut down, which will trigger workers to end
-	defer c.queue.ShutDown()
-
-	glog.Infof("Starting controller")
-
-	// wait for your secondary caches to fill before starting your work
-	if !framework.WaitForCacheSync(stopCh, c.podStoreSynced) {
-		return
-	}
-
-	// start up your worker threads based on threadiness. Some controllers have multiple kinds of workers
-	for i := 0; i < threadiness; i++ {
-		// runWorker will loop until "something bad" happens.
-		// The .Until will then rekick the worker after one second
-		go wait.Until(c.runWorker, time.Second, stopCh)
-	}
-
-	// wait until we're told to stop
-	<-stopCh
-	glog.Infof("Shutting down controller")
-}
-
-func (c *Controller) runWorker() {
-	// hot loop until we're told to stop. processNextWorkItem will automatically wait until there's work
-	// available, so we don't need to worry about secondary waits
-	for c.processNextWorkItem() {
-	}
-}
-
-// processNextWorkItem deals with one key off the queue. It returns false when it's time to quit.
-func (c *Controller) processNextWorkItem() bool {
-	// pull the next work item from the queue. It should be a key we use to look up something in a cache
-	key, quit := c.queue.Get()
-	if quit {
-		return false
-	}
-	// you always have to indicate to the queue that you've completed a piece of work
-	defer c.queue.Done(key)
-
-	// do your work on the key. This method will contain your "do stuff" logic
-	err := c.syncHandler(key.(string))
-	if err == nil {
-		// if you had no error, tell the queue to stop tracking history for your key. This will
-		// reset things like failure counts for per-item rate limiting
-		c.queue.Forget(key)
-		return true
-	}
-
-	// there was a failure, so be sure to report it. This method allows for pluggable error handling,
-	// which can be used for things like cluster monitoring
-	utilruntime.HandleError(fmt.Errorf("%v failed with: %v", key, err))
-	// since we failed, we should requeue the item to work on later. This method will add a backoff
-	// to avoid hotlooping on particular items (they're probably still not going to work right away)
-	// and provides overall controller protection (everything I've done is broken, this controller
-	// needs to calm down or it can starve other useful work).
- c.queue.AddRateLimited(key) - - return true -} - -``` - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/controllers.md?pixel)]() - diff --git a/devel/README.md b/devel/README.md new file mode 100644 index 00000000..cf29f3b4 --- /dev/null +++ b/devel/README.md @@ -0,0 +1,83 @@ +# Kubernetes Developer Guide + +The developer guide is for anyone wanting to either write code which directly accesses the +Kubernetes API, or to contribute directly to the Kubernetes project. +It assumes some familiarity with concepts in the [User Guide](../user-guide/README.md) and the [Cluster Admin +Guide](../admin/README.md). + + +## The process of developing and contributing code to the Kubernetes project + +* **On Collaborative Development** ([collab.md](collab.md)): Info on pull requests and code reviews. + +* **GitHub Issues** ([issues.md](issues.md)): How incoming issues are reviewed and prioritized. + +* **Pull Request Process** ([pull-requests.md](pull-requests.md)): When and why pull requests are closed. + +* **Kubernetes On-Call Rotations** ([on-call-rotations.md](on-call-rotations.md)): Descriptions of on-call rotations for build and end-user support. + +* **Faster PR reviews** ([faster_reviews.md](faster_reviews.md)): How to get faster PR reviews. + +* **Getting Recent Builds** ([getting-builds.md](getting-builds.md)): How to get recent builds including the latest builds that pass CI. + +* **Automated Tools** ([automation.md](automation.md)): Descriptions of the automation that is running on our github repository. + + +## Setting up your dev environment, coding, and debugging + +* **Development Guide** ([development.md](development.md)): Setting up your development environment. + +* **Hunting flaky tests** ([flaky-tests.md](flaky-tests.md)): We have a goal of 99.9% flake free tests. + Here's how to run your tests many times. + +* **Logging Conventions** ([logging.md](logging.md)): Glog levels. 
+
+* **Profiling Kubernetes** ([profiling.md](profiling.md)): How to plug the Go
+  pprof profiler into Kubernetes.
+
+* **Instrumenting Kubernetes with a new metric**
+  ([instrumentation.md](instrumentation.md)): How to add a new metric to the
+  Kubernetes code base.
+
+* **Coding Conventions** ([coding-conventions.md](coding-conventions.md)):
+  Coding style advice for contributors.
+
+* **Document Conventions** ([how-to-doc.md](how-to-doc.md)):
+  Document style advice for contributors.
+
+* **Running a cluster locally** ([running-locally.md](running-locally.md)):
+  A fast and lightweight local cluster deployment for development.
+
+## Developing against the Kubernetes API
+
+* The [REST API documentation](../api-reference/README.md) explains the REST
+  API exposed by the apiserver.
+
+* **Annotations** ([docs/user-guide/annotations.md](../user-guide/annotations.md)): Attaching arbitrary non-identifying metadata to objects.
+  Programs that automate Kubernetes objects may use annotations to store small amounts of their state.
+
+* **API Conventions** ([api-conventions.md](api-conventions.md)):
+  Defining the verbs and resources used in the Kubernetes API.
+
+* **API Client Libraries** ([client-libraries.md](client-libraries.md)):
+  A list of existing client libraries, both supported and user-contributed.
+
+
+## Writing plugins
+
+* **Authentication Plugins** ([docs/admin/authentication.md](../admin/authentication.md)):
+  The current and planned states of authentication tokens.
+
+* **Authorization Plugins** ([docs/admin/authorization.md](../admin/authorization.md)):
+  Authorization applies to all HTTP requests on the main apiserver port.
+  This doc explains the available authorization implementations.
+
+* **Admission Control Plugins** ([admission_control](../design/admission_control.md))
+
+
+## Building releases
+
+See the [kubernetes/release](https://github.com/kubernetes/release) repository for details on creating releases and related tools and helper scripts.
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/README.md?pixel)]()
+
diff --git a/devel/adding-an-APIGroup.md b/devel/adding-an-APIGroup.md
new file mode 100644
index 00000000..5832be23
--- /dev/null
+++ b/devel/adding-an-APIGroup.md
@@ -0,0 +1,100 @@
+Adding an API Group
+===============
+
+This document includes the steps to add an API group. You may also want to take
+a look at PR [#16621](https://github.com/kubernetes/kubernetes/pull/16621) and
+PR [#13146](https://github.com/kubernetes/kubernetes/pull/13146), which add API
+groups.
+
+Please also read about [API conventions](api-conventions.md) and
+[API changes](api_changes.md) before adding an API group.
+
+### Your core group package:
+
+We plan on improving the way the types are factored in the future; see
+[#16062](https://github.com/kubernetes/kubernetes/pull/16062) for the directions
+in which this might evolve.
+
+1. Create a folder in pkg/apis to hold your group. Create types.go in
+pkg/apis/`<group>`/ and pkg/apis/`<group>`/`<version>`/ to define API objects
+in your group;
+
+2. Create pkg/apis/`<group>`/{register.go, `<version>`/register.go} to register
+this group's API objects with the encoding/decoding scheme (e.g.,
+[pkg/apis/authentication/register.go](../../pkg/apis/authentication/register.go) and
+[pkg/apis/authentication/v1beta1/register.go](../../pkg/apis/authentication/v1beta1/register.go));
+
+3. Add a pkg/apis/`<group>`/install/install.go, which is responsible for adding
+the group to the `latest` package, so that other packages can access the group's
+meta through `latest.Group`. You probably only need to change the name of the group
+and version in the [example](../../pkg/apis/authentication/install/install.go). You
+need to import this `install` package in {pkg/master,
+pkg/client/unversioned}/import_known_versions.go if you want to make your group
+accessible to other packages in the kube-apiserver binary or in binaries that use
+the client package.
+
+Steps 2 and 3 are mechanical; we plan to autogenerate these using the
+cmd/libs/go2idl/ tool.
+
+### Scripts changes and auto-generated code:
+
+1. Generate conversions and deep-copies:
+
+   1. Add your "group/" or "group/version" into
+      cmd/libs/go2idl/conversion-gen/main.go;
+   2. Make sure your pkg/apis/`<group>`/`<version>` directory has a doc.go file
+      with the comment `// +k8s:deepcopy-gen=package,register`, to catch the
+      attention of our generation tools.
+   3. Make sure your `pkg/apis/<group>/<version>` directory has a doc.go file
+      with the comment `// +k8s:conversion-gen=<internal-pkg>`, to catch the
+      attention of our generation tools. For most APIs the only target you
+      need is `k8s.io/kubernetes/pkg/apis/<group>` (your internal API).
+   4. Make sure your `pkg/apis/<group>` and `pkg/apis/<group>/<version>` directories
+      have a doc.go file with the comment `+groupName=<group>.k8s.io`, to correctly
+      generate the DNS-suffixed group name.
+   5. Run hack/update-all.sh.
+
+2. Generate files for the Ugorji codec:
+
+   1. Touch types.generated.go in pkg/apis/`<group>`{/, /`<version>`};
+   2. Run hack/update-codecgen.sh.
+
+3. Generate protobuf objects:
+
+   1. Add your group to `New()` in the `Packages` field of
+      `cmd/libs/go2idl/go-to-protobuf/protobuf/cmd.go`;
+   2. Run hack/update-generated-protobuf.sh.
+
+### Client (optional):
+
+We are overhauling pkg/client, so this section might be outdated; see
+[#15730](https://github.com/kubernetes/kubernetes/pull/15730) for how the client
+package might evolve. Currently, to add your group to the client package, you
+need to:
+
+1. Create pkg/client/unversioned/`<group>`.go, define a group client interface,
+and implement the client. You can take pkg/client/unversioned/extensions.go as a
+reference.
+
+2. Add the group client interface to the `Interface` in
+pkg/client/unversioned/client.go and add a method to fetch the interface. Again,
+you can take how we added the Extensions group there as an example.
+
+3. If you need to support the group in kubectl, you'll also need to modify
+pkg/kubectl/cmd/util/factory.go.
+
+### Make the group/version selectable in unit tests (optional):
+
+1. Add your group in pkg/api/testapi/testapi.go; then you can access the group
+in tests through testapi.`<group>`;
+
+2. Add your "group/version" to `KUBE_TEST_API_VERSIONS` in
+   hack/make-rules/test.sh and hack/make-rules/test-integration.sh
+
+TODO: Add a troubleshooting section.
+
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/adding-an-APIGroup.md?pixel)]()
+
diff --git a/devel/api-conventions.md b/devel/api-conventions.md
new file mode 100644
index 00000000..0be45182
--- /dev/null
+++ b/devel/api-conventions.md
@@ -0,0 +1,1350 @@
+API Conventions
+===============
+
+Updated: 4/22/2016
+
+*This document is aimed at users who want a deeper understanding of the
+Kubernetes API structure, and at developers wanting to extend the Kubernetes API.
+An introduction to using resources with kubectl can be found in [Working with
+resources](../user-guide/working-with-resources.md).*
+
+**Table of Contents**
+
+  - [Types (Kinds)](#types-kinds)
+  - [Resources](#resources)
+  - [Objects](#objects)
+  - [Metadata](#metadata)
+  - [Spec and Status](#spec-and-status)
+  - [Typical status properties](#typical-status-properties)
+  - [References to related objects](#references-to-related-objects)
+  - [Lists of named subobjects preferred over maps](#lists-of-named-subobjects-preferred-over-maps)
+  - [Primitive types](#primitive-types)
+  - [Constants](#constants)
+  - [Unions](#unions)
+  - [Lists and Simple kinds](#lists-and-simple-kinds)
+  - [Differing Representations](#differing-representations)
+  - [Verbs on Resources](#verbs-on-resources)
+  - [PATCH operations](#patch-operations)
+  - [Strategic Merge Patch](#strategic-merge-patch)
+  - [List Operations](#list-operations)
+  - [Map Operations](#map-operations)
+  - [Idempotency](#idempotency)
+  - [Optional vs.
Required](#optional-vs-required) + - [Defaulting](#defaulting) + - [Late Initialization](#late-initialization) + - [Concurrency Control and Consistency](#concurrency-control-and-consistency) + - [Serialization Format](#serialization-format) + - [Units](#units) + - [Selecting Fields](#selecting-fields) + - [Object references](#object-references) + - [HTTP Status codes](#http-status-codes) + - [Success codes](#success-codes) + - [Error codes](#error-codes) + - [Response Status Kind](#response-status-kind) + - [Events](#events) + - [Naming conventions](#naming-conventions) + - [Label, selector, and annotation conventions](#label-selector-and-annotation-conventions) + - [WebSockets and SPDY](#websockets-and-spdy) + - [Validation](#validation) + + + +The conventions of the [Kubernetes API](../api.md) (and related APIs in the +ecosystem) are intended to ease client development and ensure that configuration +mechanisms can be implemented that work across a diverse set of use cases +consistently. + +The general style of the Kubernetes API is RESTful - clients create, update, +delete, or retrieve a description of an object via the standard HTTP verbs +(POST, PUT, DELETE, and GET) - and those APIs preferentially accept and return +JSON. Kubernetes also exposes additional endpoints for non-standard verbs and +allows alternative content types. All of the JSON accepted and returned by the +server has a schema, identified by the "kind" and "apiVersion" fields. Where +relevant HTTP header fields exist, they should mirror the content of JSON +fields, but the information should not be represented only in the HTTP header. + +The following terms are defined: + +* **Kind** the name of a particular object schema (e.g. the "Cat" and "Dog" +kinds would have different attributes and properties) +* **Resource** a representation of a system entity, sent or retrieved as JSON +via HTTP to the server. 
Resources are exposed via: + * Collections - a list of resources of the same type, which may be queryable + * Elements - an individual resource, addressable via a URL + +Each resource typically accepts and returns data of a single kind. A kind may be +accepted or returned by multiple resources that reflect specific use cases. For +instance, the kind "Pod" is exposed as a "pods" resource that allows end users +to create, update, and delete pods, while a separate "pod status" resource (that +acts on "Pod" kind) allows automated processes to update a subset of the fields +in that resource. + +Resource collections should be all lowercase and plural, whereas kinds are +CamelCase and singular. + + +## Types (Kinds) + +Kinds are grouped into three categories: + +1. **Objects** represent a persistent entity in the system. + + Creating an API object is a record of intent - once created, the system will +work to ensure that resource exists. All API objects have common metadata. + + An object may have multiple resources that clients can use to perform +specific actions that create, update, delete, or get. + + Examples: `Pod`, `ReplicationController`, `Service`, `Namespace`, `Node`. + +2. **Lists** are collections of **resources** of one (usually) or more +(occasionally) kinds. + + The name of a list kind must end with "List". Lists have a limited set of +common metadata. All lists use the required "items" field to contain the array +of objects they return. Any kind that has the "items" field must be a list kind. + + Most objects defined in the system should have an endpoint that returns the +full set of resources, as well as zero or more endpoints that return subsets of +the full list. Some objects may be singletons (the current user, the system +defaults) and may not have lists. 
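Illustratively, a list kind carries the common `kind` and `apiVersion` fields, the limited list metadata, and the required `items` array (a hand-written sketch with made-up values, not a verbatim server response):

```json
{
  "kind": "PodList",
  "apiVersion": "v1",
  "metadata": {"resourceVersion": "123"},
  "items": [
    {"metadata": {"name": "pod-a", "namespace": "default"}},
    {"metadata": {"name": "pod-b", "namespace": "default"}}
  ]
}
```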
+
+   In addition, all lists that return objects with labels should support label
+filtering (see [docs/user-guide/labels.md](../user-guide/labels.md)), and most
+lists should support filtering by fields.
+
+   Examples: `PodList`, `ServiceList`, `NodeList`
+
+   TODO: Describe field filtering below or in a separate doc.
+
+3. **Simple** kinds are used for specific actions on objects and for
+non-persistent entities.
+
+   Given their limited scope, they have the same set of limited common metadata
+as lists.
+
+   For instance, the "Status" kind is returned when errors occur and is not
+persisted in the system.
+
+   Many simple resources are "subresources", which are rooted at API paths of
+specific resources. When resources wish to expose alternative actions or views
+that are closely coupled to a single resource, they should do so using new
+sub-resources. Common subresources include:
+
+   * `/binding`: Used to bind a resource representing a user request (e.g., Pod,
+PersistentVolumeClaim) to a cluster infrastructure resource (e.g., Node,
+PersistentVolume).
+   * `/status`: Used to write just the status portion of a resource. For
+example, the `/pods` endpoint only allows updates to `metadata` and `spec`,
+since those reflect end-user intent. An automated process should be able to
+modify the status for users to see by sending an updated Pod kind to the server
+at the "/pods/<name>/status" endpoint - the alternate endpoint allows
+different rules to be applied to the update, and access to be appropriately
+restricted.
+   * `/scale`: Used to read and write the count of a resource in a manner that
+is independent of the specific resource schema.
+
+   Two additional subresources, `proxy` and `portforward`, provide access to
+cluster resources as described in
+[docs/user-guide/accessing-the-cluster.md](../user-guide/accessing-the-cluster.md).
+
+The standard REST verbs (defined below) MUST return singular JSON objects.
Some +API endpoints may deviate from the strict REST pattern and return resources that +are not singular JSON objects, such as streams of JSON objects or unstructured +text log data. + +The term "kind" is reserved for these "top-level" API types. The term "type" +should be used for distinguishing sub-categories within objects or subobjects. + +### Resources + +All JSON objects returned by an API MUST have the following fields: + +* kind: a string that identifies the schema this object should have +* apiVersion: a string that identifies the version of the schema the object +should have + +These fields are required for proper decoding of the object. They may be +populated by the server by default from the specified URL path, but the client +likely needs to know the values in order to construct the URL path. + +### Objects + +#### Metadata + +Every object kind MUST have the following metadata in a nested object field +called "metadata": + +* namespace: a namespace is a DNS compatible label that objects are subdivided +into. The default namespace is 'default'. See +[docs/user-guide/namespaces.md](../user-guide/namespaces.md) for more. +* name: a string that uniquely identifies this object within the current +namespace (see [docs/user-guide/identifiers.md](../user-guide/identifiers.md)). +This value is used in the path when retrieving an individual object. +* uid: a unique in time and space value (typically an RFC 4122 generated +identifier, see [docs/user-guide/identifiers.md](../user-guide/identifiers.md)) +used to distinguish between objects with the same name that have been deleted +and recreated + +Every object SHOULD have the following metadata in a nested object field called +"metadata": + +* resourceVersion: a string that identifies the internal version of this object +that can be used by clients to determine when objects have changed. This value +MUST be treated as opaque by clients and passed unmodified back to the server. 
+Clients should not assume that the resource version has meaning across +namespaces, different kinds of resources, or different servers. (See +[concurrency control](#concurrency-control-and-consistency), below, for more +details.) +* generation: a sequence number representing a specific generation of the +desired state. Set by the system and monotonically increasing, per-resource. May +be compared, such as for RAW and WAW consistency. +* creationTimestamp: a string representing an RFC 3339 date of the date and time +an object was created +* deletionTimestamp: a string representing an RFC 3339 date of the date and time +after which this resource will be deleted. This field is set by the server when +a graceful deletion is requested by the user, and is not directly settable by a +client. The resource will be deleted (no longer visible from resource lists, and +not reachable by name) after the time in this field. Once set, this value may +not be unset or be set further into the future, although it may be shortened or +the resource may be deleted prior to this time. +* labels: a map of string keys and values that can be used to organize and +categorize objects (see [docs/user-guide/labels.md](../user-guide/labels.md)) +* annotations: a map of string keys and values that can be used by external +tooling to store and retrieve arbitrary metadata about this object (see +[docs/user-guide/annotations.md](../user-guide/annotations.md)) + +Labels are intended for organizational purposes by end users (select the pods +that match this label query). Annotations enable third-party automation and +tooling to decorate objects with additional metadata for their own use. + +#### Spec and Status + +By convention, the Kubernetes API makes a distinction between the specification +of the desired state of an object (a nested object field called "spec") and the +status of the object at the current time (a nested object field called +"status"). 
The specification is a complete description of the desired state,
+including configuration settings provided by the user,
+[default values](#defaulting) expanded by the system, and properties initialized
+or otherwise changed after creation by other ecosystem components (e.g.,
+schedulers, auto-scalers), and is persisted in stable storage with the API
+object. If the specification is deleted, the object will be purged from the
+system. The status summarizes the current state of the object in the system, and
+is usually persisted with the object by an automated process but may be
+generated on the fly. At some cost and perhaps some temporary degradation in
+behavior, the status could be reconstructed by observation if it were lost.
+
+When a new version of an object is POSTed or PUT, the "spec" is updated and
+available immediately. Over time the system will work to bring the "status" into
+line with the "spec". The system will drive toward the most recent "spec"
+regardless of previous versions of that stanza. In other words, if a value is
+changed from 2 to 5 in one PUT and then back down to 3 in another PUT, the system
+is not required to 'touch base' at 5 before changing the "status" to 3. That is,
+the system's behavior is *level-based* rather than *edge-based*. This
+enables robust behavior in the presence of missed intermediate state changes.
+
+The Kubernetes API also serves as the foundation for the declarative
+configuration schema for the system. In order to facilitate level-based
+operation and expression of declarative configuration, fields in the
+specification should have declarative rather than imperative names and
+semantics -- they represent the desired state, not actions intended to yield the
+desired state.
+
+The PUT and POST verbs on objects MUST ignore the "status" values, to avoid
+accidentally overwriting the status in read-modify-write scenarios.
A `/status` +subresource MUST be provided to enable system components to update statuses of +resources they manage. + +Otherwise, PUT expects the whole object to be specified. Therefore, if a field +is omitted it is assumed that the client wants to clear that field's value. The +PUT verb does not accept partial updates. Modification of just part of an object +may be achieved by GETting the resource, modifying part of the spec, labels, or +annotations, and then PUTting it back. See +[concurrency control](#concurrency-control-and-consistency), below, regarding +read-modify-write consistency when using this pattern. Some objects may expose +alternative resource representations that allow mutation of the status, or +performing custom actions on the object. + +All objects that represent a physical resource whose state may vary from the +user's desired intent SHOULD have a "spec" and a "status". Objects whose state +cannot vary from the user's desired intent MAY have only "spec", and MAY rename +"spec" to a more appropriate name. + +Objects that contain both spec and status should not contain additional +top-level fields other than the standard metadata fields. + +##### Typical status properties + +**Conditions** represent the latest available observations of an object's +current state. Objects may report multiple conditions, and new types of +conditions may be added in the future. Therefore, conditions are represented +using a list/slice, where all have similar structure. 

The `FooCondition` type for some resource type `Foo` may include a subset of the following fields, but must contain at least the `type` and `status` fields:

```go
	Type               FooConditionType `json:"type" description:"type of Foo condition"`
	Status             ConditionStatus  `json:"status" description:"status of the condition, one of True, False, Unknown"`
	LastHeartbeatTime  unversioned.Time `json:"lastHeartbeatTime,omitempty" description:"last time we got an update on a given condition"`
	LastTransitionTime unversioned.Time `json:"lastTransitionTime,omitempty" description:"last time the condition transitioned from one status to another"`
	Reason             string           `json:"reason,omitempty" description:"one-word CamelCase reason for the condition's last transition"`
	Message            string           `json:"message,omitempty" description:"human-readable message indicating details about the last transition"`
```

Additional fields may be added in the future.

Conditions should be added to explicitly convey properties that users and components care about, rather than requiring those properties to be inferred from other observations.

Condition status values may be `True`, `False`, or `Unknown`. The absence of a condition should be interpreted the same as `Unknown`.

In general, condition values may change back and forth, but some condition transitions may be monotonic, depending on the resource and condition type. However, conditions are observations and not, themselves, state machines, nor do we define comprehensive state machines for objects, nor behaviors associated with state transitions. The system is level-based rather than edge-triggered, and should assume an Open World.

A typical oscillating condition type is `Ready`, which indicates that the object was believed to be fully operational at the time it was last probed. A possible monotonic condition could be `Succeeded`. A `False` status for `Succeeded` would imply failure.
An object that was still active would not have a `Succeeded` condition, or its status would be `Unknown`.

Some resources in the v1 API contain fields called **`phase`**, and associated `message`, `reason`, and other status fields. The pattern of using `phase` is deprecated; newer API types should use conditions instead. Phase was essentially a state-machine enumeration field that contradicted [system-design principles](../design/principles.md#control-logic) and hampered evolution, since [adding new enum values breaks backward compatibility](api_changes.md). Rather than encouraging clients to infer implicit properties from phases, we intend to explicitly expose the conditions that clients need to monitor. Conditions also have the benefit that it is possible to create some conditions with uniform meaning across all resource types, while still exposing others that are unique to specific resource types. See [#7856](http://issues.k8s.io/7856) for more details and discussion.

In condition types, and everywhere else they appear in the API, **`Reason`** is intended to be a one-word, CamelCase representation of the category of cause of the current status, and **`Message`** is intended to be a human-readable phrase or sentence, which may contain specific details of the individual occurrence. `Reason` is intended to be used in concise output, such as one-line `kubectl get` output, and in summarizing occurrences of causes, whereas `Message` is intended to be presented to users in detailed status explanations, such as `kubectl describe` output.

Historical status information (e.g., last transition time, failure counts) is only provided with reasonable effort, and is not guaranteed never to be lost.
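
Putting these conventions together, a condition on a hypothetical `Foo` object might look like the following in serialized form. All names and values here are illustrative, not taken from any actual resource type:

```yaml
status:
  conditions:
  - type: Ready
    status: "False"
    lastTransitionTime: "2015-05-20T18:10:42Z"
    reason: ContainerCannotRun          # one-word CamelCase category of cause
    message: "container failed to start: executable not found"  # human-readable detail
```

Note that `reason` is suitable for compact, columnar output, while `message` carries the occurrence-specific detail.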

Status information that may be large (especially proportional in size to collections of other resources, such as lists of references to other objects -- see below) and/or rapidly changing, such as [resource usage](../design/resources.md#usage-data), should be put into separate objects, possibly with a reference from the original object. This helps to ensure that GETs and watches remain reasonably efficient for the majority of clients, which may not need that data.

Some resources report the `observedGeneration`, which is the `generation` most recently observed by the component responsible for acting upon changes to the desired state of the resource. This can be used, for instance, to ensure that the reported status reflects the most recent desired state.

#### References to related objects

References to loosely coupled sets of objects, such as [pods](../user-guide/pods.md) overseen by a [replication controller](../user-guide/replication-controller.md), are usually best referred to using a [label selector](../user-guide/labels.md). In order to ensure that GETs of individual objects remain bounded in time and space, these sets may be queried via separate API queries, but will not be expanded in the referring object's status.

References to specific objects, especially specific resource versions and/or specific fields of those objects, are specified using the `ObjectReference` type (or other types representing strict subsets of it). Unlike partial URLs, the `ObjectReference` type facilitates flexible defaulting of fields from the referring object or other contextual information.

References in the status of the referee to the referrer may be permitted when the references are one-to-one and do not need to be frequently updated, particularly in an edge-based manner.

#### Lists of named subobjects preferred over maps

Discussed in [#2004](http://issue.k8s.io/2004) and elsewhere. There are no maps of subobjects in any API objects.
Instead, the convention is to use a list of subobjects containing name fields.

For example:

```yaml
ports:
  - name: www
    containerPort: 80
```

vs.

```yaml
ports:
  www:
    containerPort: 80
```

This rule maintains the invariant that all JSON/YAML keys are fields in API objects. The only exceptions are pure maps in the API (currently labels, selectors, annotations, and data), as opposed to sets of subobjects.

#### Primitive types

* Avoid floating-point values as much as possible, and never use them in spec. Floating-point values cannot be reliably round-tripped (encoded and re-decoded) without changing, and have varying precision and representations across languages and architectures.
* All numbers (e.g., uint32, int64) are converted to float64 by JavaScript and some other languages, so any field which is expected to exceed that either in magnitude or in precision (specifically, integer values > 53 bits) should be serialized and accepted as a string.
* Do not use unsigned integers, due to inconsistent support across languages and libraries. Just validate that the integer is non-negative if that is the intent.
* Do not use enums. Use aliases for string instead (e.g., `NodeConditionType`).
* Look at similar fields in the API (e.g., ports, durations) and follow the conventions of existing fields.
* All public integer fields MUST use the Go `(u)int32` or `(u)int64` types, not `(u)int` (whose size varies with the target platform). Internal types may use `(u)int`.

#### Constants

Some fields will have a list of allowed values (enumerations). These values will be strings, and they will be in CamelCase with an initial uppercase letter. Examples: "ClusterFirst", "Pending", "ClientIP".

#### Unions

Sometimes, at most one of a set of fields can be set. For example, the `volumes` field of a PodSpec has 17 different volume type-specific fields, such as `nfs` and `iscsi`.
All fields in the set should be [Optional](#optional-vs-required).

Sometimes, when a new type is created, the API designer may anticipate that a union will be needed in the future, even if only one field is allowed initially. In this case, be sure to make the field [Optional](#optional-vs-required). In the validation, you may still return an error if the sole field is unset. Do not set a default value for that field.

### Lists and Simple kinds

Every list or simple kind SHOULD have the following metadata in a nested object field called "metadata":

* resourceVersion: a string that identifies the common version of the objects returned in a list. This value MUST be treated as opaque by clients and passed unmodified back to the server. A resource version is only valid within a single namespace on a single kind of resource.

Every simple kind returned by the server, and any simple kind sent to the server that must support idempotency or optimistic concurrency, should return this value. Since simple resources are often used as input to alternate actions that modify objects, the resource version of the simple resource should correspond to the resource version of the object.


## Differing Representations

An API may represent a single entity in different ways for different clients, or transform an object after certain transitions in the system occur. In these cases, one request object may have two representations available as different resources, or different kinds.

An example is a Service, which represents the intent of the user to group a set of pods with common behavior on common ports. When Kubernetes detects a pod matching the service selector, the IP address and port of the pod are added to an Endpoints resource for that Service. The Endpoints resource exists only if the Service exists, but exposes only the IPs and ports of the selected pods.
The full service is represented by two distinct resources - under the original Service resource the user created, as well as in the Endpoints resource.

As another example, a "pod status" resource may accept a PUT with the "pod" kind, with different rules about what fields may be changed.

Future versions of Kubernetes may allow alternative encodings of objects beyond JSON.


## Verbs on Resources

API resources should use the traditional REST pattern:

* `GET /<resourceNamePlural>` - Retrieve a list of type `<resourceName>`, e.g. `GET /pods` returns a list of Pods.
* `POST /<resourceNamePlural>` - Create a new resource from the JSON object provided by the client.
* `GET /<resourceNamePlural>/<name>` - Retrieve a single resource with the given name, e.g. `GET /pods/first` returns a Pod named 'first'. Should be constant time, and the resource should be bounded in size.
* `DELETE /<resourceNamePlural>/<name>` - Delete the single resource with the given name. DeleteOptions may specify gracePeriodSeconds, the optional duration in seconds before the object should be deleted. Individual kinds may declare fields which provide a default grace period, and different kinds may have differing kind-wide default grace periods. A user-provided grace period overrides a default grace period, including the zero grace period ("now").
* `PUT /<resourceNamePlural>/<name>` - Update or create the resource with the given name with the JSON object provided by the client.
* `PATCH /<resourceNamePlural>/<name>` - Selectively modify the specified fields of the resource. See more information [below](#patch).
* `GET /<resourceNamePlural>?watch=true` - Receive a stream of JSON objects corresponding to changes made to any resource of the given kind over time.
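
The path layout used by the verbs above can be sketched as a small helper. `resourcePath` is a hypothetical function written purely for illustration; it is not part of any Kubernetes client library:

```go
package main

import "fmt"

// resourcePath builds the path used by the collection verbs (list GET, POST)
// when name is empty, and by the single-resource verbs (GET, PUT, PATCH,
// DELETE) when a name is given.
func resourcePath(resourceNamePlural, name string) string {
	if name == "" {
		return "/" + resourceNamePlural
	}
	return "/" + resourceNamePlural + "/" + name
}

func main() {
	fmt.Println(resourcePath("pods", ""))      // /pods
	fmt.Println(resourcePath("pods", "first")) // /pods/first
}
```

A real client would additionally prepend the API prefix and version (e.g. `/api/v1`) and, for namespaced resources, the namespace segment.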

### PATCH operations

The API supports three different PATCH operations, determined by their corresponding Content-Type header:

* JSON Patch, `Content-Type: application/json-patch+json`
  * As defined in [RFC6902](https://tools.ietf.org/html/rfc6902), a JSON Patch is a sequence of operations that are executed on the resource, e.g. `{"op": "add", "path": "/a/b/c", "value": [ "foo", "bar" ]}`. For more details on how to use JSON Patch, see the RFC.
* Merge Patch, `Content-Type: application/merge-patch+json`
  * As defined in [RFC7386](https://tools.ietf.org/html/rfc7386), a Merge Patch is essentially a partial representation of the resource. The submitted JSON is "merged" with the current resource to create a new one, then the new one is saved. For more details on how to use Merge Patch, see the RFC.
* Strategic Merge Patch, `Content-Type: application/strategic-merge-patch+json`
  * Strategic Merge Patch is a custom implementation of Merge Patch. For a detailed explanation of how it works and why it needed to be introduced, see below.

#### Strategic Merge Patch

In the standard JSON Merge Patch, JSON objects are always merged but lists are always replaced. Often that isn't what we want. Let's say we start with the following Pod:

```yaml
spec:
  containers:
    - name: nginx
      image: nginx-1.0
```

...and we POST that to the server (as JSON). Then let's say we want to *add* a container to this Pod.

```yaml
PATCH /api/v1/namespaces/default/pods/pod-name
spec:
  containers:
    - name: log-tailer
      image: log-tailer-1.0
```

If we were to use standard Merge Patch, the entire container list would be replaced with the single log-tailer container. However, our intent is for the container lists to merge together based on the `name` field.

To solve this problem, Strategic Merge Patch uses metadata attached to the API objects to determine which lists should be merged and which should not.
Currently the metadata is available as struct tags on the API objects themselves, but will become available to clients as Swagger annotations in the future. In the above example, the `patchStrategy` metadata for the `containers` field would be `merge` and the `patchMergeKey` would be `name`.

Note: If the patch results in merging two lists of scalars, the scalars are first deduplicated and then merged.

Strategic Merge Patch also supports the special operations listed below.

##### List Operations

To override the container list so that it is strictly replaced, regardless of the default:

```yaml
containers:
  - name: nginx
    image: nginx-1.0
  - $patch: replace   # any further $patch operations nested in this list will be ignored
```

To delete an element of a list that should be merged:

```yaml
containers:
  - name: nginx
    image: nginx-1.0
  - $patch: delete
    name: log-tailer  # merge key and value go here
```

##### Map Operations

To indicate that a map should not be merged and instead should be taken literally:

```yaml
$patch: replace  # recursive and applies to all fields of the map it's in
containers:
- name: nginx
  image: nginx-1.0
```

To delete a field of a map:

```yaml
name: nginx
image: nginx-1.0
labels:
  live: null  # set the value of the map key to null
```


## Idempotency

All compatible Kubernetes APIs MUST support "name idempotency" and respond with an HTTP status code 409 when a request is made to POST an object that has the same name as an existing object in the system. See [docs/user-guide/identifiers.md](../user-guide/identifiers.md) for details.

Names generated by the system may be requested using `metadata.generateName`. GenerateName indicates that the name should be made unique by the server prior to persisting it. A non-empty value for the field indicates the name will be made unique (and the name returned to the client will be different than the name passed).
The value of this field will be combined with a unique suffix on the server if the Name field has not been provided. The provided value must be valid within the rules for Name, and may be truncated by the length of the suffix required to make the value unique on the server. If this field is specified and Name is not present, the server will NOT return a 409 if the generated name exists - instead, it will either return 201 Created or 504 with Reason `ServerTimeout`, indicating that a unique name could not be found in the time allotted; the client should retry (optionally after the time indicated in the Retry-After header).

## Optional vs. Required

Fields must be either optional or required.

Optional fields have the following properties:

- They have the `omitempty` struct tag in Go.
- They are a pointer type in the Go definition (e.g. `awesomeFlag *bool`) or have a built-in `nil` value (e.g. maps and slices).
- The API server should allow POSTing and PUTting a resource with this field unset.

Required fields have the opposite properties, namely:

- They do not have an `omitempty` struct tag.
- They are not a pointer type in the Go definition (e.g. `otherFlag bool`).
- The API server should not allow POSTing or PUTting a resource with this field unset.

Using the `omitempty` tag causes the swagger documentation to reflect that the field is optional.

Using a pointer allows distinguishing unset from the zero value for that type. There are some cases where, in principle, a pointer is not needed for an optional field, since the zero value is forbidden and thus implies unset. There are examples of this in the codebase.
However:

- it can be difficult for implementors to anticipate all cases where an empty value might need to be distinguished from a zero value;
- structs are not omitted from encoder output even where `omitempty` is specified, which is messy;
- having a pointer consistently imply optional is clearer for users of the Go language client, and any other clients that use corresponding types.

Therefore, we ask that pointers always be used with optional fields that do not have a built-in `nil` value.


## Defaulting

Default resource values are API version-specific, and they are applied during the conversion from API-versioned declarative configuration to internal objects representing the desired state (`Spec`) of the resource. Subsequent GETs of the resource will include the default values explicitly.

Incorporating the default values into the `Spec` ensures that `Spec` depicts the full desired state, so that it is easier for the system to determine how to achieve the state, and for the user to know what to anticipate.

API version-specific default values are set by the API server.

## Late Initialization

Late initialization is when resource fields are set by a system controller after an object is created/updated.

For example, the scheduler sets the `pod.spec.nodeName` field after the pod is created.

Late-initializers should only make the following types of modifications:
 - Setting previously unset fields
 - Adding keys to maps
 - Adding values to arrays which have mergeable semantics (`patchStrategy:"merge"` attribute in the type definition)

These conventions:
 1. allow a user (with sufficient privilege) to override any system-default behaviors by setting the fields that would otherwise have been defaulted.
 1. enable updates from users to be merged with changes made during late initialization, using strategic merge patch, as opposed to clobbering the change.
 1. 
allow the component which does the late-initialization to use strategic merge patch, which facilitates composition and concurrency of such components.

Although the apiserver Admission Control stage acts prior to object creation, Admission Control plugins should follow the Late Initialization conventions too, to allow their implementation to be later moved to a 'controller' or to client libraries.

## Concurrency Control and Consistency

Kubernetes leverages the concept of *resource versions* to achieve optimistic concurrency. All Kubernetes resources have a "resourceVersion" field as part of their metadata. This resourceVersion is a string that identifies the internal version of an object, which clients can use to determine when objects have changed. When a record is about to be updated, its version is checked against a pre-saved value, and if it doesn't match, the update fails with a StatusConflict (HTTP status code 409).

The resourceVersion is changed by the server every time an object is modified. If resourceVersion is included with the PUT operation, the system will verify that there have not been other successful mutations to the resource during a read/modify/write cycle, by verifying that the current value of resourceVersion matches the specified value.

The resourceVersion is currently backed by [etcd's modifiedIndex](https://coreos.com/docs/distributed-configuration/etcd-api/). However, it's important to note that the application should *not* rely on the implementation details of the versioning system maintained by Kubernetes. We may change the implementation of resourceVersion in the future, such as to a timestamp or a per-object counter.

The only way for a client to know the expected value of resourceVersion is to have received it from the server in response to a prior operation, typically a GET. This value MUST be treated as opaque by clients and passed unmodified back to the server.
Clients should not assume that the resource version has meaning across namespaces, different kinds of resources, or different servers. Currently, the value of resourceVersion is set to match etcd's sequencer. You could think of it as a logical clock the API server can use to order requests. However, we expect the implementation of resourceVersion to change in the future, such as in the case where we shard the state by kind and/or namespace, or port to another storage system.

In the case of a conflict, the correct client action is to GET the resource again, apply the changes afresh, and try submitting again. This mechanism can be used to prevent races like the following:

```
Client #1                 Client #2
GET Foo                   GET Foo
Set Foo.Bar = "one"       Set Foo.Baz = "two"
PUT Foo                   PUT Foo
```

When these sequences occur in parallel, either the change to Foo.Bar or the change to Foo.Baz can be lost.

On the other hand, when specifying the resourceVersion, one of the PUTs will fail, since whichever write succeeds changes the resourceVersion for Foo.

resourceVersion may be used as a precondition for other operations (e.g., GET, DELETE) in the future, such as for read-after-write consistency in the presence of caching.

"Watch" operations specify resourceVersion using a query parameter. It is used to specify the point at which to begin watching the specified resources. This may be used to ensure that no mutations are missed between a GET of a resource (or list of resources) and a subsequent Watch, even if the current version of the resource is more recent. This is currently the main reason that list operations (GET on a collection) return resourceVersion.


## Serialization Format

APIs may return alternative representations of any resource in response to an Accept header or under alternative endpoints, but the default serialization for input and output of API responses MUST be JSON.

Protobuf serialization of API objects is currently **EXPERIMENTAL** and will change without notice.

All dates should be serialized as RFC3339 strings.

## Units

Units must either be explicit in the field name (e.g., `timeoutSeconds`), or must be specified as part of the value (e.g., `resource.Quantity`). Which approach is preferred is TBD, though currently we use the `fooSeconds` convention for durations.


## Selecting Fields

Some APIs may need to identify which field in a JSON object is invalid, or to reference a value to extract from a separate resource. The current recommendation is to use standard JavaScript syntax for accessing that field, assuming the JSON object was transformed into a JavaScript object, without the leading dot, such as `metadata.name`.

Examples:

* Find the field "current" in the object "state" in the second item in the array "fields": `fields[1].state.current`

## Object references

Object references should either be called `fooName` if referring to an object of kind `Foo` by just the name (within the current namespace, if a namespaced resource), or should be called `fooRef` and contain a subset of the fields of the `ObjectReference` type.


TODO: Plugins, extensions, nested kinds, headers


## HTTP Status codes

The server will respond with HTTP status codes that match the HTTP spec. See the sections below for a breakdown of the types of status codes the server will send.

The following HTTP status codes may be returned by the API.

#### Success codes

* `200 StatusOK`
  * Indicates that the request completed successfully.
* `201 StatusCreated`
  * Indicates that the request to create a kind completed successfully.
* `204 StatusNoContent`
  * Indicates that the request completed successfully, and the response contains no body.
  * Returned in response to HTTP OPTIONS requests.

#### Error codes

* `307 StatusTemporaryRedirect`
  * Indicates that the address for the requested resource has changed.
  * Suggested client recovery behavior:
    * Follow the redirect.

* `400 StatusBadRequest`
  * Indicates that the request is invalid.
  * Suggested client recovery behavior:
    * Do not retry. Fix the request.

* `401 StatusUnauthorized`
  * Indicates that the server can be reached and understood the request, but refuses to take any further action because the client must provide authorization. If the client has provided authorization, the server is indicating that the provided authorization is unsuitable or invalid.
  * Suggested client recovery behavior:
    * If the user has not supplied authorization information, prompt them for the appropriate credentials. If the user has supplied authorization information, inform them their credentials were rejected and optionally prompt them again.

* `403 StatusForbidden`
  * Indicates that the server can be reached and understood the request, but refuses to take any further action because it is configured to deny the client access to the requested resource for some reason.
  * Suggested client recovery behavior:
    * Do not retry. Fix the request.

* `404 StatusNotFound`
  * Indicates that the requested resource does not exist.
  * Suggested client recovery behavior:
    * Do not retry. Fix the request.

* `405 StatusMethodNotAllowed`
  * Indicates that the action the client attempted to perform on the resource was not supported by the code.
  * Suggested client recovery behavior:
    * Do not retry. Fix the request.

* `409 StatusConflict`
  * Indicates that either the resource the client attempted to create already exists, or the requested update operation cannot be completed due to a conflict.
  * Suggested client recovery behavior:
    * If creating a new resource:
      * Either change the identifier and try again, or GET and compare the fields in the pre-existing object and issue a PUT/update to modify the existing object.
    * If updating an existing resource:
      * See `Conflict` from the `status` response section below on how to retrieve more information about the nature of the conflict.
      * GET and compare the fields in the pre-existing object, merge changes (if still valid according to preconditions), and retry with the updated request (including `ResourceVersion`).

* `410 StatusGone`
  * Indicates that the item is no longer available at the server and no forwarding address is known.
  * Suggested client recovery behavior:
    * Do not retry. Fix the request.

* `422 StatusUnprocessableEntity`
  * Indicates that the requested create or update operation cannot be completed due to invalid data provided as part of the request.
  * Suggested client recovery behavior:
    * Do not retry. Fix the request.

* `429 StatusTooManyRequests`
  * Indicates that either the client rate limit has been exceeded or the server has received more requests than it can process.
  * Suggested client recovery behavior:
    * Read the `Retry-After` HTTP header from the response, and wait at least that long before retrying.

* `500 StatusInternalServerError`
  * Indicates that the server can be reached and understood the request, but either an unexpected internal error occurred and the outcome of the call is unknown, or the server cannot complete the action in a reasonable time (this may be due to temporary server load or a transient communication issue with another server).
  * Suggested client recovery behavior:
    * Retry with exponential backoff.

* `503 StatusServiceUnavailable`
  * Indicates that a required service is unavailable.
  * Suggested client recovery behavior:
    * Retry with exponential backoff.

* `504 StatusServerTimeout`
  * Indicates that the request could not be completed within the given time. Clients can get this response ONLY when they specified a timeout param in the request.
  * Suggested client recovery behavior:
    * Increase the value of the timeout param and retry with exponential backoff.

## Response Status Kind

Kubernetes will always return the `Status` kind from any API endpoint when an error occurs. Clients SHOULD handle these types of objects when appropriate.

A `Status` kind will be returned by the API in two cases:
  * When an operation is not successful (i.e. when the server would return a non-2xx HTTP status code).
  * When an HTTP `DELETE` call is successful.

The status object is encoded as JSON and provided as the body of the response. The status object contains fields for both human and machine consumers of the API to get more detailed information about the cause of the failure. The information in the status object supplements, but does not override, the HTTP status code's meaning. When fields in the status object have the same meaning as generally defined HTTP headers and that header is returned with the response, the header should be considered as having higher priority.

**Example:**

```console
$ curl -v -k -H "Authorization: Bearer WhCDvq4VPpYhrcfmF6ei7V9qlbqTubUc" https://10.240.122.184:443/api/v1/namespaces/default/pods/grafana

> GET /api/v1/namespaces/default/pods/grafana HTTP/1.1
> User-Agent: curl/7.26.0
> Host: 10.240.122.184
> Accept: */*
> Authorization: Bearer WhCDvq4VPpYhrcfmF6ei7V9qlbqTubUc
>

< HTTP/1.1 404 Not Found
< Content-Type: application/json
< Date: Wed, 20 May 2015 18:10:42 GMT
< Content-Length: 232
<
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "pods \"grafana\" not found",
  "reason": "NotFound",
  "details": {
    "name": "grafana",
    "kind": "pods"
  },
  "code": 404
}
```

The `status` field contains one of two possible values:
* `Success`
* `Failure`

`message` may contain a human-readable description of the error.

`reason` may contain a machine-readable, one-word, CamelCase description of why this operation is in the `Failure` status. If this value is empty, there is no information available. The `reason` clarifies an HTTP status code but does not override it.

`details` may contain extended data associated with the reason. Each reason may define its own extended details. This field is optional, and the data returned is not guaranteed to conform to any schema except that defined by the reason type.

Possible values for the `reason` and `details` fields:
* `BadRequest`
  * Indicates that the request itself was invalid, because the request doesn't make any sense, for example deleting a read-only object.
  * This is different from the `status reason` `Invalid` below, which indicates that the API call could possibly succeed, but the data was invalid.
  * API calls that return BadRequest can never succeed.
  * HTTP status code: `400 StatusBadRequest`

* `Unauthorized`
  * Indicates that the server can be reached and understood the request, but refuses to take any further action without the client providing appropriate authorization. If the client has provided authorization, this error indicates that the provided credentials are insufficient or invalid.
  * Details (optional):
    * `kind string`
      * The kind attribute of the unauthorized resource (on some operations may differ from the requested resource).
    * `name string`
      * The identifier of the unauthorized resource.
  * HTTP status code: `401 StatusUnauthorized`

* `Forbidden`
  * Indicates that the server can be reached and understood the request, but refuses to take any further action because it is configured to deny the client access to the requested resource for some reason.
  * Details (optional):
    * `kind string`
      * The kind attribute of the forbidden resource (on some operations may differ from the requested resource).
    * `name string`
      * The identifier of the forbidden resource.
  * HTTP status code: `403 StatusForbidden`

* `NotFound`
  * Indicates that one or more resources required for this operation could not be found.
  * Details (optional):
    * `kind string`
      * The kind attribute of the missing resource (on some operations may differ from the requested resource).
    * `name string`
      * The identifier of the missing resource.
  * HTTP status code: `404 StatusNotFound`

* `AlreadyExists`
  * Indicates that the resource you are creating already exists.
  * Details (optional):
    * `kind string`
      * The kind attribute of the conflicting resource.
    * `name string`
      * The identifier of the conflicting resource.
  * HTTP status code: `409 StatusConflict`

* `Conflict`
  * Indicates that the requested update operation cannot be completed due to a conflict. The client may need to alter the request.
Each resource may define custom
details that indicate the nature of the conflict.
  * HTTP status code: `409 StatusConflict`


* `Invalid`
  * Indicates that the requested create or update operation cannot be completed
due to invalid data provided as part of the request.
  * Details (optional):
    * `kind string`
      * The kind attribute of the invalid resource.
    * `name string`
      * The identifier of the invalid resource.
    * `causes`
      * One or more `StatusCause` entries indicating the data in the provided
resource that was invalid. The `reason`, `message`, and `field` attributes will
be set.
  * HTTP status code: `422 StatusUnprocessableEntity`


* `Timeout`
  * Indicates that the request could not be completed within the given time.
Clients may receive this response if the server has decided to rate limit the
client, or if the server is overloaded and cannot process the request at this
time.
  * HTTP status code: `429 TooManyRequests`
  * The server should set the `Retry-After` HTTP header and return
`retryAfterSeconds` in the details field of the object. A value of `0` is the
default.


* `ServerTimeout`
  * Indicates that the server can be reached and understood the request, but
cannot complete the action in a reasonable time. This may be due to temporary
server load or a transient communication issue with another server.
  * Details (optional):
    * `kind string`
      * The kind attribute of the resource being acted on.
    * `name string`
      * The operation that is being attempted.
  * The server should set the `Retry-After` HTTP header and return
`retryAfterSeconds` in the details field of the object. A value of `0` is the
default.
  * HTTP status code: `504 StatusServerTimeout`


* `MethodNotAllowed`
  * Indicates that the action the client attempted to perform on the resource
was not supported by the code.
  * For instance, attempting to delete a resource that can only be created.
  * API calls that return MethodNotAllowed can never succeed.
  * HTTP status code: `405 StatusMethodNotAllowed`


* `InternalError`
  * Indicates that an internal error occurred; it is unexpected and the outcome
of the call is unknown.
  * Details (optional):
    * `causes`
      * The original error.
  * HTTP status code: `500 StatusInternalServerError`

The `code` field may contain the suggested HTTP return code for this status.


## Events

Events are complementary to status information, since they can provide some
historical information about status and occurrences in addition to current or
previous status. Generate events for situations users or administrators should
be alerted about.

Choose a unique, specific, short, CamelCase reason for each event category. For
example, `FreeDiskSpaceInvalid` is a good event reason because it is likely to
refer to just one situation, but `Started` is not a good reason because it
doesn't sufficiently indicate what started, even when combined with other event
fields.

`Error creating foo` or `Error creating foo %s` would be appropriate for an
event message, with the latter being preferable, since it is more informational.

Accumulate repeated events in the client, especially for frequent events, to
reduce data volume, load on the system, and noise exposed to users.

## Naming conventions

* Go field names must be CamelCase. JSON field names must be camelCase. Other
than capitalization of the initial letter, the two should almost always match.
No underscores nor dashes in either.
* Field and resource names should be declarative, not imperative (DoSomething,
SomethingDoer, DoneBy, DoneAt).
* Use `Node` where referring to the node resource in the context of the
cluster. Use `Host` where referring to properties of the individual
physical/virtual system, such as `hostname`, `hostPath`, `hostNetwork`, etc.
* `FooController` is a deprecated kind naming convention. Name the kind after
the thing being controlled instead (e.g., `Job` rather than `JobController`).
+* The name of a field that specifies the time at which `something` occurs should +be called `somethingTime`. Do not use `stamp` (e.g., `creationTimestamp`). +* We use the `fooSeconds` convention for durations, as discussed in the [units +subsection](#units). + * `fooPeriodSeconds` is preferred for periodic intervals and other waiting +periods (e.g., over `fooIntervalSeconds`). + * `fooTimeoutSeconds` is preferred for inactivity/unresponsiveness deadlines. + * `fooDeadlineSeconds` is preferred for activity completion deadlines. +* Do not use abbreviations in the API, except where they are extremely commonly +used, such as "id", "args", or "stdin". +* Acronyms should similarly only be used when extremely commonly known. All +letters in the acronym should have the same case, using the appropriate case for +the situation. For example, at the beginning of a field name, the acronym should +be all lowercase, such as "httpGet". Where used as a constant, all letters +should be uppercase, such as "TCP" or "UDP". +* The name of a field referring to another resource of kind `Foo` by name should +be called `fooName`. The name of a field referring to another resource of kind +`Foo` by ObjectReference (or subset thereof) should be called `fooRef`. +* More generally, include the units and/or type in the field name if they could +be ambiguous and they are not specified by the value or value type. + +## Label, selector, and annotation conventions + +Labels are the domain of users. They are intended to facilitate organization and +management of API resources using attributes that are meaningful to users, as +opposed to meaningful to the system. Think of them as user-created mp3 or email +inbox labels, as opposed to the directory structure used by a program to store +its data. The former enables the user to apply an arbitrary ontology, whereas +the latter is implementation-centric and inflexible. 
Users will use labels to
select resources to operate on, display label values in CLI/UI columns, etc.
Users should always retain full power and flexibility over the label schemas
they apply in their namespaces.

However, we should support conveniences for common cases by default. For
example, what we now do in ReplicationController is automatically set the RC's
selector and labels to the labels in the pod template by default, if they are
not already set. That ensures that the selector will match the template, and
that the RC can be managed using the same labels as the pods it creates. Note
that once we generalize selectors, it won't necessarily be possible to
unambiguously generate labels that match an arbitrary selector.

If the user wants to apply additional labels to the pods that it doesn't select
upon, such as to facilitate adoption of pods or in the expectation that some
label values will change, they can set the selector to a subset of the pod
labels. Similarly, the RC's labels could be initialized to a subset of the pod
template's labels, or could include additional/different labels.

For disciplined users managing resources within their own namespaces, it's not
that hard to consistently apply schemas that ensure uniqueness. One just needs
to ensure that at least one label value differs from that of all other
comparable resources. We could/should provide a verification tool to check
that. However, development of conventions similar to the examples in
[Labels](../user-guide/labels.md) makes uniqueness straightforward. Furthermore,
relatively narrowly used namespaces (e.g., per environment, per application) can
be used to reduce the set of resources that could potentially cause overlap.

In cases where users could be running misc.
examples with inconsistent
schemas, or where tooling or components need to programmatically generate new
objects to be selected, there needs to be a straightforward way to generate
unique label sets. A simple way to ensure uniqueness of the set is to ensure
uniqueness of a single label value, such as by using a resource name, uid,
resource hash, or generation number.

Problems with uids and hashes, however, include that they have no semantic
meaning to the user, are not memorable nor readily recognizable, and are not
predictable. Lack of predictability obstructs use cases such as creation of a
replication controller from a pod, as people want to do when exploring the
system, bootstrapping a self-hosted cluster, or deletion and re-creation of a
new RC that adopts the pods of the previous one, such as to rename it.
Generation numbers are more predictable and much clearer, assuming there is a
logical sequence. Fortunately, for deployments that's the case. For jobs, use of
creation timestamps is common internally. Users should always be able to turn
off auto-generation, in order to permit some of the scenarios described above.
Note that auto-generated labels will also become one more field that needs to be
stripped out when cloning a resource, within a namespace, in a new namespace, in
a new cluster, etc., and will need to be ignored when updating a resource via
patch or read-modify-write sequence.

Inclusion of a system prefix in a label key is fairly hostile to UX. A prefix is
only necessary in the case that the user cannot choose the label key, in order
to avoid collisions with user-defined labels. However, I firmly believe that the
user should always be allowed to select the label keys to use on their
resources, so it should always be possible to override default label keys.
+ +Therefore, resources supporting auto-generation of unique labels should have a +`uniqueLabelKey` field, so that the user could specify the key if they wanted +to, but if unspecified, it could be set by default, such as to the resource +type, like job, deployment, or replicationController. The value would need to be +at least spatially unique, and perhaps temporally unique in the case of job. + +Annotations have very different intended usage from labels. We expect them to be +primarily generated and consumed by tooling and system extensions. I'm inclined +to generalize annotations to permit them to directly store arbitrary json. Rigid +names and name prefixes make sense, since they are analogous to API fields. + +In fact, in-development API fields, including those used to represent fields of +newer alpha/beta API versions in the older stable storage version, may be +represented as annotations with the form `something.alpha.kubernetes.io/name` or +`something.beta.kubernetes.io/name` (depending on our confidence in it). For +example `net.alpha.kubernetes.io/policy` might represent an experimental network +policy field. The "name" portion of the annotation should follow the below +conventions for annotations. When an annotation gets promoted to a field, the +name transformation should then be mechanical: `foo-bar` becomes `fooBar`. 
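The mechanical `foo-bar` to `fooBar` transformation described above can be sketched in Go. The helper below is purely illustrative (it is not part of any Kubernetes package); it takes the "name" portion after the annotation's `/` and camel-cases the dash-separated words:

```go
package main

import (
	"fmt"
	"strings"
)

// fieldNameForAnnotation sketches the mechanical promotion of an annotation
// name to a field name: "something.alpha.kubernetes.io/foo-bar" -> "fooBar".
// Illustrative only, not actual Kubernetes machinery.
func fieldNameForAnnotation(key string) string {
	// Keep only the part after the last '/'.
	name := key[strings.LastIndex(key, "/")+1:]
	parts := strings.Split(name, "-")
	for i := 1; i < len(parts); i++ {
		if len(parts[i]) > 0 {
			parts[i] = strings.ToUpper(parts[i][:1]) + parts[i][1:]
		}
	}
	return strings.Join(parts, "")
}

func main() {
	fmt.Println(fieldNameForAnnotation("something.alpha.kubernetes.io/foo-bar")) // fooBar
}
```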
+ +Other advice regarding use of labels, annotations, and other generic map keys by +Kubernetes components and tools: + - Key names should be all lowercase, with words separated by dashes, such as +`desired-replicas` + - Prefix the key with `kubernetes.io/` or `foo.kubernetes.io/`, preferably the +latter if the label/annotation is specific to `foo` + - For instance, prefer `service-account.kubernetes.io/name` over +`kubernetes.io/service-account.name` + - Use annotations to store API extensions that the controller responsible for +the resource doesn't need to know about, experimental fields that aren't +intended to be generally used API fields, etc. Beware that annotations aren't +automatically handled by the API conversion machinery. + + +## WebSockets and SPDY + +Some of the API operations exposed by Kubernetes involve transfer of binary +streams between the client and a container, including attach, exec, portforward, +and logging. The API therefore exposes certain operations over upgradeable HTTP +connections ([described in RFC 2817](https://tools.ietf.org/html/rfc2817)) via +the WebSocket and SPDY protocols. These actions are exposed as subresources with +their associated verbs (exec, log, attach, and portforward) and are requested +via a GET (to support JavaScript in a browser) and POST (semantically accurate). + +There are two primary protocols in use today: + +1. Streamed channels + + When dealing with multiple independent binary streams of data such as the +remote execution of a shell command (writing to STDIN, reading from STDOUT and +STDERR) or forwarding multiple ports the streams can be multiplexed onto a +single TCP connection. Kubernetes supports a SPDY based framing protocol that +leverages SPDY channels and a WebSocket framing protocol that multiplexes +multiple channels onto the same stream by prefixing each binary chunk with a +byte indicating its channel. 
The WebSocket protocol supports an
optional subprotocol that handles base64-encoded bytes from the client and
returns base64-encoded bytes from the server, using character-based channel
prefixes ('0', '1', '2') for ease of use from JavaScript in a browser.

2. Streaming response

   The default log output for a channel of streaming data is an HTTP Chunked
Transfer-Encoding, which can return an arbitrary stream of binary data from the
server. Browser-based JavaScript is limited in its ability to access the raw
data from a chunked response, especially when very large amounts of logs are
returned, and in future API calls it may be desirable to transfer large files.
The streaming API endpoints support an optional WebSocket upgrade that provides
a unidirectional channel from the server to the client and chunks data as binary
WebSocket frames. An optional WebSocket subprotocol is exposed that base64
encodes the stream before returning it to the client.

Clients should use the SPDY protocol if they have native support, or WebSockets
as a fallback. Note that WebSockets is susceptible to head-of-line blocking and
so clients must read and process each message sequentially. In the future, an
HTTP/2 implementation will be exposed that deprecates SPDY.


## Validation

API objects are validated upon receipt by the apiserver. Validation errors are
flagged and returned to the caller in a `Failure` status with `reason` set to
`Invalid`. In order to facilitate consistent error messages, we ask that
validation logic adheres to the following guidelines whenever possible (though
exceptional cases will exist).

* Be as precise as possible.
* Telling users what they CAN do is more useful than telling them what they
CANNOT do.
* When asserting a requirement in the positive, use "must". Examples: "must be
greater than 0", "must match regex '[a-z]+'". Words like "should" imply that
the assertion is optional, and must be avoided.
* When asserting a formatting requirement in the negative, use "must not".
Example: "must not contain '..'". Words like "should not" imply that the
assertion is optional, and must be avoided.
* When asserting a behavioral requirement in the negative, use "may not".
Examples: "may not be specified when otherField is empty", "only `name` may be
specified".
* When referencing a literal string value, indicate the literal in
single-quotes. Example: "must not contain '..'".
* When referencing another field name, indicate the name in back-quotes.
Example: "must be greater than `request`".
* When specifying inequalities, use words rather than symbols. Examples: "must
be less than 256", "must be greater than or equal to 0". Do not use words
like "larger than", "bigger than", "more than", "higher than", etc.
* When specifying numeric ranges, use inclusive ranges when possible.

diff --git a/devel/api_changes.md b/devel/api_changes.md
new file mode 100755
index 00000000..963deb7c
--- /dev/null
+++ b/devel/api_changes.md
@@ -0,0 +1,732 @@
*This document is oriented at developers who want to change existing APIs.
A set of API conventions, which applies to new APIs and to changes, can be
found at [API Conventions](api-conventions.md).*
+ +**Table of Contents** + + +- [So you want to change the API?](#so-you-want-to-change-the-api) + - [Operational overview](#operational-overview) + - [On compatibility](#on-compatibility) + - [Incompatible API changes](#incompatible-api-changes) + - [Changing versioned APIs](#changing-versioned-apis) + - [Edit types.go](#edit-typesgo) + - [Edit defaults.go](#edit-defaultsgo) + - [Edit conversion.go](#edit-conversiongo) + - [Changing the internal structures](#changing-the-internal-structures) + - [Edit types.go](#edit-typesgo-1) + - [Edit validation.go](#edit-validationgo) + - [Edit version conversions](#edit-version-conversions) + - [Generate protobuf objects](#generate-protobuf-objects) + - [Edit json (un)marshaling code](#edit-json-unmarshaling-code) + - [Making a new API Group](#making-a-new-api-group) + - [Update the fuzzer](#update-the-fuzzer) + - [Update the semantic comparisons](#update-the-semantic-comparisons) + - [Implement your change](#implement-your-change) + - [Write end-to-end tests](#write-end-to-end-tests) + - [Examples and docs](#examples-and-docs) + - [Alpha, Beta, and Stable Versions](#alpha-beta-and-stable-versions) + - [Adding Unstable Features to Stable Versions](#adding-unstable-features-to-stable-versions) + + + +# So you want to change the API? + +Before attempting a change to the API, you should familiarize yourself with a +number of existing API types and with the [API conventions](api-conventions.md). +If creating a new API type/resource, we also recommend that you first send a PR +containing just a proposal for the new API types, and that you initially target +the extensions API (pkg/apis/extensions). + +The Kubernetes API has two major components - the internal structures and +the versioned APIs. The versioned APIs are intended to be stable, while the +internal structures are implemented to best reflect the needs of the Kubernetes +code itself. 
+ +What this means for API changes is that you have to be somewhat thoughtful in +how you approach changes, and that you have to touch a number of pieces to make +a complete change. This document aims to guide you through the process, though +not all API changes will need all of these steps. + +## Operational overview + +It is important to have a high level understanding of the API system used in +Kubernetes in order to navigate the rest of this document. + +As mentioned above, the internal representation of an API object is decoupled +from any one API version. This provides a lot of freedom to evolve the code, +but it requires robust infrastructure to convert between representations. There +are multiple steps in processing an API operation - even something as simple as +a GET involves a great deal of machinery. + +The conversion process is logically a "star" with the internal form at the +center. Every versioned API can be converted to the internal form (and +vice-versa), but versioned APIs do not convert to other versioned APIs directly. +This sounds like a heavy process, but in reality we do not intend to keep more +than a small number of versions alive at once. While all of the Kubernetes code +operates on the internal structures, they are always converted to a versioned +form before being written to storage (disk or etcd) or being sent over a wire. +Clients should consume and operate on the versioned APIs exclusively. + +To demonstrate the general process, here is a (hypothetical) example: + + 1. A user POSTs a `Pod` object to `/api/v7beta1/...` + 2. The JSON is unmarshalled into a `v7beta1.Pod` structure + 3. Default values are applied to the `v7beta1.Pod` + 4. The `v7beta1.Pod` is converted to an `api.Pod` structure + 5. The `api.Pod` is validated, and any errors are returned to the user + 6. The `api.Pod` is converted to a `v6.Pod` (because v6 is the latest stable +version) + 7. 
The `v6.Pod` is marshalled into JSON and written to etcd + +Now that we have the `Pod` object stored, a user can GET that object in any +supported api version. For example: + + 1. A user GETs the `Pod` from `/api/v5/...` + 2. The JSON is read from etcd and unmarshalled into a `v6.Pod` structure + 3. Default values are applied to the `v6.Pod` + 4. The `v6.Pod` is converted to an `api.Pod` structure + 5. The `api.Pod` is converted to a `v5.Pod` structure + 6. The `v5.Pod` is marshalled into JSON and sent to the user + +The implication of this process is that API changes must be done carefully and +backward-compatibly. + +## On compatibility + +Before talking about how to make API changes, it is worthwhile to clarify what +we mean by API compatibility. Kubernetes considers forwards and backwards +compatibility of its APIs a top priority. + +An API change is considered forward and backward-compatible if it: + + * adds new functionality that is not required for correct behavior (e.g., +does not add a new required field) + * does not change existing semantics, including: + * default values and behavior + * interpretation of existing API types, fields, and values + * which fields are required and which are not + +Put another way: + +1. Any API call (e.g. a structure POSTed to a REST endpoint) that worked before +your change must work the same after your change. +2. Any API call that uses your change must not cause problems (e.g. crash or +degrade behavior) when issued against servers that do not include your change. +3. It must be possible to round-trip your change (convert to different API +versions and back) with no loss of information. +4. Existing clients need not be aware of your change in order for them to +continue to function as they did previously, even when your change is utilized. + +If your change does not meet these criteria, it is not considered strictly +compatible, and may break older clients, or result in newer clients causing +undefined behavior. 
+ +Let's consider some examples. In a hypothetical API (assume we're at version +v6), the `Frobber` struct looks something like this: + +```go +// API v6. +type Frobber struct { + Height int `json:"height"` + Param string `json:"param"` +} +``` + +You want to add a new `Width` field. It is generally safe to add new fields +without changing the API version, so you can simply change it to: + +```go +// Still API v6. +type Frobber struct { + Height int `json:"height"` + Width int `json:"width"` + Param string `json:"param"` +} +``` + +The onus is on you to define a sane default value for `Width` such that rule #1 +above is true - API calls and stored objects that used to work must continue to +work. + +For your next change you want to allow multiple `Param` values. You can not +simply change `Param string` to `Params []string` (without creating a whole new +API version) - that fails rules #1 and #2. You can instead do something like: + +```go +// Still API v6, but kind of clumsy. +type Frobber struct { + Height int `json:"height"` + Width int `json:"width"` + Param string `json:"param"` // the first param + ExtraParams []string `json:"extraParams"` // additional params +} +``` + +Now you can satisfy the rules: API calls that provide the old style `Param` +will still work, while servers that don't understand `ExtraParams` can ignore +it. This is somewhat unsatisfying as an API, but it is strictly compatible. + +Part of the reason for versioning APIs and for using internal structs that are +distinct from any one version is to handle growth like this. The internal +representation can be implemented as: + +```go +// Internal, soon to be v7beta1. +type Frobber struct { + Height int + Width int + Params []string +} +``` + +The code that converts to/from versioned APIs can decode this into the somewhat +uglier (but compatible!) structures. Eventually, a new API version, let's call +it v7beta1, will be forked and it can use the clean internal structure. 
+ +We've seen how to satisfy rules #1 and #2. Rule #3 means that you can not +extend one versioned API without also extending the others. For example, an +API call might POST an object in API v7beta1 format, which uses the cleaner +`Params` field, but the API server might store that object in trusty old v6 +form (since v7beta1 is "beta"). When the user reads the object back in the +v7beta1 API it would be unacceptable to have lost all but `Params[0]`. This +means that, even though it is ugly, a compatible change must be made to the v6 +API. + +However, this is very challenging to do correctly. It often requires multiple +representations of the same information in the same API resource, which need to +be kept in sync in the event that either is changed. For example, let's say you +decide to rename a field within the same API version. In this case, you add +units to `height` and `width`. You implement this by adding duplicate fields: + +```go +type Frobber struct { + Height *int `json:"height"` + Width *int `json:"width"` + HeightInInches *int `json:"heightInInches"` + WidthInInches *int `json:"widthInInches"` +} +``` + +You convert all of the fields to pointers in order to distinguish between unset +and set to 0, and then set each corresponding field from the other in the +defaulting pass (e.g., `heightInInches` from `height`, and vice versa), which +runs just prior to conversion. That works fine when the user creates a resource +from a hand-written configuration -- clients can write either field and read +either field, but what about creation or update from the output of GET, or +update via PATCH (see +[In-place updates](../user-guide/managing-deployments.md#in-place-updates-of-resources))? +In this case, the two fields will conflict, because only one field would be +updated in the case of an old client that was only aware of the old field (e.g., +`height`). 

Say the client creates:

```json
{
  "height": 10,
  "width": 5
}
```

and GETs:

```json
{
  "height": 10,
  "heightInInches": 10,
  "width": 5,
  "widthInInches": 5
}
```

then PUTs back:

```json
{
  "height": 13,
  "heightInInches": 10,
  "width": 5,
  "widthInInches": 5
}
```

The update should not fail, because it would have worked before `heightInInches`
was added.

Therefore, when there are duplicate fields, the old field MUST take precedence
over the new, and the new field should be set to match by the server upon write.
A new client would be aware of the old field as well as the new, and so can
ensure that the old field is either unset or is set consistently with the new
field. However, older clients would be unaware of the new field. Please avoid
introducing duplicate fields due to the complexity they incur in the API.

A new representation, even in a new API version, that is more expressive than an
old one breaks backward compatibility, since clients that only understood the
old representation would not be aware of the new representation nor its
semantics. Examples of proposals that have run into this challenge include
[generalized label selectors](http://issues.k8s.io/341) and [pod-level security
context](http://prs.k8s.io/12823).

As another interesting example, enumerated values cause similar challenges.
Adding a new value to an enumerated set is *not* a compatible change. Clients
which assume they know how to handle all possible values of a given field will
not be able to handle the new values. However, removing a value from an
enumerated set *can* be a compatible change, if handled properly (treat the
removed value as deprecated but allowed). This is actually a special case of a
new representation, discussed above.
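The precedence rule above - the old field wins on conflict, and the server rewrites the new field to match on write - can be sketched like this (a hypothetical `frobber` type and sync helper, not actual apiserver code):

```go
package main

import "fmt"

// Hypothetical duplicated fields, as in the height example above. Pointers
// distinguish "unset" from "set to 0".
type frobber struct {
	Height         *int
	HeightInInches *int
}

// syncHeight applies the stated rule: when the old field (Height) is set, it
// takes precedence and the new field is rewritten to match, even on conflict;
// when only the new field is set, it fills in the old one.
func syncHeight(f *frobber) {
	switch {
	case f.Height != nil:
		v := *f.Height
		f.HeightInInches = &v // old field wins
	case f.HeightInInches != nil:
		v := *f.HeightInInches
		f.Height = &v
	}
}

func main() {
	h, hi := 13, 10 // the conflicting PUT from the example above
	f := frobber{Height: &h, HeightInInches: &hi}
	syncHeight(&f)
	fmt.Println(*f.Height, *f.HeightInInches) // 13 13
}
```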

For [Unions](api-conventions.md#unions), sets of fields where at most one should
be set, it is acceptable to add a new option to the union if the [appropriate
conventions](api-conventions.md#objects) were followed in the original object.
Removing an option requires following the deprecation process.

## Incompatible API changes

There are times when an incompatible change might be OK, but mostly we want
changes that meet the above definition of compatibility. If you think you need
to break compatibility, you should talk to the Kubernetes team first.

Breaking compatibility of a beta or stable API version, such as v1, is
unacceptable. Compatibility for experimental or alpha APIs is not strictly
required, but breaking compatibility should not be done lightly, as it disrupts
all users of the feature. Experimental APIs may be removed. Alpha and beta API
versions may be deprecated and eventually removed wholesale, as described in the
[versioning document](../design/versioning.md). Document incompatible changes
across API versions under the appropriate
[v? conversion tips tag in the api.md doc](../api.md).

If your change is going to be backward incompatible or might be a breaking
change for API consumers, please send an announcement to
`kubernetes-dev@googlegroups.com` before the change gets in. If you are unsure,
ask. Also make sure that the change gets documented in the release notes for the
next release by labeling the PR with the "release-note" github label.

If you find that your change has accidentally broken clients, it should be
reverted.

In short, the expected API evolution is as follows:

* `extensions/v1alpha1` ->
* `newapigroup/v1alpha1` -> ... -> `newapigroup/v1alphaN` ->
* `newapigroup/v1beta1` -> ... -> `newapigroup/v1betaN` ->
* `newapigroup/v1` ->
* `newapigroup/v2alpha1` -> ...

While in extensions we have no obligation to move forward with the API at all
and may delete or break it at any time.

While in alpha we expect to move forward with it, but may break it.

Once in beta we will preserve forward compatibility, but may introduce new
versions and delete old ones.

v1 must be backward-compatible for an extended length of time.

## Changing versioned APIs

For most changes, you will probably find it easiest to change the versioned
APIs first. This forces you to think about how to make your change in a
compatible way. Rather than doing each step in every version, it's usually
easier to do each versioned API one at a time, or to do all of one version
before starting "all the rest".

### Edit types.go

The struct definitions for each API are in `pkg/api/<version>/types.go`. Edit
those files to reflect the change you want to make. Note that all types and
non-inline fields in versioned APIs must be preceded by descriptive comments -
these are used to generate documentation. Comments for types should not contain
the type name; API documentation is generated from these comments and end-users
should not be exposed to golang type names.

Optional fields should have the `,omitempty` json tag; fields are otherwise
interpreted as being required.

### Edit defaults.go

If your change includes new fields for which you will need default values, you
need to add cases to `pkg/api/<version>/defaults.go`. Of course, since you have
added code, you have to add a test: `pkg/api/<version>/defaults_test.go`.

Do use pointers to scalars when you need to distinguish between an unset value
and an automatic zero value. For example,
`PodSpec.TerminationGracePeriodSeconds` is defined as `*int64` in the Go type
definition. A zero value means 0 seconds, and a nil value asks the system to
pick a default.

Don't forget to run the tests!

### Edit conversion.go

Given that you have not yet changed the internal structs, this might feel
premature, and that's because it is. You don't yet have anything to convert to
or from. We will revisit this in the "internal" section.
If you're doing this
all in a different order (i.e. you started with the internal structs), then you
should jump to that topic below. In the very rare case that you are making an
incompatible change you might or might not want to do this now, but you will
have to do more later. The files you want are
`pkg/api/<version>/conversion.go` and `pkg/api/<version>/conversion_test.go`.

Note that the conversion machinery doesn't generically handle conversion of
values, such as various kinds of field references and API constants. [The client
library](../../pkg/client/restclient/request.go) has custom conversion code for
field references. You also need to add a call to
`api.Scheme.AddFieldLabelConversionFunc` with a mapping function that
understands supported translations.

## Changing the internal structures

Now it is time to change the internal structs so your versioned changes can be
used.

### Edit types.go

Similar to the versioned APIs, the definitions for the internal structs are in
`pkg/api/types.go`. Edit those files to reflect the change you want to make.
Keep in mind that the internal structs must be able to express *all* of the
versioned APIs.

### Edit validation.go

Most changes made to the internal structs need some form of input validation.
Validation is currently done on internal objects in
`pkg/api/validation/validation.go`. This validation is one of the first
opportunities we have to make a great user experience - good error messages and
thorough validation help ensure that users are giving you what you expect and,
when they don't, that they know why and how to fix it. Think hard about the
contents of `string` fields, the bounds of `int` fields and the
requiredness/optionalness of fields.

Of course, code needs tests - `pkg/api/validation/validation_test.go`.

### Edit version conversions

At this point you have both the versioned API changes and the internal
structure changes done.
If there are any notable
+differences - field names, types, structural changes in particular - you must
+add some logic to convert versioned APIs to and from the internal
+representation. If you see errors from the `serialization_test`, it may
+indicate the need for explicit conversions.
+
+The performance of conversions very heavily influences the performance of the
+apiserver. Thus, we are auto-generating conversion functions that are much more
+efficient than the generic ones (which are based on reflection and thus are
+highly inefficient).
+
+The conversion code resides with each versioned API. There are two files for
+each:
+
+ - `pkg/api//conversion.go` containing manually written conversion
+functions
+ - `pkg/api//conversion_generated.go` containing auto-generated
+conversion functions
+ - `pkg/apis/extensions//conversion.go` containing manually written
+conversion functions
+ - `pkg/apis/extensions//conversion_generated.go` containing
+auto-generated conversion functions
+
+Since the auto-generated conversion functions call the manually written ones,
+the manually written ones should be named with a defined convention, i.e. a
+function converting type X in pkg a to type Y in pkg b should be named:
+`convert_a_X_To_b_Y`.
+
+Also note that you can (and for efficiency reasons should) use auto-generated
+conversion functions when writing your conversion functions.
+
+Once all the necessary manually written conversions are added, you need to
+regenerate the auto-generated ones. To regenerate them run:
+
+```sh
+hack/update-codegen.sh
+```
+
+As part of the build, Kubernetes will also generate code to handle deep copy of
+your versioned API objects. The deep copy code resides with each versioned API:
+ - `/zz_generated.deepcopy.go` containing auto-generated copy functions
+
+If regeneration is somehow not possible due to compile errors, the easiest
+workaround is to comment out the code causing errors and let the script
+regenerate it.
If the auto-generated conversion methods are not used by the
+manually-written ones, it's fine to just remove the whole file and let the
+generator create it from scratch.
+
+Unsurprisingly, adding manually written conversions also requires you to add
+tests to `pkg/api//conversion_test.go`.
+
+
+## Generate protobuf objects
+
+For any core API object, we also need to generate the Protobuf IDL and marshallers.
+That generation is done with
+
+```sh
+hack/update-generated-protobuf.sh
+```
+
+The vast majority of objects will not need any consideration when converting
+to protobuf, but be aware that if you depend on a Golang type in the standard
+library there may be additional work required, although in practice we typically
+use our own equivalents for JSON serialization. The `pkg/api/serialization_test.go`
+will verify that your protobuf serialization preserves all fields - be sure to
+run it several times to ensure there are no incompletely calculated fields.
+
+## Edit json (un)marshaling code
+
+We are auto-generating code for marshaling and unmarshaling the JSON
+representation of API objects - this is to improve the overall system
+performance.
+
+The auto-generated code resides with each versioned API:
+
+ - `pkg/api//types.generated.go`
+ - `pkg/apis/extensions//types.generated.go`
+
+To regenerate them run:
+
+```sh
+hack/update-codecgen.sh
+```
+
+## Making a new API Group
+
+This section is under construction, as we make the tooling completely generic.
+
+At the moment, you'll have to make a new directory under `pkg/apis/`; copy the
+directory structure from `pkg/apis/authentication`. Add the new group/version to all
+of the `hack/{verify,update}-generated-{deep-copy,conversions,swagger}.sh` files
+in the appropriate places--it should just require adding your new group/version
+to a bash array. See [docs on adding an API group](adding-an-APIGroup.md) for
+more.
+
+Adding API groups outside of the `pkg/apis/` directory is not currently
+supported, but is clearly desirable. The deep copy & conversion generators need
+to work by parsing go files instead of by reflection; then they will be easy to
+point at arbitrary directories: see issue [#13775](http://issue.k8s.io/13775).
+
+## Update the fuzzer
+
+Part of our testing regimen for APIs is to "fuzz" (fill with random values) API
+objects and then convert them to and from the different API versions. This is
+a great way of exposing places where you lost information or made bad
+assumptions. If you have added any fields which need very careful formatting
+(the test does not run validation) or if you have made assumptions such as
+"this slice will always have at least 1 element", you may get an error or even
+a panic from the `serialization_test`. If so, look at the diff it produces (or
+the backtrace in case of a panic) and figure out what you forgot. Encode that
+into the fuzzer's custom fuzz functions. Hint: if you added defaults for a
+field, that field will need to have a custom fuzz function that ensures that the
+field is fuzzed to a non-empty value.
+
+The fuzzer can be found in `pkg/api/testing/fuzzer.go`.
+
+## Update the semantic comparisons
+
+VERY VERY rarely is this needed, but when it hits, it hurts. In some rare cases
+we end up with objects (e.g. resource quantities) that have morally equivalent
+values with different bitwise representations (e.g. value 10 with a base-2
+formatter is the same as value 10 with a base-10 formatter). The only way Go
+knows how to do deep-equality is through field-by-field bitwise comparisons.
+This is a problem for us.
+
+The first thing you should do is try not to do that. If you really can't avoid
+this, I'd like to introduce you to our `semantic DeepEqual` routine. It supports
+custom overrides for specific types - you can find that in `pkg/api/helpers.go`.
+ +There's one other time when you might have to touch this: `unexported fields`. +You see, while Go's `reflect` package is allowed to touch `unexported fields`, +us mere mortals are not - this includes `semantic DeepEqual`. Fortunately, most +of our API objects are "dumb structs" all the way down - all fields are exported +(start with a capital letter) and there are no unexported fields. But sometimes +you want to include an object in our API that does have unexported fields +somewhere in it (for example, `time.Time` has unexported fields). If this hits +you, you may have to touch the `semantic DeepEqual` customization functions. + +## Implement your change + +Now you have the API all changed - go implement whatever it is that you're +doing! + +## Write end-to-end tests + +Check out the [E2E docs](e2e-tests.md) for detailed information about how to +write end-to-end tests for your feature. + +## Examples and docs + +At last, your change is done, all unit tests pass, e2e passes, you're done, +right? Actually, no. You just changed the API. If you are touching an existing +facet of the API, you have to try *really* hard to make sure that *all* the +examples and docs are updated. There's no easy way to do this, due in part to +JSON and YAML silently dropping unknown fields. You're clever - you'll figure it +out. Put `grep` or `ack` to good use. + +If you added functionality, you should consider documenting it and/or writing +an example to illustrate your change. + +Make sure you update the swagger and OpenAPI spec by running: + +```sh +hack/update-swagger-spec.sh +hack/update-openapi-spec.sh +``` + +The API spec changes should be in a commit separate from your other changes. 
+
+## Alpha, Beta, and Stable Versions
+
+New feature development proceeds through a series of stages of increasing
+maturity:
+
+- Development level
+  - Object Versioning: no convention
+  - Availability: not committed to main kubernetes repo, and thus not available
+in official releases
+  - Audience: other developers closely collaborating on a feature or
+proof-of-concept
+  - Upgradeability, Reliability, Completeness, and Support: no requirements or
+guarantees
+- Alpha level
+  - Object Versioning: API version name contains `alpha` (e.g. `v1alpha1`)
+  - Availability: committed to main kubernetes repo; appears in an official
+release; feature is disabled by default, but may be enabled by flag
+  - Audience: developers and expert users interested in giving early feedback on
+features
+  - Completeness: some API operations, CLI commands, or UI support may not be
+implemented; the API need not have had an *API review* (an intensive and
+targeted review of the API, on top of a normal code review)
+  - Upgradeability: the object schema and semantics may change in a later
+software release, without any provision for preserving objects in an existing
+cluster; removing the upgradability concern allows developers to make rapid
+progress; in particular, API versions can increment faster than the minor
+release cadence and the developer need not maintain multiple versions;
+developers should still increment the API version when object schema or
+semantics change in an [incompatible way](#on-compatibility)
+  - Cluster Reliability: because the feature is relatively new, and may lack
+complete end-to-end tests, enabling the feature via a flag might expose bugs
+that destabilize the cluster (e.g. a bug in a control loop might rapidly create
+excessive numbers of objects, exhausting API storage).
+  - Support: there is *no commitment* from the project to complete the feature;
+the feature may be dropped entirely in a later software release
+  - Recommended Use Cases: only in short-lived testing clusters, due to the
+lack of upgradeability and lack of long-term support
+- Beta level
+  - Object Versioning: API version name contains `beta` (e.g. `v2beta3`)
+  - Availability: in official Kubernetes releases, and enabled by default
+  - Audience: users interested in providing feedback on features
+  - Completeness: all API operations, CLI commands, and UI support should be
+implemented; end-to-end tests complete; the API has had a thorough API review
+and is thought to be complete, though use during beta may frequently turn up API
+issues not thought of during review
+  - Upgradeability: the object schema and semantics may change in a later
+software release; when this happens, an upgrade path will be documented; in some
+cases, objects will be automatically converted to the new version; in other
+cases, a manual upgrade may be necessary; a manual upgrade may require downtime
+for anything relying on the new feature, and may require manual conversion of
+objects to the new version; when manual conversion is necessary, the project
+will provide documentation on the process (for an example, see [v1 conversion
+tips](../api.md#v1-conversion-tips))
+  - Cluster Reliability: since the feature has e2e tests, enabling the feature
+via a flag should not create new bugs in unrelated features; because the feature
+is new, it may have minor bugs
+  - Support: the project commits to complete the feature, in some form, in a
+subsequent Stable version; typically this will happen within 3 months, but
+sometimes longer; releases should simultaneously support two consecutive
+versions (e.g.
`v1beta1` and `v1beta2`; or `v1beta2` and `v1`) for at least one
+minor release cycle (typically 3 months) so that users have enough time to
+upgrade and migrate objects
+  - Recommended Use Cases: in short-lived testing clusters; in production
+clusters as part of a short-lived evaluation of the feature in order to provide
+feedback
+- Stable level
+  - Object Versioning: API version `vX` where `X` is an integer (e.g. `v1`)
+  - Availability: in official Kubernetes releases, and enabled by default
+  - Audience: all users
+  - Completeness: same as beta
+  - Upgradeability: only [strictly compatible](#on-compatibility) changes
+allowed in subsequent software releases
+  - Cluster Reliability: high
+  - Support: API version will continue to be present for many subsequent
+software releases
+  - Recommended Use Cases: any
+
+### Adding Unstable Features to Stable Versions
+
+When adding a feature to an object which is already Stable, the new fields and
+new behaviors need to meet the Stable level requirements. If these cannot be
+met, then the new field cannot be added to the object.
+
+For example, consider the following object:
+
+```go
+// API v6.
+type Frobber struct {
+  Height int    `json:"height"`
+  Param  string `json:"param"`
+}
+```
+
+A developer is considering adding a new `Width` parameter, like this:
+
+```go
+// API v6.
+type Frobber struct {
+  Height int    `json:"height"`
+  Width  int    `json:"width"`
+  Param  string `json:"param"`
+}
+```
+
+However, the new feature is not stable enough to be used in a stable version
+(`v6`). Some reasons for this might include:
+
+- the final representation is undecided (e.g. should it be called `Width` or
+`Breadth`?)
+- the implementation is not stable enough for general use (e.g. the `Area()`
+routine sometimes overflows.)
+
+The developer cannot add the new field until stability is met.
However,
+sometimes stability cannot be met until some users try the new feature, and some
+users are only able or willing to accept a released version of Kubernetes. In
+that case, the developer has two options, both of which require staging work
+over several releases.
+
+
+A preferred option is to first make a release where the new value (`Width` in
+this example) is specified via an annotation, like this:
+
+```yaml
+kind: frobber
+version: v6
+metadata:
+  name: myfrobber
+  annotations:
+    frobbing.alpha.kubernetes.io/width: 2
+height: 4
+param: "green and blue"
+```
+
+This format allows users to specify the new field, but makes it clear that they
+are using an Alpha feature when they do, since the word `alpha` is in the
+annotation key.
+
+Another option is to introduce a new type with a new `alpha` or `beta` version
+designator, like this:
+
+```go
+// API v6alpha2
+type Frobber struct {
+  Height int    `json:"height"`
+  Width  int    `json:"width"`
+  Param  string `json:"param"`
+}
+```
+
+The latter requires all objects in the same API group as `Frobber` to be
+replicated in the new version, `v6alpha2`. This also requires users to use a new
+client which uses the other version. Therefore, this is not a preferred option.
+
+A related issue is how a cluster manager can roll back from a new version
+with a new feature that is already being used by users. See
+https://github.com/kubernetes/kubernetes/issues/4855.
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/api_changes.md?pixel)]()
+
diff --git a/devel/automation.md b/devel/automation.md
new file mode 100644
index 00000000..3a9f1754
--- /dev/null
+++ b/devel/automation.md
@@ -0,0 +1,116 @@
+# Kubernetes Development Automation
+
+## Overview
+
+Kubernetes uses a variety of automated tools in an attempt to relieve developers
+of repetitive, low brain power work. This document attempts to describe these
+processes.
+
+
+## Submit Queue
+
+In an effort to
+ * reduce load on core developers
+ * maintain e2e stability
+ * load test github's label feature
+
+We have added an automated
+[submit-queue](https://github.com/kubernetes/contrib/blob/master/mungegithub/mungers/submit-queue.go)
+to the
+[github "munger"](https://github.com/kubernetes/contrib/tree/master/mungegithub)
+for kubernetes.
+
+The submit-queue does the following:
+
+```go
+for _, pr := range readyToMergePRs() {
+  if testsAreStable() {
+    if retestPR(pr) == success {
+      mergePR(pr)
+    }
+  }
+}
+```
+
+The status of the submit-queue is [online.](http://submit-queue.k8s.io/)
+
+### Ready to merge status
+
+The submit-queue lists the requirements it currently enforces on the
+[merge requirements tab](http://submit-queue.k8s.io/#/info) of the info page;
+that list may be more up to date than this document.
+
+A PR is considered "ready for merging" if it matches the following:
+ * The PR must have the label "cla: yes" or "cla: human-approved"
+ * The PR must be mergeable, i.e. it cannot need a rebase
+ * All of the following github statuses must be green
+   * Jenkins GCE Node e2e
+   * Jenkins GCE e2e
+   * Jenkins unit/integration
+ * The PR cannot have any prohibited future milestones (such as a v1.5 milestone during v1.4 code freeze)
+ * The PR must have the "lgtm" label. The "lgtm" label is automatically applied
+following a review comment consisting of only "LGTM" (case-insensitive)
+ * The PR must not have been updated since the "lgtm" label was applied
+ * The PR must not have the "do-not-merge" label
+
+### Merge process
+
+Merges _only_ occur when the [critical builds](http://submit-queue.k8s.io/#/e2e)
+are passing. We're open to including more builds here; let us know...
+
+Merges are serialized, so only a single PR is merged at a time, to guard
+against races.
+
+If the PR has the `retest-not-required` label, it is simply merged. If the PR does
+not have this label the e2e, unit/integration, and node tests are re-run.
If these
+tests pass a second time, the PR will be merged as long as the `critical builds` are
+green when this PR finishes retesting.
+
+## Github Munger
+
+We run [github "mungers"](https://github.com/kubernetes/contrib/tree/master/mungegithub).
+
+This runs repeatedly over github pulls and issues and runs modular "mungers"
+similar to "mungedocs." The mungers include the 'submit-queue' referenced above along
+with numerous other functions. See the README in the link above.
+
+Please feel free to unleash your creativity on this tool, send us new mungers
+that you think will help support the Kubernetes development process.
+
+### Closing stale pull-requests
+
+Github Munger will close pull-requests that don't have human activity in the
+last 90 days. It will warn about this process 60 days before closing the
+pull-request, and warn again 30 days later. One way to prevent this from
+happening is to add the "keep-open" label on the pull-request.
+
+Feel free to re-open and maybe add the "keep-open" label if this happens to a
+valid pull-request. It may also be a good opportunity to get more attention by
+verifying that it is properly assigned and/or mention people that might be
+interested. Commenting on the pull-request will also keep it open for another 90
+days.
+
+## PR builder
+
+We also run a robotic PR builder that attempts to run tests for each PR.
+
+Before a PR from an unknown user is tested, the PR builder bot (`k8s-bot`) asks
+for a message from a contributor confirming that the PR is "ok to test"; the
+contributor replies with that message. ("please" is optional, but remember to
+treat your robots with kindness...)
+
+## FAQ:
+
+#### How can I ask my PR to be tested again for Jenkins failures?
+
+PRs should only need to be manually re-tested if you believe there was a flake
+during the original test. All flakes should be filed as an
+[issue](https://github.com/kubernetes/kubernetes/issues?q=is%3Aopen+is%3Aissue+label%3Akind%2Fflake).
+Once you find or file a flake, a contributor (this may be you!) should request
+a retest with "@k8s-bot test this issue: #NNNNN", where NNNNN is replaced with
+the issue number you found or filed.
+
+Any pushes of new code to the PR will automatically trigger a new test. No human
+interaction is required.
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/automation.md?pixel)]()
+
diff --git a/devel/bazel.md b/devel/bazel.md
new file mode 100644
index 00000000..e6a4e9c5
--- /dev/null
+++ b/devel/bazel.md
@@ -0,0 +1,44 @@
+# Build with Bazel
+
+Building with Bazel is currently experimental. Automanaged BUILD rules have the
+tag "automanaged" and are maintained by
+[gazel](https://github.com/mikedanese/gazel). Instructions for installing Bazel
+can be found [here](https://www.bazel.io/versions/master/docs/install.html).
+
+To build docker images for the components, run:
+
+```
+$ bazel build //build-tools/...
+```
+
+To run many of the unit tests, run:
+
+```
+$ bazel test //cmd/... //build-tools/... //pkg/... //federation/... //plugin/...
+```
+
+To update automanaged build files, run:
+
+```
+$ ./hack/update-bazel.sh
+```
+
+**NOTE**: `update-bazel.sh` only works if the Kubernetes checkout directory is
+`$GOPATH/src/k8s.io/kubernetes`.
+
+To update a single build file, run:
+
+```
+$ # get gazel
+$ go get -u github.com/mikedanese/gazel
+$ # e.g.
./pkg/kubectl/BUILD
+$ gazel -root="${YOUR_KUBE_ROOT_PATH}" ./pkg/kubectl
+```
+
+Updating the BUILD file for a package will be required when:
+* Files are added to or removed from a package
+* Import dependencies change for a package
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/bazel.md?pixel)]()
+
diff --git a/devel/cherry-picks.md b/devel/cherry-picks.md
new file mode 100644
index 00000000..ad8df62d
--- /dev/null
+++ b/devel/cherry-picks.md
@@ -0,0 +1,64 @@
+# Overview
+
+This document explains how cherry picks are managed on release branches within the
+Kubernetes projects. Patches are either applied in batches or individually
+depending on the point in the release cycle.
+
+## Propose a Cherry Pick
+
+1. Cherrypicks are [managed with labels and milestones](pull-requests.md#release-notes)
+1. To get a PR merged to the release branch, first ensure the following labels
+   are on the original **master** branch PR:
+   * An appropriate milestone (e.g. v1.3)
+   * The `cherrypick-candidate` label
+1. If `release-note-none` is set on the master PR, the cherrypick PR will need
+   to set the same label to confirm that no release note is needed.
+1. `release-note` labeled PRs generate a release note using the PR title by
+   default OR the release-note block in the PR template if filled in.
+   * See the [PR template](../../.github/PULL_REQUEST_TEMPLATE.md) for more
+     details.
+   * PR titles and body comments are mutable and can be modified at any time
+     prior to the release to reflect a release note friendly message.
+
+### How do cherrypick-candidates make it to the release branch?
+
+1. **BATCHING:** After a branch is first created and before the X.Y.0 release
+   * Branch owners review the list of `cherrypick-candidate` labeled PRs.
+   * PRs batched up and merged to the release branch get a `cherrypick-approved`
+label and lose the `cherrypick-candidate` label.
+   * PRs that won't be merged to the release branch lose the
+`cherrypick-candidate` label.
+
+1. **INDIVIDUAL CHERRYPICKS:** After the first X.Y.0 on a branch
+   * Run the cherry pick script. This example applies a master branch PR #98765
+to the remote branch `upstream/release-3.14`:
+`hack/cherry_pick_pull.sh upstream/release-3.14 98765`
+   * Your cherrypick PR (targeted to the branch) will immediately get the
+`do-not-merge` label. The branch owner will triage PRs targeted to
+the branch and label the ones to be merged by applying the `lgtm`
+label.
+
+There is an [issue](https://github.com/kubernetes/kubernetes/issues/23347) open
+tracking the tool to automate the batching procedure.
+
+## Cherry Pick Review
+
+Cherry pick pull requests are reviewed differently than normal pull requests. In
+particular, they may be self-merged by the release branch owner without fanfare,
+in the case the release branch owner knows the cherry pick was already
+requested - this should not be the norm, but it may happen.
+
+## Searching for Cherry Picks
+
+See the [cherrypick queue dashboard](http://cherrypick.k8s.io/#/queue) for
+status of PRs labeled as `cherrypick-candidate`.
+
+[Contributor License Agreements](http://releases.k8s.io/HEAD/CONTRIBUTING.md) are
+considered implicit for all code within cherry-pick pull requests, ***unless
+there is a large conflict***.
+ + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/cherry-picks.md?pixel)]() + diff --git a/devel/cli-roadmap.md b/devel/cli-roadmap.md new file mode 100644 index 00000000..cd21da08 --- /dev/null +++ b/devel/cli-roadmap.md @@ -0,0 +1,11 @@ +# Kubernetes CLI/Configuration Roadmap + +See github issues with the following labels: +* [area/app-config-deployment](https://github.com/kubernetes/kubernetes/labels/area/app-config-deployment) +* [component/kubectl](https://github.com/kubernetes/kubernetes/labels/component/kubectl) +* [component/clientlib](https://github.com/kubernetes/kubernetes/labels/component/clientlib) + + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/cli-roadmap.md?pixel)]() + diff --git a/devel/client-libraries.md b/devel/client-libraries.md new file mode 100644 index 00000000..d38f9fd7 --- /dev/null +++ b/devel/client-libraries.md @@ -0,0 +1,27 @@ +## Kubernetes API client libraries + +### Supported + + * [Go](https://github.com/kubernetes/client-go) + +### User Contributed + +*Note: Libraries provided by outside parties are supported by their authors, not +the core Kubernetes team* + + * [Clojure](https://github.com/yanatan16/clj-kubernetes-api) + * [Java (OSGi)](https://bitbucket.org/amdatulabs/amdatu-kubernetes) + * [Java (Fabric8, OSGi)](https://github.com/fabric8io/kubernetes-client) + * [Node.js](https://github.com/tenxcloud/node-kubernetes-client) + * [Node.js](https://github.com/godaddy/kubernetes-client) + * [Perl](https://metacpan.org/pod/Net::Kubernetes) + * [PHP](https://github.com/devstub/kubernetes-api-php-client) + * [PHP](https://github.com/maclof/kubernetes-client) + * [Python](https://github.com/eldarion-gondor/pykube) + * [Ruby](https://github.com/Ch00k/kuber) + * [Ruby](https://github.com/abonas/kubeclient) + * [Scala](https://github.com/doriordan/skuber) + + 
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/client-libraries.md?pixel)]() + diff --git a/devel/coding-conventions.md b/devel/coding-conventions.md new file mode 100644 index 00000000..bcfab41d --- /dev/null +++ b/devel/coding-conventions.md @@ -0,0 +1,147 @@ +# Coding Conventions + +Updated: 5/3/2016 + +**Table of Contents** + + +- [Coding Conventions](#coding-conventions) + - [Code conventions](#code-conventions) + - [Testing conventions](#testing-conventions) + - [Directory and file conventions](#directory-and-file-conventions) + - [Coding advice](#coding-advice) + + + +## Code conventions + + - Bash + + - https://google.github.io/styleguide/shell.xml + + - Ensure that build, release, test, and cluster-management scripts run on +OS X + + - Go + + - Ensure your code passes the [presubmit checks](development.md#hooks) + + - [Go Code Review +Comments](https://github.com/golang/go/wiki/CodeReviewComments) + + - [Effective Go](https://golang.org/doc/effective_go.html) + + - Comment your code. + - [Go's commenting +conventions](http://blog.golang.org/godoc-documenting-go-code) + - If reviewers ask questions about why the code is the way it is, that's a +sign that comments might be helpful. + + + - Command-line flags should use dashes, not underscores + + + - Naming + - Please consider package name when selecting an interface name, and avoid +redundancy. + + - e.g.: `storage.Interface` is better than `storage.StorageInterface`. + + - Do not use uppercase characters, underscores, or dashes in package +names. + - Please consider parent directory name when choosing a package name. + + - so pkg/controllers/autoscaler/foo.go should say `package autoscaler` +not `package autoscalercontroller`. + - Unless there's a good reason, the `package foo` line should match +the name of the directory in which the .go file exists. + - Importers can use a different name if they need to disambiguate. 
+
+  - Locks should be called `lock` and should never be embedded (always `lock
+sync.Mutex`). When multiple locks are present, give each lock a distinct name
+following Go conventions - `stateLock`, `mapLock` etc.
+
+  - [API changes](api_changes.md)
+
+  - [API conventions](api-conventions.md)
+
+  - [Kubectl conventions](kubectl-conventions.md)
+
+  - [Logging conventions](logging.md)
+
+## Testing conventions
+
+  - All new packages and most new significant functionality must come with unit
+tests
+
+  - Table-driven tests are preferred for testing multiple scenarios/inputs; for
+example, see [TestNamespaceAuthorization](../../test/integration/auth/auth_test.go)
+
+  - Significant features should come with integration (test/integration) and/or
+[end-to-end (test/e2e) tests](e2e-tests.md)
+    - Including new kubectl commands and major features of existing commands
+
+  - Unit tests must pass on OS X and Windows platforms - if you use Linux
+specific features, your test case must either be skipped on Windows or compiled
+out (skipped is better when running Linux specific commands, compiled out is
+required when your code does not compile on Windows).
+
+  - Avoid relying on Docker Hub (e.g. pulling images from Docker Hub). Use
+gcr.io instead.
+
+  - Avoid waiting for a short amount of time (or not waiting at all) and
+expecting an asynchronous thing to happen (e.g. waiting for 1 second and
+expecting a Pod to be running). Wait and retry instead.
+
+  - See the [testing guide](testing.md) for additional testing advice.
+
+## Directory and file conventions
+
+  - Avoid package sprawl. Find an appropriate subdirectory for new packages.
+(See [#4851](http://issues.k8s.io/4851) for discussion.)
+    - Libraries with no more appropriate home belong in new package
+subdirectories of pkg/util
+
+  - Avoid general utility packages. Packages called "util" are suspect. Instead,
+derive a name that describes your desired function.
For example, the utility
+functions dealing with waiting for operations are in the "wait" package and
+include functionality like Poll. So the full name is `wait.Poll`.
+
+  - All filenames should be lowercase
+
+  - Go source files and directories use underscores, not dashes
+    - Package directories should generally avoid using separators as much as
+possible (when packages are multiple words, they usually should be in nested
+subdirectories).
+
+  - Document directories and filenames should use dashes rather than underscores
+
+  - Contrived examples that illustrate system features belong in
+/docs/user-guide or /docs/admin, depending on whether it is a feature primarily
+intended for users that deploy applications or cluster administrators,
+respectively. Actual application examples belong in /examples.
+    - Examples should also illustrate [best practices for configuration and
+using the system](../user-guide/config-best-practices.md)
+
+  - Third-party code
+
+    - Go code for normal third-party dependencies is managed using
+[Godeps](https://github.com/tools/godep)
+
+    - Other third-party code belongs in `/third_party`
+      - forked third party Go code goes in `/third_party/forked`
+      - forked _golang stdlib_ code goes in `/third_party/golang`
+
+    - Third-party code must include licenses
+
+      - This includes modified third-party code and excerpts as well
+
+## Coding advice
+
+  - Go
+
+    - [Go landmines](https://gist.github.com/lavalamp/4bd23295a9f32706a48f)
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/coding-conventions.md?pixel)]()
+
diff --git a/devel/collab.md b/devel/collab.md
new file mode 100644
index 00000000..b4a6281d
--- /dev/null
+++ b/devel/collab.md
@@ -0,0 +1,87 @@
+# On Collaborative Development
+
+Kubernetes is open source, but many of the people working on it do so as their
+day job.
In order to avoid forcing people to be "at work" effectively 24/7, we
+want to establish some semi-formal protocols around development. Hopefully these
+rules make things go more smoothly. If you find that this is not the case,
+please complain loudly.
+
+## Patches welcome
+
+First and foremost: as a potential contributor, your changes and ideas are
+welcome at any hour of the day or night, weekdays, weekends, and holidays.
+Please do not ever hesitate to ask a question or send a PR.
+
+## Code reviews
+
+All changes must be code reviewed. For non-maintainers this is obvious, since
+you can't commit anyway. But even for maintainers, we want all changes to get at
+least one review, preferably (and for non-trivial changes, obligatorily) from
+someone who knows the areas the change touches. For non-trivial changes we may
+want two reviewers. The primary reviewer will make this decision and nominate a
+second reviewer, if needed. Except for trivial changes, PRs should not be
+committed until relevant parties (e.g. owners of the subsystem affected by the
+PR) have had a reasonable chance to look at the PR in their local business
+hours.
+
+Most PRs will find reviewers organically. If a maintainer intends to be the
+primary reviewer of a PR they should set themselves as the assignee on GitHub
+and say so in a reply to the PR. Only the primary reviewer of a change should
+actually do the merge, except in rare cases (e.g. they are unavailable in a
+reasonable timeframe).
+
+If a PR has gone 2 work days without an owner emerging, please poke the PR
+thread and ask for a reviewer to be assigned.
+
+Except for rare cases, such as trivial changes (e.g. typos, comments) or
+emergencies (e.g. broken builds), maintainers should not merge their own
+changes.
+
+Expect reviewers to request that you avoid [common go style
+mistakes](https://github.com/golang/go/wiki/CodeReviewComments) in your PRs.
+
+## Assigned reviews
+
+Maintainers can assign reviews to other maintainers, when appropriate. The
+assignee becomes the shepherd for that PR and is responsible for merging the PR
+once they are satisfied with it or else closing it. The assignee might request
+reviews from non-maintainers.
+
+## Merge hours
+
+Maintainers will do merges of appropriately reviewed-and-approved changes during
+their local "business hours" (typically 7:00 am Monday to 5:00 pm (17:00h)
+Friday). PRs that arrive over the weekend or on holidays will only be merged if
+there is a very good reason for it and if the code review requirements have been
+met. Concretely this means that nobody should merge changes immediately before
+going to bed for the night.
+
+There may be discussion and even approvals granted outside of the above hours,
+but merges will generally be deferred.
+
+If a PR is considered complex or controversial, the merge of that PR should be
+delayed to give all interested parties in all timezones the opportunity to
+provide feedback. Concretely, this means that such PRs should be held for 24
+hours before merging. Of course "complex" and "controversial" are left to the
+judgment of the people involved, but we trust that part of being a committer is
+the judgment required to evaluate such things honestly, and not be motivated by
+your desire (or your cube-mate's desire) to get your code merged. Also see
+"Holds" below; any reviewer can issue a "hold" to indicate that the PR is in
+fact complex or controversial and deserves further review.
+
+PRs that are incorrectly judged to be mergeable may be reverted and subject to
+re-review, if subsequent reviewers believe that they in fact are controversial
+or complex.
+
+
+## Holds
+
+Any maintainer or core contributor who wants to review a PR but does not have
+time immediately may put a hold on a PR simply by saying so on the PR discussion
+and offering an ETA measured in single-digit days at most.
Any PR that has a
+hold shall not be merged until the person who requested the hold acks the
+review, withdraws their hold, or is overruled by a preponderance of maintainers.
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/collab.md?pixel)]()
+
diff --git a/devel/community-expectations.md b/devel/community-expectations.md
new file mode 100644
index 00000000..ff2487fd
--- /dev/null
+++ b/devel/community-expectations.md
@@ -0,0 +1,87 @@
+## Community Expectations
+
+Kubernetes is a community project. Consequently, it is wholly dependent on
+its community to provide a productive, friendly and collaborative environment.
+
+The first and foremost goal of the Kubernetes community is to develop
+orchestration technology that radically simplifies the process of creating
+reliable distributed systems. However, a second, equally important goal is the
+creation of a community that fosters easy, agile development of such
+orchestration systems.
+
+We therefore describe the expectations for
+members of the Kubernetes community. This document is intended to be a living one
+that evolves as the community evolves via the same PR and code review process
+that shapes the rest of the project. It currently covers the expectations
+of conduct that govern all members of the community as well as the expectations
+around code review that govern all active contributors to Kubernetes.
+
+### Code of Conduct
+
+The most important expectation of the Kubernetes community is that all members
+abide by the Kubernetes [community code of conduct](../../code-of-conduct.md).
+Only by respecting each other can we develop a productive, collaborative
+community.
+
+### Code review
+
+As a community we believe in the [value of code review for all contributions](collab.md).
+Code review increases both the quality and readability of our codebase, which
+in turn produces high quality software.
+
+However, the code review process can also introduce latency for contributors
+and additional work for reviewers that can frustrate both parties.
+
+Consequently, as a community we expect that all active participants in the
+community will also be active reviewers.
+
+We ask that active contributors to the project participate in the code review process
+in areas where they have expertise. Active
+contributors are considered to be anyone who meets any of the following criteria:
+  * Sent more than two pull requests (PRs) in the previous month, or more
+    than 20 PRs in the previous year.
+  * Filed more than three issues in the previous month, or more than 30 issues in
+    the previous 12 months.
+  * Commented on more than pull requests in the previous month, or
+    more than 50 pull requests in the previous 12 months.
+  * Marked any PR as LGTM in the previous month.
+  * Have *collaborator* permissions in the Kubernetes GitHub project.
+
+In addition to these community expectations, any community member who wants to
+be an active reviewer can also add their name to an *active reviewer* file
+(location tbd) which will make them an active reviewer for as long as they
+are included in the file.
+
+#### Expectations of reviewers: Review comments
+
+Because reviewers are often the first points of contact for new members of
+the community, and can significantly shape a newcomer's first impression,
+reviewers are especially important in building the
+Kubernetes community. Reviewers are highly encouraged to review the
+[code of conduct](../../code-of-conduct.md) and are strongly encouraged to go above
+and beyond the code of conduct to promote a collaborative, respectful
+Kubernetes community.
+
+#### Expectations of reviewers: Review latency
+
+Reviewers are expected to respond in a timely fashion to PRs that are assigned
+to them.
Reviewers are expected to respond to *active* PRs with reasonable
+latency, and if reviewers fail to respond, those PRs may be assigned to other
+reviewers.
+
+*Active* PRs are considered those which have a proper CLA (`cla:yes`) label
+and do not need a rebase to be merged. PRs that do not have a proper CLA, or
+require a rebase, are not considered active PRs.
+
+## Thanks
+
+Many thanks in advance to everyone who contributes their time and effort to
+making Kubernetes both a successful system as well as a successful community.
+The strength of our software shines in the strengths of each individual
+community member. Thanks!
+
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/community-expectations.md?pixel)]()
+
diff --git a/devel/container-runtime-interface.md b/devel/container-runtime-interface.md
new file mode 100644
index 00000000..7ab085f7
--- /dev/null
+++ b/devel/container-runtime-interface.md
@@ -0,0 +1,127 @@
+# CRI: the Container Runtime Interface
+
+## What is CRI?
+
+CRI (_Container Runtime Interface_) consists of a
+[protobuf API](../../pkg/kubelet/api/v1alpha1/runtime/api.proto),
+specifications/requirements (to-be-added),
+and [libraries](https://github.com/kubernetes/kubernetes/tree/master/pkg/kubelet/server/streaming)
+for container runtimes to integrate with kubelet on a node. CRI is currently in Alpha.
+
+In the future, we plan to add more developer tools such as the CRI validation
+tests.
+
+## Why develop CRI?
+
+Prior to the existence of CRI, container runtimes (e.g., `docker`, `rkt`) were
+integrated with kubelet through implementing an internal, high-level interface
+in kubelet. The entrance barrier for runtimes was high because the integration
+required understanding the internals of kubelet and contributing to the main
+Kubernetes repository. More importantly, this would not scale because every new
+addition incurs a significant maintenance overhead in the main Kubernetes
+repository.
+
+Kubernetes aims to be extensible. CRI is one small, yet important step to enable
+pluggable container runtimes and build a healthier ecosystem.
+
+## How to use CRI?
+
+1. Start the image and runtime services on your node. You can have a single
+   service acting as both image and runtime services.
+2. Set the kubelet flags
+   - Pass kubelet the unix socket(s) on which your services listen:
+     `--container-runtime-endpoint` and `--image-service-endpoint`.
+   - Enable CRI in kubelet with `--experimental-cri=true`.
+   - Use the "remote" runtime with `--container-runtime=remote`.
+
+Please see the [Status Update](#status-update) section for known issues for
+each release.
+
+Note that CRI is still in its early stages. We are actively incorporating
+feedback from early developers to improve the API. Developers should expect
+occasional API breaking changes.
+
+## Does Kubelet use CRI today?
+
+No, but we are working on it.
+
+The first step is to switch kubelet to integrate with Docker via CRI by
+default. The current [Docker CRI implementation](https://github.com/kubernetes/kubernetes/blob/release-1.5/pkg/kubelet/dockershim)
+already passes most end-to-end tests, and has mandatory PR builders to prevent
+regressions. While we are expanding the test coverage gradually, it is
+difficult to test on all combinations of OS distributions, platforms, and
+plugins. There are also many experimental or even undocumented features relied
+upon by some users. We would like to **encourage the community to help test
+this Docker-CRI integration and report bugs and/or missing features** to
+smooth the transition in the near future. Please file a GitHub issue and
+include @kubernetes/sig-node for any CRI problem.
+
+### How to test the new Docker CRI integration?
+
+Start kubelet with the following flags:
+  - Use the Docker container runtime with `--container-runtime=docker` (the default).
+  - Enable CRI in kubelet with `--experimental-cri=true`.
+
+Please also see the [known issues](#docker-cri-1.5-known-issues) before trying
+it out.
+
+## Design docs and proposals
+
+We plan to add CRI specifications/requirements in the near future. For now,
+these proposals and design docs are the best sources to understand CRI
+besides discussions on GitHub issues.
+
+ - [Original proposal](https://github.com/kubernetes/kubernetes/blob/release-1.5/docs/proposals/container-runtime-interface-v1.md)
+ - [Exec/attach/port-forward streaming requests](https://docs.google.com/document/d/1OE_QoInPlVCK9rMAx9aybRmgFiVjHpJCHI9LrfdNM_s/edit?usp=sharing)
+ - [Container stdout/stderr logs](https://github.com/kubernetes/kubernetes/blob/release-1.5/docs/proposals/kubelet-cri-logging.md)
+ - Networking: The CRI runtime handles network plugins and the
+   setup/teardown of the pod sandbox.
+
+## Work-In-Progress CRI runtimes
+
+ - [cri-o](https://github.com/kubernetes-incubator/cri-o)
+ - [rktlet](https://github.com/kubernetes-incubator/rktlet)
+ - [frakti](https://github.com/kubernetes/frakti)
+
+## [Status update](#status-update)
+
+### Kubernetes v1.5 release (CRI v1alpha1)
+
+ - [v1alpha1 version](https://github.com/kubernetes/kubernetes/blob/release-1.5/pkg/kubelet/api/v1alpha1/runtime/api.proto) of CRI is released.
+
+#### [CRI known issues](#cri-1.5-known-issues):
+
+ - [#27097](https://github.com/kubernetes/kubernetes/issues/27097): Container
+   metrics are not yet defined in CRI.
+ - [#36401](https://github.com/kubernetes/kubernetes/issues/36401): The new
+   container log path/format is not yet supported by the logging pipeline
+   (e.g., fluentd, GCL).
+ - CRI may not be compatible with other experimental features (e.g., Seccomp).
+ - Streaming server needs to be hardened.
+   - [#36666](https://github.com/kubernetes/kubernetes/issues/36666):
+     Authentication.
+   - [#36187](https://github.com/kubernetes/kubernetes/issues/36187): Avoid
+     including user data in the redirect URL.
+
+#### [Docker CRI integration known issues](#docker-cri-1.5-known-issues)
+
+ - Docker compatibility: Support only Docker v1.11 and v1.12.
+ - Network:
+   - [#35457](https://github.com/kubernetes/kubernetes/issues/35457): Does
+     not support host ports.
+   - [#37315](https://github.com/kubernetes/kubernetes/issues/37315): Does
+     not support bandwidth shaping.
+ - Exec/attach/port-forward (streaming requests):
+   - [#35747](https://github.com/kubernetes/kubernetes/issues/35747): Does
+     not support `nsenter` as the exec handler (`--exec-handler=nsenter`).
+   - Also see [CRI known issues](#cri-1.5-known-issues) for limitations on CRI streaming.
+
+## Contacts
+
+ - Email: sig-node (kubernetes-sig-node@googlegroups.com)
+ - Slack: https://kubernetes.slack.com/messages/sig-node
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/container-runtime-interface.md?pixel)]()
+
diff --git a/devel/controllers.md b/devel/controllers.md
new file mode 100644
index 00000000..daedc236
--- /dev/null
+++ b/devel/controllers.md
@@ -0,0 +1,186 @@
+# Writing Controllers
+
+A Kubernetes controller is an active reconciliation process. That is, it watches some object for the world's desired
+state, and it watches the world's actual state, too. Then, it sends instructions to try to make the world's current
+state more like the desired state.
+
+The simplest implementation of this is a loop:
+
+```go
+for {
+  desired := getDesiredState()
+  current := getCurrentState()
+  makeChanges(desired, current)
+}
+```
+
+Watches, etc., are all merely optimizations of this logic.
+
+## Guidelines
+
+When you’re writing controllers, there are a few guidelines that will help make sure you get the results and performance
+you’re looking for.
+
+1. Operate on one item at a time.
If you use a `workqueue.Interface`, you’ll be able to queue changes for a
+   particular resource and later pop them in multiple “worker” gofuncs with a guarantee that no two gofuncs will
+   work on the same item at the same time.
+
+   Many controllers must trigger off multiple resources (I need to "check X if Y changes"), but nearly all controllers
+   can collapse those into a queue of “check this X” based on relationships. For instance, a ReplicaSetController needs
+   to react to a pod being deleted, but it does that by finding the related ReplicaSets and queuing those.
+
+
+1. Random ordering between resources. When controllers queue off multiple types of resources, there is no guarantee
+   of ordering amongst those resources.
+
+   Distinct watches are updated independently. Even with an objective ordering of “created resourceA/X” and “created
+   resourceB/Y”, your controller could observe “created resourceB/Y” and “created resourceA/X”.
+
+
+1. Level driven, not edge driven. Just like a shell script that isn’t running all the time, your controller
+   may be off for an indeterminate amount of time before running again.
+
+   If an API object appears with a marker value of `true`, you can’t count on having seen it turn from `false` to `true`,
+   only that you now observe it being `true`. Even an API watch suffers from this problem, so be sure that you’re not
+   counting on seeing a change unless your controller also records, in the object's status, the information it last
+   used to make its decision.
+
+
+1. Use `SharedInformers`. `SharedInformers` provide hooks to receive notifications of adds, updates, and deletes for
+   a particular resource. They also provide convenience functions for accessing shared caches and determining when a
+   cache is primed.
+
+   Use the factory methods down in https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/framework/informers/factory.go
+   to ensure that you are sharing the same instance of the cache as everyone else.
+
+   This saves us connections against the API server, duplicate serialization costs server-side, duplicate deserialization
+   costs controller-side, and duplicate caching costs controller-side.
+
+   You may see other mechanisms like reflectors and deltafifos driving controllers. Those were older mechanisms that we
+   later used to build the `SharedInformers`. You should avoid using them in new controllers.
+
+
+1. Never mutate original objects! Caches are shared across controllers; this means that if you mutate your "copy"
+   (actually a reference or shallow copy) of an object, you’ll mess up other controllers (not just your own).
+
+   The most common point of failure is making a shallow copy, then mutating a map, like `Annotations`. Use
+   `api.Scheme.Copy` to make a deep copy.
+
+
+1. Wait for your secondary caches. Many controllers have primary and secondary resources. Primary resources are the
+   resources that you’ll be updating `Status` for. Secondary resources are resources that you’ll be managing
+   (creating/deleting) or using for lookups.
+
+   Use the `framework.WaitForCacheSync` function to wait for your secondary caches before starting your primary sync
+   functions. This will make sure that things like a Pod count for a ReplicaSet aren’t working off of known-out-of-date
+   information that results in thrashing.
+
+
+1. There are other actors in the system. Just because you haven't changed an object doesn't mean that somebody else
+   hasn't.
+
+   Don't forget that the current state may change at any moment--it's not sufficient to just watch the desired state.
+   If you use the absence of objects in the desired state to indicate that things in the current state should be deleted,
+   make sure you don't have a bug in your observation code (e.g., act before your cache has filled).
+
+
+1. Percolate errors to the top level for consistent re-queuing. We have a `workqueue.RateLimitingInterface` to allow
+   simple requeuing with reasonable backoffs.
+
+   Your main controller func should return an error when requeuing is necessary. When it isn’t, it should use
+   `utilruntime.HandleError` and return nil instead. This makes it very easy for reviewers to inspect error handling
+   cases and to be confident that your controller doesn’t accidentally lose things it should retry for.
+
+
+1. Watches and Informers will “sync”. Periodically, they will deliver every matching object in the cluster to your
+   `Update` method. This is good for cases where you may need to take additional action on the object, but sometimes you
+   know there won’t be more work to do.
+
+   In cases where you are *certain* that you don't need to requeue items when there are no new changes, you can compare the
+   resource version of the old and new objects. If they are the same, you skip requeuing the work. Be careful when you
+   do this. If you ever skip requeuing your item on failures, you could fail, not requeue, and then never retry that
+   item again.
+
+
+## Rough Structure
+
+Overall, your controller should look something like this:
+
+```go
+type Controller struct {
+	// podLister is a secondary cache of pods which is used for object lookups
+	podLister cache.StoreToPodLister
+
+	// podStoreSynced reports whether the pod cache has been synced at least once
+	podStoreSynced func() bool
+
+	// queue is where incoming work is placed to de-dup and to allow "easy" rate limited requeues on errors
+	queue workqueue.RateLimitingInterface
+}
+
+func (c *Controller) Run(threadiness int, stopCh chan struct{}) {
+	// don't let panics crash the process
+	defer utilruntime.HandleCrash()
+	// make sure the work queue is shutdown which will trigger workers to end
+	defer c.queue.ShutDown()
+
+	glog.Infof("Starting controller")
+
+	// wait for your secondary caches to fill before starting your work
+	if !framework.WaitForCacheSync(stopCh, c.podStoreSynced) {
+		return
+	}
+
+	// start up your worker threads based on threadiness. Some controllers have multiple kinds of workers
+	for i := 0; i < threadiness; i++ {
+		// runWorker will loop until "something bad" happens.
The .Until will then rekick the worker
+		// after one second
+		go wait.Until(c.runWorker, time.Second, stopCh)
+	}
+
+	// wait until we're told to stop
+	<-stopCh
+	glog.Infof("Shutting down controller")
+}
+
+func (c *Controller) runWorker() {
+	// hot loop until we're told to stop. processNextWorkItem will automatically wait until there's work
+	// available, so we don't worry about secondary waits
+	for c.processNextWorkItem() {
+	}
+}
+
+// processNextWorkItem deals with one key off the queue. It returns false when it's time to quit.
+func (c *Controller) processNextWorkItem() bool {
+	// pull the next work item from the queue. It should be a key we use to look up something in a cache
+	key, quit := c.queue.Get()
+	if quit {
+		return false
+	}
+	// you always have to indicate to the queue that you've completed a piece of work
+	defer c.queue.Done(key)
+
+	// do your work on the key. This method will contain your "do stuff" logic
+	err := c.syncHandler(key.(string))
+	if err == nil {
+		// if you had no error, tell the queue to stop tracking history for your key. This will
+		// reset things like failure counts for per-item rate limiting
+		c.queue.Forget(key)
+		return true
+	}
+
+	// there was a failure so be sure to report it. This method allows for pluggable error handling
+	// which can be used for things like cluster-monitoring
+	utilruntime.HandleError(fmt.Errorf("%v failed with: %v", key, err))
+	// since we failed, we should requeue the item to work on later. This method will add a backoff
+	// to avoid hotlooping on particular items (they're probably still not going to work right away)
+	// and overall controller protection (everything I've done is broken, this controller needs to
+	// calm down or it can starve other useful work) cases.
+	c.queue.AddRateLimited(key)
+
+	return true
+}
+
+```
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/controllers.md?pixel)]()
+
diff --git a/devel/developer-guides/vagrant.md b/devel/developer-guides/vagrant.md
new file mode 100755
index 00000000..b53b0002
--- /dev/null
+++ b/devel/developer-guides/vagrant.md
@@ -0,0 +1,432 @@
+## Getting started with Vagrant
+
+Running Kubernetes with Vagrant is an easy way to run/test/develop on your
+local machine in an environment using the same setup procedures as when running on
+GCE or AWS cloud providers. This provider is not tested on a per-PR basis; if
+you experience bugs when testing from HEAD, please open an issue.
+
+### Prerequisites
+
+1. Install the latest version (>= 1.8.1) of Vagrant from
+http://www.vagrantup.com/downloads.html
+
+2. Install a virtual machine host. Examples:
+   1. [Virtual Box](https://www.virtualbox.org/wiki/Downloads)
+   2. [VMWare Fusion](https://www.vmware.com/products/fusion/) plus
+[Vagrant VMWare Fusion provider](https://www.vagrantup.com/vmware)
+   3. [Parallels Desktop](https://www.parallels.com/products/desktop/)
+plus
+[Vagrant Parallels provider](https://parallels.github.io/vagrant-parallels/)
+
+3. Get or build a
+[binary release](../../../docs/getting-started-guides/binary_release.md)
+
+### Setup
+
+Setting up a cluster is as simple as running:
+
+```shell
+export KUBERNETES_PROVIDER=vagrant
+curl -sS https://get.k8s.io | bash
+```
+
+Alternatively, you can download a
+[Kubernetes release](https://github.com/kubernetes/kubernetes/releases) and
+extract the archive. To start your local cluster, open a shell and run:
+
+```shell
+cd kubernetes
+
+export KUBERNETES_PROVIDER=vagrant
+./cluster/kube-up.sh
+```
+
+The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster
+management scripts which variant to use. If you forget to set this, the
+assumption is you are running on Google Compute Engine.
+
+By default, the Vagrant setup will create a single master VM (called
+kubernetes-master) and one node (called kubernetes-node-1). Each VM will take 1
+GB of memory, so make sure you have at least 2 GB to 4 GB of free memory (plus
+appropriate free disk space).
+
+Vagrant will provision each machine in the cluster with all the necessary
+components to run Kubernetes. The initial setup can take a few minutes to
+complete on each machine.
+
+If you installed more than one Vagrant provider, Kubernetes will usually pick
+the appropriate one. However, you can override which one Kubernetes will use by
+setting the
+[`VAGRANT_DEFAULT_PROVIDER`](https://docs.vagrantup.com/v2/providers/default.html)
+environment variable:
+
+```shell
+export VAGRANT_DEFAULT_PROVIDER=parallels
+export KUBERNETES_PROVIDER=vagrant
+./cluster/kube-up.sh
+```
+
+By default, each VM in the cluster is running Fedora.
+
+To access the master or any node:
+
+```shell
+vagrant ssh master
+vagrant ssh node-1
+```
+
+If you are running more than one node, you can access the others by:
+
+```shell
+vagrant ssh node-2
+vagrant ssh node-3
+```
+
+Each node in the cluster installs the docker daemon and the kubelet.
+
+The master node instantiates the Kubernetes master components as pods on the
+machine.
+
+To view the service status and/or logs on the kubernetes-master:
+
+```shell
+$ vagrant ssh master
+[vagrant@kubernetes-master ~] $ sudo su
+
+[root@kubernetes-master ~] $ systemctl status kubelet
+[root@kubernetes-master ~] $ journalctl -ru kubelet
+
+[root@kubernetes-master ~] $ systemctl status docker
+[root@kubernetes-master ~] $ journalctl -ru docker
+
+[root@kubernetes-master ~] $ tail -f /var/log/kube-apiserver.log
+[root@kubernetes-master ~] $ tail -f /var/log/kube-controller-manager.log
+[root@kubernetes-master ~] $ tail -f /var/log/kube-scheduler.log
+```
+
+To view the services on any of the nodes:
+
+```shell
+$ vagrant ssh node-1
+[vagrant@kubernetes-node-1 ~] $ sudo su
+
+[root@kubernetes-node-1 ~] $ systemctl status kubelet
+[root@kubernetes-node-1 ~] $ journalctl -ru kubelet
+
+[root@kubernetes-node-1 ~] $ systemctl status docker
+[root@kubernetes-node-1 ~] $ journalctl -ru docker
+```
+
+### Interacting with your Kubernetes cluster with Vagrant
+
+With your Kubernetes cluster up, you can manage the nodes in your cluster with
+the regular Vagrant commands.
+
+To push updates to new Kubernetes code after making source changes:
+
+```shell
+./cluster/kube-push.sh
+```
+
+To stop and then restart the cluster:
+
+```shell
+vagrant halt
+./cluster/kube-up.sh
+```
+
+To destroy the cluster:
+
+```shell
+vagrant destroy
+```
+
+Once your Vagrant machines are up and provisioned, the first thing to do is to
+check that you can use the `kubectl.sh` script.
+
+You may need to build the binaries first; you can do this with `make`:
+
+```shell
+$ ./cluster/kubectl.sh get nodes
+```
+
+### Authenticating with your master
+
+When using the vagrant provider in Kubernetes, the `cluster/kubectl.sh` script
+will cache your credentials in a `~/.kubernetes_vagrant_auth` file so you will
+not be prompted for them in the future.
+
+```shell
+cat ~/.kubernetes_vagrant_auth
+```
+
+```json
+{ "User": "vagrant",
+  "Password": "vagrant",
+  "CAFile": "/home/k8s_user/.kubernetes.vagrant.ca.crt",
+  "CertFile": "/home/k8s_user/.kubecfg.vagrant.crt",
+  "KeyFile": "/home/k8s_user/.kubecfg.vagrant.key"
+}
+```
+
+You should now be set to use the `cluster/kubectl.sh` script. For example, try to
+list the nodes that you have started with:
+
+```shell
+./cluster/kubectl.sh get nodes
+```
+
+### Running containers
+
+You can use `cluster/kube-*.sh` commands to interact with your VMs:
+
+```shell
+$ ./cluster/kubectl.sh get pods
+NAME READY STATUS RESTARTS AGE
+
+$ ./cluster/kubectl.sh get services
+NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
+
+$ ./cluster/kubectl.sh get deployments
+CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
+```
+
+To start a container running nginx with a Deployment and three replicas:
+
+```shell
+$ ./cluster/kubectl.sh run my-nginx --image=nginx --replicas=3 --port=80
+```
+
+When listing the pods, you will see that three containers have been started and
+are in the `ContainerCreating` state:
+
+```shell
+$ ./cluster/kubectl.sh get pods
+NAME READY STATUS RESTARTS AGE
+my-nginx-3800858182-4e6pe 0/1 ContainerCreating 0 3s
+my-nginx-3800858182-8ko0s 1/1 Running 0 3s
+my-nginx-3800858182-seu3u 0/1 ContainerCreating 0 3s
+```
+
+When the provisioning is complete:
+
+```shell
+$ ./cluster/kubectl.sh get pods
+NAME READY STATUS RESTARTS AGE
+my-nginx-3800858182-4e6pe 1/1 Running 0 40s
+my-nginx-3800858182-8ko0s 1/1 Running 0 40s
+my-nginx-3800858182-seu3u 1/1 Running 0 40s
+
+$ ./cluster/kubectl.sh get services
+NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
+
+$ ./cluster/kubectl.sh get deployments
+NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
+my-nginx 3 3 3 3 1m
+```
+
+We did not start any Services, hence there are none listed. But we see three
+replicas displayed properly.
Check the
+[guestbook](https://github.com/kubernetes/kubernetes/tree/master/examples/guestbook)
+application to learn how to create a Service. You can already play with scaling
+the replicas with:
+
+```shell
+$ ./cluster/kubectl.sh scale deployments my-nginx --replicas=2
+$ ./cluster/kubectl.sh get pods
+NAME READY STATUS RESTARTS AGE
+my-nginx-3800858182-4e6pe 1/1 Running 0 2m
+my-nginx-3800858182-8ko0s 1/1 Running 0 2m
+```
+
+Congratulations!
+
+### Testing
+
+The following will run all of the end-to-end testing scenarios, assuming you set
+your environment:
+
+```shell
+NUM_NODES=3 go run hack/e2e.go -v --build --up --test --down
+```
+
+### Troubleshooting
+
+#### I keep downloading the same (large) box all the time!
+
+By default the Vagrantfile will download the box from S3. You can change this
+(and cache the box locally) by providing a name and an alternate URL when
+calling `kube-up.sh`:
+
+```shell
+export KUBERNETES_BOX_NAME=choose_your_own_name_for_your_kuber_box
+export KUBERNETES_BOX_URL=path_of_your_kuber_box
+export KUBERNETES_PROVIDER=vagrant
+./cluster/kube-up.sh
+```
+
+#### I am getting timeouts when trying to curl the master from my host!
+
+During provisioning of the cluster, you may see the following message:
+
+```shell
+Validating node-1
+.............
+Waiting for each node to be registered with cloud provider
+error: couldn't read version from server: Get https://10.245.1.2/api: dial tcp 10.245.1.2:443: i/o timeout
+```
+
+Some users have reported that VPNs may prevent traffic from being routed from the host
+machine into the virtual machine network.
+
+To debug, first verify that the master is binding to the proper IP address:
+
+```
+$ vagrant ssh master
+$ ifconfig | grep eth1 -C 2
+eth1: flags=4163  mtu 1500
+        inet 10.245.1.2  netmask 255.255.255.0  broadcast 10.245.1.255
+```
+
+Then verify that your host machine has a network connection to a bridge that can
+serve that address:
+
+```shell
+$ ifconfig | grep 10.245.1 -C 2
+
+vboxnet5: flags=4163  mtu 1500
+        inet 10.245.1.1  netmask 255.255.255.0  broadcast 10.245.1.255
+        inet6 fe80::800:27ff:fe00:5  prefixlen 64  scopeid 0x20
+        ether 0a:00:27:00:00:05  txqueuelen 1000  (Ethernet)
+```
+
+If you do not see a response on your host machine, you will most likely need to
+connect your host to the virtual network created by the virtualization provider.
+
+If you do see a network, but are still unable to ping the machine, check if your
+VPN is blocking the request.
+
+#### I just created the cluster, but I am getting authorization errors!
+
+You probably have an incorrect `~/.kubernetes_vagrant_auth` file for the cluster
+you are attempting to contact.
+
+```shell
+rm ~/.kubernetes_vagrant_auth
+```
+
+After using `kubectl.sh`, make sure that the correct credentials are set:
+
+```shell
+cat ~/.kubernetes_vagrant_auth
+```
+
+```json
+{
+  "User": "vagrant",
+  "Password": "vagrant"
+}
+```
+
+#### I just created the cluster, but I do not see my container running!
+
+If this is your first time creating the cluster, the kubelet on each node
+schedules a number of docker pull requests to fetch prerequisite images. This
+can take some time and as a result may delay your initial pod getting
+provisioned.
+
+#### I have Vagrant up but the nodes won't validate!
+
+Log on to one of the nodes (`vagrant ssh node-1`) and inspect the salt node
+log (`sudo cat /var/log/salt/node`).
+
+#### I want to change the number of nodes!
+
+You can control the number of nodes that are instantiated via the environment
+variable `NUM_NODES` on your host machine.
If you plan to work with replicas, we
+strongly encourage you to work with enough nodes to satisfy your largest
+intended replica size. If you do not plan to work with replicas, you can save
+some system resources by running with a single node. You do this by setting
+`NUM_NODES` to 1 like so:
+
+```shell
+export NUM_NODES=1
+```
+
+#### I want my VMs to have more memory!
+
+You can control the memory allotted to virtual machines with the
+`KUBERNETES_MEMORY` environment variable. Just set it to the number of megabytes
+you would like the machines to have. For example:
+
+```shell
+export KUBERNETES_MEMORY=2048
+```
+
+If you need more granular control, you can set the amount of memory for the
+master and nodes independently. For example:
+
+```shell
+export KUBERNETES_MASTER_MEMORY=1536
+export KUBERNETES_NODE_MEMORY=2048
+```
+
+#### I want to set proxy settings for my Kubernetes cluster bootstrapping!
+
+If you are behind a proxy, you need to install the Vagrant proxy plugin and set
+the proxy settings:
+
+```shell
+vagrant plugin install vagrant-proxyconf
+export KUBERNETES_HTTP_PROXY=http://username:password@proxyaddr:proxyport
+export KUBERNETES_HTTPS_PROXY=https://username:password@proxyaddr:proxyport
+```
+
+You can also specify addresses that bypass the proxy, for example:
+
+```shell
+export KUBERNETES_NO_PROXY=127.0.0.1
+```
+
+If you are using sudo to build Kubernetes, use the `-E` flag to pass in the
+environment variables. For example, if running `make quick-release`, use:
+
+```shell
+sudo -E make quick-release
+```
+
+#### I have repository access errors during VM provisioning!
+ +Sometimes VM provisioning may fail with errors that look like this: + +``` +Timeout was reached for https://mirrors.fedoraproject.org/metalink?repo=fedora-23&arch=x86_64 [Connection timed out after 120002 milliseconds] +``` + +You may use a custom Fedora repository URL to fix this: + +```shell +export CUSTOM_FEDORA_REPOSITORY_URL=https://download.fedoraproject.org/pub/fedora/ +``` + +#### I ran vagrant suspend and nothing works! + +`vagrant suspend` seems to mess up the network. It's not supported at this time. + +#### I want vagrant to sync folders via nfs! + +You can ensure that vagrant uses nfs to sync folders with virtual machines by +setting the KUBERNETES_VAGRANT_USE_NFS environment variable to 'true'. nfs is +faster than virtualbox or vmware's 'shared folders' and does not require guest +additions. See the +[vagrant docs](http://docs.vagrantup.com/v2/synced-folders/nfs.html) for details +on configuring nfs on the host. This setting will have no effect on the libvirt +provider, which uses nfs by default. For example: + +```shell +export KUBERNETES_VAGRANT_USE_NFS=true +``` + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/developer-guides/vagrant.md?pixel)]() + diff --git a/devel/development.md b/devel/development.md new file mode 100644 index 00000000..1349e003 --- /dev/null +++ b/devel/development.md @@ -0,0 +1,251 @@ +# Development Guide + +This document is intended to be the canonical source of truth for things like +supported toolchain versions for building Kubernetes. If you find a +requirement that this doc does not capture, please +[submit an issue](https://github.com/kubernetes/kubernetes/issues) on github. If +you find other docs with references to requirements that are not simply links to +this doc, please [submit an issue](https://github.com/kubernetes/kubernetes/issues). + +This document is intended to be relative to the branch in which it is found. 

It is guaranteed that requirements will change over time for the development
branch, but release branches of Kubernetes should not change.

## Building Kubernetes with Docker

Official releases are built using Docker containers. To build Kubernetes using
Docker please follow
[these instructions](http://releases.k8s.io/HEAD/build-tools/README.md).

## Building Kubernetes on a local OS/shell environment

Many of the Kubernetes development helper scripts rely on a fairly up-to-date
GNU tools environment, so most recent Linux distros should work just fine
out-of-the-box. Note that Mac OS X ships with somewhat outdated BSD-based tools,
some of which may be incompatible in subtle ways, so we recommend
[replacing those with modern GNU tools](https://www.topbug.net/blog/2013/04/14/install-and-use-gnu-command-line-tools-in-mac-os-x/).

### Go development environment

Kubernetes is written in the [Go](http://golang.org) programming language.
To build Kubernetes without using Docker containers, you'll need a Go
development environment. Builds for Kubernetes 1.0 - 1.2 require Go version
1.4.2. Builds for Kubernetes 1.3 and higher require Go version 1.6.0. If you
haven't set up a Go development environment, please follow
[these instructions](http://golang.org/doc/code.html) to install the Go tools.

Set up your GOPATH and add a path entry for Go binaries to your PATH. These
are typically added to your ~/.profile:

```sh
export GOPATH=$HOME/go
export PATH=$PATH:$GOPATH/bin
```

### Godep dependency management

Kubernetes build and test scripts use [godep](https://github.com/tools/godep) to
manage dependencies.

#### Install godep

Ensure that [mercurial](http://mercurial.selenic.com/wiki/Download) is
installed on your system; some of godep's dependencies use the Mercurial
source control system. Use `apt-get install mercurial` or `yum install
mercurial` on Linux, or [brew.sh](http://brew.sh) on OS X, or download it
directly from the Mercurial site.
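Before installing godep, it can help to confirm that the required tools are
actually on your `PATH`. The snippet below is only an illustrative sketch; the
`check_tools` helper function is invented for this example and does nothing
beyond inspecting `PATH`:

```shell
# check_tools prints one "found:"/"missing:" line per tool name given.
# It only queries PATH and changes nothing on the system.
check_tools() {
  for tool in "$@"; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "found: $tool"
    else
      echo "missing: $tool"
    fi
  done
}

# Check the tools this guide relies on.
check_tools go hg godep go-bindata
```

If anything is reported missing, install it as described above before
proceeding.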

Install godep and go-bindata (may require sudo):

```sh
go get -u github.com/tools/godep
go get -u github.com/jteeuwen/go-bindata/go-bindata
```

Note:
At this time, godep version >= v63 is known to work in the Kubernetes project.

To check your version of godep:

```sh
$ godep version
godep v74 (linux/amd64/go1.6.2)
```

Developers planning to manage dependencies in the `vendor/` tree may want to
explore alternative environment setups. See
[using godep to manage dependencies](godep.md).

### Local build using make

To build Kubernetes using your local Go development environment (generating
Linux binaries):

```sh
make
```

You may pass build options and packages to the script as necessary. For
example, to build with optimizations disabled to enable the use of source
debug tools:

```sh
make GOGCFLAGS="-N -l"
```

To build binaries for all platforms:

```sh
make cross
```

### How to update the Go version used to test & build k8s

The Kubernetes project tries to stay on the latest version of Go so it can
benefit from the improvements to the language over time and can easily
bump to a minor release version for security updates.

Since Kubernetes is mostly built and tested in containers, there are a few
unique places you need to update the Go version:

- The image for cross compiling in [build-tools/build-image/cross/](../../build-tools/build-image/cross/): the `VERSION` file and `Dockerfile`.
- Update [dockerized-e2e-runner.sh](https://github.com/kubernetes/test-infra/blob/master/jenkins/dockerized-e2e-runner.sh) to run a kubekins-e2e with the desired Go version, which requires pushing [e2e-image](https://github.com/kubernetes/test-infra/tree/master/jenkins/e2e-image) and [test-image](https://github.com/kubernetes/test-infra/tree/master/jenkins/test-image) images that are `FROM` the desired Go version.
+- The docker image being run in [gotest-dockerized.sh](https://github.com/kubernetes/test-infra/tree/master/jenkins/gotest-dockerized.sh). +- The cross tag `KUBE_BUILD_IMAGE_CROSS_TAG` in [build-tools/common.sh](../../build-tools/common.sh) + +## Workflow + +Below, we outline one of the more common git workflows that core developers use. +Other git workflows are also valid. + +### Visual overview + +![Git workflow](git_workflow.png) + +### Fork the main repository + +1. Go to https://github.com/kubernetes/kubernetes +2. Click the "Fork" button (at the top right) + +### Clone your fork + +The commands below require that you have $GOPATH set ([$GOPATH +docs](https://golang.org/doc/code.html#GOPATH)). We highly recommend you put +Kubernetes' code into your GOPATH. Note: the commands below will not work if +there is more than one directory in your `$GOPATH`. + +```sh +mkdir -p $GOPATH/src/k8s.io +cd $GOPATH/src/k8s.io +# Replace "$YOUR_GITHUB_USERNAME" below with your github username +git clone https://github.com/$YOUR_GITHUB_USERNAME/kubernetes.git +cd kubernetes +git remote add upstream 'https://github.com/kubernetes/kubernetes.git' +``` + +### Create a branch and make changes + +```sh +git checkout -b my-feature +# Make your code changes +``` + +### Keeping your development fork in sync + +```sh +git fetch upstream +git rebase upstream/master +``` + +Note: If you have write access to the main repository at +github.com/kubernetes/kubernetes, you should modify your git configuration so +that you can't accidentally push to upstream: + +```sh +git remote set-url --push upstream no_push +``` + +### Committing changes to your fork + +Before committing any changes, please link/copy the pre-commit hook into your +.git directory. This will keep you from accidentally committing non-gofmt'd Go +code. This hook will also do a build and test whether documentation generation +scripts need to be executed. + +The hook requires both Godep and etcd on your `PATH`. 
+ +```sh +cd kubernetes/.git/hooks/ +ln -s ../../hooks/pre-commit . +``` + +Then you can commit your changes and push them to your fork: + +```sh +git commit +git push -f origin my-feature +``` + +### Creating a pull request + +1. Visit https://github.com/$YOUR_GITHUB_USERNAME/kubernetes +2. Click the "Compare & pull request" button next to your "my-feature" branch. +3. Check out the pull request [process](pull-requests.md) for more details + +**Note:** If you have write access, please refrain from using the GitHub UI for creating PRs, because GitHub will create the PR branch inside the main repository rather than inside your fork. + +### Getting a code review + +Once your pull request has been opened it will be assigned to one or more +reviewers. Those reviewers will do a thorough code review, looking for +correctness, bugs, opportunities for improvement, documentation and comments, +and style. + +Very small PRs are easy to review. Very large PRs are very difficult to +review. Github has a built-in code review tool, which is what most people use. +At the assigned reviewer's discretion, a PR may be switched to use +[Reviewable](https://reviewable.k8s.io) instead. Once a PR is switched to +Reviewable, please ONLY send or reply to comments through reviewable. Mixing +code review tools can be very confusing. + +See [Faster Reviews](faster_reviews.md) for some thoughts on how to streamline +the review process. + +### When to retain commits and when to squash + +Upon merge, all git commits should represent meaningful milestones or units of +work. Use commits to add clarity to the development and review process. + +Before merging a PR, squash any "fix review feedback", "typo", and "rebased" +sorts of commits. It is not imperative that every commit in a PR compile and +pass tests independently, but it is worth striving for. For mass automated +fixups (e.g. 
automated doc formatting), use one or more commits for the +changes to tooling and a final commit to apply the fixup en masse. This makes +reviews much easier. + +## Testing + +Three basic commands let you run unit, integration and/or e2e tests: + +```sh +cd kubernetes +make test # Run every unit test +make test WHAT=pkg/util/cache GOFLAGS=-v # Run tests of a package verbosely +make test-integration # Run integration tests, requires etcd +make test-e2e # Run e2e tests +``` + +See the [testing guide](testing.md) and [end-to-end tests](e2e-tests.md) for additional information and scenarios. + +## Regenerating the CLI documentation + +```sh +hack/update-generated-docs.sh +``` + + + + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/development.md?pixel)]() + diff --git a/devel/e2e-node-tests.md b/devel/e2e-node-tests.md new file mode 100644 index 00000000..5e5f5b49 --- /dev/null +++ b/devel/e2e-node-tests.md @@ -0,0 +1,231 @@ +# Node End-To-End tests + +Node e2e tests are component tests meant for testing the Kubelet code on a custom host environment. + +Tests can be run either locally or against a host running on GCE. + +Node e2e tests are run as both pre- and post- submit tests by the Kubernetes project. + +*Note: Linux only. Mac and Windows unsupported.* + +*Note: There is no scheduler running. The e2e tests have to do manual scheduling, e.g. by using `framework.PodClient`.* + +# Running tests + +## Locally + +Why run tests *Locally*? Much faster than running tests Remotely. 
+ +Prerequisites: +- [Install etcd](https://github.com/coreos/etcd/releases) on your PATH + - Verify etcd is installed correctly by running `which etcd` + - Or make etcd binary available and executable at `/tmp/etcd` +- [Install ginkgo](https://github.com/onsi/ginkgo) on your PATH + - Verify ginkgo is installed correctly by running `which ginkgo` + +From the Kubernetes base directory, run: + +```sh +make test-e2e-node +``` + +This will: run the *ginkgo* binary against the subdirectory *test/e2e_node*, which will in turn: +- Ask for sudo access (needed for running some of the processes) +- Build the Kubernetes source code +- Pre-pull docker images used by the tests +- Start a local instance of *etcd* +- Start a local instance of *kube-apiserver* +- Start a local instance of *kubelet* +- Run the test using the locally started processes +- Output the test results to STDOUT +- Stop *kubelet*, *kube-apiserver*, and *etcd* + +## Remotely + +Why Run tests *Remotely*? Tests will be run in a customized pristine environment. Closely mimics what will be done +as pre- and post- submit testing performed by the project. 
+ +Prerequisites: +- [join the googlegroup](https://groups.google.com/forum/#!forum/kubernetes-dev) +`kubernetes-dev@googlegroups.com` + - *This provides read access to the node test images.* +- Setup a [Google Cloud Platform](https://cloud.google.com/) account and project with Google Compute Engine enabled +- Install and setup the [gcloud sdk](https://cloud.google.com/sdk/downloads) + - Verify the sdk is setup correctly by running `gcloud compute instances list` and `gcloud compute images list --project kubernetes-node-e2e-images` + +Run: + +```sh +make test-e2e-node REMOTE=true +``` + +This will: +- Build the Kubernetes source code +- Create a new GCE instance using the default test image + - Instance will be called **test-e2e-node-containervm-v20160321-image** +- Lookup the instance public ip address +- Copy a compressed archive file to the host containing the following binaries: + - ginkgo + - kubelet + - kube-apiserver + - e2e_node.test (this binary contains the actual tests to be run) +- Unzip the archive to a directory under **/tmp/gcloud** +- Run the tests using the `ginkgo` command + - Starts etcd, kube-apiserver, kubelet + - The ginkgo command is used because this supports more features than running the test binary directly +- Output the remote test results to STDOUT +- `scp` the log files back to the local host under /tmp/_artifacts/e2e-node-containervm-v20160321-image +- Stop the processes on the remote host +- **Leave the GCE instance running** + +**Note: Subsequent tests run using the same image will *reuse the existing host* instead of deleting it and +provisioning a new one. To delete the GCE instance after each test see +*[DELETE_INSTANCE](#delete-instance-after-tests-run)*.** + + +# Additional Remote Options + +## Run tests using different images + +This is useful if you want to run tests against a host using a different OS distro or container runtime than +provided by the default image. + +List the available test images using gcloud. 

```sh
make test-e2e-node LIST_IMAGES=true
```

This will output a list of the available images for the default image project.

Then run:

```sh
make test-e2e-node REMOTE=true IMAGES=""
```

## Run tests against a running GCE instance (not an image)

This is useful if you have a host instance running already and want to run the
tests there instead of on a new instance.

```sh
make test-e2e-node REMOTE=true HOSTS=""
```

## Delete instance after tests run

This is useful if you want to recreate the instance for each test run to
trigger flakes related to starting the instance.

```sh
make test-e2e-node REMOTE=true DELETE_INSTANCES=true
```

## Keep instance, test binaries, and *processes* around after tests run

This is useful if you want to manually inspect or debug the kubelet process run
as part of the tests.

```sh
make test-e2e-node REMOTE=true CLEANUP=false
```

## Run tests using an image in another project

This is useful if you want to create your own host image in another project and
use it for testing.

```sh
make test-e2e-node REMOTE=true IMAGE_PROJECT="" IMAGES=""
```

Setting up your own host image may require additional steps such as installing
etcd or docker. See
[setup_host.sh](../../test/e2e_node/environment/setup_host.sh) for common steps
to set up hosts to run node tests.

## Create instances using a different instance name prefix

This is useful if you want to create instances using a different name so that
you can run multiple copies of the test in parallel against different instances
of the same image.

```sh
make test-e2e-node REMOTE=true INSTANCE_PREFIX="my-prefix"
```

# Additional Test Options for both Remote and Local execution

## Only run a subset of the tests

To run tests matching a regex:

```sh
make test-e2e-node REMOTE=true FOCUS=""
```

To run tests NOT matching a regex:

```sh
make test-e2e-node REMOTE=true SKIP=""
```

## Run tests continually until they fail

This is useful if you are trying to debug a flaky test failure. This will cause
ginkgo to continually run the tests until they fail. **Note: this will only
perform test setup once (e.g. creating the instance) and is less useful for
catching flakes related to creating the instance from an image.**

```sh
make test-e2e-node REMOTE=true RUN_UNTIL_FAILURE=true
```

## Run tests in parallel

Running tests in parallel can usually shorten the test duration. By default the
node e2e test runs with `--nodes=8` (see the ginkgo flag
[--nodes](https://onsi.github.io/ginkgo/#parallel-specs)). You can use the
`PARALLELISM` option to change the parallelism.

```sh
make test-e2e-node PARALLELISM=4 # run tests with 4 parallel nodes
make test-e2e-node PARALLELISM=1 # run tests sequentially
```

## Run tests with kubenet network plugin

[kubenet](http://kubernetes.io/docs/admin/network-plugins/#kubenet) is
the default network plugin used by kubelet since Kubernetes 1.3. The
plugin requires [CNI](https://github.com/containernetworking/cni) and
[nsenter](http://man7.org/linux/man-pages/man1/nsenter.1.html).

Currently, kubenet is enabled by default for Remote execution (`REMOTE=true`)
but disabled for Local execution. **Note: kubenet is not supported for
local execution currently. This may cause network related test results to be
different for Local and Remote execution.
So if you want to run network
related tests, Remote execution is recommended.**

To enable or disable kubenet:

```sh
make test-e2e-node TEST_ARGS="--disable-kubenet=false" # enable kubenet
make test-e2e-node TEST_ARGS="--disable-kubenet=true"  # disable kubenet
```

## Additional QoS Cgroups Hierarchy level testing

For testing with the QoS Cgroup Hierarchy enabled, you can pass the
`--experimental-cgroups-per-qos` flag as an argument into Ginkgo using
`TEST_ARGS`:

```sh
make test-e2e-node TEST_ARGS="--experimental-cgroups-per-qos=true"
```

# Notes on tests run by the Kubernetes project during pre- and post-submit

The node e2e tests are run by the PR builder for each Pull Request and the
results are published at the bottom of the comments section. To re-run just the
node e2e tests from the PR builder, add the comment
`@k8s-bot node e2e test this issue: #` and **include a link to the test
failure logs if caused by a flake.**

The PR builder runs tests against the images listed in
[jenkins-pull.properties](../../test/e2e_node/jenkins/jenkins-pull.properties).

The post-submit tests run against the images listed in
[jenkins-ci.properties](../../test/e2e_node/jenkins/jenkins-ci.properties).



[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/e2e-node-tests.md?pixel)]()

diff --git a/devel/e2e-tests.md b/devel/e2e-tests.md
new file mode 100644
index 00000000..fc8f1995
--- /dev/null
+++ b/devel/e2e-tests.md
@@ -0,0 +1,719 @@
# End-to-End Testing in Kubernetes

Updated: 5/3/2016

**Table of Contents**


- [End-to-End Testing in Kubernetes](#end-to-end-testing-in-kubernetes)
  - [Overview](#overview)
  - [Building and Running the Tests](#building-and-running-the-tests)
  - [Cleaning up](#cleaning-up)
  - [Advanced testing](#advanced-testing)
  - [Bringing up a cluster for testing](#bringing-up-a-cluster-for-testing)
  - [Federation e2e tests](#federation-e2e-tests)
  - [Configuring federation e2e
tests](#configuring-federation-e2e-tests)
  - [Image Push Repository](#image-push-repository)
  - [Build](#build)
  - [Deploy federation control plane](#deploy-federation-control-plane)
  - [Run the Tests](#run-the-tests)
  - [Teardown](#teardown)
  - [Shortcuts for test developers](#shortcuts-for-test-developers)
  - [Debugging clusters](#debugging-clusters)
  - [Local clusters](#local-clusters)
  - [Testing against local clusters](#testing-against-local-clusters)
  - [Version-skewed and upgrade testing](#version-skewed-and-upgrade-testing)
  - [Kinds of tests](#kinds-of-tests)
  - [Viper configuration and hierarchical test parameters.](#viper-configuration-and-hierarchichal-test-parameters)
  - [Conformance tests](#conformance-tests)
  - [Defining Conformance Subset](#defining-conformance-subset)
  - [Continuous Integration](#continuous-integration)
  - [What is CI?](#what-is-ci)
  - [What runs in CI?](#what-runs-in-ci)
  - [Non-default tests](#non-default-tests)
  - [The PR-builder](#the-pr-builder)
  - [Adding a test to CI](#adding-a-test-to-ci)
  - [Moving a test out of CI](#moving-a-test-out-of-ci)
  - [Performance Evaluation](#performance-evaluation)
  - [One More Thing](#one-more-thing)



## Overview

End-to-end (e2e) tests for Kubernetes provide a mechanism to test end-to-end
behavior of the system, and are the last signal to ensure end user operations
match developer specifications. Although unit and integration tests provide a
good signal, in a distributed system like Kubernetes it is not uncommon for a
minor change to pass all unit and integration tests yet cause unforeseen
changes at the system level.

The primary objectives of the e2e tests are to ensure consistent and reliable
behavior of the Kubernetes code base, and to catch hard-to-test bugs before
users do, when unit and integration tests are insufficient.
+ +The e2e tests in kubernetes are built atop of +[Ginkgo](http://onsi.github.io/ginkgo/) and +[Gomega](http://onsi.github.io/gomega/). There are a host of features that this +Behavior-Driven Development (BDD) testing framework provides, and it is +recommended that the developer read the documentation prior to diving into the + tests. + +The purpose of *this* document is to serve as a primer for developers who are +looking to execute or add tests using a local development environment. + +Before writing new tests or making substantive changes to existing tests, you +should also read [Writing Good e2e Tests](writing-good-e2e-tests.md) + +## Building and Running the Tests + +There are a variety of ways to run e2e tests, but we aim to decrease the number +of ways to run e2e tests to a canonical way: `hack/e2e.go`. + +You can run an end-to-end test which will bring up a master and nodes, perform +some tests, and then tear everything down. Make sure you have followed the +getting started steps for your chosen cloud platform (which might involve +changing the `KUBERNETES_PROVIDER` environment variable to something other than +"gce"). + +To build Kubernetes, up a cluster, run tests, and tear everything down, use: + +```sh +go run hack/e2e.go -v --build --up --test --down +``` + +If you'd like to just perform one of these steps, here are some examples: + +```sh +# Build binaries for testing +go run hack/e2e.go -v --build + +# Create a fresh cluster. 
# Deletes a cluster first, if it exists
go run hack/e2e.go -v --up

# Run all tests
go run hack/e2e.go -v --test

# Run tests matching the regex "\[Feature:Performance\]"
go run hack/e2e.go -v --test --test_args="--ginkgo.focus=\[Feature:Performance\]"

# Conversely, exclude tests that match the regex "Pods.*env"
go run hack/e2e.go -v --test --test_args="--ginkgo.skip=Pods.*env"

# Run tests in parallel, skip any that must be run serially
GINKGO_PARALLEL=y go run hack/e2e.go --v --test --test_args="--ginkgo.skip=\[Serial\]"

# Run tests in parallel, skip any that must be run serially and keep the test namespace if a test failed
GINKGO_PARALLEL=y go run hack/e2e.go --v --test --test_args="--ginkgo.skip=\[Serial\] --delete-namespace-on-failure=false"

# Flags can be combined, and their actions will take place in this order:
# --build, --up, --test, --down
#
# You can also specify an alternative provider, such as 'aws'
#
# e.g.:
KUBERNETES_PROVIDER=aws go run hack/e2e.go -v --build --up --test --down

# -ctl can be used to quickly call kubectl against your e2e cluster. Useful for
# cleaning up after a failed test or viewing logs. Use -v to avoid suppressing
# kubectl output.
go run hack/e2e.go -v -ctl='get events'
go run hack/e2e.go -v -ctl='delete pod foobar'
```

The tests are built into a single binary which can be used to deploy a
Kubernetes system or run tests against an already-deployed Kubernetes system.
See `go run hack/e2e.go --help` (or the flag definitions in `hack/e2e.go`) for
more options, such as reusing an existing cluster.
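When iterating on a single test, the focus flags above can be wrapped in a
small helper. This is only a sketch: the `e2e_args` function name is invented
for illustration, and it merely assembles the argument string so you can
inspect it before handing it to `go run hack/e2e.go`:

```shell
# e2e_args builds a hack/e2e.go argument list that runs only the tests
# whose names match the given ginkgo focus regex. Illustrative only; it
# prints the arguments and runs nothing itself.
e2e_args() {
  focus="$1"
  echo "-v --test --test_args=--ginkgo.focus=${focus}"
}

# Example: go run hack/e2e.go $(e2e_args 'Pods.*env')
e2e_args 'Pods.*env'
```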
+ +### Cleaning up + +During a run, pressing `control-C` should result in an orderly shutdown, but if +something goes wrong and you still have some VMs running you can force a cleanup +with this command: + +```sh +go run hack/e2e.go -v --down +``` + +## Advanced testing + +### Bringing up a cluster for testing + +If you want, you may bring up a cluster in some other manner and run tests +against it. To do so, or to do other non-standard test things, you can pass +arguments into Ginkgo using `--test_args` (e.g. see above). For the purposes of +brevity, we will look at a subset of the options, which are listed below: + +``` +--ginkgo.dryRun=false: If set, ginkgo will walk the test hierarchy without +actually running anything. Best paired with -v. + +--ginkgo.failFast=false: If set, ginkgo will stop running a test suite after a +failure occurs. + +--ginkgo.failOnPending=false: If set, ginkgo will mark the test suite as failed +if any specs are pending. + +--ginkgo.focus="": If set, ginkgo will only run specs that match this regular +expression. + +--ginkgo.skip="": If set, ginkgo will only run specs that do not match this +regular expression. + +--ginkgo.trace=false: If set, default reporter prints out the full stack trace +when a failure occurs + +--ginkgo.v=false: If set, default reporter print out all specs as they begin. + +--host="": The host, or api-server, to connect to + +--kubeconfig="": Path to kubeconfig containing embedded authinfo. + +--prom-push-gateway="": The URL to prometheus gateway, so that metrics can be +pushed during e2es and scraped by prometheus. Typically something like +127.0.0.1:9091. + +--provider="": The name of the Kubernetes provider (gce, gke, local, vagrant, +etc.) + +--repo-root="../../": Root directory of kubernetes repository, for finding test +files. +``` + +Prior to running the tests, you may want to first create a simple auth file in +your home directory, e.g. 
`$HOME/.kube/config`, with the following:

```
{
  "User": "root",
  "Password": ""
}
```

As mentioned earlier there are a host of other options that are available, but
they are left to the developer.

**NOTE:** If you are running tests on a local cluster repeatedly, you may need
to periodically perform some manual cleanup:

  - `rm -rf /var/run/kubernetes` to clear kube-generated credentials;
sometimes stale permissions can cause problems.

  - `sudo iptables -F` to clear iptables rules left by the kube-proxy.

### Federation e2e tests

By default, `e2e.go` provisions a single Kubernetes cluster, and any
`Feature:Federation` ginkgo tests will be skipped.

Federation e2e testing involves bringing up multiple "underlying" Kubernetes
clusters, and deploying the federation control plane as a Kubernetes
application on the underlying clusters.

The federation e2e tests are still managed via `e2e.go`, but require some extra
configuration items.

#### Configuring federation e2e tests

The following environment variables will enable federation e2e building,
provisioning and testing.

```sh
$ export FEDERATION=true
$ export E2E_ZONES="us-central1-a us-central1-b us-central1-f"
```

A Kubernetes cluster will be provisioned in each zone listed in `E2E_ZONES`. A
zone can only appear once in the `E2E_ZONES` list.

#### Image Push Repository

Next, specify the docker repository where your CI images will be pushed.

* **If `KUBERNETES_PROVIDER=gce` or `KUBERNETES_PROVIDER=gke`**:

  If you use the same GCP project to run the e2e tests as the container image
  repository, the FEDERATION_PUSH_REPO_BASE environment variable will default
  to "gcr.io/${DEFAULT_GCP_PROJECT_NAME}". You can skip ahead to the **Build**
  section.

  You can simply set your push repo base based on your project name, and the
  necessary repositories will be auto-created when you first push your
  container images.
+ + ```sh + $ export FEDERATION_PUSH_REPO_BASE="gcr.io/${GCE_PROJECT_NAME}" + ``` + + Skip ahead to the **Build** section. + +* **For all other providers**: + + You'll be responsible for creating and managing access to the repositories manually. + + ```sh + $ export FEDERATION_PUSH_REPO_BASE="quay.io/colin_hom" + ``` + + Given this example, the `federation-apiserver` container image will be pushed to the repository + `quay.io/colin_hom/federation-apiserver`. + + The docker client on the machine running `e2e.go` must have push access for the following pre-existing repositories: + + * `${FEDERATION_PUSH_REPO_BASE}/federation-apiserver` + * `${FEDERATION_PUSH_REPO_BASE}/federation-controller-manager` + + These repositories must allow public read access, as the e2e node docker daemons will not have any credentials. If you're using + GCE/GKE as your provider, the repositories will have read-access by default. + +#### Build + +* Compile the binaries and build container images: + + ```sh + $ KUBE_RELEASE_RUN_TESTS=n KUBE_FASTBUILD=true go run hack/e2e.go -v -build + ``` + +* Push the federation container images + + ```sh + $ build-tools/push-federation-images.sh + ``` + +#### Deploy federation control plane + +The following command will create the underlying Kubernetes clusters in each of `E2E_ZONES`, and then provision the +federation control plane in the cluster occupying the last zone in the `E2E_ZONES` list. + +```sh +$ go run hack/e2e.go -v --up +``` + +#### Run the Tests + +This will run only the `Feature:Federation` e2e tests. You can omit the `ginkgo.focus` argument to run the entire e2e suite. 

```sh
$ go run hack/e2e.go -v --test --test_args="--ginkgo.focus=\[Feature:Federation\]"
```

#### Teardown

```sh
$ go run hack/e2e.go -v --down
```

#### Shortcuts for test developers

* To speed up `e2e.go -up`, provision a single-node Kubernetes cluster in a
  single e2e zone:

  `NUM_NODES=1 E2E_ZONES="us-central1-f"`

  Keep in mind that some tests may require multiple underlying clusters and/or
  minimum compute resource availability.

* You can quickly recompile the e2e testing framework via `go install
  ./test/e2e`. This will not do anything besides allow you to verify that the
  go code compiles.

* If you want to run your e2e testing framework without re-provisioning the
  e2e setup, you can do so via `make WHAT=test/e2e/e2e.test` and then
  re-running the ginkgo tests.

* If you're hacking around with the federation control plane deployment itself,
  you can quickly re-deploy the federation control plane Kubernetes manifests
  without tearing any resources down. To re-deploy the federation control
  plane after running `-up` for the first time:

  ```sh
  $ federation/cluster/federation-up.sh
  ```

### Debugging clusters

If a cluster fails to initialize, or you'd like to better understand cluster
state to debug a failed e2e test, you can use the `cluster/log-dump.sh` script
to gather logs.

This script requires that the cluster provider supports ssh. Assuming it does,
running:

```
cluster/log-dump.sh
```

will ssh to the master and all nodes and download a variety of useful logs to
the provided directory (which should already exist).

The Google-run Jenkins builds automatically collect these logs for every
build, saving them in the `artifacts` directory uploaded to GCS.

### Local clusters

It can be much faster to iterate on a local cluster instead of a cloud-based
one.
To start a local cluster, you can run:
+
+```sh
+# The PATH construction is needed because PATH is one of the special-cased
+# environment variables not passed by sudo -E
+sudo PATH=$PATH hack/local-up-cluster.sh
+```
+
+This will start a single-node Kubernetes cluster that runs pods using the local
+docker daemon. Press Control-C to stop the cluster.
+
+You can generate a valid kubeconfig file by following the instructions printed at
+the end of the aforementioned script.
+
+#### Testing against local clusters
+
+In order to run an E2E test against a locally running cluster, point the tests
+at a custom host directly:
+
+```sh
+export KUBECONFIG=/path/to/kubeconfig
+export KUBE_MASTER_IP="http://127.0.0.1:"
+export KUBE_MASTER=local
+go run hack/e2e.go -v --test
+```
+
+To control the tests that are run:
+
+```sh
+go run hack/e2e.go -v --test --test_args="--ginkgo.focus=\"Secrets\""
+```
+
+### Version-skewed and upgrade testing
+
+We run version-skewed tests to check that newer versions of Kubernetes work
+similarly enough to older versions. The general strategy is to cover the following cases:
+
+1. One version of `kubectl` with another version of the cluster and tests (e.g.
+   that v1.2 and v1.4 `kubectl` don't break v1.3 tests running against a v1.3
+   cluster).
+1. A newer version of the Kubernetes master with older nodes and tests (e.g.
+   that upgrading a master to v1.3 with nodes at v1.2 still passes v1.2 tests).
+1. A newer version of the whole cluster with older tests (e.g. that a cluster
+   upgraded (master and nodes) to v1.3 still passes v1.2 tests).
+1. That an upgraded cluster functions the same as a brand-new cluster of the
+   same version (e.g. a cluster upgraded to v1.3 passes the same v1.3 tests as
+   a newly-created v1.3 cluster).
+
+[hack/jenkins/e2e-runner.sh](http://releases.k8s.io/HEAD/hack/jenkins/e2e-runner.sh) is
+the authoritative source on how to run version-skewed tests, but below is a
+quick-and-dirty tutorial. 
+
+```sh
+# Assume you have two copies of the Kubernetes repository checked out, at
+# ./kubernetes and ./kubernetes_old
+
+# If using GKE:
+export KUBERNETES_PROVIDER=gke
+export CLUSTER_API_VERSION=${OLD_VERSION}
+
+# Deploy a cluster at the old version; see above for more details
+cd ./kubernetes_old
+go run ./hack/e2e.go -v --up
+
+# Upgrade the cluster to the new version
+#
+# If using GKE, add --upgrade-target=${NEW_VERSION}
+#
+# You can target Feature:MasterUpgrade or Feature:ClusterUpgrade
+cd ../kubernetes
+go run ./hack/e2e.go -v --test --check_version_skew=false --test_args="--ginkgo.focus=\[Feature:MasterUpgrade\]"
+
+# Run old tests with new kubectl
+cd ../kubernetes_old
+go run ./hack/e2e.go -v --test --test_args="--kubectl-path=$(pwd)/../kubernetes/cluster/kubectl.sh"
+```
+
+If you are only testing version skew, you may want to deploy at one version and
+then test at another, instead of going through the whole upgrade process:
+
+```sh
+# With the same setup as above
+
+# Deploy a cluster at the new version
+cd ./kubernetes
+go run ./hack/e2e.go -v --up
+
+# Run new tests with old kubectl
+go run ./hack/e2e.go -v --test --test_args="--kubectl-path=$(pwd)/../kubernetes_old/cluster/kubectl.sh"
+
+# Run old tests with new kubectl
+cd ../kubernetes_old
+go run ./hack/e2e.go -v --test --test_args="--kubectl-path=$(pwd)/../kubernetes/cluster/kubectl.sh"
+```
+
+## Kinds of tests
+
+We are working on implementing clearer partitioning of our e2e tests to make
+running a known set of tests easier (#10548). Tests can be labeled with any of
+the following labels, in order of increasing precedence (that is, each label
+listed below supersedes the previous ones):
+
+ - If a test has no labels, it is expected to run fast (under five minutes), be
+able to be run in parallel, and be consistent.
+
+ - `[Slow]`: If a test takes more than five minutes to run (by itself or in
+parallel with many other tests), it is labeled `[Slow]`. 
This partition allows
+us to run almost all of our tests quickly in parallel, without waiting for the
+stragglers to finish.
+
+ - `[Serial]`: If a test cannot be run in parallel with other tests (e.g. it
+takes too many resources or restarts nodes), it is labeled `[Serial]`, and
+should be run in serial as part of a separate suite.
+
+ - `[Disruptive]`: If a test restarts components that might cause other tests
+to fail or break the cluster completely, it is labeled `[Disruptive]`. Any
+`[Disruptive]` test is also assumed to qualify for the `[Serial]` label, but
+need not be labeled as both. These tests are not run against soak clusters to
+avoid restarting components.
+
+ - `[Flaky]`: If a test is found to be flaky and we have decided that it's too
+hard to fix in the short term (e.g. it's going to take a full engineer-week), it
+receives the `[Flaky]` label until it is fixed. The `[Flaky]` label should be
+used very sparingly, and should be accompanied with a reference to the issue for
+de-flaking the test, because while a test remains labeled `[Flaky]`, it is not
+monitored closely in CI. `[Flaky]` tests are by default not run, unless a
+`focus` or `skip` argument is explicitly given.
+
+ - `[Feature:.+]`: If a test has non-default requirements to run or targets
+some non-core functionality, and thus should not be run as part of the standard
+suite, it receives a `[Feature:.+]` label, e.g. `[Feature:Performance]` or
+`[Feature:Ingress]`. `[Feature:.+]` tests are not run in our core suites,
+instead running in custom suites. If a feature is experimental or alpha and is
+not enabled by default due to being incomplete or potentially subject to
+breaking changes, it does *not* block the merge-queue, and thus should run in
+some separate test suites owned by the feature owner(s)
+(see [Continuous Integration](#continuous-integration) below).
+
+### Viper configuration and hierarchical test parameters
+
+The future of e2e test configuration idioms will be increasingly defined using viper, and decreasingly via flags.
+
+Flags in general fall apart once tests become sufficiently complicated. So, even if we could use another flag library, it wouldn't be ideal.
+
+To use viper, rather than flags, to configure your tests:
+
+- Just add an `e2e.json` file to the current directory, and define parameters in it, e.g. `"kubeconfig":"/tmp/x"`.
+
+Note that advanced testing parameters and hierarchically defined parameters are only defined in viper; to see what they are, you can dive into [TestContextType](../../test/e2e/framework/test_context.go).
+
+In time, it is our intent to add or autogenerate a sample viper configuration that includes all e2e parameters, to ship with Kubernetes.
+
+### Conformance tests
+
+Finally, `[Conformance]` tests represent a subset of the e2e-tests we expect to
+pass on **any** Kubernetes cluster. The `[Conformance]` label does not supersede
+any other labels.
+
+As each new release of Kubernetes provides new functionality, the subset of
+tests necessary to demonstrate conformance grows with each release. Conformance
+is thus considered versioned, with the same backwards compatibility guarantees
+as laid out in [our versioning policy](../design/versioning.md#supported-releases).
+Conformance tests for a given version should be run off of the release branch
+that corresponds to that version. Thus `v1.2` conformance tests would be run
+from the head of the `release-1.2` branch. For example:
+
+ - A v1.3 development cluster should pass v1.1, v1.2 conformance tests
+
+ - A v1.2 cluster should pass v1.1, v1.2 conformance tests
+
+ - A v1.1 cluster should pass v1.0, v1.1 conformance tests, and fail v1.2
+conformance tests
+
+Conformance tests are designed to be run with no cloud provider configured. 
+
+Conformance tests can be run against clusters that have not been created with
+`hack/e2e.go`; just provide a kubeconfig with the appropriate endpoint and
+credentials.
+
+```sh
+# setup for conformance tests
+export KUBECONFIG=/path/to/kubeconfig
+export KUBERNETES_CONFORMANCE_TEST=y
+export KUBERNETES_PROVIDER=skeleton
+
+# run all conformance tests
+go run hack/e2e.go -v --test --test_args="--ginkgo.focus=\[Conformance\]"
+
+# run all parallel-safe conformance tests in parallel
+GINKGO_PARALLEL=y go run hack/e2e.go -v --test --test_args="--ginkgo.focus=\[Conformance\] --ginkgo.skip=\[Serial\]"
+
+# ... and finish up with remaining tests in serial
+go run hack/e2e.go -v --test --test_args="--ginkgo.focus=\[Serial\].*\[Conformance\]"
+```
+
+### Defining Conformance Subset
+
+It is impossible to define the entire space of Conformance tests without knowing
+the future, so instead, we define the complement of conformance tests, below
+(`Please update this with companion PRs as necessary`):
+
+ - A conformance test cannot test cloud provider specific features (e.g. GCE
+monitoring, S3 buckets, ...)
+
+ - A conformance test cannot rely on any particular non-standard file system
+permissions granted to containers or users (e.g. sharing a writable host /tmp
+with a container)
+
+ - A conformance test cannot rely on any binaries that are not required for the
+Linux kernel or for a kubelet to run (e.g. git)
+
+ - A conformance test cannot test a feature which obviously cannot be supported
+on a broad range of platforms (e.g. testing of multiple disk mounts, GPUs, high
+density)
+
+## Continuous Integration
+
+A quick overview of how we run e2e CI on Kubernetes.
+
+### What is CI? 
+
+We run a battery of `e2e` tests against `HEAD` of the master branch on a
+continuous basis, and block merges via the [submit
+queue](http://submit-queue.k8s.io/) on a subset of those tests if they fail (the
+subset is defined in the
+[munger config](https://github.com/kubernetes/contrib/blob/master/mungegithub/mungers/submit-queue.go)
+via the `jenkins-jobs` flag; note we also block on the `kubernetes-build` and
+`kubernetes-test-go` jobs for build, unit, and integration tests).
+
+CI results can be found at [ci-test.k8s.io](http://ci-test.k8s.io), e.g.
+[ci-test.k8s.io/kubernetes-e2e-gce/10594](http://ci-test.k8s.io/kubernetes-e2e-gce/10594).
+
+### What runs in CI?
+
+We run all default tests (those that aren't marked `[Flaky]` or `[Feature:.+]`)
+against GCE and GKE. To minimize the time from regression-to-green-run, we
+partition tests across different jobs:
+
+ - `kubernetes-e2e-<provider>` runs all non-`[Slow]`, non-`[Serial]`,
+non-`[Disruptive]`, non-`[Flaky]`, non-`[Feature:.+]` tests in parallel.
+
+ - `kubernetes-e2e-<provider>-slow` runs all `[Slow]`, non-`[Serial]`,
+non-`[Disruptive]`, non-`[Flaky]`, non-`[Feature:.+]` tests in parallel.
+
+ - `kubernetes-e2e-<provider>-serial` runs all `[Serial]` and `[Disruptive]`,
+non-`[Flaky]`, non-`[Feature:.+]` tests in serial.
+
+We also run non-default tests if the tests exercise general-availability ("GA")
+features that require a special environment to run in, e.g.
+`kubernetes-e2e-gce-scalability` and `kubernetes-kubemark-gce`, which test for
+Kubernetes performance.
+
+#### Non-default tests
+
+There are many `[Feature:.+]` tests that we don't run in CI. These tests are for
+features that are experimental (often in the `experimental` API), and aren't
+enabled by default.
+
+### The PR-builder
+
+We also run a battery of tests against every PR before we merge it. These tests
+are equivalent to `kubernetes-gce`: they run all non-`[Slow]`, non-`[Serial]`,
+non-`[Disruptive]`, non-`[Flaky]`, non-`[Feature:.+]` tests in parallel. 
These
+tests are considered "smoke tests" to give a decent signal that the PR doesn't
+break most functionality. Results for your PR can be found at
+[pr-test.k8s.io](http://pr-test.k8s.io), e.g.
+[pr-test.k8s.io/20354](http://pr-test.k8s.io/20354) for #20354.
+
+### Adding a test to CI
+
+As mentioned above, prior to adding a new test, it is a good idea to perform a
+`-ginkgo.dryRun=true` on the system, in order to see if a behavior is already
+being tested, or to determine if it may be possible to augment an existing set
+of tests for a specific use case.
+
+If a behavior does not currently have coverage and a developer wishes to add a
+new e2e test, navigate to the ./test/e2e directory and create a new test using
+the existing suite as a guide.
+
+TODO(#20357): Create a self-documented example which has been disabled, but can
+be copied to create new tests and outlines the capabilities and libraries used.
+
+When writing a test, consult the [kinds of tests](#kinds_of_tests) section above
+to determine how your test should be marked (e.g. `[Slow]`, `[Serial]`;
+remember, by default we assume a test can run in parallel with other tests!).
+
+When first adding a test it should *not* go straight into CI, because failures
+block ordinary development. A test should only be added to CI after it has been
+running in some non-CI suite long enough to establish a track record showing
+that the test does not fail when run against *working* software. Note also that
+tests running in CI are generally running on a well-loaded cluster, so must
+contend for resources; see above about [kinds of tests](#kinds_of_tests).
+
+Generally, a feature starts as `experimental`, and will be run in some suite
+owned by the team developing the feature. If a feature is in beta or GA, it
+*should* block the merge-queue. In moving from experimental to beta or GA, tests
+that are expected to pass by default should simply remove the `[Feature:.+]`
+label, and will be incorporated into our core suites. 
If tests are not expected
+to pass by default (e.g. they require a special environment such as added
+quota), they should remain with the `[Feature:.+]` label, and the suites that
+run them should be incorporated into the
+[munger config](https://github.com/kubernetes/contrib/blob/master/mungegithub/mungers/submit-queue.go)
+via the `jenkins-jobs` flag.
+
+Occasionally, we'll want to add tests to better exercise features that are
+already GA. These tests also shouldn't go straight to CI. They should begin by
+being marked as `[Flaky]` to be run outside of CI, and once a track-record for
+them is established, they may be promoted out of `[Flaky]`.
+
+### Moving a test out of CI
+
+If we have determined that a test is known-flaky and cannot be fixed in the
+short-term, we may move it out of CI indefinitely. This move should be used
+sparingly, as it effectively means that we have no coverage of that test. When a
+test is demoted, it should be marked `[Flaky]` with a comment accompanying the
+label with a reference to an issue opened to fix the test.
+
+## Performance Evaluation
+
+Another benefit of the e2e tests is the ability to create reproducible loads on
+the system, which can then be used to determine responsiveness or to analyze
+other characteristics of the system. For example, the density tests load the
+system to 30, 50, or 100 pods per node and measure characteristics of the
+system such as throughput and API latency.
+
+For a good overview of how we analyze performance data, please read the
+following [post](http://blog.kubernetes.io/2015/09/kubernetes-performance-measurements-and.html).
+
+For developers who are interested in doing their own performance analysis, we
+recommend setting up [prometheus](http://prometheus.io/) for data collection,
+and using [promdash](http://prometheus.io/docs/visualization/promdash/) to
+visualize the data. 
There also exists the option of pushing your own metrics in
+from the tests using a
+[prom-push-gateway](http://prometheus.io/docs/instrumenting/pushing/).
+Containers for all of these components can be found
+[here](https://hub.docker.com/u/prom/).
+
+For more accurate measurements, you may wish to set up prometheus external to
+kubernetes in an environment where it can access the major system components
+(api-server, controller-manager, scheduler). This is especially useful when
+attempting to gather metrics in a load-balanced api-server environment, because
+all api-servers can be analyzed independently as well as collectively. On
+startup, a configuration file is passed to prometheus that specifies the
+endpoints that prometheus will scrape, as well as the sampling interval.
+
+```
+#prometheus.conf
+job: {
+  name: "kubernetes"
+  scrape_interval: "1s"
+  target_group: {
+    # apiserver(s)
+    target: "http://localhost:8080/metrics"
+    # scheduler
+    target: "http://localhost:10251/metrics"
+    # controller-manager
+    target: "http://localhost:10252/metrics"
+  }
+}
+```
+
+Once prometheus is scraping the kubernetes endpoints, that data can then be
+plotted using promdash, and alerts can be created against the assortment of
+metrics that kubernetes provides.
+
+## One More Thing
+
+You should also know the [testing conventions](coding-conventions.md#testing-conventions).
+
+**HAPPY TESTING!**
+
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/e2e-tests.md?pixel)]()
+
diff --git a/devel/faster_reviews.md b/devel/faster_reviews.md
new file mode 100644
index 00000000..85568d3f
--- /dev/null
+++ b/devel/faster_reviews.md
@@ -0,0 +1,218 @@
+# How to get faster PR reviews
+
+Most of what is written here is not at all specific to Kubernetes, but it bears
+being written down in the hope that it will occasionally remind people of "best
+practices" around code reviews.
+
+You've just had a brilliant idea on how to make Kubernetes better. 
Let's call +that idea "Feature-X". Feature-X is not even that complicated. You have a pretty +good idea of how to implement it. You jump in and implement it, fixing a bunch +of stuff along the way. You send your PR - this is awesome! And it sits. And +sits. A week goes by and nobody reviews it. Finally someone offers a few +comments, which you fix up and wait for more review. And you wait. Another +week or two goes by. This is horrible. + +What went wrong? One particular problem that comes up frequently is this - your +PR is too big to review. You've touched 39 files and have 8657 insertions. When +your would-be reviewers pull up the diffs they run away - this PR is going to +take 4 hours to review and they don't have 4 hours right now. They'll get to it +later, just as soon as they have more free time (ha!). + +Let's talk about how to avoid this. + +## 0. Familiarize yourself with project conventions + +* [Development guide](development.md) +* [Coding conventions](coding-conventions.md) +* [API conventions](api-conventions.md) +* [Kubectl conventions](kubectl-conventions.md) + +## 1. Don't build a cathedral in one PR + +Are you sure Feature-X is something the Kubernetes team wants or will accept, or +that it is implemented to fit with other changes in flight? Are you willing to +bet a few days or weeks of work on it? If you have any doubt at all about the +usefulness of your feature or the design - make a proposal doc (in +docs/proposals; for example [the QoS proposal](http://prs.k8s.io/11713)) or a +sketch PR (e.g., just the API or Go interface) or both. Write or code up just +enough to express the idea and the design and why you made those choices, then +get feedback on this. Be clear about what type of feedback you are asking for. +Now, if we ask you to change a bunch of facets of the design, you won't have to +re-write it all. + +## 2. Smaller diffs are exponentially better + +Small PRs get reviewed faster and are more likely to be correct than big ones. 
+Let's face it - attention wanes over time. If your PR takes 60 minutes to +review, I almost guarantee that the reviewer's eye for detail is not as keen in +the last 30 minutes as it was in the first. This leads to multiple rounds of +review when one might have sufficed. In some cases the review is delayed in its +entirety by the need for a large contiguous block of time to sit and read your +code. + +Whenever possible, break up your PRs into multiple commits. Making a series of +discrete commits is a powerful way to express the evolution of an idea or the +different ideas that make up a single feature. There's a balance to be struck, +obviously. If your commits are too small they become more cumbersome to deal +with. Strive to group logically distinct ideas into separate commits. + +For example, if you found that Feature-X needed some "prefactoring" to fit in, +make a commit that JUST does that prefactoring. Then make a new commit for +Feature-X. Don't lump unrelated things together just because you didn't think +about prefactoring. If you need to, fork a new branch, do the prefactoring +there and send a PR for that. If you can explain why you are doing seemingly +no-op work ("it makes the Feature-X change easier, I promise") we'll probably be +OK with it. + +Obviously, a PR with 25 commits is still very cumbersome to review, so use +common sense. + +## 3. Multiple small PRs are often better than multiple commits + +If you can extract whole ideas from your PR and send those as PRs of their own, +you can avoid the painful problem of continually rebasing. Kubernetes is a +fast-moving codebase - lock in your changes ASAP, and make merges be someone +else's problem. + +Obviously, we want every PR to be useful on its own, so you'll have to use +common sense in deciding what can be a PR vs. what should be a commit in a larger +PR. 
+ Rule of thumb - if this commit or set of commits is directly related to
+Feature-X and nothing else, it should probably be part of the Feature-X PR. If
+you can plausibly imagine someone finding value in this commit outside of
+Feature-X, try it as a PR.
+
+Don't worry about flooding us with PRs. We'd rather have 100 small, obvious PRs
+than 10 unreviewable monoliths.
+
+## 4. Don't rename, reformat, comment, etc in the same PR
+
+Often, as you are implementing Feature-X, you find things that are just wrong.
+Bad comments, poorly named functions, bad structure, weak type-safety. You
+should absolutely fix those things (or at least file issues, please) - but not
+in this PR. See the above points - break unrelated changes out into different
+PRs or commits. Otherwise your diff will have WAY too many changes, and your
+reviewer won't see the forest because of all the trees.
+
+## 5. Comments matter
+
+Read up on GoDoc - follow those general rules. If you're writing code and you
+think there is any possible chance that someone might not understand why you did
+something (or that you won't remember what you yourself did), comment it. If
+you think there's something pretty obvious that we could follow up on, add a
+TODO. Many code-review comments are about this exact issue.
+
+## 6. Tests are almost always required
+
+Nothing is more frustrating than doing a review, only to find that the tests are
+inadequate or even entirely absent. Very few PRs can touch code and NOT touch
+tests. If you don't know how to test Feature-X - ask! We'll be happy to help
+you design things for easy testing or to suggest appropriate test cases.
+
+## 7. Look for opportunities to generify
+
+If you find yourself writing something that touches a lot of modules, think hard
+about the dependencies you are introducing between packages. Can some of what
+you're doing be made more generic and moved up and out of the Feature-X package? 
+
+Do you need to use a function or type from an otherwise unrelated package? If
+so, promote! We have places specifically for hosting more generic code.
+
+Likewise if Feature-X is similar in form to Feature-W which was checked in last
+month and it happens to exactly duplicate some tricky stuff from Feature-W,
+consider prefactoring core logic out and using it in both Feature-W and
+Feature-X. But do that in a different commit or PR, please.
+
+## 8. Fix feedback in a new commit
+
+Your reviewer has finally sent you some feedback on Feature-X. You make a bunch
+of changes and ... what? You could patch those into your commits with git
+"squash" or "fixup" logic. But that makes your changes hard to verify. Unless
+your whole PR is pretty trivial, you should instead put your fixups into a new
+commit and re-push. Your reviewer can then look at that commit on its own - so
+much faster to review than starting over.
+
+We might still ask you to clean up your commits at the very end, for the sake
+of a more readable history, but don't do this until asked, typically at the
+point where the PR would otherwise be tagged LGTM.
+
+General squashing guidelines:
+
+* Sausage => squash
+
+ When there are several commits to fix bugs in the original commit(s), address
+reviewer feedback, etc. Really we only want to see the end state and commit
+message for the whole PR.
+
+* Layers => don't squash
+
+ When there are independent changes layered upon each other to achieve a single
+goal. For instance, writing a code munger could be one commit, applying it could
+be another, and adding a precommit check could be a third. One could argue they
+should be separate PRs, but there's really no way to test/review the munger
+without seeing it applied, and there needs to be a precommit check to ensure the
+munged output doesn't immediately get out of date.
+
+A commit, as much as possible, should be a single logical change. 
Each commit
+should always have a good title line (<70 characters) and include an additional
+description paragraph describing in more detail the change intended. Do not link
+pull requests by `#` in a commit description, because GitHub creates lots of
+spam. Instead, reference other PRs via the PR your commit is in.
+
+## 9. KISS, YAGNI, MVP, etc
+
+Sometimes we need to remind each other of core tenets of software design - Keep
+It Simple, You Aren't Gonna Need It, Minimum Viable Product, and so on. Adding
+features "because we might need it later" is antithetical to software that
+ships. Add the things you need NOW and (ideally) leave room for things you
+might need later - but don't implement them now.
+
+## 10. Push back
+
+We understand that it is hard to imagine, but sometimes we make mistakes. It's
+OK to push back on changes requested during a review. If you have a good reason
+for doing something a certain way, you are absolutely allowed to debate the
+merits of a requested change. You might be overruled, but you might also
+prevail. We're mostly pretty reasonable people. Mostly.
+
+## 11. I'm still getting stalled - help?!
+
+So, you've done all that and you still aren't getting any PR love? Here are some
+things you can do that might help kick a stalled process along:
+
+ * Make sure that your PR has an assigned reviewer (assignee in GitHub). If
+this is not the case, reply to the PR comment stream asking for one to be
+assigned.
+
+ * Ping the assignee (@username) on the PR comment stream asking for an
+estimate of when they can get to it.
+
+ * Ping the assignee by email (many of us have email addresses that are well
+published or are the same as our GitHub handle @google.com or @redhat.com).
+
+ * Ping the [team](https://github.com/orgs/kubernetes/teams) (via @team-name)
+that works in the area you're submitting code. 
+ +If you think you have fixed all the issues in a round of review, and you haven't +heard back, you should ping the reviewer (assignee) on the comment stream with a +"please take another look" (PTAL) or similar comment indicating you are done and +you think it is ready for re-review. In fact, this is probably a good habit for +all PRs. + +One phenomenon of open-source projects (where anyone can comment on any issue) +is the dog-pile - your PR gets so many comments from so many people it becomes +hard to follow. In this situation you can ask the primary reviewer (assignee) +whether they want you to fork a new PR to clear out all the comments. Remember: +you don't HAVE to fix every issue raised by every person who feels like +commenting, but you should at least answer reasonable comments with an +explanation. + +## Final: Use common sense + +Obviously, none of these points are hard rules. There is no document that can +take the place of common sense and good taste. Use your best judgment, but put +a bit of thought into how your work can be made easier to review. If you do +these things your PRs will flow much more easily. + + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/faster_reviews.md?pixel)]() + diff --git a/devel/flaky-tests.md b/devel/flaky-tests.md new file mode 100644 index 00000000..9656bd5f --- /dev/null +++ b/devel/flaky-tests.md @@ -0,0 +1,194 @@ +# Flaky tests + +Any test that fails occasionally is "flaky". Since our merges only proceed when +all tests are green, and we have a number of different CI systems running the +tests in various combinations, even a small percentage of flakes results in a +lot of pain for people waiting for their PRs to merge. + +Therefore, it's very important that we write tests defensively. Situations that +"almost never happen" happen with some regularity when run thousands of times in +resource-constrained environments. 
Since flakes can often be quite hard to
+reproduce while still being common enough to block merges occasionally, it's
+additionally important that the test logs be useful for narrowing down exactly
+what caused the failure.
+
+Note that flakes can occur in unit tests, integration tests, or end-to-end
+tests, but probably occur most commonly in end-to-end tests.
+
+## Filing issues for flaky tests
+
+Because flakes may be rare, it's very important that all relevant logs be
+discoverable from the issue.
+
+1. Search for the test name. If you find an open issue and you're 90% sure the
+   flake is exactly the same, add a comment instead of making a new issue.
+2. If you make a new issue, you should title it with the test name, prefixed by
+   "e2e/unit/integration flake:" (whichever is appropriate).
+3. Reference any old issues you found in step one. Also, make a comment in the
+   old issue referencing your new issue, because people monitoring only their
+   email do not see the backlinks GitHub adds. Alternatively, tag the person or
+   people who most recently worked on it.
+4. Paste, in block quotes, the entire log of the individual failing test, not
+   just the failure line.
+5. Link to durable storage with the rest of the logs. This means (for all the
+   tests that Google runs) the GCS link is mandatory! The Jenkins test result
+   link is nice but strictly optional: not only does it expire more quickly,
+   it's not accessible to non-Googlers.
+
+## Finding filed flaky test cases
+
+Find flaky test issues on GitHub under the [kind/flake issue label][flake].
+There are significant numbers of flaky tests reported on a regular basis and P2
+flakes are under-investigated. Fixing flakes is a quick way to gain expertise
+and community goodwill. 
+
+[flake]: https://github.com/kubernetes/kubernetes/issues?q=is%3Aopen+is%3Aissue+label%3Akind%2Fflake
+
+## Expectations when a flaky test is assigned to you
+
+Note that we won't randomly assign these issues to you unless you've opted in or
+you're part of a group that has opted in. We are more than happy to accept help
+from anyone in fixing these, but due to the severity of the problem when merges
+are blocked, we need reasonably quick turn-around time on test flakes. Therefore
+we have the following guidelines:
+
+1. If a flaky test is assigned to you, it's more important than anything else
+   you're doing unless you can get a special dispensation (in which case it will
+   be reassigned). If you have too many flaky tests assigned to you, or you
+   have such a dispensation, then it's *still* your responsibility to find new
+   owners (this may just mean giving stuff back to the relevant Team or SIG Lead).
+2. You should make a reasonable effort to reproduce it. Somewhere between an
+   hour and half a day of concentrated effort is "reasonable". It is perfectly
+   reasonable to ask for help!
+3. If you can reproduce it (or it's obvious from the logs what happened), you
+   should then be able to fix it, or in the case where someone is clearly more
+   qualified to fix it, reassign it with very clear instructions.
+4. PRs that fix or help debug flakes may have the P0 priority set to get them
+   through the merge queue as fast as possible.
+5. Once you have made a change that you believe fixes a flake, it is conservative
+   to keep the issue for the flake open and see if it manifests again after the
+   change is merged.
+6. If you can't reproduce a flake: __don't just close it!__ Every time a flake comes
+   back, at least 2 hours of merge time is wasted. So we need to make monotonic
+   progress towards narrowing it down every time a flake occurs. If you can't
+   figure it out from the logs, add log messages that would have helped you figure
+   it out. 
If you make changes to make a flake more reproducible, please link
+   your pull request to the flake you're working on.
+7. If a flake has been open, could not be reproduced, and has not manifested in
+   3 months, it is reasonable to close the flake issue with a note saying
+   why.
+
+# Reproducing unit test flakes
+
+Try the [stress command](https://godoc.org/golang.org/x/tools/cmd/stress).
+
+Install it:
+
+```
+$ go install golang.org/x/tools/cmd/stress
+```
+
+Then build your test binary:
+
+```
+$ go test -c -race
+```
+
+Then run it under stress:
+
+```
+$ stress ./package.test -test.run=FlakyTest
+```
+
+It runs the command and writes output to `/tmp/gostress-*` files when it fails.
+It periodically reports with run counts. Be careful with tests that use the
+`net/http/httptest` package; they could exhaust the available ports on your
+system!
+
+# Hunting flaky unit tests in Kubernetes
+
+Sometimes unit tests are flaky. This means that due to (usually) race
+conditions, they will occasionally fail, even though most of the time they pass.
+
+We have a goal of 99.9% flake-free tests. This means that there is only one
+flake in one thousand runs of a test.
+
+Running a test 1000 times on your own machine can be tedious and time-consuming.
+Fortunately, there is a better way to achieve this using Kubernetes.
+
+_Note: these instructions are mildly hacky for now; as we get run-once semantics
+and better logging, they will improve._
+
+There is a testing image `brendanburns/flake` on Docker Hub. We will use
+this image to test our fix.
+
+Create a replication controller with the following config:
+
+```yaml
+apiVersion: v1
+kind: ReplicationController
+metadata:
+  name: flakecontroller
+spec:
+  replicas: 24
+  template:
+    metadata:
+      labels:
+        name: flake
+    spec:
+      containers:
+      - name: flake
+        image: brendanburns/flake
+        env:
+        - name: TEST_PACKAGE
+          value: pkg/tools
+        - name: REPO_SPEC
+          value: https://github.com/kubernetes/kubernetes
+```
+
+Note that we omit the labels and the selector fields of the replication
+controller, because they will be populated from the labels field of the pod
+template by default.
+
+```sh
+kubectl create -f ./controller.yaml
+```
+
+This will spin up 24 instances of the test. They will run to completion, then
+exit, and the kubelet will restart them, accumulating more and more runs of the
+test.
+
+You can examine the recent runs of the test by calling `docker ps -a` and
+looking for tasks that exited with non-zero exit codes. Unfortunately,
+`docker ps -a` only keeps around the exit status of the last 15-20 containers with the
+same image, so you have to check them frequently.
+
+You can use this script to automate checking for failures, assuming your cluster
+is running on GCE and has four nodes:
+
+```sh
+echo "" > output.txt
+for i in {1..4}; do
+  echo "Checking kubernetes-node-${i}"
+  echo "kubernetes-node-${i}:" >> output.txt
+  gcloud compute ssh "kubernetes-node-${i}" --command="sudo docker ps -a" >> output.txt
+done
+grep "Exited ([^0])" output.txt
+```
+
+Eventually you will have sufficient runs for your purposes. At that point you
+can delete the replication controller by running:
+
+```sh
+kubectl delete replicationcontroller flakecontroller
+```
+
+If you do a final check for flakes with `docker ps -a`, ignore tasks that
+exited -1, since that's what happens when you stop the replication controller.
+
+Happy flake hunting!
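Before reaching for a cluster at all, a rough failure rate can be measured with a few lines of shell. This is only a sketch that assumes your test is invocable as a single command; `flake_rate` is an illustrative helper, not a Kubernetes script:

```shell
# Run a command N times and report how often it failed.
# Usage: flake_rate <runs> <command> [args...]
flake_rate() {
    runs=$1; shift
    fails=0
    i=0
    while [ "$i" -lt "$runs" ]; do
        "$@" >/dev/null 2>&1 || fails=$((fails + 1))
        i=$((i + 1))
    done
    echo "$fails/$runs runs failed"
}

# Demo with a command that always fails:
flake_rate 10 false
# prints: 10/10 runs failed
```

In practice you would point it at the suspect test, e.g. `flake_rate 100 go test -run=FlakyTest ./pkg/tools`.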
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/flaky-tests.md?pixel)]()
+
diff --git a/devel/generating-clientset.md b/devel/generating-clientset.md
new file mode 100644
index 00000000..cbb6141c
--- /dev/null
+++ b/devel/generating-clientset.md
@@ -0,0 +1,41 @@
+# Generation and release cycle of clientset
+
+Client-gen is an automatic tool that generates a [clientset](../../docs/proposals/client-package-structure.md#high-level-client-sets) based on API types. This doc introduces the use of client-gen, and the release cycle of the generated clientsets.
+
+## Using client-gen
+
+The workflow includes three steps:
+
+1. Marking API types with tags: in `pkg/apis/${GROUP}/${VERSION}/types.go`, mark the types (e.g., Pods) that you want to generate clients for with the `// +genclient=true` tag. If the resource associated with the type is not namespace scoped (e.g., PersistentVolume), you need to append the `nonNamespaced=true` tag as well.
+
+2.
+  - a. If you are developing in the k8s.io/kubernetes repository, you just need to run hack/update-codegen.sh.
+
+  - b. If you are running client-gen outside of k8s.io/kubernetes, you need to use the command line argument `--input` to specify the groups and versions of the APIs you want to generate clients for; client-gen will then look into `pkg/apis/${GROUP}/${VERSION}/types.go` and generate clients for the types you have marked with the `genclient` tags. For example, to generate a clientset named "my_release" including clients for api/v1 objects and extensions/v1beta1 objects, you need to run:
+
+```
+$ client-gen --input="api/v1,extensions/v1beta1" --clientset-name="my_release"
+```
+
+3. ***Adding expansion methods***: client-gen only generates the common methods, such as CRUD. You can manually add additional methods through the expansion interface.
For example, this [file](../../pkg/client/clientset_generated/release_1_5/typed/core/v1/pod_expansion.go) adds additional methods to Pod's client. As a convention, we put the expansion interface and its methods in file ${TYPE}_expansion.go. In most cases, you don't want to remove existing expansion files. So to make life easier, instead of creating a new clientset from scratch, ***you can copy and rename an existing clientset (so that all the expansion files are copied)***, and then run client-gen.
+
+## Output of client-gen
+
+- clientset: the clientset will be generated at `pkg/client/clientset_generated/` by default, and you can change the path via the `--clientset-path` command line argument.
+
+- Individual typed clients and client for group: They will be generated at `pkg/client/clientset_generated/${clientset_name}/typed/generated/${GROUP}/${VERSION}/`
+
+## Released clientsets
+
+If you are contributing code to k8s.io/kubernetes, try to use the release_X_Y clientset in this [directory](../../pkg/client/clientset_generated/).
+
+If you need a stable Go client to build your own project, please refer to the [client-go repository](https://github.com/kubernetes/client-go).
+
+We are migrating k8s.io/kubernetes to use client-go as well, see issue [#35159](https://github.com/kubernetes/kubernetes/issues/35159).
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/generating-clientset.md?pixel)]()
+
diff --git a/devel/getting-builds.md b/devel/getting-builds.md
new file mode 100644
index 00000000..86563390
--- /dev/null
+++ b/devel/getting-builds.md
@@ -0,0 +1,52 @@
+# Getting Kubernetes Builds
+
+You can use [hack/get-build.sh](http://releases.k8s.io/HEAD/hack/get-build.sh)
+to get a build or to use as a reference on how to get the most recent builds
+with curl.
With `get-build.sh` you can grab the most recent stable build, the
+most recent release candidate, or the most recent build to pass our CI and GCE
+e2e tests (essentially a nightly build).
+
+Run `./hack/get-build.sh -h` for its usage.
+
+To get a build at a specific version (v1.1.1) use:
+
+```console
+./hack/get-build.sh v1.1.1
+```
+
+To get the latest stable release:
+
+```console
+./hack/get-build.sh release/stable
+```
+
+Use the "-v" option to print the version number of a build without retrieving
+it. For example, the following prints the version number for the latest CI
+build:
+
+```console
+./hack/get-build.sh -v ci/latest
+```
+
+You can also use the gsutil tool to explore the Google Cloud Storage release
+buckets. Here are some examples:
+
+```sh
+gsutil cat gs://kubernetes-release-dev/ci/latest.txt # output the latest ci version number
+gsutil cat gs://kubernetes-release-dev/ci/latest-green.txt # output the latest ci version number that passed gce e2e
+gsutil ls gs://kubernetes-release-dev/ci/v0.20.0-29-g29a55cc/ # list the contents of a ci release
+gsutil ls gs://kubernetes-release/release # list all official releases and rcs
+```
+
+## Install `gsutil`
+
+Example installation:
+
+```console
+$ curl -sSL https://storage.googleapis.com/pub/gsutil.tar.gz | sudo tar -xz -C /usr/local/src
+$ sudo ln -s /usr/local/src/gsutil/gsutil /usr/bin/gsutil
+```
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/getting-builds.md?pixel)]()
+
diff --git a/devel/git_workflow.png b/devel/git_workflow.png
new file mode 100644
index 00000000..80a66248
Binary files /dev/null and b/devel/git_workflow.png differ
diff --git a/devel/go-code.md b/devel/go-code.md
new file mode 100644
index 00000000..2af055f4
--- /dev/null
+++ b/devel/go-code.md
@@ -0,0 +1,32 @@
+# Kubernetes Go Tools and Tips
+
+Kubernetes is one of the largest open source Go projects, so good tooling and a solid understanding of
This document provides a collection of resources, tools +and tips that our developers have found useful. + +## Recommended Reading + +- [Kubernetes Go development environment](development.md#go-development-environment) +- [The Go Spec](https://golang.org/ref/spec) - The Go Programming Language + Specification. +- [Go Tour](https://tour.golang.org/welcome/2) - Official Go tutorial. +- [Effective Go](https://golang.org/doc/effective_go.html) - A good collection of Go advice. +- [Kubernetes Code conventions](coding-conventions.md) - Style guide for Kubernetes code. +- [Three Go Landmines](https://gist.github.com/lavalamp/4bd23295a9f32706a48f) - Surprising behavior in the Go language. These have caused real bugs! + +## Recommended Tools + +- [godep](https://github.com/tools/godep) - Used for Kubernetes dependency management. See also [Kubernetes godep and dependency management](development.md#godep-and-dependency-management) +- [Go Version Manager](https://github.com/moovweb/gvm) - A handy tool for managing Go versions. +- [godepq](https://github.com/google/godepq) - A tool for analyzing go import trees. + +## Go Tips + +- [Godoc bookmarklet](https://gist.github.com/timstclair/c891fb8aeb24d663026371d91dcdb3fc) - navigate from a github page to the corresponding godoc page. +- Consider making a separate Go tree for each project, which can make overlapping dependency management much easier. Remember to set the `$GOPATH` correctly! Consider [scripting](https://gist.github.com/timstclair/17ca792a20e0d83b06dddef7d77b1ea0) this. 
- Emacs users - set up [go-mode](https://github.com/dominikh/go-mode.el)
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/go-code.md?pixel)]()
+
diff --git a/devel/godep.md b/devel/godep.md
new file mode 100644
index 00000000..ddd6c5b1
--- /dev/null
+++ b/devel/godep.md
@@ -0,0 +1,123 @@
+# Using godep to manage dependencies
+
+This document is intended to show a way to manage `vendor/` tree dependencies
+in Kubernetes. If you are not planning on managing `vendor` dependencies, see
+[Godep dependency management](development.md#godep-dependency-management) instead.
+
+## Alternate GOPATH for installing and using godep
+
+There are many ways to build and host Go binaries. Here is one way to get
+utilities like `godep` installed:
+
+Create a new GOPATH just for your Go tools and install godep:
+
+```sh
+export GOPATH=$HOME/go-tools
+mkdir -p $GOPATH
+go get -u github.com/tools/godep
+```
+
+Add `$GOPATH/bin` to your path. Typically you'd add this to your `~/.profile`:
+
+```sh
+export GOPATH=$HOME/go-tools
+export PATH=$PATH:$GOPATH/bin
+```
+
+## Using godep
+
+Here's a quick walkthrough of one way to use godep to add or update a
+Kubernetes dependency into `vendor/`. For more details, please see the
+instructions in [godep's documentation](https://github.com/tools/godep).
+
+1) Devote a directory to this endeavor:
+
+_Devoting a separate directory is not strictly required, but it is helpful to
+separate dependency updates from other changes._
+
+```sh
+export KPATH=$HOME/code/kubernetes
+mkdir -p $KPATH/src/k8s.io
+cd $KPATH/src/k8s.io
+git clone https://github.com/$YOUR_GITHUB_USERNAME/kubernetes.git # assumes your fork is 'kubernetes'
+# Or copy your existing local repo here. IMPORTANT: making a symlink doesn't work.
+```
+
+2) Set up your GOPATH.
+
+```sh
+# This will *not* let your local builds see packages that exist elsewhere on your system.
+export GOPATH=$KPATH
+```
+
+3) Populate your new GOPATH.
+
+```sh
+cd $KPATH/src/k8s.io/kubernetes
+godep restore
+```
+
+4) Next, you can either add a new dependency or update an existing one.
+
+To add a new dependency is simple (if a bit slow):
+
+```sh
+cd $KPATH/src/k8s.io/kubernetes
+DEP=example.com/path/to/dependency
+godep get $DEP/...
+# Now change code in Kubernetes to use the dependency.
+./hack/godep-save.sh
+```
+
+To update an existing dependency is a bit more complicated. Godep has an
+`update` command, but none of us can figure out how to actually make it work.
+Instead, this procedure seems to work reliably:

+
+```sh
+cd $KPATH/src/k8s.io/kubernetes
+DEP=example.com/path/to/dependency
+# NB: For the next step, $DEP is assumed to be the repo root. If it is actually a
+# subdir of the repo, use the repo root here. This is required to keep godep
+# from getting angry because `godep restore` left the tree in a "detached head"
+# state.
+rm -rf $KPATH/src/$DEP # repo root
+godep get $DEP/...
+# Change code in Kubernetes, if necessary.
+rm -rf Godeps
+rm -rf vendor
+./hack/godep-save.sh
+git checkout -- $(git status -s | grep "^ D" | awk '{print $2}' | grep ^Godeps)
+```
+
+_If `go get -u path/to/dependency` fails with compilation errors, instead try
+`go get -d -u path/to/dependency` to fetch the dependencies without compiling
+them. This is unusual, but has been observed._
+
+After all of this is done, `git status` should show you what files have been
+modified and added/removed. Make sure to `git add` and `git rm` them. It is
+commonly advised to make one `git commit` which includes just the dependency
+update and Godeps files, and another `git commit` that includes changes to
+Kubernetes code to use the new/updated dependency. These commits can go into a
+single pull request.
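The two-commit convention can be seen end to end in a throwaway repository. This is a sketch only; the paths, commit messages, and throwaway identity are illustrative, not real Kubernetes changes:

```shell
# Demonstrate the "dependency commit + code commit" split in a scratch repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"   # throwaway identity for the demo
git config user.name "dev"
mkdir -p Godeps vendor/example.com pkg
echo '{}' > Godeps/Godeps.json
echo 'package dep' > vendor/example.com/dep.go
git add Godeps vendor
git commit -qm "Bump example.com dependency"   # commit 1: Godeps/vendor only
echo 'package mypkg' > pkg/code.go
git add pkg
git commit -qm "Use new example.com API"       # commit 2: Kubernetes code
git rev-list --count HEAD
# prints: 2
```

Both commits then go into the same pull request, which keeps the mechanical vendor churn reviewable separately from the actual code change.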
+
+5) Before sending your PR, it's a good idea to sanity check that your
+Godeps.json file and the contents of `vendor/` are OK by running `hack/verify-godeps.sh`.
+
+_If `hack/verify-godeps.sh` fails after a `godep update`, it is possible that a
+transitive dependency was added or removed but not updated by godep. It then
+may be necessary to run `hack/godep-save.sh` to pick up the transitive
+dependency changes._
+
+It is sometimes expedient to manually fix the /Godeps/Godeps.json file to
+minimize the changes. However without great care this can lead to failures
+with `hack/verify-godeps.sh`. This must pass for every PR.
+
+6) If you updated the Godeps, please also update `Godeps/LICENSES` by running
+`hack/update-godep-licenses.sh`.
+
+
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/godep.md?pixel)]()
+
diff --git a/devel/gubernator-images/filterpage.png b/devel/gubernator-images/filterpage.png
new file mode 100644
index 00000000..2d08bd8e
Binary files /dev/null and b/devel/gubernator-images/filterpage.png differ
diff --git a/devel/gubernator-images/filterpage1.png b/devel/gubernator-images/filterpage1.png
new file mode 100644
index 00000000..838cb0fa
Binary files /dev/null and b/devel/gubernator-images/filterpage1.png differ
diff --git a/devel/gubernator-images/filterpage2.png b/devel/gubernator-images/filterpage2.png
new file mode 100644
index 00000000..63da782e
Binary files /dev/null and b/devel/gubernator-images/filterpage2.png differ
diff --git a/devel/gubernator-images/filterpage3.png b/devel/gubernator-images/filterpage3.png
new file mode 100644
index 00000000..33066d78
Binary files /dev/null and b/devel/gubernator-images/filterpage3.png differ
diff --git a/devel/gubernator-images/skipping1.png b/devel/gubernator-images/skipping1.png
new file mode 100644
index 00000000..a5dea440
Binary files /dev/null and b/devel/gubernator-images/skipping1.png differ
diff --git a/devel/gubernator-images/skipping2.png
b/devel/gubernator-images/skipping2.png
new file mode 100644
index 00000000..b133347e
Binary files /dev/null and b/devel/gubernator-images/skipping2.png differ
diff --git a/devel/gubernator-images/testfailures.png b/devel/gubernator-images/testfailures.png
new file mode 100644
index 00000000..1b331248
Binary files /dev/null and b/devel/gubernator-images/testfailures.png differ
diff --git a/devel/gubernator.md b/devel/gubernator.md
new file mode 100644
index 00000000..3fd2e445
--- /dev/null
+++ b/devel/gubernator.md
@@ -0,0 +1,142 @@
+# Gubernator
+
+*This document is oriented at developers who want to use Gubernator to debug while developing for Kubernetes.*
+
+
+
+- [Gubernator](#gubernator)
+  - [What is Gubernator?](#what-is-gubernator)
+  - [Gubernator Features](#gubernator-features)
+    - [Test Failures list](#test-failures-list)
+    - [Log Filtering](#log-filtering)
+    - [Gubernator for Local Tests](#gubernator-for-local-tests)
+  - [Future Work](#future-work)
+
+
+
+## What is Gubernator?
+
+[Gubernator](https://k8s-gubernator.appspot.com/) is a webpage for viewing and filtering Kubernetes
+test results.
+
+Gubernator simplifies the debugging process and makes it easier to track down failures by automating many
+steps commonly taken in searching through logs, and by offering tools to filter through logs to find relevant lines.
+Gubernator automates the steps of finding the failed tests, displaying relevant logs, and determining the
+failed pods and the corresponding pod UID, namespace, and container ID.
+It also allows for filtering of the log files to display relevant lines based on selected keywords, and
+allows for multiple logs to be woven together by timestamp.
+
+Gubernator runs on Google App Engine and fetches logs stored on Google Cloud Storage.
+
+## Gubernator Features
+
+### Test Failures list
+
+Issues made by k8s-merge-robot will post a link to a page listing the failed tests.
+Each failed test comes with the corresponding error log from a junit file and a link
+to filter logs for that test.
+
+Based on the message logged in the junit file, the pod name may be displayed.
+
+![alt text](gubernator-images/testfailures.png)
+
+[Test Failures List Example](https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/11721)
+
+### Log Filtering
+
+The log filtering page comes with checkboxes and textboxes to aid in filtering. Filtered keywords will be bolded
+and lines including keywords will be highlighted. Up to four lines around the line of interest will also be displayed.
+
+![alt text](gubernator-images/filterpage.png)
+
+If fewer than 100 lines are skipped, the "... skipping xx lines ..." message can be clicked to expand and show
+the hidden lines.
+
+Before expansion:
+![alt text](gubernator-images/skipping1.png)
+After expansion:
+![alt text](gubernator-images/skipping2.png)
+
+If the pod name was displayed in the Test Failures list, it will automatically be included in the filters.
+If it is not found in the error message, it can be manually entered into the textbox. Once a pod name
+is entered, the Pod UID, Namespace, and ContainerID may be automatically filled in as well. These can be
+edited, too. To apply the filter, check off the options corresponding to the filter.
+
+![alt text](gubernator-images/filterpage1.png)
+
+To add a filter, type the term to be filtered into the textbox labeled "Add filter:" and press enter.
+Additional filters will be displayed as checkboxes under the textbox.
+
+![alt text](gubernator-images/filterpage3.png)
+
+To choose which logs to view, check off the checkboxes corresponding to the logs of interest. If multiple logs are
+included, the "Weave by timestamp" option can weave the selected logs together based on the timestamp in each line.
+
+![alt text](gubernator-images/filterpage2.png)
+
+[Log Filtering Example 1](https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubelet-gce-e2e-ci/5535/nodelog?pod=pod-configmaps-b5b876cb-3e1e-11e6-8956-42010af0001d&junit=junit_03.xml&wrap=on&logfiles=%2Fkubernetes-jenkins%2Flogs%2Fkubelet-gce-e2e-ci%2F5535%2Fartifacts%2Ftmp-node-e2e-7a5a3b40-e2e-node-coreos-stable20160622-image%2Fkube-apiserver.log&logfiles=%2Fkubernetes-jenkins%2Flogs%2Fkubelet-gce-e2e-ci%2F5535%2Fartifacts%2Ftmp-node-e2e-7a5a3b40-e2e-node-coreos-stable20160622-image%2Fkubelet.log&UID=on&poduid=b5b8a59e-3e1e-11e6-b358-42010af0001d&ns=e2e-tests-configmap-oi12h&cID=tmp-node-e2e-7a5a3b40-e2e-node-coreos-stable20160622-image)
+
+[Log Filtering Example 2](https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/11721/nodelog?pod=client-containers-a53f813c-503e-11e6-88dd-0242ac110003&junit=junit_19.xml&wrap=on)
+
+
+### Gubernator for Local Tests
+
+*Currently Gubernator can only be used with remote node e2e tests.*
+
+**NOTE: Using Gubernator with local tests will publicly upload your test logs to Google Cloud Storage**
+
+To use Gubernator to view logs from local test runs, set the GUBERNATOR tag to true.
+A URL link to view the test results will be printed to the console.
+Please note that running with the Gubernator tag will bypass the user confirmation for uploading to GCS.
+
+```console
+
+$ make test-e2e-node REMOTE=true GUBERNATOR=true
+...
+================================================================
+Running gubernator.sh
+
+Gubernator linked below:
+k8s-gubernator.appspot.com/build/yourusername-g8r-logs/logs/e2e-node/timestamp
+```
+
+The gubernator.sh script can be run after running a remote node e2e test for the same effect.
+
+```console
+$ ./test/e2e_node/gubernator.sh
+Do you want to run gubernator.sh and upload logs publicly to GCS? [y/n]y
+...
+Gubernator linked below: +k8s-gubernator.appspot.com/build/yourusername-g8r-logs/logs/e2e-node/timestamp +``` + +## Future Work + +Gubernator provides a framework for debugging failures and introduces useful features. +There is still a lot of room for more features and growth to make the debugging process more efficient. + +How to contribute (see https://github.com/kubernetes/test-infra/blob/master/gubernator/README.md) + +* Extend GUBERNATOR flag to all local tests + +* More accurate identification of pod name, container ID, etc. + * Change content of logged strings for failures to include more information + * Better regex in Gubernator + +* Automate discovery of more keywords + * Volume Name + * Disk Name + * Pod IP + +* Clickable API objects in the displayed lines in order to add them as filters + +* Construct story of pod's lifetime + * Have concise view of what a pod went through from when pod was started to failure + +* Improve UI + * Have separate folders of logs in rows instead of in one long column + * Improve interface for adding additional features (maybe instead of textbox and checkbox, have chips) + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/gubernator.md?pixel)]() + diff --git a/devel/how-to-doc.md b/devel/how-to-doc.md new file mode 100644 index 00000000..891969d7 --- /dev/null +++ b/devel/how-to-doc.md @@ -0,0 +1,205 @@ +# Document Conventions + +Updated: 11/3/2015 + +*This document is oriented at users and developers who want to write documents +for Kubernetes.* + +**Table of Contents** + + +- [Document Conventions](#document-conventions) + - [General Concepts](#general-concepts) + - [How to Get a Table of Contents](#how-to-get-a-table-of-contents) + - [How to Write Links](#how-to-write-links) + - [How to Include an Example](#how-to-include-an-example) + - [Misc.](#misc) + - [Code formatting](#code-formatting) + - [Syntax Highlighting](#syntax-highlighting) + - [Headings](#headings) + - [What Are 
Mungers?](#what-are-mungers)
+  - [Auto-added Mungers](#auto-added-mungers)
+    - [Generate Analytics](#generate-analytics)
+- [Generated documentation](#generated-documentation)
+
+
+
+## General Concepts
+
+Each document needs to be munged to ensure its format is correct, links are
+valid, etc. To munge a document, simply run `hack/update-munge-docs.sh`. We
+verify that all documents have been munged using `hack/verify-munge-docs.sh`.
+The scripts for munging documents are called mungers; see the
+[mungers section](#what-are-mungers) below if you're curious about how mungers
+are implemented or if you want to write one.
+
+## How to Get a Table of Contents
+
+Instead of writing a table of contents by hand, insert the following code in your
+md file:
+
+```
+
+
+```
+
+After running `hack/update-munge-docs.sh`, you'll see a table of contents
+generated for you, layered based on the headings.
+
+## How to Write Links
+
+It's important to follow the rules when writing links. It helps us correctly
+version documents for each release.
+
+Use inline links instead of URLs at all times. When you add internal links to
+`docs/` or `examples/`, use relative links; otherwise, use
+`http://releases.k8s.io/HEAD/`. For example, avoid using:
+
+```
+[GCE](https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/gce.md) # note that it's under docs/
+[Kubernetes package](../../pkg/) # note that it's under pkg/
+http://kubernetes.io/ # external link
+```
+
+Instead, use:
+
+```
+[GCE](../getting-started-guides/gce.md) # note that it's under docs/
+[Kubernetes package](http://releases.k8s.io/HEAD/pkg/) # note that it's under pkg/
+[Kubernetes](http://kubernetes.io/) # external link
+```
+
+The above example generates the following links:
+[GCE](../getting-started-guides/gce.md),
+[Kubernetes package](http://releases.k8s.io/HEAD/pkg/), and
+[Kubernetes](http://kubernetes.io/).
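There is no need to wait for the munger to catch the most common disallowed link; plain `grep` finds it. This is a local sketch, not the official checker, demonstrated on a scratch file:

```shell
# Flag absolute links into the main repo that should be relative or use
# releases.k8s.io. The scratch file mixes one bad link with two good ones.
doc=$(mktemp)
cat > "$doc" <<'EOF'
[GCE](https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/gce.md)
[GCE](../getting-started-guides/gce.md)
[Kubernetes package](http://releases.k8s.io/HEAD/pkg/)
EOF
grep -c "github.com/kubernetes/kubernetes/blob/master" "$doc"
# prints: 1
```

Running the same pattern over `docs/` and `examples/` before sending a PR catches most offenders early.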
+
+## How to Include an Example
+
+While writing examples, you may want to show the content of certain example
+files (e.g. [pod.yaml](../../test/fixtures/doc-yaml/user-guide/pod.yaml)). In this case, insert the
+following code in the md file:
+
+```
+
+
+```
+
+Note that you should replace `path/to/file` with the relative path to the
+example file. Then `hack/update-munge-docs.sh` will generate a code block with
+the content of the specified file, and a link to download it. This way, you save
+yourself the copy-and-paste; better still, the content won't become
+out-of-date every time you update the example file.
+
+For example, the following:
+
+```
+
+
+```
+
+generates the following after `hack/update-munge-docs.sh`:
+
+
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: nginx
+  labels:
+    app: nginx
+spec:
+  containers:
+  - name: nginx
+    image: nginx
+    ports:
+    - containerPort: 80
+```
+
+[Download example](../../test/fixtures/doc-yaml/user-guide/pod.yaml?raw=true)
+
+
+## Misc.
+
+### Code formatting
+
+Wrap a span of code with single backticks (`` ` ``). To format multiple lines of
+code as its own code block, use triple backticks (```` ``` ````).
+
+### Syntax Highlighting
+
+Adding syntax highlighting to code blocks improves readability. To do so, in
+your fenced block, add an optional language identifier. Some useful identifiers
+include `yaml`, `console` (for console output), and `sh` (for shell quote
+format). Note that in a console output, put `$ ` at the beginning of each
+command and put nothing at the beginning of the output.
Here's an example of a console code block:
+
+```
+```console
+
+$ kubectl create -f test/fixtures/doc-yaml/user-guide/pod.yaml
+pod "foo" created
+
+```
+```
+
+which renders as:
+
+```console
+$ kubectl create -f test/fixtures/doc-yaml/user-guide/pod.yaml
+pod "foo" created
+```
+
+### Headings
+
+Add a single `#` before the document title to create a title heading, and add
+`##` to the next level of section title, and so on. Note that the number of `#`
+will determine the size of the heading.
+
+## What Are Mungers?
+
+Mungers are like gofmt for md docs; we use them to format documents. To use one,
+simply place
+
+```
+
+
+```
+
+in your md files. Note that xxxx is the placeholder for a specific munger.
+Appropriate content will be generated and inserted between two brackets after
+you run `hack/update-munge-docs.sh`. See
+[munger document](http://releases.k8s.io/HEAD/cmd/mungedocs/) for more details.
+
+## Auto-added Mungers
+
+After running `hack/update-munge-docs.sh`, you may see some code / mungers in
+your md file that are auto-added. You don't have to add them manually. It's
+recommended to just read this section as a reference instead of messing with
+the following mungers.
+
+### Generate Analytics
+
+The ANALYTICS munger inserts a Google Analytics link for this page.
+
+```
+
+
+```
+
+# Generated documentation
+
+Some documents can be generated automatically. Run `hack/generate-docs.sh` to
+populate your repository with these generated documents, and a list of the files
+it generates is placed in `.generated_docs`. To reduce merge conflicts, we do
+not want to check these documents in; however, to make the link checker in the
+munger happy, we check in a placeholder. `hack/update-generated-docs.sh` puts a
+placeholder in the location where each generated document would go, and
+`hack/verify-generated-docs.sh` verifies that the placeholder is in place.
+ + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/how-to-doc.md?pixel)]() + diff --git a/devel/instrumentation.md b/devel/instrumentation.md new file mode 100644 index 00000000..b73221a9 --- /dev/null +++ b/devel/instrumentation.md @@ -0,0 +1,52 @@ +## Instrumenting Kubernetes with a new metric + +The following is a step-by-step guide for adding a new metric to the Kubernetes +code base. + +We use the Prometheus monitoring system's golang client library for +instrumenting our code. Once you've picked out a file that you want to add a +metric to, you should: + +1. Import "github.com/prometheus/client_golang/prometheus". + +2. Create a top-level var to define the metric. For this, you have to: + + 1. Pick the type of metric. Use a Gauge for things you want to set to a +particular value, a Counter for things you want to increment, or a Histogram or +Summary for histograms/distributions of values (typically for latency). +Histograms are better if you're going to aggregate the values across jobs, while +summaries are better if you just want the job to give you a useful summary of +the values. + 2. Give the metric a name and description. + 3. Pick whether you want to distinguish different categories of things using +labels on the metric. If so, add "Vec" to the name of the type of metric you +want and add a slice of the label names to the definition. + + https://github.com/kubernetes/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/apiserver/apiserver.go#L53 + https://github.com/kubernetes/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/kubelet/metrics/metrics.go#L31 + +3. Register the metric so that prometheus will know to export it. + + https://github.com/kubernetes/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/kubelet/metrics/metrics.go#L74 + https://github.com/kubernetes/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/apiserver/apiserver.go#L78 + +4. 
Use the metric by calling the appropriate method for your metric type (Set, +Inc/Add, or Observe, respectively for Gauge, Counter, or Histogram/Summary), +first calling WithLabelValues if your metric has any labels + + https://github.com/kubernetes/kubernetes/blob/3ce7fe8310ff081dbbd3d95490193e1d5250d2c9/pkg/kubelet/kubelet.go#L1384 + https://github.com/kubernetes/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/apiserver/apiserver.go#L87 + + +These are the metric type definitions if you're curious to learn about them or +need more information: + +https://github.com/prometheus/client_golang/blob/master/prometheus/gauge.go +https://github.com/prometheus/client_golang/blob/master/prometheus/counter.go +https://github.com/prometheus/client_golang/blob/master/prometheus/histogram.go +https://github.com/prometheus/client_golang/blob/master/prometheus/summary.go + + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/instrumentation.md?pixel)]() + diff --git a/devel/issues.md b/devel/issues.md new file mode 100644 index 00000000..fe9e94d9 --- /dev/null +++ b/devel/issues.md @@ -0,0 +1,59 @@ +## GitHub Issues for the Kubernetes Project + +A quick overview of how we will review and prioritize incoming issues at +https://github.com/kubernetes/kubernetes/issues + +### Priorities + +We use GitHub issue labels for prioritization. The absence of a priority label +means the bug has not been reviewed and prioritized yet. + +We try to apply these priority labels consistently across the entire project, +but if you notice an issue that you believe to be incorrectly prioritized, +please do let us know and we will evaluate your counter-proposal. + +- **priority/P0**: Must be actively worked on as someone's top priority right +now. Stuff is burning. If it's not being actively worked on, someone is expected +to drop what they're doing immediately to work on it. 
Team leaders are
+responsible for making sure that all P0's in their area are being actively
+worked on. Examples include user-visible bugs in core features, broken builds or
+tests, and critical security issues.
+
+- **priority/P1**: Must be staffed and worked on either currently, or very soon,
+ideally in time for the next release.
+
+- **priority/P2**: There appears to be general agreement that this would be good
+to have, but we may not have anyone available to work on it right now or in the
+immediate future. Community contributions would be most welcome in the meantime
+(although it might take a while to get them reviewed if reviewers are fully
+occupied with higher-priority issues, for example immediately before a release).
+
+- **priority/P3**: Possibly useful, but not yet enough support to actually get
+it done. These are mostly placeholders for potentially good ideas, so that they
+don't get completely forgotten, and can be referenced/deduped every time they
+come up.
+
+### Milestones
+
+We additionally use milestones, based on minor version, for determining if a bug
+should be fixed for the next release. These milestones will be especially
+scrutinized as we get to the weeks just before a release. We can release a new
+version of Kubernetes once they are empty. We will have two milestones per minor
+release.
+
+- **vX.Y**: The list of bugs that will be merged for that milestone once ready.
+
+- **vX.Y-candidate**: The list of bugs that we might merge for that milestone. A
+bug shouldn't be in this milestone for more than a day or two towards the end of
+a milestone. It should be triaged either into vX.Y, or moved out of the release
+milestones.
+
+The above priority scheme still applies. P0 and P1 issues are work we feel must
+get done before release. P2 and P3 issues are work we would merge into the
+release if it gets done, but we wouldn't block the release on it.
A few days
+before release, we will probably move all P2 and P3 bugs out of that milestone
+in bulk.
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/issues.md?pixel)]()
+
diff --git a/devel/kubectl-conventions.md b/devel/kubectl-conventions.md
new file mode 100644
index 00000000..1e94b3ba
--- /dev/null
+++ b/devel/kubectl-conventions.md
@@ -0,0 +1,411 @@
+# Kubectl Conventions
+
+Updated: 8/27/2015
+
+**Table of Contents**
+
+
+- [Kubectl Conventions](#kubectl-conventions)
+  - [Principles](#principles)
+  - [Command conventions](#command-conventions)
+    - [Create commands](#create-commands)
+    - [Rules for extending special resource alias - "all"](#rules-for-extending-special-resource-alias---all)
+  - [Flag conventions](#flag-conventions)
+  - [Output conventions](#output-conventions)
+  - [Documentation conventions](#documentation-conventions)
+  - [Command implementation conventions](#command-implementation-conventions)
+  - [Generators](#generators)
+
+
+
+## Principles
+
+* Strive for consistency across commands
+
+* Explicit should always override implicit
+
+  * Environment variables should override default values
+
+  * Command-line flags should override default values and environment variables
+
+  * `--namespace` should also override the value specified in a provided
+resource
+
+## Command conventions
+
+* Command names are all lowercase, and hyphenated if multiple words.
+
+* kubectl VERB NOUNs for commands that apply to multiple resource types.
+
+* Commands themselves should not have built-in aliases.
+
+* NOUNs may be specified as `TYPE name1 name2` or `TYPE/name1 TYPE/name2` or
+`TYPE1,TYPE2,TYPE3/name1`; TYPE is omitted when only a single type is expected.
+
+* Resource types are all lowercase, with no hyphens; both singular and plural
+forms are accepted.
+
+* NOUNs may also be specified by one or more file arguments: `-f file1 -f file2
+...`
+
+* Resource types may have 2- or 3-letter aliases.
+
+* Business logic should be decoupled from the command framework, so that it can
+be reused independently of kubectl, cobra, etc.
+  * Ideally, commonly needed functionality would be implemented server-side in
+order to avoid problems typical of "fat" clients and to make it readily
+available to non-Go clients.
+
+* Commands that generate resources, such as `run` or `expose`, should obey
+specific conventions; see [generators](#generators).
+
+* A command group (e.g., `kubectl config`) may be used to group related
+non-standard commands, such as custom generators, mutations, and computations.
+
+
+### Create commands
+
+`kubectl create` commands fill the gap between "I want to try
+Kubernetes, but I don't know or care what gets created" (`kubectl run`) and "I
+want to create exactly this" (author YAML and run `kubectl create -f`). They
+provide an easy way to create a valid object without having to know the vagaries
+of particular kinds, nested fields, and object key typos that are ignored by the
+YAML/JSON parser. Because editing an already created object is easier than
+authoring one from scratch, these commands only need to have enough parameters
+to create a valid object and set common immutable fields. They should default as
+much as is reasonably possible. Once that valid object is created, it can be
+further manipulated using `kubectl edit` or the eventual `kubectl set` commands.
+
+More specialized `kubectl create` subcommands help in cases where you need
+to perform non-trivial configuration generation/transformation tailored for a
+common use case. `kubectl create secret` is a good example: there's a `generic`
+flavor with keys mapping to files, then there's a `docker-registry` flavor that
+is tailored for creating an image pull secret, and there's a `tls` flavor for
+creating TLS secrets. You create these as separate commands to get distinct
+flags and separate help that is tailored for the particular usage.
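The three flavors get their own flags and help text; as a hedged sketch of what that looks like in practice (the secret names, file paths, and credential values below are placeholder assumptions, not values from this document):

```shell
# generic flavor: keys map to local files (paths and names are placeholders)
kubectl create secret generic db-user --from-file=./username.txt

# docker-registry flavor: flags tailored for an image pull secret
kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=dev \
  --docker-password=s3cr3t \
  --docker-email=dev@example.com

# tls flavor: takes a certificate/key pair
kubectl create secret tls web-tls --cert=tls.crt --key=tls.key
```

Each flavor carries flags (`--from-file`, `--docker-server`, `--cert`) that would make no sense on the others, which is the reason for keeping them as separate commands.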
+ + +### Rules for extending special resource alias - "all" + +Here are the rules to add a new resource to the `kubectl get all` output. + +* No cluster scoped resources + +* No namespace admin level resources (limits, quota, policy, authorization +rules) + +* No resources that are potentially unrecoverable (secrets and pvc) + +* Resources that are considered "similar" to #3 should be grouped +the same (configmaps) + + +## Flag conventions + +* Flags are all lowercase, with words separated by hyphens + +* Flag names and single-character aliases should have the same meaning across +all commands + +* Flag descriptions should start with an uppercase letter and not have a +period at the end of a sentence + +* Command-line flags corresponding to API fields should accept API enums +exactly (e.g., `--restart=Always`) + +* Do not reuse flags for different semantic purposes, and do not use different +flag names for the same semantic purpose -- grep for `"Flags()"` before adding a +new flag + +* Use short flags sparingly, only for the most frequently used options, prefer +lowercase over uppercase for the most common cases, try to stick to well known +conventions for UNIX commands and/or Docker, where they exist, and update this +list when adding new short flags + + * `-f`: Resource file + * also used for `--follow` in `logs`, but should be deprecated in favor of `-F` + * `-n`: Namespace scope + * `-l`: Label selector + * also used for `--labels` in `expose`, but should be deprecated + * `-L`: Label columns + * `-c`: Container + * also used for `--client` in `version`, but should be deprecated + * `-i`: Attach stdin + * `-t`: Allocate TTY + * `-w`: Watch (currently also used for `--www` in `proxy`, but should be deprecated) + * `-p`: Previous + * also used for `--pod` in `exec`, but deprecated + * also used for `--patch` in `patch`, but should be deprecated + * also used for `--port` in `proxy`, but should be deprecated + * `-P`: Static file prefix in `proxy`, but should be 
deprecated
+  * `-r`: Replicas
+  * `-u`: Unix socket
+  * `-v`: Verbose logging level
+
+
+* `--dry-run`: Don't modify the live state; simulate the mutation and display
+the output. All mutations should support it.
+
+* `--local`: Don't contact the server; just do local read, transformation,
+generation, etc., and display the output
+
+* `--output-version=...`: Convert the output to a different API group/version
+
+* `--short`: Output a compact summary of normal output; the format is subject
+to change and is optimized for reading, not parsing.
+
+* `--validate`: Validate the resource schema
+
+## Output conventions
+
+* By default, output is intended for humans rather than programs
+  * However, affordances are made for simple parsing of `get` output
+
+* Only errors should be directed to stderr
+
+* `get` commands should output one row per resource, and one resource per row
+
+  * Column titles and values should not contain spaces in order to facilitate
+commands that break lines into fields: cut, awk, etc. Instead, use `-` as the
+word separator.
+
+  * By default, `get` output should fit within about 80 columns
+
+    * Eventually we could perhaps auto-detect width
+    * `-o wide` may be used to display additional columns
+
+
+  * The first column should be the resource name, titled `NAME` (may change this
+to an abbreviation of resource type)
+
+  * NAMESPACE should be displayed as the first column when `--all-namespaces` is
+specified
+
+  * The last default column should be time since creation, titled `AGE`
+
+  * `-Lkey` should append a column containing the value of label with key `key`,
+with `<none>` if not present
+
+  * json, yaml, Go template, and jsonpath template formats should be supported
+and encouraged for subsequent processing
+
+    * Users should use `--api-version` or `--output-version` to ensure the output
+uses the version they expect
+
+
+* `describe` commands may output on multiple lines and may include information
+from related resources, such as events.
Describe should add additional
+information from related resources that a normal user may need to know - if a
+user would always run "describe resource1" and then immediately want to run a
+"get type2" or "describe resource2", consider including that info. Examples:
+persistent volume claims for pods that reference claims, events for most
+resources, and nodes and the pods scheduled on them. When fetching related
+resources, a targeted field selector should be used in favor of client-side
+filtering of related resources.
+
+* For fields that can be explicitly unset (booleans, integers, structs), the
+output should say ``. Likewise, for arrays `` should be used; for
+external IP, `` should be used; for load balancer, `` should be
+used. Lastly `` should be used where an unrecognized field type was
+specified.
+
+* Mutations should output TYPE/name verbed by default, where TYPE is singular;
+`-o name` may be used to just display TYPE/name, which may be used to specify
+resources in other commands
+
+## Documentation conventions
+
+* Commands are documented using Cobra; docs are then auto-generated by
+`hack/update-generated-docs.sh`.
+
+  * `Use` should contain a short usage string for the most common use case(s), not
+an exhaustive specification
+
+  * `Short` should contain a one-line explanation of what the command does
+    * Short descriptions should start with an uppercase letter and not
+      have a period at the end of a sentence
+    * Short descriptions should (if possible) start with a first person
+      (singular present tense) verb
+
+  * `Long` may contain multiple lines, including additional information about
+input, output, commonly used flags, etc.
+    * Long descriptions should use proper grammar, start with an uppercase
+      letter and have a period at the end of a sentence
+
+
+  * `Example` should contain examples
+    * Start commands with `$`
+    * A comment should precede each example command, and should begin with `#`
+
+
+* Use "FILENAME" for filenames
+
+* Use "TYPE" for the particular flavor of resource type accepted by kubectl,
+rather than "RESOURCE" or "KIND"
+
+* Use "NAME" for resource names
+
+## Command implementation conventions
+
+For every command there should be a `NewCmd` function that creates
+the command and returns a pointer to a `cobra.Command`, which can later be added
+to other parent commands to compose the structure tree. There should also be a
+`Config` struct with a variable for every flag and argument declared
+by the command (and any other variable required for the command to run). This
+makes tests and mocking easier. The struct ideally exposes three methods:
+
+* `Complete`: Completes the struct fields with values that may or may not be
+directly provided by the user, for example, by flag pointers, by the `args`
+slice, by using the Factory, etc.
+
+* `Validate`: Performs validation on the struct fields and returns appropriate
+errors.
+
+* `Run`: Runs the actual logic of the command, assuming that the struct is
+complete with all the values required to run and that they are valid.
+
+Sample command skeleton:
+
+```go
+// MineRecommendedName is the recommended command name for kubectl mine.
+const MineRecommendedName = "mine"
+
+// Long command description and examples.
+var (
+	mineLong = templates.LongDesc(`
+	mine which is described here
+	with lots of details.`)
+
+	mineExample = templates.Examples(`
+	# Run my command's first action
+	kubectl mine first_action
+
+	# Run my command's second action on latest stuff
+	kubectl mine second_action --flag`)
+)
+
+// MineConfig contains all the options for running the mine cli command.
+type MineConfig struct {
+	mineLatest bool
+}
+
+// NewCmdMine implements the kubectl mine command.
+func NewCmdMine(parent, name string, f *cmdutil.Factory, out io.Writer) *cobra.Command {
+	opts := &MineConfig{}
+
+	cmd := &cobra.Command{
+		Use:     fmt.Sprintf("%s [--latest]", name),
+		Short:   "Run my command",
+		Long:    mineLong,
+		Example: fmt.Sprintf(mineExample, parent+" "+name),
+		Run: func(cmd *cobra.Command, args []string) {
+			if err := opts.Complete(f, cmd, args, out); err != nil {
+				cmdutil.CheckErr(err)
+			}
+			if err := opts.Validate(); err != nil {
+				cmdutil.CheckErr(cmdutil.UsageError(cmd, err.Error()))
+			}
+			if err := opts.RunMine(); err != nil {
+				cmdutil.CheckErr(err)
+			}
+		},
+	}
+
+	cmd.Flags().BoolVar(&opts.mineLatest, "latest", false, "Use latest stuff")
+	return cmd
+}
+
+// Complete completes all the required options for mine.
+func (o *MineConfig) Complete(f *cmdutil.Factory, cmd *cobra.Command, args []string, out io.Writer) error {
+	return nil
+}
+
+// Validate validates all the required options for mine.
+func (o MineConfig) Validate() error {
+	return nil
+}
+
+// RunMine implements all the necessary functionality for mine.
+func (o MineConfig) RunMine() error {
+	return nil
+}
+```
+
+The `Run` method should contain the business logic of the command
+and, as noted in [command conventions](#command-conventions), ideally that logic
+should exist server-side so any client could take advantage of it. Notice that
+this is not a mandatory structure and not every command is implemented this way,
+but it is a useful convention, so try to follow it. As an example,
+have a look at how [kubectl logs](../../pkg/kubectl/cmd/logs.go) is implemented.
+
+## Generators
+
+Generators are kubectl commands that generate resources based on a set of inputs
+(other resources, flags, or a combination of both).
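As a sketch of how explicit generator use might look from a script (the generator names `run-pod/v1` and `service/v2` below are illustrative assumptions about the naming scheme and may differ between releases):

```shell
# Pin kubectl run to the plain-pod generator instead of the current default,
# so scripts keep getting a Pod even if the default generator changes.
kubectl run test-pod --image=nginx --generator=run-pod/v1

# Pin kubectl expose to one specific version of the service generator.
kubectl expose rc nginx --port=80 --generator=service/v2
```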
+
+The point of generators is:
+
+* to enable users who use kubectl in a scripted fashion to pin to a particular
+behavior which may change in the future. Explicit use of a generator will always
+guarantee that the expected behavior stays the same.
+
+* to enable potential expansion of the generated resources for scenarios other
+than just creation, similar to how `-f` is supported for most general-purpose
+commands.
+
+Generator commands should obey the following conventions:
+
+* A `--generator` flag should be defined. Users can then choose between
+different generators, if the command supports them (for example, `kubectl run`
+currently supports generators for pods, jobs, replication controllers, and
+deployments), or between different versions of a generator so that users
+depending on a specific behavior may pin to that version (for example, `kubectl
+expose` currently supports two different versions of a service generator).
+
+* Generation should be decoupled from creation. A generator should implement the
+`kubectl.StructuredGenerator` interface and have no dependencies on cobra or the
+Factory.
See, for example, how the first version of the namespace generator is +defined: + +```go +// NamespaceGeneratorV1 supports stable generation of a namespace +type NamespaceGeneratorV1 struct { + // Name of namespace + Name string +} + +// Ensure it supports the generator pattern that uses parameters specified during construction +var _ StructuredGenerator = &NamespaceGeneratorV1{} + +// StructuredGenerate outputs a namespace object using the configured fields +func (g *NamespaceGeneratorV1) StructuredGenerate() (runtime.Object, error) { + if err := g.validate(); err != nil { + return nil, err + } + namespace := &api.Namespace{} + namespace.Name = g.Name + return namespace, nil +} + +// validate validates required fields are set to support structured generation +func (g *NamespaceGeneratorV1) validate() error { + if len(g.Name) == 0 { + return fmt.Errorf("name must be specified") + } + return nil +} +``` + +The generator struct (`NamespaceGeneratorV1`) holds the necessary fields for +namespace generation. It also satisfies the `kubectl.StructuredGenerator` +interface by implementing the `StructuredGenerate() (runtime.Object, error)` +method which configures the generated namespace that callers of the generator +(`kubectl create namespace` in our case) need to create. + +* `--dry-run` should output the resource that would be created, without +creating it. + + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/kubectl-conventions.md?pixel)]() + diff --git a/devel/kubemark-guide.md b/devel/kubemark-guide.md new file mode 100755 index 00000000..e914226d --- /dev/null +++ b/devel/kubemark-guide.md @@ -0,0 +1,212 @@ +# Kubemark User Guide + +## Introduction + +Kubemark is a performance testing tool which allows users to run experiments on +simulated clusters. The primary use case is scalability testing, as simulated +clusters can be much bigger than the real ones. 
The objective is to expose
+problems with the master components (API server, controller manager or
+scheduler) that appear only on bigger clusters (e.g. small memory leaks).
+
+This document serves as a primer to understand what Kubemark is, what it is not,
+and how to use it.
+
+## Architecture
+
+At a very high level, a Kubemark cluster consists of two parts: real master
+components and a set of “Hollow” Nodes. The prefix “Hollow” means an
+implementation/instantiation of a component with all “moving” parts mocked out.
+The best example is HollowKubelet, which pretends to be an ordinary Kubelet, but
+does not start anything, nor mount any volumes - it just claims that it does. More
+detailed design and implementation details are at the end of this document.
+
+Currently, master components run on one or more dedicated machines, and HollowNodes
+run on an ‘external’ Kubernetes cluster. This design has a slight advantage over
+running the master components on the external cluster: it completely isolates
+master resources from everything else.
+
+## Requirements
+
+To run Kubemark you need a Kubernetes cluster (called the `external cluster`)
+for running all your HollowNodes and a dedicated machine for the master.
+The master machine has to be directly routable from the HollowNodes. You also need
+access to a Docker repository.
+
+Currently, the scripts are written to be easily usable on GCE, but it should be
+relatively straightforward to port them to different providers or bare metal.
+
+## Common use cases and helper scripts
+
+The common workflow for Kubemark is:
+- starting a Kubemark cluster (on GCE)
+- running e2e tests on the Kubemark cluster
+- monitoring test execution and debugging problems
+- turning down the Kubemark cluster
+
+The descriptions below include comments helpful to anyone who’ll want to
+port Kubemark to different providers.
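Sketched as shell commands, the workflow above looks roughly like this (script paths are taken from this guide; it assumes your default kubeconfig already points at the external cluster, and the `--ginkgo.focus` value is just one possible test selection):

```shell
# Build a release and start a Kubemark cluster on GCE.
make quick-release
test/kubemark/start-kubemark.sh

# Run e2e tests against the Kubemark cluster (extra flags are forwarded
# to hack/ginkgo-e2e.sh).
test/kubemark/run-e2e-tests.sh --ginkgo.focus="Load"

# Turn the Kubemark cluster down when you are done.
test/kubemark/stop-kubemark.sh
```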
+
+### Starting a Kubemark cluster
+
+To start a Kubemark cluster on GCE you need to create an external Kubernetes
+cluster (it can be GCE, GKE or anything else) by yourself, make sure that kubeconfig
+points to it by default, build a Kubernetes release (e.g. by running
+`make quick-release`) and run the `test/kubemark/start-kubemark.sh` script.
+This script will create a VM for the master components, Pods for the HollowNodes,
+and do all the setup necessary to let them talk to each other. It will use the
+configuration stored in `cluster/kubemark/config-default.sh` - you can tweak it
+however you want, but note that some features may not be implemented yet, as the
+implementation of Hollow components/mocks will probably lag behind the ‘real’
+ones. For performance tests, the interesting variables are `NUM_NODES` and
+`MASTER_SIZE`. After the start-kubemark script finishes you’ll have a ready
+Kubemark cluster; a kubeconfig file for talking to the Kubemark cluster is
+stored in `test/kubemark/kubeconfig.kubemark`.
+
+Currently we're running each HollowNode with a limit of 0.05 of a CPU core and
+~60MB of memory, which, taking into account default cluster addons and fluentd
+running on the 'external' cluster, allows running ~17.5 HollowNodes per core.
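The quoted ~17.5-nodes-per-core figure can be reproduced with a quick back-of-the-envelope check; the 12.5% per-core overhead assumed below for addons and fluentd is an illustrative number chosen to match the quoted figure, not a measured value:

```shell
# Each HollowNode is limited to 0.05 of a CPU core; assume cluster addons
# and fluentd reserve ~12.5% of each 'external' core (an assumption).
awk -v per_node=0.05 -v overhead=0.125 \
  'BEGIN { printf "%.1f HollowNodes per core\n", (1 - overhead) / per_node }'
# Prints: 17.5 HollowNodes per core
```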
+
+#### Behind-the-scenes details
+
+The start-kubemark script does quite a lot of things:
+
+- Creates a master machine called hollow-cluster-master and a PD for it (*uses
+gcloud, should be easy to do outside of GCE*)
+
+- Creates a firewall rule which opens port 443\* on the master machine (*uses
+gcloud, should be easy to do outside of GCE*)
+
+- Builds a Docker image for the HollowNode from the current repository and pushes it
+to the Docker repository (*GCR for us, using scripts from
+`cluster/gce/util.sh` - it may get tricky outside of GCE*)
+
+- Generates certificates and kubeconfig files, writes a kubeconfig locally to
+`test/kubemark/kubeconfig.kubemark` and creates a Secret which stores the kubeconfig
+for HollowKubelet/HollowProxy use (*uses gcloud to transfer files to the master,
+should be easy to do outside of GCE*).
+
+- Creates a ReplicationController for the HollowNodes and starts them up. (*will
+work exactly the same everywhere as long as MASTER_IP is populated
+correctly, but you’ll need to update the Docker image address if you’re not using
+GCR and the default image name*)
+
+- Waits until all HollowNodes are in the Running phase (*will work exactly the
+same everywhere*)
+
+\* Port 443 is a secured port on the master machine which is used for all
+external communication with the API server. In the last sentence *external*
+means all traffic coming from other machines, including all the Nodes, not only
+from outside of the cluster. Currently, local components, i.e. the ControllerManager
+and Scheduler, talk to the API server using the insecure port 8080.
+
+### Running e2e tests on a Kubemark cluster
+
+To run a standard e2e test on the Kubemark cluster created in the previous step,
+execute the `test/kubemark/run-e2e-tests.sh` script. It will configure ginkgo to
+use the Kubemark cluster and start an e2e test. This
+script should not need any changes to work on other cloud providers.
+
+By default (if nothing is passed to it) the script will run a Density '30
+test. If you want to run a different e2e test you just need to provide the flags
+you want passed to the `hack/ginkgo-e2e.sh` script, e.g. `--ginkgo.focus="Load"` to run the
+Load test.
+
+By default, at the end of each test, it will delete namespaces and everything
+under them (e.g. events, replication controllers) on the Kubemark master, which takes
+a lot of time. Such work isn't needed in most cases: if you delete your
+Kubemark cluster after running `run-e2e-tests.sh`; if you don't care about
+namespace deletion performance, specifically related to etcd; etc. There is a
+flag that enables you to avoid namespace deletion: `--delete-namespace=false`.
+Adding the flag should let you see in the logs: `Found DeleteNamespace=false,
+skipping namespace deletion!`
+
+### Monitoring test execution and debugging problems
+
+Run-e2e-tests prints the same output on Kubemark as on an ordinary e2e cluster, but
+if you need to dig deeper you need to learn how to debug HollowNodes and how
+the master machine (currently) differs from an ordinary one.
+
+If you need to debug the master machine you can do similar things as you do on an
+ordinary master. The difference between the Kubemark setup and an ordinary setup is
+that in Kubemark etcd is run as a plain Docker container, and all master
+components are run as normal processes. There’s no Kubelet overseeing them. Logs
+are stored in exactly the same place, i.e. the `/var/logs/` directory. Because the
+binaries are not supervised by anything, they won't be restarted in the case of a
+crash.
+
+To help with debugging from inside the cluster, the startup script puts a
+`~/configure-kubectl.sh` script on the master. It downloads the `gcloud` and
+`kubectl` tools and configures kubectl to work on the unsecured master port (useful
+if there are problems with security). After the script is run you can use the
+kubectl command from the master machine to play with the cluster.
+
+Debugging HollowNodes is a bit trickier: if you experience a problem on
+one of them, you need to learn which hollow-node pod corresponds to a given
+HollowNode known by the Master. During self-registration HollowNodes provide
+their cluster IPs as Names, which means that if you need to find a HollowNode
+named `10.2.4.5` you just need to find a Pod in the external cluster with this
+cluster IP. There’s a helper script
+`test/kubemark/get-real-pod-for-hollow-node.sh` that does this for you.
+
+When you have a Pod name you can use `kubectl logs` on the external cluster to get
+logs, or use a `kubectl describe pod` call to find the external Node on which
+this particular HollowNode is running so you can ssh to it.
+
+E.g., say you want to see the logs of the HollowKubelet on which pod `my-pod` is
+running. To do so you can execute:
+
+```
+$ kubectl --kubeconfig=kubernetes/test/kubemark/kubeconfig.kubemark describe pod my-pod
+```
+
+This outputs the pod description, which includes a line:
+
+```
+Node: 1.2.3.4/1.2.3.4
+```
+
+To learn the `hollow-node` pod corresponding to node `1.2.3.4` you use the
+aforementioned script:
+
+```
+$ kubernetes/test/kubemark/get-real-pod-for-hollow-node.sh 1.2.3.4
+```
+
+which will output the line:
+
+```
+hollow-node-1234
+```
+
+Now you just use an ordinary kubectl command to get the logs:
+
+```
+kubectl --namespace=kubemark logs hollow-node-1234
+```
+
+All those things should work exactly the same on all cloud providers.
+
+### Turning down a Kubemark cluster
+
+On GCE you just need to execute the `test/kubemark/stop-kubemark.sh` script, which
+will delete the HollowNode ReplicationController and all the resources for you. On
+other providers you’ll need to delete all this stuff by yourself.
+
+## Some current implementation details
+
+The Kubemark master uses exactly the same binaries as ordinary Kubernetes does. This
+means that it will never be out of date.
On the other hand, HollowNodes use the
+existing fake for the Kubelet (called SimpleKubelet), which mocks its runtime
+manager with `pkg/kubelet/dockertools/fake_manager.go`, where most of the logic sits.
+Because there’s no easy way of mocking other managers (e.g. the VolumeManager), they
+are not supported in Kubemark (e.g. we can’t schedule Pods with volumes in them
+yet).
+
+As time passes, more fakes will probably be plugged into HollowNodes, but
+it’s crucial to keep them as simple as possible to allow running a big number of
+Hollows on a single core.
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/kubemark-guide.md?pixel)]()
+
diff --git a/devel/local-cluster/docker.md b/devel/local-cluster/docker.md
new file mode 100644
index 00000000..78768f80
--- /dev/null
+++ b/devel/local-cluster/docker.md
@@ -0,0 +1,269 @@
+**Stop. This guide has been superseded by [Minikube](https://github.com/kubernetes/minikube), which is the recommended method of running Kubernetes on your local machine.**
+
+
+The following instructions show you how to set up a simple, single-node Kubernetes cluster using Docker.
+
+Here's a diagram of what the final result will look like:
+
+![Kubernetes Single Node on Docker](k8s-singlenode-docker.png)
+
+## Prerequisites
+
+**Note: These steps have not been tested with the [Docker For Mac or Docker For Windows beta programs](https://blog.docker.com/2016/03/docker-for-mac-windows-beta/).**
+
+1. You need to have Docker version >= "1.10" installed on the machine.
+2. Enable mount propagation. Hyperkube runs in a container which has to mount volumes for other containers, for example in the case of persistent storage. The required steps depend on the init system.
+
+
+    In the case of **systemd**, change `MountFlags` in the Docker unit file to `shared`.
+
+    ```shell
+    DOCKER_CONF=$(systemctl cat docker | head -1 | awk '{print $2}')
+    sed -i.bak 's/^\(MountFlags=\).*/\1shared/' $DOCKER_CONF
+    systemctl daemon-reload
+    systemctl restart docker
+    ```
+
+    **Otherwise**, manually set the mount point used by Hyperkube to be shared:
+
+    ```shell
+    mkdir -p /var/lib/kubelet
+    mount --bind /var/lib/kubelet /var/lib/kubelet
+    mount --make-shared /var/lib/kubelet
+    ```
+
+
+### Run it
+
+1. Decide which Kubernetes version to use. Set the `${K8S_VERSION}` variable to a version of Kubernetes >= "v1.2.0".
+
+
+    If you'd like to use the current **stable** version of Kubernetes, run the following:
+
+    ```sh
+    export K8S_VERSION=$(curl -sS https://storage.googleapis.com/kubernetes-release/release/stable.txt)
+    ```
+
+    and for the **latest** available version (including unstable releases):
+
+    ```sh
+    export K8S_VERSION=$(curl -sS https://storage.googleapis.com/kubernetes-release/release/latest.txt)
+    ```
+
+2. Start Hyperkube:
+
+    ```shell
+    export ARCH=amd64
+    docker run -d \
+      --volume=/sys:/sys:rw \
+      --volume=/var/lib/docker/:/var/lib/docker:rw \
+      --volume=/var/lib/kubelet/:/var/lib/kubelet:rw,shared \
+      --volume=/var/run:/var/run:rw \
+      --net=host \
+      --pid=host \
+      --privileged \
+      --name=kubelet \
+      gcr.io/google_containers/hyperkube-${ARCH}:${K8S_VERSION} \
+      /hyperkube kubelet \
+        --hostname-override=127.0.0.1 \
+        --api-servers=http://localhost:8080 \
+        --config=/etc/kubernetes/manifests \
+        --cluster-dns=10.0.0.10 \
+        --cluster-domain=cluster.local \
+        --allow-privileged --v=2
+    ```
+
+    > Note that `--cluster-dns` and `--cluster-domain` are used to deploy DNS; feel free to discard them if DNS is not needed.
+
+    > If you would like to mount an external device as a volume, add `--volume=/dev:/dev` to the command above.
It may, however, cause some problems described in [#18230](https://github.com/kubernetes/kubernetes/issues/18230).
+
+    > Architectures other than `amd64` are experimental and sometimes unstable, but feel free to try them out! Valid values: `arm`, `arm64` and `ppc64le`. ARM is available with Kubernetes version `v1.3.0-alpha.2` and higher. ARM 64-bit and PowerPC 64 little-endian are available with `v1.3.0-alpha.3` and higher. Track progress on multi-arch support [here](https://github.com/kubernetes/kubernetes/issues/17981).
+
+    > If you are behind a proxy, you need to pass the proxy setup to curl in the containers to pull the certificates. Create a `.curlrc` under the `/root` folder (because the containers are running as root) with the following line:
+
+    ```
+    proxy = <proxy-host>:<port>
+    ```
+
+    This actually runs the kubelet, which in turn runs a [pod](http://kubernetes.io/docs/user-guide/pods/) that contains the other master components.
+
+    **SECURITY WARNING:** services exposed via Kubernetes using Hyperkube are available on the host node's public network interface / IP address. Because of this, this guide is not suitable for any host node/server that is directly internet accessible. Refer to [#21735](https://github.com/kubernetes/kubernetes/issues/21735) for additional info.
+
+### Download `kubectl`
+
+At this point you should have a running Kubernetes cluster. You can test it out
+by downloading the kubectl binary for `${K8S_VERSION}` (in this example: `{{page.version}}.0`).
+
+
+Downloads:
+
+ - `linux/amd64`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/amd64/kubectl
+ - `linux/386`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/386/kubectl
+ - `linux/arm`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/arm/kubectl
+ - `linux/arm64`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/arm64/kubectl
+ - `linux/ppc64le`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/ppc64le/kubectl
+ - `OS X/amd64`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/darwin/amd64/kubectl
+ - `OS X/386`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/darwin/386/kubectl
+ - `windows/amd64`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/windows/amd64/kubectl.exe
+ - `windows/386`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/windows/386/kubectl.exe
+
+The generic download path is:
+
+```
+http://storage.googleapis.com/kubernetes-release/release/${K8S_VERSION}/bin/${GOOS}/${GOARCH}/${K8S_BINARY}
+```
+
+An example install with `linux/amd64`:
+
+```
+curl -sSL "https://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/amd64/kubectl" > /usr/bin/kubectl
+chmod +x /usr/bin/kubectl
+```
+
+On OS X, to make the API server accessible locally, set up an SSH tunnel:
+
+```shell
+docker-machine ssh `docker-machine active` -N -L 8080:localhost:8080
+```
+
+Setting up an SSH tunnel applies to remote Docker hosts as well.
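If you want to fill in `${GOOS}` and `${GOARCH}` automatically, a sketch of how to derive them from the host is below; the `uname`-to-Go-architecture mapping is an assumption that covers only the platforms listed above.

```shell
# Derive GOOS/GOARCH for the generic kubectl download path.
GOOS=$(uname -s | tr '[:upper:]' '[:lower:]')   # e.g. linux or darwin
case "$(uname -m)" in
  x86_64)         GOARCH=amd64 ;;
  i386|i686)      GOARCH=386 ;;
  armv6l|armv7l)  GOARCH=arm ;;
  aarch64|arm64)  GOARCH=arm64 ;;
  ppc64le)        GOARCH=ppc64le ;;
  *)              echo "unsupported architecture: $(uname -m)" >&2 ;;
esac
echo "${GOOS}/${GOARCH}"
```

These two variables can then be substituted directly into the generic download path above.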
+
+(Optional) Create a Kubernetes cluster configuration:
+
+```shell
+kubectl config set-cluster test-doc --server=http://localhost:8080
+kubectl config set-context test-doc --cluster=test-doc
+kubectl config use-context test-doc
+```
+
+### Test it out
+
+List the nodes in your cluster by running:
+
+```shell
+kubectl get nodes
+```
+
+This should print:
+
+```shell
+NAME        STATUS    AGE
+127.0.0.1   Ready     1h
+```
+
+### Run an application
+
+```shell
+kubectl run nginx --image=nginx --port=80
+```
+
+Now run `docker ps`; you should see nginx running. You may need to wait a few minutes for the image to get pulled.
+
+### Expose it as a service
+
+```shell
+kubectl expose deployment nginx --port=80
+```
+
+Run the following command to obtain the cluster-local IP of the service we just created:
+
+```shell{% raw %}
+ip=$(kubectl get svc nginx --template={{.spec.clusterIP}})
+echo $ip
+{% endraw %}```
+
+Hit the webserver with this IP:
+
+```shell{% raw %}
+curl $ip
+{% endraw %}```
+
+On OS X, since Docker is running inside a VM, run the following command instead:
+
+```shell
+docker-machine ssh `docker-machine active` curl $ip
+```
+
+## Deploy a DNS
+
+Read the [documentation for manually deploying DNS](http://kubernetes.io/docs/getting-started-guides/docker-multinode/#deploy-dns-manually-for-v12x) for instructions.
+
+### Turning down your cluster
+
+1. Delete the nginx service and deployment. If you plan on re-creating your nginx deployment and service, you will need to clean them up first:
+
+```shell
+kubectl delete service,deployments nginx
+```
+
+2. Delete all the containers, including the kubelet:
+
+```shell
+docker rm -f kubelet
+docker rm -f `docker ps | grep k8s | awk '{print $1}'`
+```
+
+3.
Clean up the filesystem:
+
+On OS X, first ssh into the docker VM:
+
+```shell
+docker-machine ssh `docker-machine active`
+```
+
+```shell
+grep /var/lib/kubelet /proc/mounts | awk '{print $2}' | sudo xargs -n1 umount
+sudo rm -rf /var/lib/kubelet
+```
+
+### Troubleshooting
+
+#### Node is in `NotReady` state
+
+If you see your node as `NotReady`, it's possible that your OS does not have memcg enabled.
+
+1. Your kernel should support memory accounting. Ensure that the
+following configs are turned on in your Linux kernel:
+
+```shell
+CONFIG_RESOURCE_COUNTERS=y
+CONFIG_MEMCG=y
+```
+
+2. Enable memory accounting in the kernel, at boot, via command-line
+parameters as follows:
+
+```shell
+GRUB_CMDLINE_LINUX="cgroup_enable=memory=1"
+```
+
+NOTE: The above is specifically for GRUB2.
+You can check the command-line parameters passed to your kernel by looking at the
+output of `/proc/cmdline`:
+
+```shell
+$ cat /proc/cmdline
+BOOT_IMAGE=/boot/vmlinuz-3.18.4-aufs root=/dev/sda5 ro cgroup_enable=memory=1
+```
+
+## Support Level
+
+
+IaaS Provider        | Config. Mgmt | OS     | Networking | Conforms | Support Level
+-------------------- | ------------ | ------ | ---------- | ---------| ----------------------------
+Docker Single Node   | custom       | N/A    | local      |          | Project ([@brendandburns](https://github.com/brendandburns))
+
+
+
+## Further reading
+
+Please see the [Kubernetes docs](http://kubernetes.io/docs) for more details on administering
+and using a Kubernetes cluster.
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/local-cluster/docker.md?pixel)]()
+
diff --git a/devel/local-cluster/k8s-singlenode-docker.png b/devel/local-cluster/k8s-singlenode-docker.png
new file mode 100644
index 00000000..5ebf8126
Binary files /dev/null and b/devel/local-cluster/k8s-singlenode-docker.png differ
diff --git a/devel/local-cluster/local.md b/devel/local-cluster/local.md
new file mode 100644
index 00000000..60bd5a8f
--- /dev/null
+++ b/devel/local-cluster/local.md
@@ -0,0 +1,125 @@
+**Stop. This guide has been superseded by [Minikube](https://github.com/kubernetes/minikube), which is the recommended method of running Kubernetes on your local machine.**
+
+### Requirements
+
+#### Linux
+
+Not running Linux? Consider running Linux in a local virtual machine with [vagrant](https://www.vagrantup.com/), or on a cloud provider like Google Compute Engine.
+
+#### Docker
+
+You need [Docker](https://docs.docker.com/installation/#installation) version 1.8.3 or
+later. Ensure the Docker daemon is running and can be contacted (try `docker ps`).
+Some of the Kubernetes components need to run as root, which normally
+works fine with Docker.
+
+#### etcd
+
+You need [etcd](https://github.com/coreos/etcd/releases); make sure it is installed and in your ``$PATH``.
+
+#### go
+
+You need [go](https://golang.org/doc/install) version 1.4 or later; make sure it is installed and in your ``$PATH``.
+
+### Starting the cluster
+
+First, you need to [download Kubernetes](http://kubernetes.io/docs/getting-started-guides/binary_release/). Then open a separate tab of your terminal
+and run the following (since one needs sudo access to start/stop Kubernetes daemons, it is easier to run the entire script as root):
+
+```shell
+cd kubernetes
+hack/local-up-cluster.sh
+```
+
+This will build and start a lightweight local cluster, consisting of a master
+and a single node. Type Control-C to shut it down.
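Before running the script, you can sanity-check the requirements above with a short sketch like the following; the binary names are just the ones this guide asks for.

```shell
# Check that each required binary is installed and on $PATH.
missing=""
for bin in docker etcd go; do
  command -v "$bin" >/dev/null 2>&1 || missing="$missing $bin"
done
if [ -n "$missing" ]; then
  echo "Missing from \$PATH:$missing"
else
  echo "All prerequisites found"
fi
```

If anything is reported missing, install it (or add it to your `$PATH`) before running `hack/local-up-cluster.sh`.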
+
+You can use the `cluster/kubectl.sh` script to interact with the local cluster. `hack/local-up-cluster.sh` will
+print the commands to run to point kubectl at the local cluster.
+
+
+### Running a container
+
+Your cluster is running, and you want to start running containers!
+
+You can now use any of the `cluster/kubectl.sh` commands to interact with your local setup.
+
+```shell
+export KUBERNETES_PROVIDER=local
+cluster/kubectl.sh get pods
+cluster/kubectl.sh get services
+cluster/kubectl.sh get deployments
+cluster/kubectl.sh run my-nginx --image=nginx --replicas=2 --port=80
+
+## begin wait for provisioning to complete; you can monitor the docker pull by opening a new terminal
+  sudo docker images
+  ## you should see it pulling the nginx image; once the above command returns it
+  sudo docker ps
+  ## you should see your container running!
+  exit
+## end wait
+
+## create a service for nginx, which serves on port 80
+cluster/kubectl.sh expose deployment my-nginx --port=80 --name=my-nginx
+
+## introspect Kubernetes!
+cluster/kubectl.sh get pods
+cluster/kubectl.sh get services
+cluster/kubectl.sh get deployments
+
+## Test the nginx service with the IP/port from the "get services" command
+curl http://10.X.X.X:80/
+```
+
+### Running a user defined pod
+
+Note the difference between a [container](http://kubernetes.io/docs/user-guide/containers/)
+and a [pod](http://kubernetes.io/docs/user-guide/pods/). Since you only asked for the former, Kubernetes will create a wrapper pod for you.
+However, you cannot view the nginx start page on localhost. To verify that nginx is running, you need to run `curl` within the docker container (try `docker exec`).
+
+You can control the specifications of a pod via a user defined manifest, and reach nginx through your browser on the port specified therein:
+
+```shell
+cluster/kubectl.sh create -f test/fixtures/doc-yaml/user-guide/pod.yaml
+```
+
+Congratulations!
+
+### FAQs
+
+#### I cannot reach service IPs on the network.
+
+Some firewall software that uses iptables may not interact well with
+Kubernetes. If you have trouble around networking, try disabling any
+firewall or other iptables-using systems first. Also, you can check
+if SELinux is blocking anything by running a command such as `journalctl --since yesterday | grep avc`.
+
+By default the IP range for service cluster IPs is 10.0.*.* - depending on your
+docker installation, this may conflict with IPs for containers. If you find
+containers running with IPs in this range, edit `hack/local-up-cluster.sh` and
+change the `service-cluster-ip-range` flag to something else.
+
+#### I changed Kubernetes code, how do I run it?
+
+```shell
+cd kubernetes
+hack/build-go.sh
+hack/local-up-cluster.sh
+```
+
+#### kubectl claims to start a container but `get pods` and `docker ps` don't show it.
+
+One or more of the Kubernetes daemons might've crashed. Tail the [logs](http://kubernetes.io/docs/admin/cluster-troubleshooting/#looking-at-logs) of each in `/tmp`.
+
+```shell
+$ ls /tmp/kube*.log
+$ tail -f /tmp/kube-apiserver.log
+```
+
+#### The pods fail to connect to the services by host names
+
+The local-up-cluster.sh script doesn't start a DNS service. A similar situation can be found [here](http://issue.k8s.io/6667). You can start one manually.
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/local-cluster/local.md?pixel)]()
+
diff --git a/devel/local-cluster/vagrant.md b/devel/local-cluster/vagrant.md
new file mode 100644
index 00000000..0f0fe91c
--- /dev/null
+++ b/devel/local-cluster/vagrant.md
@@ -0,0 +1,397 @@
+Running Kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/develop on your local machine (Linux, Mac OS X).
+
+### Prerequisites
+
+1. Install the latest version (>= 1.7.4) of [Vagrant](http://www.vagrantup.com/downloads.html)
+2. Install one of:
+   1. The latest version of [Virtual Box](https://www.virtualbox.org/wiki/Downloads)
+   2.
[VMWare Fusion](https://www.vmware.com/products/fusion/) version 5 or greater, as well as the appropriate [Vagrant VMWare Fusion provider](https://www.vagrantup.com/vmware)
+   3. [VMWare Workstation](https://www.vmware.com/products/workstation/) version 9 or greater, as well as the [Vagrant VMWare Workstation provider](https://www.vagrantup.com/vmware)
+   4. [Parallels Desktop](https://www.parallels.com/products/desktop/) version 9 or greater, as well as the [Vagrant Parallels provider](https://parallels.github.io/vagrant-parallels/)
+   5. libvirt with KVM, with hardware virtualization support enabled, plus the [vagrant-libvirt](https://github.com/pradels/vagrant-libvirt) plugin. Fedora provides an official RPM, so you can install the plugin with `yum install vagrant-libvirt`
+
+### Setup
+
+Setting up a cluster is as simple as running:
+
+```sh
+export KUBERNETES_PROVIDER=vagrant
+curl -sS https://get.k8s.io | bash
+```
+
+Alternatively, you can download a [Kubernetes release](https://github.com/kubernetes/kubernetes/releases) and extract the archive. To start your local cluster, open a shell and run:
+
+```sh
+cd kubernetes
+
+export KUBERNETES_PROVIDER=vagrant
+./cluster/kube-up.sh
+```
+
+The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster management scripts which variant to use. If you forget to set this, the assumption is you are running on Google Compute Engine.
+
+By default, the Vagrant setup will create a single master VM (called kubernetes-master) and one node (called kubernetes-node-1). Each VM takes about 1 GB of memory, so make sure you have at least 2 GB to 4 GB free (plus appropriate free disk space).
+
+If you'd like more than one node, set the `NUM_NODES` environment variable to the number you want:
+
+```sh
+export NUM_NODES=3
+```
+
+Vagrant will provision each machine in the cluster with all the necessary components to run Kubernetes. The initial setup can take a few minutes to complete on each machine.
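As a rough sanity check before bringing the cluster up, you can estimate the memory the cluster will need from `NUM_NODES`; the sketch below assumes the default of roughly 1 GB per VM mentioned above.

```shell
# Estimate memory needed: one master VM plus NUM_NODES node VMs.
NUM_NODES=3
VM_MEMORY_MB=1024   # approximate default per-VM allotment
total_mb=$(( (NUM_NODES + 1) * VM_MEMORY_MB ))
echo "A ${NUM_NODES}-node cluster needs roughly ${total_mb} MB of free memory"
# → A 3-node cluster needs roughly 4096 MB of free memory
```

Compare the result against the free memory on your host before running `./cluster/kube-up.sh`.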
+ +If you installed more than one Vagrant provider, Kubernetes will usually pick the appropriate one. However, you can override which one Kubernetes will use by setting the [`VAGRANT_DEFAULT_PROVIDER`](https://docs.vagrantup.com/v2/providers/default.html) environment variable: + +```sh +export VAGRANT_DEFAULT_PROVIDER=parallels +export KUBERNETES_PROVIDER=vagrant +./cluster/kube-up.sh +``` + +By default, each VM in the cluster is running Fedora. + +To access the master or any node: + +```sh +vagrant ssh master +vagrant ssh node-1 +``` + +If you are running more than one node, you can access the others by: + +```sh +vagrant ssh node-2 +vagrant ssh node-3 +``` + +Each node in the cluster installs the docker daemon and the kubelet. + +The master node instantiates the Kubernetes master components as pods on the machine. + +To view the service status and/or logs on the kubernetes-master: + +```console +[vagrant@kubernetes-master ~] $ vagrant ssh master +[vagrant@kubernetes-master ~] $ sudo su + +[root@kubernetes-master ~] $ systemctl status kubelet +[root@kubernetes-master ~] $ journalctl -ru kubelet + +[root@kubernetes-master ~] $ systemctl status docker +[root@kubernetes-master ~] $ journalctl -ru docker + +[root@kubernetes-master ~] $ tail -f /var/log/kube-apiserver.log +[root@kubernetes-master ~] $ tail -f /var/log/kube-controller-manager.log +[root@kubernetes-master ~] $ tail -f /var/log/kube-scheduler.log +``` + +To view the services on any of the nodes: + +```console +[vagrant@kubernetes-master ~] $ vagrant ssh node-1 +[vagrant@kubernetes-master ~] $ sudo su + +[root@kubernetes-master ~] $ systemctl status kubelet +[root@kubernetes-master ~] $ journalctl -ru kubelet + +[root@kubernetes-master ~] $ systemctl status docker +[root@kubernetes-master ~] $ journalctl -ru docker +``` + +### Interacting with your Kubernetes cluster with Vagrant. + +With your Kubernetes cluster up, you can manage the nodes in your cluster with the regular Vagrant commands. 
+
+To push updates to new Kubernetes code after making source changes:
+
+```sh
+./cluster/kube-push.sh
+```
+
+To stop and then restart the cluster:
+
+```sh
+vagrant halt
+./cluster/kube-up.sh
+```
+
+To destroy the cluster:
+
+```sh
+vagrant destroy
+```
+
+Once your Vagrant machines are up and provisioned, the first thing to do is to check that you can use the `kubectl.sh` script.
+
+You may need to build the binaries first; you can do this with `make`.
+
+```console
+$ ./cluster/kubectl.sh get nodes
+
+NAME         LABELS
+10.245.1.4
+10.245.1.5
+10.245.1.3
+```
+
+### Authenticating with your master
+
+When using the vagrant provider in Kubernetes, the `cluster/kubectl.sh` script will cache your credentials in a `~/.kubernetes_vagrant_auth` file so you will not be prompted for them in the future.
+
+```sh
+cat ~/.kubernetes_vagrant_auth
+```
+
+```json
+{ "User": "vagrant",
+  "Password": "vagrant",
+  "CAFile": "/home/k8s_user/.kubernetes.vagrant.ca.crt",
+  "CertFile": "/home/k8s_user/.kubecfg.vagrant.crt",
+  "KeyFile": "/home/k8s_user/.kubecfg.vagrant.key"
+}
+```
+
+You should now be set to use the `cluster/kubectl.sh` script. For example, try listing the nodes that you have started:
+
+```sh
+./cluster/kubectl.sh get nodes
+```
+
+### Running containers
+
+Your cluster is running; you can list the nodes in your cluster:
+
+```sh
+$ ./cluster/kubectl.sh get nodes
+
+NAME         LABELS
+10.245.2.4
+10.245.2.3
+10.245.2.2
+```
+
+Now start running some containers!
+
+You can now use any of the `cluster/kube-*.sh` commands to interact with your VMs.
+Before starting a container, there will be no pods, services, or replication controllers.
+
+```sh
+$ ./cluster/kubectl.sh get pods
+NAME   READY   STATUS    RESTARTS   AGE
+
+$ ./cluster/kubectl.sh get services
+NAME   CLUSTER_IP   EXTERNAL_IP   PORT(S)   SELECTOR   AGE
+
+$ ./cluster/kubectl.sh get replicationcontrollers
+CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR   REPLICAS
+```
+
+Start a container running nginx with a replication controller and three replicas:
+
+```sh
+$ ./cluster/kubectl.sh run my-nginx --image=nginx --replicas=3 --port=80
+```
+
+When listing the pods, you will see that three pods have been created and are in the `Pending` state:
+
+```sh
+$ ./cluster/kubectl.sh get pods
+NAME             READY   STATUS    RESTARTS   AGE
+my-nginx-5kq0g   0/1     Pending   0          10s
+my-nginx-gr3hh   0/1     Pending   0          10s
+my-nginx-xql4j   0/1     Pending   0          10s
+```
+
+You need to wait for the provisioning to complete; you can monitor the nodes by doing:
+
+```sh
+$ vagrant ssh node-1 -c 'sudo docker images'
+kubernetes-node-1:
+    REPOSITORY          TAG       IMAGE ID       CREATED        VIRTUAL SIZE
+                                  96864a7d2df3   26 hours ago   204.4 MB
+    google/cadvisor     latest    e0575e677c50   13 days ago    12.64 MB
+    kubernetes/pause    latest    6c4579af347b   8 weeks ago    239.8 kB
+```
+
+Once the docker image for nginx has been downloaded, the container will start and you can list it:
+
+```sh
+$ vagrant ssh node-1 -c 'sudo docker ps'
+kubernetes-node-1:
+    CONTAINER ID   IMAGE                     COMMAND               CREATED          STATUS          PORTS                    NAMES
+    dbe79bf6e25b   nginx:latest              "nginx"               21 seconds ago   Up 19 seconds                            k8s--mynginx.8c5b8a3a--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--fcfa837f
+    fa0e29c94501   kubernetes/pause:latest   "/pause"              8 minutes ago    Up 8 minutes    0.0.0.0:8080->80/tcp     k8s--net.a90e7ce4--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--baf5b21b
+    aa2ee3ed844a   google/cadvisor:latest    "/usr/bin/cadvisor"   38 minutes ago   Up 38 minutes                            k8s--cadvisor.9e90d182--cadvisor_-_agent.file--4626b3a2
+    65a3a926f357   kubernetes/pause:latest   "/pause"              39 minutes ago   Up 39 minutes   0.0.0.0:4194->8080/tcp
k8s--net.c5ba7f0e--cadvisor_-_agent.file--342fd561
+```
+
+Going back to listing the pods, services and replicationcontrollers, you now have:
+
+```sh
+$ ./cluster/kubectl.sh get pods
+NAME             READY   STATUS    RESTARTS   AGE
+my-nginx-5kq0g   1/1     Running   0          1m
+my-nginx-gr3hh   1/1     Running   0          1m
+my-nginx-xql4j   1/1     Running   0          1m
+
+$ ./cluster/kubectl.sh get services
+NAME   CLUSTER_IP   EXTERNAL_IP   PORT(S)   SELECTOR   AGE
+
+$ ./cluster/kubectl.sh get replicationcontrollers
+CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR       REPLICAS   AGE
+my-nginx     my-nginx       nginx      run=my-nginx   3          1m
+```
+
+We did not start any services, hence there are none listed. But we see three replicas displayed properly.
+
+See [running your first containers](http://kubernetes.io/docs/user-guide/simple-nginx/) to learn how to create a service.
+
+You can already play with scaling the replicas with:
+
+```sh
+$ ./cluster/kubectl.sh scale rc my-nginx --replicas=2
+$ ./cluster/kubectl.sh get pods
+NAME             READY   STATUS    RESTARTS   AGE
+my-nginx-5kq0g   1/1     Running   0          2m
+my-nginx-gr3hh   1/1     Running   0          2m
+```
+
+Congratulations!
+
+## Troubleshooting
+
+#### I keep downloading the same (large) box all the time!
+
+By default the Vagrantfile will download the box from S3. You can change this (and cache the box locally) by providing a name and an alternate URL when calling `kube-up.sh`:
+
+```sh
+export KUBERNETES_BOX_NAME=choose_your_own_name_for_your_kuber_box
+export KUBERNETES_BOX_URL=path_of_your_kuber_box
+export KUBERNETES_PROVIDER=vagrant
+./cluster/kube-up.sh
+```
+
+#### I am getting timeouts when trying to curl the master from my host!
+
+During provision of the cluster, you may see the following message:
+
+```sh
+Validating node-1
+.............
+Waiting for each node to be registered with cloud provider
+error: couldn't read version from server: Get https://10.245.1.2/api: dial tcp 10.245.1.2:443: i/o timeout
+```
+
+Some users have reported that VPNs may prevent traffic from being routed from the host machine into the virtual machine network.
+
+To debug, first verify that the master is binding to the proper IP address:
+
+```sh
+$ vagrant ssh master
+$ ifconfig | grep eth1 -C 2
+eth1: flags=4163  mtu 1500
+      inet 10.245.1.2 netmask 255.255.255.0 broadcast 10.245.1.255
+```
+
+Then verify that your host machine has a network connection to a bridge that can serve that address:
+
+```sh
+$ ifconfig | grep 10.245.1 -C 2
+
+vboxnet5: flags=4163  mtu 1500
+    inet 10.245.1.1 netmask 255.255.255.0 broadcast 10.245.1.255
+    inet6 fe80::800:27ff:fe00:5 prefixlen 64 scopeid 0x20
+    ether 0a:00:27:00:00:05 txqueuelen 1000 (Ethernet)
+```
+
+If you do not see a response on your host machine, you will most likely need to connect your host to the virtual network created by the virtualization provider.
+
+If you do see a network, but are still unable to ping the machine, check if your VPN is blocking the request.
+
+#### I just created the cluster, but I am getting authorization errors!
+
+You probably have an incorrect `~/.kubernetes_vagrant_auth` file for the cluster you are attempting to contact.
+
+```sh
+rm ~/.kubernetes_vagrant_auth
+```
+
+After using kubectl.sh, make sure that the correct credentials are set:
+
+```sh
+cat ~/.kubernetes_vagrant_auth
+```
+
+```json
+{
+  "User": "vagrant",
+  "Password": "vagrant"
+}
+```
+
+#### I just created the cluster, but I do not see my container running!
+
+If this is your first time creating the cluster, the kubelet on each node schedules a number of docker pull requests to fetch prerequisite images. This can take some time and, as a result, may delay your initial pod getting provisioned.
+
+#### I have brought Vagrant up but the nodes cannot validate!
+
+Log on to one of the nodes (`vagrant ssh node-1`) and inspect the salt minion log (`sudo cat /var/log/salt/minion`).
+
+#### I want to change the number of nodes!
+
+You can control the number of nodes that are instantiated via the environment variable `NUM_NODES` on your host machine. If you plan to work with replicas, we strongly encourage you to work with enough nodes to satisfy your largest intended replica size. If you do not plan to work with replicas, you can save some system resources by running with a single node. You do this by setting `NUM_NODES` to 1, like so:
+
+```sh
+export NUM_NODES=1
+```
+
+#### I want my VMs to have more memory!
+
+You can control the memory allotted to virtual machines with the `KUBERNETES_MEMORY` environment variable.
+Just set it to the number of megabytes you would like the machines to have. For example:
+
+```sh
+export KUBERNETES_MEMORY=2048
+```
+
+If you need more granular control, you can set the amount of memory for the master and nodes independently. For example:
+
+```sh
+export KUBERNETES_MASTER_MEMORY=1536
+export KUBERNETES_NODE_MEMORY=2048
+```
+
+#### I want to set proxy settings for my Kubernetes cluster bootstrapping!
+
+If you are behind a proxy, you need to install the Vagrant proxy plugin and set the proxy settings:
+
+```sh
+vagrant plugin install vagrant-proxyconf
+export VAGRANT_HTTP_PROXY=http://username:password@proxyaddr:proxyport
+export VAGRANT_HTTPS_PROXY=https://username:password@proxyaddr:proxyport
+```
+
+Optionally, you can specify addresses not to proxy, for example:
+
+```sh
+export VAGRANT_NO_PROXY=127.0.0.1
+```
+
+If you are using sudo to build Kubernetes (for example, `make quick-release`), you need to run `sudo -E make quick-release` to pass the environment variables through.
+
+#### I ran vagrant suspend and nothing works!
+
+`vagrant suspend` seems to mess up the network. This is not supported at this time.
+
+#### I want vagrant to sync folders via nfs!
+
+You can ensure that Vagrant uses NFS to sync folders with virtual machines by setting the `KUBERNETES_VAGRANT_USE_NFS` environment variable to `true`. NFS is faster than VirtualBox or VMware's "shared folders" and does not require guest additions. See the [vagrant docs](http://docs.vagrantup.com/v2/synced-folders/nfs.html) for details on configuring NFS on the host. This setting will have no effect on the libvirt provider, which uses NFS by default. For example:
+
+```sh
+export KUBERNETES_VAGRANT_USE_NFS=true
+```
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/local-cluster/vagrant.md?pixel)]()
+
diff --git a/devel/logging.md b/devel/logging.md
new file mode 100644
index 00000000..1241ee7f
--- /dev/null
+++ b/devel/logging.md
@@ -0,0 +1,36 @@
+## Logging Conventions
+
+The following are the conventions for which glog levels to use.
+[glog](http://godoc.org/github.com/golang/glog) is globally preferred to
+[log](http://golang.org/pkg/log/) for better runtime control.
+
+* glog.Errorf() - Always an error
+
+* glog.Warningf() - Something unexpected, but probably not an error
+
+* glog.Infof() has multiple levels:
+  * glog.V(0) - Generally useful for this to ALWAYS be visible to an operator
+    * Programmer errors
+    * Logging extra info about a panic
+    * CLI argument handling
+  * glog.V(1) - A reasonable default log level if you don't want verbosity.
+    * Information about config (listening on X, watching Y)
+    * Errors that repeat frequently and relate to conditions that can be corrected (pod detected as unhealthy)
+  * glog.V(2) - Useful steady state information about the service and important log messages that may correlate to significant changes in the system. This is the recommended default log level for most systems.
+    * Logging HTTP requests and their exit code
+    * System state changing (killing pod)
+    * Controller state change events (starting pods)
+    * Scheduler log messages
+  * glog.V(3) - Extended information about changes
+    * More info about system state changes
+  * glog.V(4) - Debug level verbosity (for now)
+    * Logging in particularly thorny parts of code where you may want to come back later and check it
+
+As per the comments, the practical default level is V(2). Developers and QE
+environments may wish to run at V(3) or V(4). If you wish to change the log
+level, you can pass in `-v=X` where X is the desired maximum level to log.
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/logging.md?pixel)]()
+
diff --git a/devel/mesos-style.md b/devel/mesos-style.md
new file mode 100644
index 00000000..81554ce8
--- /dev/null
+++ b/devel/mesos-style.md
@@ -0,0 +1,218 @@
+# Building Mesos/Omega-style frameworks on Kubernetes
+
+## Introduction
+
+We have observed two different cluster management architectures, which can be
+categorized as "Borg-style" and "Mesos/Omega-style." In the remainder of this
+document, we will abbreviate the latter as "Mesos-style." Although out-of-the-box
+Kubernetes uses a Borg-style architecture, it can also be configured in a
+Mesos-style architecture, and in fact can support both styles at the same time.
+This document describes the two approaches and explains how to deploy a
+Mesos-style architecture on Kubernetes.
+
+As an aside, the converse is also true: one can deploy a Borg/Kubernetes-style
+architecture on Mesos.
+
+This document is NOT intended to provide a comprehensive comparison of Borg and
+Mesos. For example, we omit discussion of the tradeoffs between scheduling with
+full knowledge of cluster state vs. scheduling using the "offer" model. That
+issue is discussed in some detail in the Omega paper.
+(See [references](#references) below.)
+
+
+## What is a Borg-style architecture?
+ +A Borg-style architecture is characterized by: + +* a single logical API endpoint for clients, where some amount of processing is +done on requests, such as admission control and applying defaults + +* generic (non-application-specific) collection abstractions described +declaratively, + +* generic controllers/state machines that manage the lifecycle of the collection +abstractions and the containers spawned from them + +* a generic scheduler + +For example, Borg's primary collection abstraction is a Job, and every +application that runs on Borg--whether it's a user-facing service like the GMail +front-end, a batch job like a MapReduce, or an infrastructure service like +GFS--must represent itself as a Job. Borg has corresponding state machine logic +for managing Jobs and their instances, and a scheduler that's responsible for +assigning the instances to machines. + +The flow of a request in Borg is: + +1. Client submits a collection object to the Borgmaster API endpoint + +1. Admission control, quota, applying defaults, etc. run on the collection + +1. If the collection is admitted, it is persisted, and the collection state +machine creates the underlying instances + +1. The scheduler assigns a hostname to the instance, and tells the Borglet to +start the instance's container(s) + +1. Borglet starts the container(s) + +1. The instance state machine manages the instances and the collection state +machine manages the collection during their lifetimes + +Out-of-the-box Kubernetes has *workload-specific* abstractions (ReplicaSet, Job, +DaemonSet, etc.) and corresponding controllers, and in the future may have +[workload-specific schedulers](../../docs/proposals/multiple-schedulers.md), +e.g. different schedulers for long-running services vs. short-running batch. But +these abstractions, controllers, and schedulers are not *application-specific*. + +The usual request flow in Kubernetes is very similar, namely + +1. Client submits a collection object (e.g. 
ReplicaSet, Job, ...) to the API +server + +1. Admission control, quota, applying defaults, etc. run on the collection + +1. If the collection is admitted, it is persisted, and the corresponding +collection controller creates the underlying pods + +1. Admission control, quota, applying defaults, etc. runs on each pod; if there +are multiple schedulers, one of the admission controllers will write the +scheduler name as an annotation based on a policy + +1. If a pod is admitted, it is persisted + +1. The appropriate scheduler assigns a nodeName to the instance, which triggers +the Kubelet to start the pod's container(s) + +1. Kubelet starts the container(s) + +1. The controller corresponding to the collection manages the pod and the +collection during their lifetime + +In the Borg model, application-level scheduling and cluster-level scheduling are +handled by separate components. For example, a MapReduce master might request +Borg to create a job with a certain number of instances with a particular +resource shape, where each instance corresponds to a MapReduce worker; the +MapReduce master would then schedule individual units of work onto those +workers. + +## What is a Mesos-style architecture? + +Mesos is fundamentally designed to support multiple application-specific +"frameworks." A framework is composed of a "framework scheduler" and a +"framework executor." We will abbreviate "framework scheduler" as "framework" +since "scheduler" means something very different in Kubernetes (something that +just assigns pods to nodes). + +Unlike Borg and Kubernetes, where there is a single logical endpoint that +receives all API requests (the Borgmaster and API server, respectively), in +Mesos every framework is a separate API endpoint. 
Mesos does not have any +standard set of collection abstractions, controllers/state machines, or +schedulers; the logic for all of these things is contained in each +[application-specific framework](http://mesos.apache.org/documentation/latest/frameworks/) +individually. (Note that the notion of application-specific does sometimes blur +into the realm of workload-specific, for example +[Chronos](https://github.com/mesos/chronos) is a generic framework for batch +jobs. However, regardless of what set of Mesos frameworks you are using, the key +properties remain: each framework is its own API endpoint with its own +client-facing and internal abstractions, state machines, and scheduler). + +A Mesos framework can integrate application-level scheduling and cluster-level +scheduling into a single component. + +Note: Although Mesos frameworks expose their own API endpoints to clients, they +consume a common infrastructure via a common API endpoint for controlling tasks +(launching, detecting failure, etc.) and learning about available cluster +resources. More details +[here](http://mesos.apache.org/documentation/latest/scheduler-http-api/). + +## Building a Mesos-style framework on Kubernetes + +Implementing the Mesos model on Kubernetes boils down to enabling +application-specific collection abstractions, controllers/state machines, and +scheduling. There are just three steps: + +* Use API plugins to create API resources for your new application-specific +collection abstraction(s) + +* Implement controllers for the new abstractions (and for managing the lifecycle +of the pods the controllers generate) + +* Implement a scheduler with the application-specific scheduling logic + +Note that the last two can be combined: a Kubernetes controller can do the +scheduling for the pods it creates, by writing node name to the pods when it +creates them. 
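For illustration, this is the kind of pod object such a combined controller/scheduler might create directly, with the node already chosen; all names here are hypothetical, and only `spec.nodeName` is the essential detail:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-framework-task-1      # hypothetical pod created by the framework's controller
spec:
  nodeName: kubernetes-node-1    # pre-set by the controller, so no separate scheduler is involved
  containers:
  - name: worker
    image: nginx                 # stand-in for the framework's worker image
```

Because `nodeName` is already populated, the kubelet on that node starts the pod without any scheduler participating.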
+ +Once you've done this, you end up with an architecture that is extremely similar +to the Mesos-style--the Kubernetes controller is effectively a Mesos framework. +The remaining differences are: + +* In Kubernetes, all API operations go through a single logical endpoint, the +API server (we say logical because the API server can be replicated). In +contrast, in Mesos, API operations go to a particular framework. However, the +Kubernetes API plugin model makes this difference fairly small. + +* In Kubernetes, application-specific admission control, quota, defaulting, etc. +rules can be implemented in the API server rather than in the controller. Of +course you can choose to make these operations be no-ops for your +application-specific collection abstractions, and handle them in your controller. + +* On the node level, Mesos allows application-specific executors, whereas +Kubernetes only has executors for Docker and rkt containers. + +The end-to-end flow is: + +1. Client submits an application-specific collection object to the API server + +2. The API server plugin for that collection object forwards the request to the +API server that handles that collection type + +3. Admission control, quota, applying defaults, etc. runs on the collection +object + +4. If the collection is admitted, it is persisted + +5. The collection controller sees the collection object and in response creates +the underlying pods and chooses which nodes they will run on by setting node +name + +6. Kubelet sees the pods with node name set and starts the container(s) + +7. The collection controller manages the pods and the collection during their +lifetimes + +*Note: if the controller and scheduler are separated, then step 5 breaks +down into multiple steps:* + +(5a) collection controller creates pods with empty node name. + +(5b) API server admission control, quota, defaulting, etc. 
runs on the
+pods; one of the admission controller steps writes the scheduler name as an
+annotation on each pod (see pull request `#18262` for more details).
+
+(5c) The corresponding application-specific scheduler chooses a node and
+writes node name, which triggers the Kubelet to start the pod's container(s).
+
+As a final note, the Kubernetes model allows multiple levels of iterative
+refinement of runtime abstractions, as long as the lowest level is the pod. For
+example, clients of application Foo might create a `FooSet` which is picked up
+by the FooController which in turn creates `BatchFooSet` and `ServiceFooSet`
+objects, which are picked up by the BatchFoo controller and ServiceFoo
+controller respectively, which in turn create pods. In between each of these
+steps there is an opportunity for object-specific admission control, quota, and
+defaulting to run in the API server, though these can instead be handled by the
+controllers.
+
+## References
+
+Mesos is described [here](https://www.usenix.org/legacy/event/nsdi11/tech/full_papers/Hindman_new.pdf).
+Omega is described [here](http://research.google.com/pubs/pub41684.html).
+Borg is described [here](http://research.google.com/pubs/pub43438.html).
+
+
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/mesos-style.md?pixel)]()
+
diff --git a/devel/node-performance-testing.md b/devel/node-performance-testing.md
new file mode 100644
index 00000000..d6bb657f
--- /dev/null
+++ b/devel/node-performance-testing.md
@@ -0,0 +1,127 @@
+# Measuring Node Performance
+
+This document outlines the issues and pitfalls of measuring Node performance, as
+well as the tools available.
+
+## Cluster Set-up
+
+There are lots of factors which can affect node performance numbers, so care
+must be taken in setting up the cluster to make the intended measurements. In
+addition to taking the following steps into consideration, it is important to
+document precisely which setup was used.
For example, performance can vary
+wildly from commit-to-commit, so it is very important to **document which commit
+or version** of Kubernetes was used, which Docker version was used, etc.
+
+### Addon pods
+
+Be aware of which addon pods are running on which nodes. By default Kubernetes
+runs 8 addon pods, plus another 2 per node (`fluentd-elasticsearch` and
+`kube-proxy`) in the `kube-system` namespace. The addon pods can be disabled for
+more consistent results, but doing so can also have performance implications.
+
+For example, Heapster polls each node regularly to collect stats data. Disabling
+Heapster will hide the performance cost of serving those stats in the Kubelet.
+
+#### Disabling Add-ons
+
+Disabling addons is simple. Just ssh into the Kubernetes master and move the
+addon from `/etc/kubernetes/addons/` to a backup location. More details
+[here](../../cluster/addons/).
+
+### Which / how many pods?
+
+Performance will vary a lot between a node with 0 pods and a node with 100 pods.
+In many cases you'll want to make measurements with several different numbers of
+pods. On a single node cluster, scaling a replication controller makes this easy;
+just make sure the system reaches a steady state before starting the
+measurement. E.g. `kubectl scale replicationcontroller pause --replicas=100`
+
+In most cases pause pods will yield the most consistent measurements since the
+system will not be affected by pod load. However, in some special cases
+Kubernetes has been tuned to optimize pods that are not doing anything, such as
+the cAdvisor housekeeping (stats gathering). In these cases, performing a very
+light task (such as a simple network ping) can make a difference.
+
+Finally, you should also consider which features your pods should be using. For
+example, if you want to measure performance with probing, you should obviously
+use pods with liveness or readiness probes configured. Likewise for volumes,
+number of containers, etc.
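
As an illustration of the last point, a pod used to measure probing overhead might look like the following. The names and probe values here are arbitrary placeholders, not a recommended configuration; any image that can answer the probe will do:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-test
spec:
  containers:
  - name: web
    image: nginx
    livenessProbe:
      httpGet:
        path: /
        port: 80
      periodSeconds: 10
```

Scaling many replicas of a pod like this, versus the same pod without the probe, isolates the cost of probing itself.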
+
+### Other Tips
+
+**Number of nodes** - On the one hand, it can be easier to manage logs, pods,
+environment etc. with a single node to worry about. On the other hand, having
+multiple nodes will let you gather more data in parallel for more robust
+sampling.
+
+## E2E Performance Test
+
+There is an end-to-end test for collecting overall resource usage of node
+components: [kubelet_perf.go](../../test/e2e/kubelet_perf.go). To
+run the test, simply make sure you have an e2e cluster running (`go run
+hack/e2e.go -up`) and [set up](#cluster-set-up) correctly.
+
+Run the test with `go run hack/e2e.go -v -test
+--test_args="--ginkgo.focus=resource\susage\stracking"`. You may also wish to
+customise the number of pods or other parameters of the test (remember to rerun
+`make WHAT=test/e2e/e2e.test` after you do).
+
+## Profiling
+
+Kubelet installs the [go pprof handlers](https://golang.org/pkg/net/http/pprof/),
+which can be queried for CPU profiles:
+
+```console
+$ kubectl proxy &
+Starting to serve on 127.0.0.1:8001
+$ curl -G "http://localhost:8001/api/v1/proxy/nodes/${NODE}:10250/debug/pprof/profile?seconds=${DURATION_SECONDS}" > $OUTPUT
+$ KUBELET_BIN=_output/dockerized/bin/linux/amd64/kubelet
+$ go tool pprof -web $KUBELET_BIN $OUTPUT
+```
+
+`pprof` can also provide heap usage, from the `/debug/pprof/heap` endpoint
+(e.g. `http://localhost:8001/api/v1/proxy/nodes/${NODE}:10250/debug/pprof/heap`).
+
+More information on go profiling can be found
+[here](http://blog.golang.org/profiling-go-programs).
+
+## Benchmarks
+
+Before jumping through all the hoops to measure a live Kubernetes node in a real
+cluster, it is worth considering whether the data you need can be gathered
+through a benchmark test.
Go provides a really simple benchmarking mechanism;
+just add a unit test of the form:
+
+```go
+// In foo_test.go
+func BenchmarkFoo(b *testing.B) {
+	b.StopTimer()
+	setupFoo() // Perform any global setup
+	b.StartTimer()
+	for i := 0; i < b.N; i++ {
+		foo() // Functionality to measure
+	}
+}
+```
+
+Then:
+
+```console
+$ go test -bench=. -benchtime=${SECONDS}s foo_test.go
+```
+
+More details on benchmarking [here](https://golang.org/pkg/testing/).
+
+## TODO
+
+- (taotao) Measuring docker performance
+- Expand cluster set-up section
+- (vishh) Measuring disk usage
+- (yujuhong) Measuring memory usage
+- Add section on monitoring kubelet metrics (e.g. with prometheus)
+
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/node-performance-testing.md?pixel)]()
+
diff --git a/devel/on-call-build-cop.md b/devel/on-call-build-cop.md
new file mode 100644
index 00000000..15c71e5d
--- /dev/null
+++ b/devel/on-call-build-cop.md
@@ -0,0 +1,151 @@
+## Kubernetes "Github and Build-cop" Rotation
+
+### Prerequisites
+
+* Ensure you have [write access to http://github.com/kubernetes/kubernetes](https://github.com/orgs/kubernetes/teams/kubernetes-maintainers)
+  * Test your admin access by e.g. adding a label to an issue.
+
+### Traffic sources and responsibilities
+
+* GitHub Kubernetes [issues](https://github.com/kubernetes/kubernetes/issues)
+and [pulls](https://github.com/kubernetes/kubernetes/pulls): Your job is to be
+the first responder to all new issues and PRs. If you are not equipped to do
+this (which is fine!), it is your job to seek guidance!
+
+  * Support issues should be closed and redirected to Stackoverflow (see example
+response below).
+
+  * All incoming issues should be tagged with a team label
+(team/{api,ux,control-plane,node,cluster,csi,redhat,mesosphere,gke,release-infra,test-infra,none});
+for issues that overlap teams, you can use multiple team labels
+
+    * There is a related concept of "Github teams" which allow you to @ mention
+a set of people; feel free to @ mention a Github team if you wish, but this is
+not a substitute for adding a team/* label, which is required
+
+      * [Google teams](https://github.com/orgs/kubernetes/teams?utf8=%E2%9C%93&query=goog-)
+      * [Redhat teams](https://github.com/orgs/kubernetes/teams?utf8=%E2%9C%93&query=rh-)
+      * [SIGs](https://github.com/orgs/kubernetes/teams?utf8=%E2%9C%93&query=sig-)
+
+  * If the issue is reporting broken builds, broken e2e tests, or other
+obvious P0 issues, label the issue with priority/P0 and assign it to someone.
+This is the only situation in which you should add a priority/* label.
+    * non-P0 issues do not need a reviewer assigned initially
+
+  * Assign any issues related to Vagrant to @derekwaynecarr (and @mention him
+in the issue)
+
+  * All incoming PRs should be assigned a reviewer.
+
+    * unless it is a WIP (Work in Progress), RFC (Request for Comments), or design proposal.
+    * An auto-assigner [should do this for you](https://github.com/kubernetes/kubernetes/pull/12365/files)
+    * When in doubt, choose a TL or team maintainer of the most relevant team; they can delegate
+
+  * Keep in mind that you can @ mention people in an issue/PR to bring it to
+their attention without assigning it to them. You can also @ mention github
+teams, such as @kubernetes/goog-ux or @kubernetes/kubectl
+
+  * If you need help triaging an issue or PR, consult with (or assign it to)
+@brendandburns, @thockin, @bgrant0607, @quinton-hoole, @davidopp, @dchen1107,
+@lavalamp (all U.S. Pacific Time) or @fgrzadkowski (Central European Time).
+
+  * At the beginning of your shift, please add team/* labels to any issues that
+have fallen through the cracks and don't have one. Likewise, be fair to the next
+person in rotation: try to ensure that every issue that gets filed while you are
+on duty is handled. The Github query to find issues with no team/* label is
+[here](https://github.com/kubernetes/kubernetes/issues?utf8=%E2%9C%93&q=is%3Aopen+is%3Aissue+-label%3Ateam%2Fcontrol-plane+-label%3Ateam%2Fmesosphere+-label%3Ateam%2Fredhat+-label%3Ateam%2Frelease-infra+-label%3Ateam%2Fnone+-label%3Ateam%2Fnode+-label%3Ateam%2Fcluster+-label%3Ateam%2Fux+-label%3Ateam%2Fapi+-label%3Ateam%2Ftest-infra+-label%3Ateam%2Fgke+-label%3A"team%2FCSI-API+Machinery+SIG"+-label%3Ateam%2Fhuawei+-label%3Ateam%2Fsig-aws).
+
+Example response for support issues:
+
+```code
+Please re-post your question to [stackoverflow](http://stackoverflow.com/questions/tagged/kubernetes).
+
+We are trying to consolidate the channels to which questions for help/support
+are posted so that we can improve our efficiency in responding to your requests,
+and to make it easier for you to find answers to frequently asked questions and
+how to address common use cases.
+
+We regularly see messages posted in multiple forums, with the full response
+thread only in one place or, worse, spread across multiple forums. Also, the
+large volume of support issues on github is making it difficult for us to use
+issues to identify real bugs.
+
+The Kubernetes team scans stackoverflow on a regular basis, and will try to
+ensure your questions don't go unanswered.
+
+Before posting a new question, please search stackoverflow for answers to
+similar questions, and also familiarize yourself with:
+
+  * [user guide](http://kubernetes.io/docs/user-guide/)
+  * [troubleshooting guide](http://kubernetes.io/docs/admin/cluster-troubleshooting/)
+
+Again, thanks for using Kubernetes.
+
+The Kubernetes Team
+```
+
+### Build-copping
+
+* The [merge-bot submit queue](http://submit-queue.k8s.io/)
+([source](https://github.com/kubernetes/contrib/tree/master/mungegithub/mungers/submit-queue.go))
+should auto-merge all eligible PRs for you once they've passed all the relevant
+checks mentioned below and all
+[critical e2e tests](https://goto.google.com/k8s-test/view/Critical%20Builds/)
+are passing. If the merge-bot has been disabled for some reason, or tests are
+failing, you might need to do some manual merging to get things back on track.
+
+* Once a day or so, look at the
+[flaky test builds](https://goto.google.com/k8s-test/view/Flaky/); if they are
+timing out, clusters are failing to start, or tests are consistently failing
+(instead of just flaking), file an issue to get things back on track.
+
+* Jobs that are not in [critical e2e tests](https://goto.google.com/k8s-test/view/Critical%20Builds/)
+or [flaky test builds](https://goto.google.com/k8s-test/view/Flaky/) are not
+your responsibility to monitor. The `Test owner:` in the job description will be
+automatically emailed if the job is failing.
+
+* If you are oncall, ensure that PRs conforming to the following
+prerequisites are being merged at a reasonable rate:
+
+  * [Have been LGTMd](https://github.com/kubernetes/kubernetes/labels/lgtm)
+  * Pass Travis and Jenkins per-PR tests.
+  * Author has signed CLA if applicable.
+
+
+* Although the shift schedule shows you as being scheduled Monday to Monday,
+  working on the weekend is neither expected nor encouraged. Enjoy your time
+  off.
+
+* When the build is broken, roll back the PRs responsible ASAP.
+
+* When E2E tests are unstable, a "merge freeze" may be instituted. During a
+merge freeze:
+
+  * Oncall should slowly merge LGTMd changes throughout the day while monitoring
+E2E to ensure stability.
+
+  * Ideally the E2E run should be green, but some tests are flaky and can fail
+randomly (not as a result of a particular change).
* If a large number of tests fail, or tests that normally pass fail, that
+is an indication that one or more of the PR(s) in that build might be
+problematic (and should be reverted).
+    * Use the Test Results Analyzer to see individual test history over time.
+
+
+* Flake mitigation
+
+  * Tests that flake (fail a small percentage of the time) need an issue filed
+against them. Please read [this](flaky-tests.md#filing-issues-for-flaky-tests);
+the build cop is expected to file issues for any flaky tests they encounter.
+
+  * It's reasonable to manually merge PRs that fix a flake or otherwise mitigate it.
+
+### Contact information
+
+[@k8s-oncall](https://github.com/k8s-oncall) will reach the current person on
+call.
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/on-call-build-cop.md?pixel)]()
+
diff --git a/devel/on-call-rotations.md b/devel/on-call-rotations.md
new file mode 100644
index 00000000..a6535e82
--- /dev/null
+++ b/devel/on-call-rotations.md
@@ -0,0 +1,43 @@
+## Kubernetes On-Call Rotations
+
+### Kubernetes "first responder" rotations
+
+Kubernetes has generated a lot of public traffic: email, pull-requests, bugs,
+etc. So much traffic that it's becoming impossible to keep up with it all! This
+is a fantastic problem to have. In order to be sure that SOMEONE, but not
+EVERYONE on the team is paying attention to public traffic, we have instituted
+two "first responder" rotations, listed below. Please read this page before
+proceeding to the pages linked below, which are specific to each rotation.
+
+Please also read our [notes on OSS collaboration](collab.md), particularly the
+bits about hours. Specifically, each rotation is expected to be active primarily
+during work hours, less so off hours.
+
+During the regular work hours of your shift, your primary responsibility is
+to monitor the traffic sources specific to your rotation. You can check traffic
+in the evenings if you feel so inclined, but it is not expected to be as highly
+focused as during work hours. For weekends, you should check traffic very
+occasionally (e.g. once or twice a day). Again, it is not expected to be as
+highly focused as on workdays. It is assumed that over time, everyone will get
+weekday and weekend shifts, so the workload will balance out.
+
+If you cannot serve your shift, and you know this ahead of time, it is your
+responsibility to find someone to cover and to change the rotation. If you have
+an emergency, your responsibilities fall on the primary of the other rotation,
+who acts as your secondary. If you need help to cover all of the tasks, ask
+partners with on-call rotations (e.g.,
+[Redhat](https://github.com/orgs/kubernetes/teams/rh-oncall)).
+
+If you are not on duty you DO NOT need to do these things. You are free to focus
+on "real work".
+
+Note that Kubernetes will occasionally enter code slush/freeze, prior to
+milestones. When it does, there might be changes in the instructions (assigning
+milestones, for instance).
+
+* [Github and Build Cop Rotation](on-call-build-cop.md)
+* [User Support Rotation](on-call-user-support.md)
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/on-call-rotations.md?pixel)]()
+
diff --git a/devel/on-call-user-support.md b/devel/on-call-user-support.md
new file mode 100644
index 00000000..a111c6fe
--- /dev/null
+++ b/devel/on-call-user-support.md
@@ -0,0 +1,89 @@
+## Kubernetes "User Support" Rotation
+
+### Traffic sources and responsibilities
+
+* [StackOverflow](http://stackoverflow.com/questions/tagged/kubernetes) and
+[ServerFault](http://serverfault.com/questions/tagged/google-kubernetes):
+Respond to any thread that has no responses and is more than 6 hours old (over
+time we will lengthen this timeout to allow community responses). If you are not
+equipped to respond, it is your job to redirect to someone who can.
+ + * [Query for unanswered Kubernetes StackOverflow questions](http://stackoverflow.com/search?q=%5Bkubernetes%5D+answers%3A0) + * [Query for unanswered Kubernetes ServerFault questions](http://serverfault.com/questions/tagged/google-kubernetes?sort=unanswered&pageSize=15) + * Direct poorly formulated questions to [stackoverflow's tips about how to ask](http://stackoverflow.com/help/how-to-ask) + * Direct off-topic questions to [stackoverflow's policy](http://stackoverflow.com/help/on-topic) + +* [Slack](https://kubernetes.slack.com) ([registration](http://slack.k8s.io)): +Your job is to be on Slack, watching for questions and answering or redirecting +as needed. Also check out the [Slack Archive](http://kubernetes.slackarchive.io/). + +* [Email/Groups](https://groups.google.com/forum/#!forum/google-containers): +Respond to any thread that has no responses and is more than 6 hours old (over +time we will lengthen this timeout to allow community responses). If you are not +equipped to respond, it is your job to redirect to someone who can. + +* [Legacy] [IRC](irc://irc.freenode.net/#google-containers) +(irc.freenode.net #google-containers): watch IRC for questions and try to +redirect users to Slack. Also check out the +[IRC logs](https://botbot.me/freenode/google-containers/). + +In general, try to direct support questions to: + +1. Documentation, such as the [user guide](../user-guide/README.md) and +[troubleshooting guide](http://kubernetes.io/docs/troubleshooting/) + +2. Stackoverflow + +If you see questions on a forum other than Stackoverflow, try to redirect them +to Stackoverflow. Example response: + +```code +Please re-post your question to [stackoverflow] +(http://stackoverflow.com/questions/tagged/kubernetes). 
+ +We are trying to consolidate the channels to which questions for help/support +are posted so that we can improve our efficiency in responding to your requests, +and to make it easier for you to find answers to frequently asked questions and +how to address common use cases. + +We regularly see messages posted in multiple forums, with the full response +thread only in one place or, worse, spread across multiple forums. Also, the +large volume of support issues on github is making it difficult for us to use +issues to identify real bugs. + +The Kubernetes team scans stackoverflow on a regular basis, and will try to +ensure your questions don't go unanswered. + +Before posting a new question, please search stackoverflow for answers to +similar questions, and also familiarize yourself with: + + * [user guide](http://kubernetes.io/docs/user-guide/) + * [troubleshooting guide](http://kubernetes.io/docs/troubleshooting/) + +Again, thanks for using Kubernetes. + +The Kubernetes Team +``` + +If you answer a question (in any of the above forums) that you think might be +useful for someone else in the future, *please add it to one of the FAQs in the +wiki*: + +* [User FAQ](https://github.com/kubernetes/kubernetes/wiki/User-FAQ) +* [Developer FAQ](https://github.com/kubernetes/kubernetes/wiki/Developer-FAQ) +* [Debugging FAQ](https://github.com/kubernetes/kubernetes/wiki/Debugging-FAQ). + +Getting it into the FAQ is more important than polish. Please indicate the date +it was added, so people can judge the likelihood that it is out-of-date (and +please correct any FAQ entries that you see contain out-of-date information). + +### Contact information + +[@k8s-support-oncall](https://github.com/k8s-support-oncall) will reach the +current person on call. 
+
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/on-call-user-support.md?pixel)]()
+
diff --git a/devel/owners.md b/devel/owners.md
new file mode 100644
index 00000000..217585ce
--- /dev/null
+++ b/devel/owners.md
@@ -0,0 +1,100 @@
+# Owners files
+
+_Note_: This is a design for a feature that is not yet implemented. See the [contrib PR](https://github.com/kubernetes/contrib/issues/1389) for the current progress.
+
+## Overview
+
+We want to establish owners for different parts of the code in the Kubernetes codebase. These owners
+will serve as the approvers for code to be submitted to these parts of the repository. Notably, owners
+are not necessarily expected to do the first code review for all commits to these areas, but they are
+required to approve changes before they can be merged.
+
+**Note** The Kubernetes project has a hiatus on adding new approvers to OWNERS files. At this time we are [adding more reviewers](https://github.com/kubernetes/kubernetes/pulls?utf8=%E2%9C%93&q=is%3Apr%20%22Curating%20owners%3A%22%20) to take the load off of the current set of approvers, and once we have had a chance to flesh this out for a release we will begin adding new approvers again. Adding new approvers is planned for after the Kubernetes 1.6.0 release.
+
+## High Level flow
+
+### Step One: A PR is submitted
+
+After a PR is submitted, the automated kubernetes PR robot will append a message to the PR indicating the owners
+that are required for the PR to be submitted.
+
+Subsequently, a user can also request the approval message from the robot by writing:
+
+```
+@k8s-bot approvers
+```
+
+into a comment.
+
+In either case, the automation replies with an annotation that indicates
+the owners required to approve. The annotation is a comment that is applied to the PR.
+This comment will say:
+
+```
+Approval is required from <owner-a> OR <owner-b>, AND <owner-c> OR <owner-d>, AND ...
+```
+
+The set of required owners is drawn from the OWNERS files in the repository (see below). For each file
+there should be multiple different OWNERS; these owners are listed in the `OR` clause(s). Because
+it is possible that a PR may cover different directories, with disjoint sets of OWNERS, a PR may require
+approval from more than one person, which is where the `AND` clauses come from.
+
+`<owner>` should be the github user id of the owner _without_ a leading `@` symbol to prevent the owner
+from being cc'd into the PR by email.
+
+### Step Two: A PR is LGTM'd
+
+Once a PR is reviewed and LGTM'd it is eligible for submission. However, for it to be submitted,
+an owner of every file changed in the PR has to 'approve' the PR. A user is an owner for a
+file if they are included in the OWNERS hierarchy (see below) for that file.
+
+Owner approval comes in two forms:
+
+  * An owner adds a comment to the PR saying "I approve" or "approved"
+  * An owner is the original author of the PR
+
+In the case of a comment based approval, the same rules as for the 'lgtm' label apply. If the PR is
+changed by pushing new commits to the PR, the previous approval is invalidated, and the owner(s) must
+approve again. Because of this, it is recommended that PR authors squash their PRs prior to getting approval
+from owners.
+
+### Step Three: A PR is merged
+
+Once a PR is LGTM'd and all required owners have approved, it is eligible for merge. The merge bot takes care of
+the actual merging.
+
+## Design details
+
+We need to build new features into the existing github munger in order to accomplish this. Additionally
+we need to add owners files to the repository.
+
+### Approval Munger
+
+We need to add a munger that adds comments to PRs indicating whose approval they require. This munger will
+look for PRs that do not have approvers already present in the comments, or where approvers have been
+requested, and add an appropriate comment to the PR.
+
+
+### Status Munger
+
+GitHub has a [status api](https://developer.github.com/v3/repos/statuses/); we will add a status munger that pushes an approval status onto a PR. This status will only be approved if the relevant
+approvers have approved the PR.
+
+### Requiring approval status
+
+Github has the ability to [require status checks prior to merging](https://help.github.com/articles/enabling-required-status-checks/).
+
+Once we have the status check munger described above implemented, we will add this required status check
+to our main branch as well as any release branches.
+
+### Adding owners files
+
+In each directory in the repository we may add an OWNERS file. This file will contain the github OWNERS
+for that directory. Ownership is hierarchical, so if a directory does not contain an OWNERS file, its
+parent's OWNERS file is used instead. There will be a top-level OWNERS file to back-stop the system.
+
+Obviously changing the OWNERS file requires OWNERS permission.
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/owners.md?pixel)]()
+
diff --git a/devel/pr_workflow.dia b/devel/pr_workflow.dia
new file mode 100644
index 00000000..753a284b
Binary files /dev/null and b/devel/pr_workflow.dia differ
diff --git a/devel/pr_workflow.png b/devel/pr_workflow.png
new file mode 100644
index 00000000..0e2bd5d6
Binary files /dev/null and b/devel/pr_workflow.png differ
diff --git a/devel/profiling.md b/devel/profiling.md
new file mode 100644
index 00000000..f50537f1
--- /dev/null
+++ b/devel/profiling.md
@@ -0,0 +1,46 @@
+# Profiling Kubernetes
+
+This document explains how to plug in the profiler and how to profile Kubernetes services.
+
+## Profiling library
+
+Go comes with the built-in 'net/http/pprof' profiling library and profiling web service. The service works by binding the debug/pprof/ subtree on a running webserver to the profiler. Reading from subpages of debug/pprof returns pprof-formatted profiles of the running binary. The output can be processed offline by the tool of choice, or used as an input to the handy 'go tool pprof', which can graphically represent the result.
+
+## Adding profiling to the APIserver
+
+TL;DR: Add the lines:
+
+```go
+m.mux.HandleFunc("/debug/pprof/", pprof.Index)
+m.mux.HandleFunc("/debug/pprof/profile", pprof.Profile)
+m.mux.HandleFunc("/debug/pprof/symbol", pprof.Symbol)
+```
+
+to the init(c *Config) method in 'pkg/master/master.go' and import the 'net/http/pprof' package.
+
+In most use cases it's enough to do 'import _ net/http/pprof', which automatically registers a handler in the default http.Server. A slight inconvenience is that the APIserver uses the default server for intra-cluster communication, so plugging the profiler into it is not really useful. In 'pkg/kubelet/server/server.go' more servers are created and started as separate goroutines. The one that is usually serving external traffic is secureServer. The handler for this traffic is defined in 'pkg/master/master.go' and stored in the Handler variable. It is created from an HTTP multiplexer, so the only thing that needs to be done is adding the profiler handler functions to this multiplexer. This is exactly what the lines after the TL;DR do.
+
+## Connecting to the profiler
+
+Even with the profiler running, it is not entirely straightforward to use 'go tool pprof' with it. The problem is that, at least for dev purposes, the certificates generated for the APIserver are not signed by anyone trusted, and because secureServer serves only secure traffic it isn't straightforward to connect to the service. The best workaround I found is creating an ssh tunnel from the open unsecured port on kubernetes_master to some external server, and using this server as a proxy. To save everyone looking up the correct ssh flags, it is done by running:
+
+```sh
+ssh kubernetes_master -L:localhost:8080
+```
+
+or an analogous one for your cloud provider. Afterwards you can e.g. run
+
+```sh
+go tool pprof http://localhost:/debug/pprof/profile
+```
+
+to get a 30-second CPU profile.
+
+## Contention profiling
+
+To enable contention profiling you need to add the line `rt.SetBlockProfileRate(1)` in addition to the `m.mux.HandleFunc(...)` lines added before (`rt` stands for `runtime` in `master.go`). This enables the 'debug/pprof/block' subpage, which can be used as an input to `go tool pprof`.
+
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/profiling.md?pixel)]()
+
diff --git a/devel/pull-requests.md b/devel/pull-requests.md
new file mode 100644
index 00000000..888d7320
--- /dev/null
+++ b/devel/pull-requests.md
@@ -0,0 +1,105 @@
+
+
+- [Pull Request Process](#pull-request-process)
+- [Life of a Pull Request](#life-of-a-pull-request)
+  - [Before sending a pull request](#before-sending-a-pull-request)
+  - [Release Notes](#release-notes)
+    - [Reviewing pre-release notes](#reviewing-pre-release-notes)
+  - [Visual overview](#visual-overview)
+- [Other notes](#other-notes)
+- [Automation](#automation)
+
+
+
+# Pull Request Process
+
+An overview of how pull requests are managed for kubernetes. This document
+assumes the reader has already followed the [development guide](development.md)
+to set up their environment.
+
+# Life of a Pull Request
+
+Unless in the last few weeks of a milestone when we need to reduce churn and stabilize, we aim to be always accepting pull requests.
+
+PR merging is managed either manually by the [on call](on-call-rotations.md) or automatically by the [github "munger"](https://github.com/kubernetes/contrib/tree/master/mungegithub) submit-queue plugin.
+
+There are several requirements for the submit-queue to work:
+* Author must have signed CLA ("cla: yes" label added to PR)
+* No changes can be made since the last lgtm label was applied
+* k8s-bot must have reported the GCE E2E build and test steps passed (Jenkins unit/integration, Jenkins e2e)
+
+Additionally, for infrequent or new contributors, we require the on call to apply the "ok-to-merge" label manually. This is gated by the [whitelist](https://github.com/kubernetes/contrib/blob/master/mungegithub/whitelist.txt).
+
+## Before sending a pull request
+
+The following will save time for both you and your reviewer:
+
+* Enable [pre-commit hooks](development.md#committing-changes-to-your-fork) and verify they pass.
+* Verify `make verify` passes.
+* Verify `make test` passes.
+* Verify `make test-integration` passes.
+
+## Release Notes
+
+This section applies only to pull requests on the master branch.
+For cherry-pick PRs, see the [Cherrypick instructions](cherry-picks.md).
+
+1. All pull requests are initiated with a `release-note-label-needed` label.
+1. For a PR to be ready to merge, the `release-note-label-needed` label must be removed and one of the other `release-note-*` labels must be added.
+1. `release-note-none` is a valid option if the PR does not need to be mentioned
+   at release time.
+1. `release-note` labeled PRs generate a release note using the PR title by
+   default OR the release-note block in the PR template if filled in.
+   * See the [PR template](../../.github/PULL_REQUEST_TEMPLATE.md) for more
+     details.
+   * PR titles and body comments are mutable and can be modified at any time
+     prior to the release to reflect a release note friendly message.
+
+The only exception to these rules is when a PR is not a cherry-pick and is
+targeted directly to the non-master branch. In this case, a `release-note-*`
+label is required for that non-master PR.
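
For illustration, a filled-in release-note block in a PR body might look like the following (the wording is hypothetical, not from any real PR):

````
```release-note
Fixed a race in the example controller that could leave pods unscheduled.
```
````

Whatever sentence appears in that block is what ends up verbatim in the release notes, so write it for end users rather than for reviewers.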

### Reviewing pre-release notes

At any time, you can see what the release notes will look like on any branch.
(NOTE: This only works on Linux for now.)

```
$ git clone https://github.com/kubernetes/release
$ RELNOTES=$PWD/release/relnotes
$ cd /to/your/kubernetes/repo
$ $RELNOTES -man # for details on how to use the tool
# Show release notes from the last release on a branch to HEAD
$ $RELNOTES --branch=master
```

## Visual overview

![PR workflow](pr_workflow.png)

# Other notes

Pull requests that are purely support questions will be closed and
redirected to [stackoverflow](http://stackoverflow.com/questions/tagged/kubernetes).
We do this to consolidate help/support questions into a single channel,
improve efficiency in responding to requests, and make FAQs easier
to find.

Pull requests older than 2 weeks will be closed. Exceptions can be made
for PRs that have active review comments, or that are awaiting other dependent PRs.
Closed pull requests are easy to recreate, and little work is lost by closing a pull
request that subsequently needs to be reopened. We want to limit the total number of PRs in flight to:
* Maintain a clean project
* Remove old PRs that would be difficult to rebase as the underlying code has changed over time
* Encourage code velocity


# Automation

We use a variety of automation to manage pull requests.
This automation is described in detail +[elsewhere.](automation.md) + + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/pull-requests.md?pixel)]() + diff --git a/devel/running-locally.md b/devel/running-locally.md new file mode 100644 index 00000000..327d685e --- /dev/null +++ b/devel/running-locally.md @@ -0,0 +1,170 @@ +Getting started locally +----------------------- + +**Table of Contents** + +- [Requirements](#requirements) + - [Linux](#linux) + - [Docker](#docker) + - [etcd](#etcd) + - [go](#go) + - [OpenSSL](#openssl) +- [Clone the repository](#clone-the-repository) +- [Starting the cluster](#starting-the-cluster) +- [Running a container](#running-a-container) +- [Running a user defined pod](#running-a-user-defined-pod) +- [Troubleshooting](#troubleshooting) + - [I cannot reach service IPs on the network.](#i-cannot-reach-service-ips-on-the-network) + - [I cannot create a replication controller with replica size greater than 1! What gives?](#i-cannot-create-a-replication-controller-with-replica-size-greater-than-1--what-gives) + - [I changed Kubernetes code, how do I run it?](#i-changed-kubernetes-code-how-do-i-run-it) + - [kubectl claims to start a container but `get pods` and `docker ps` don't show it.](#kubectl-claims-to-start-a-container-but-get-pods-and-docker-ps-dont-show-it) + - [The pods fail to connect to the services by host names](#the-pods-fail-to-connect-to-the-services-by-host-names) + +### Requirements + +#### Linux + +Not running Linux? Consider running [Minikube](http://kubernetes.io/docs/getting-started-guides/minikube/), or on a cloud provider like [Google Compute Engine](../getting-started-guides/gce.md). + +#### Docker + +At least [Docker](https://docs.docker.com/installation/#installation) +1.3+. Ensure the Docker daemon is running and can be contacted (try `docker +ps`). Some of the Kubernetes components need to run as root, which normally +works fine with docker. 
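Before proceeding, it is worth verifying that the client can actually reach the daemon. A quick, non-authoritative sanity check (output will vary by installation):

```sh
# Succeeds only if the Docker client can talk to a running daemon.
if command -v docker >/dev/null 2>&1 && docker ps >/dev/null 2>&1; then
  echo "docker daemon OK"
else
  echo "docker daemon not reachable"
fi
```

If the daemon is not reachable, start it (e.g. via your init system) before continuing.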

#### etcd

You need [etcd](https://github.com/coreos/etcd/releases) installed and available in your `$PATH`.

#### go

You need [go](https://golang.org/doc/install) installed (see [here](development.md#go-versions) for supported versions) and available in your `$PATH`.

#### OpenSSL

You need [OpenSSL](https://www.openssl.org/) installed. If you do not have the `openssl` command available, you may see the following error in `/tmp/kube-apiserver.log`:

```
server.go:333] Invalid Authentication Config: open /tmp/kube-serviceaccount.key: no such file or directory
```

### Clone the repository

In order to run Kubernetes you must have the Kubernetes code on the local machine. Cloning this repository is sufficient.

```$ git clone --depth=1 https://github.com/kubernetes/kubernetes.git```

The `--depth=1` parameter is optional and will ensure a smaller download.

### Starting the cluster

In a separate tab of your terminal, run the following (since one needs sudo access to start/stop Kubernetes daemons, it is easier to run the entire script as root):

```sh
cd kubernetes
hack/local-up-cluster.sh
```

This will build and start a lightweight local cluster, consisting of a master
and a single node. Type Control-C to shut it down.

If you've already compiled the Kubernetes components, then you can avoid rebuilding them with this script by using the `-O` flag.

```sh
./hack/local-up-cluster.sh -O
```

You can use the cluster/kubectl.sh script to interact with the local cluster. hack/local-up-cluster.sh will
print the commands to run to point kubectl at the local cluster.


### Running a container

Your cluster is running, and you want to start running containers!

You can now use any of the cluster/kubectl.sh commands to interact with your local setup.

```sh
cluster/kubectl.sh get pods
cluster/kubectl.sh get services
cluster/kubectl.sh get replicationcontrollers
cluster/kubectl.sh run my-nginx --image=nginx --replicas=2 --port=80

## While you wait for provisioning to complete, you can monitor the docker pull
## from a new terminal:
sudo docker images
## you should see it pulling the nginx image; once the pull completes:
sudo docker ps
## you should see your container running!

## introspect Kubernetes!
cluster/kubectl.sh get pods
cluster/kubectl.sh get services
cluster/kubectl.sh get replicationcontrollers
```


### Running a user-defined pod

Note the difference between a [container](../user-guide/containers.md)
and a [pod](../user-guide/pods.md). Since you only asked for the former, Kubernetes will create a wrapper pod for you.
However, you cannot view the nginx start page on localhost. To verify that nginx is running, you need to run `curl` inside the Docker container (try `docker exec`).

You can control the specifications of a pod via a user-defined manifest, and reach nginx through your browser on the port specified therein:

```sh
cluster/kubectl.sh create -f test/fixtures/doc-yaml/user-guide/pod.yaml
```

Congratulations!

### Troubleshooting

#### I cannot reach service IPs on the network.

Some firewall software that uses iptables may not interact well with
Kubernetes. If you have trouble around networking, try disabling any
firewall or other iptables-using systems first. Also, you can check
if SELinux is blocking anything by running a command such as `journalctl --since yesterday | grep avc`.

By default the IP range for service cluster IPs is 10.0.*.*; depending on your
Docker installation, this may conflict with IPs for containers. If you find
containers running with IPs in this range, edit hack/local-up-cluster.sh and
change the service-cluster-ip-range flag to something else.
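One quick, non-authoritative way to check for such an overlap is to grep the host's interface addresses for the default range (interface names and tooling vary by distribution):

```sh
# List any host addresses already inside 10.0.0.0/16; print a note if none are found.
matches=$( (ip -o addr show 2>/dev/null || ifconfig 2>/dev/null) \
  | grep -o '10\.0\.[0-9]\{1,3\}\.[0-9]\{1,3\}' | sort -u )
if [ -n "$matches" ]; then
  echo "$matches"
else
  echo "no 10.0.*.* addresses in use"
fi
```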
+ +#### I cannot create a replication controller with replica size greater than 1! What gives? + +You are running a single node setup. This has the limitation of only supporting a single replica of a given pod. If you are interested in running with larger replica sizes, we encourage you to try the local vagrant setup or one of the cloud providers. + +#### I changed Kubernetes code, how do I run it? + +```sh +cd kubernetes +make +hack/local-up-cluster.sh +``` + +#### kubectl claims to start a container but `get pods` and `docker ps` don't show it. + +One or more of the Kubernetes daemons might've crashed. Tail the logs of each in /tmp. + +#### The pods fail to connect to the services by host names + +To start the DNS service, you need to set the following variables: + +```sh +KUBE_ENABLE_CLUSTER_DNS=true +KUBE_DNS_SERVER_IP="10.0.0.10" +KUBE_DNS_DOMAIN="cluster.local" +KUBE_DNS_REPLICAS=1 +``` + +To know more on DNS service you can look [here](http://issue.k8s.io/6667). Related documents can be found [here](../../build-tools/kube-dns/#how-do-i-configure-it) + + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/running-locally.md?pixel)]() + diff --git a/devel/scheduler.md b/devel/scheduler.md new file mode 100755 index 00000000..b1cfea7a --- /dev/null +++ b/devel/scheduler.md @@ -0,0 +1,72 @@ +# The Kubernetes Scheduler + +The Kubernetes scheduler runs as a process alongside the other master +components such as the API server. Its interface to the API server is to watch +for Pods with an empty PodSpec.NodeName, and for each Pod, it posts a Binding +indicating where the Pod should be scheduled. + +## The scheduling process + +``` + +-------+ + +---------------+ node 1| + | +-------+ + | + +----> | Apply pred. 
filters + | | + | | +-------+ + | +----+---------->+node 2 | + | | +--+----+ + | watch | | + | | | +------+ + | +---------------------->+node 3| ++--+---------------+ | +--+---+ +| Pods in apiserver| | | ++------------------+ | | + | | + | | + +------------V------v--------+ + | Priority function | + +-------------+--------------+ + | + | node 1: p=2 + | node 2: p=5 + v + select max{node priority} = node 2 + +``` + +The Scheduler tries to find a node for each Pod, one at a time. +- First it applies a set of "predicates" to filter out inappropriate nodes. For example, if the PodSpec specifies resource requests, then the scheduler will filter out nodes that don't have at least that much resources available (computed as the capacity of the node minus the sum of the resource requests of the containers that are already running on the node). +- Second, it applies a set of "priority functions" +that rank the nodes that weren't filtered out by the predicate check. For example, it tries to spread Pods across nodes and zones while at the same time favoring the least (theoretically) loaded nodes (where "load" - in theory - is measured as the sum of the resource requests of the containers running on the node, divided by the node's capacity). +- Finally, the node with the highest priority is chosen (or, if there are multiple such nodes, then one of them is chosen at random). The code for this main scheduling loop is in the function `Schedule()` in [plugin/pkg/scheduler/generic_scheduler.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/generic_scheduler.go) + +## Scheduler extensibility + +The scheduler is extensible: the cluster administrator can choose which of the pre-defined +scheduling policies to apply, and can add new ones. 
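The filter-then-rank loop described above can be sketched in simplified Go. The types and functions below are illustrative only and do not match the real scheduler API; "capacity" stands in for the node's free resources:

```go
package main

import "fmt"

// Illustrative stand-ins for the real scheduler types.
type Node struct {
	Name     string
	Capacity int // free resource units (simplified)
}

type Pod struct{ Request int }

// A predicate filters out nodes that cannot run the pod.
type Predicate func(Pod, Node) bool

// A priority scores a surviving node; higher is better.
type Priority func(Pod, Node) int

// schedule applies all predicates, then picks the highest-scoring node.
func schedule(pod Pod, nodes []Node, preds []Predicate, prios []Priority) (Node, error) {
	var feasible []Node
	for _, n := range nodes {
		ok := true
		for _, p := range preds {
			if !p(pod, n) {
				ok = false
				break
			}
		}
		if ok {
			feasible = append(feasible, n)
		}
	}
	if len(feasible) == 0 {
		return Node{}, fmt.Errorf("no node fits pod")
	}
	best, bestScore := feasible[0], -1
	for _, n := range feasible {
		score := 0
		for _, pr := range prios {
			score += pr(pod, n)
		}
		if score > bestScore {
			best, bestScore = n, score
		}
	}
	return best, nil
}

func main() {
	fits := func(p Pod, n Node) bool { return n.Capacity >= p.Request }
	leastLoaded := func(p Pod, n Node) int { return n.Capacity }
	nodes := []Node{{"node1", 2}, {"node2", 5}, {"node3", 1}}
	best, _ := schedule(Pod{Request: 2}, nodes, []Predicate{fits}, []Priority{leastLoaded})
	fmt.Println(best.Name) // node3 is filtered out; node2 has the most free capacity
}
```

The real implementation differs in many details (parallelism, weighted priorities, random tie-breaking), but follows this same two-phase shape.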
+ +### Policies (Predicates and Priorities) + +The built-in predicates and priorities are +defined in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/predicates/predicates.go) and +[plugin/pkg/scheduler/algorithm/priorities/priorities.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/priorities/priorities.go), respectively. + +### Modifying policies + +The policies that are applied when scheduling can be chosen in one of two ways. Normally, +the policies used are selected by the functions `defaultPredicates()` and `defaultPriorities()` in +[plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). +However, the choice of policies can be overridden by passing the command-line flag `--policy-config-file` to the scheduler, pointing to a JSON file specifying which scheduling policies to use. See [examples/scheduler-policy-config.json](../../examples/scheduler-policy-config.json) for an example +config file. (Note that the config file format is versioned; the API is defined in [plugin/pkg/scheduler/api](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/api/)). +Thus to add a new scheduling policy, you should modify predicates.go or priorities.go, and either register the policy in `defaultPredicates()` or `defaultPriorities()`, or use a policy config file. 
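For reference, a minimal policy config file in the versioned format might look like the following. The particular predicate/priority names and weights are illustrative; consult the linked example file for the authoritative format:

```json
{
  "kind": "Policy",
  "apiVersion": "v1",
  "predicates": [
    {"name": "PodFitsResources"},
    {"name": "NoDiskConflict"},
    {"name": "MatchNodeSelector"},
    {"name": "HostName"}
  ],
  "priorities": [
    {"name": "LeastRequestedPriority", "weight": 1},
    {"name": "BalancedResourceAllocation", "weight": 1}
  ]
}
```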

## Exploring the code

If you want to get a global picture of how the scheduler works, you can start in
[plugin/cmd/kube-scheduler/app/server.go](http://releases.k8s.io/HEAD/plugin/cmd/kube-scheduler/app/server.go).


[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/scheduler.md?pixel)]()

diff --git a/devel/scheduler_algorithm.md b/devel/scheduler_algorithm.md
new file mode 100755
index 00000000..28c6c2bc
--- /dev/null
+++ b/devel/scheduler_algorithm.md
@@ -0,0 +1,44 @@
# Scheduler Algorithm in Kubernetes

For each unscheduled Pod, the Kubernetes scheduler tries to find a node across the cluster according to a set of rules. A general introduction to the Kubernetes scheduler can be found at [scheduler.md](scheduler.md). This document explains the algorithm used to select a node for a Pod. There are two steps before a destination node of a Pod is chosen. The first step is filtering all the nodes and the second is ranking the remaining nodes to find the best fit for the Pod.

## Filtering the nodes

The purpose of filtering the nodes is to filter out the nodes that do not meet certain requirements of the Pod. For example, if the free resource on a node (measured by the capacity minus the sum of the resource requests of all the Pods that already run on the node) is less than the Pod's required resource, the node should not be considered in the ranking phase, so it is filtered out. Currently, there are several "predicates" implementing different filtering policies, including:

- `NoDiskConflict`: Evaluate whether a pod can fit on a node given the volumes it requests and the volumes already mounted there. Currently supported volumes are: AWS EBS, GCE PD, and Ceph RBD. Only Persistent Volume Claims for those supported types are checked. Persistent Volumes added directly to pods are not evaluated and are not constrained by this policy.

- `NoVolumeZoneConflict`: Evaluate if the volumes a pod requests are available on the node, given the Zone restrictions.
- `PodFitsResources`: Check if the free resource (CPU and Memory) meets the requirement of the Pod. The free resource is measured by the capacity minus the sum of requests of all Pods on the node. To learn more about the resource QoS in Kubernetes, please check the [QoS proposal](../design/resource-qos.md).
- `PodFitsHostPorts`: Check if any HostPort required by the Pod is already occupied on the node.
- `HostName`: Filter out all nodes except the one specified in the PodSpec's NodeName field.
- `MatchNodeSelector`: Check if the labels of the node match the labels specified in the Pod's `nodeSelector` field and, as of Kubernetes v1.2, also match the `scheduler.alpha.kubernetes.io/affinity` pod annotation if present. See [here](../user-guide/node-selection/) for more details on both.
- `MaxEBSVolumeCount`: Ensure that the number of attached ElasticBlockStore volumes does not exceed a maximum value (by default, 39, since Amazon recommends a maximum of 40, with one of those 40 reserved for the root volume -- see [Amazon's documentation](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/volume_limits.html#linux-specific-volume-limits)). The maximum value can be controlled by setting the `KUBE_MAX_PD_VOLS` environment variable.
- `MaxGCEPDVolumeCount`: Ensure that the number of attached GCE PersistentDisk volumes does not exceed a maximum value (by default, 16, which is the maximum GCE allows -- see [GCE's documentation](https://cloud.google.com/compute/docs/disks/persistent-disks#limits_for_predefined_machine_types)). The maximum value can be controlled by setting the `KUBE_MAX_PD_VOLS` environment variable.
- `CheckNodeMemoryPressure`: Check if a pod can be scheduled on a node reporting a memory pressure condition. Currently, no `BestEffort` pods should be placed on a node under memory pressure, as they would be automatically evicted by the kubelet.

- `CheckNodeDiskPressure`: Check if a pod can be scheduled on a node reporting a disk pressure condition. Currently, no pods should be placed on a node under disk pressure, as they would be automatically evicted by the kubelet.

The details of the above predicates can be found in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/predicates/predicates.go). All predicates mentioned above can be used in combination to perform a sophisticated filtering policy. Kubernetes uses some, but not all, of these predicates by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go).

## Ranking the nodes

The filtered nodes are considered suitable to host the Pod, and often more than one node remains. Kubernetes prioritizes the remaining nodes to find the "best" one for the Pod. The prioritization is performed by a set of priority functions. For each remaining node, a priority function gives a score on a 0-10 scale, with 10 representing "most preferred" and 0 "least preferred". Each priority function is weighted by a positive number, and the final score of each node is calculated by adding up all the weighted scores. For example, suppose there are two priority functions, `priorityFunc1` and `priorityFunc2`, with weighting factors `weight1` and `weight2` respectively; then the final score of some NodeA is:

    finalScoreNodeA = (weight1 * priorityFunc1) + (weight2 * priorityFunc2)

After the scores of all nodes are calculated, the node with the highest score is chosen as the host of the Pod. If more than one node ties for the highest score, a random one among them is chosen.
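As a toy numeric sketch of the formula above (the function names, scores, and weights are made up for illustration):

```go
package main

import "fmt"

func main() {
	// Hypothetical 0-10 scores from two priority functions for two nodes.
	priorityFunc1 := map[string]int{"NodeA": 8, "NodeB": 5}
	priorityFunc2 := map[string]int{"NodeA": 2, "NodeB": 9}
	weight1, weight2 := 1, 2

	bestNode, bestScore := "", -1
	for _, node := range []string{"NodeA", "NodeB"} {
		// finalScore = (weight1 * priorityFunc1) + (weight2 * priorityFunc2)
		final := weight1*priorityFunc1[node] + weight2*priorityFunc2[node]
		fmt.Printf("%s: finalScore=%d\n", node, final)
		if final > bestScore {
			bestNode, bestScore = node, final
		}
	}
	fmt.Println("chosen:", bestNode) // NodeB: 1*5 + 2*9 = 23 beats NodeA's 1*8 + 2*2 = 12
}
```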

Currently, the Kubernetes scheduler provides some practical priority functions, including:

- `LeastRequestedPriority`: The node is prioritized based on the fraction of the node that would be free if the new Pod were scheduled onto the node. (In other words, (capacity - sum of requests of all Pods already on the node - request of Pod that is being scheduled) / capacity). CPU and memory are equally weighted. The node with the highest free fraction is the most preferred. Note that this priority function has the effect of spreading Pods across the nodes with respect to resource consumption.
- `BalancedResourceAllocation`: This priority function tries to put the Pod on a node such that the CPU and Memory utilization rate is balanced after the Pod is deployed.
- `SelectorSpreadPriority`: Spread Pods by minimizing the number of Pods belonging to the same service, replication controller, or replica set on the same node. If zone information is present on the nodes, the priority will be adjusted so that pods are spread across zones and nodes.
- `CalculateAntiAffinityPriority`: Spread Pods by minimizing the number of Pods belonging to the same service on nodes with the same value for a particular label.
- `ImageLocalityPriority`: Nodes are prioritized based on the locality of the images requested by a pod. Nodes that already have a larger total size of the pod's required images are preferred over nodes that have few or none of them.
- `NodeAffinityPriority`: (Kubernetes v1.2) Implements `preferredDuringSchedulingIgnoredDuringExecution` node affinity; see [here](../user-guide/node-selection/) for more details.

The details of the above priority functions can be found in [plugin/pkg/scheduler/algorithm/priorities](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/priorities/). Kubernetes uses some, but not all, of these priority functions by default.
You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). As with predicates, you can combine the above priority functions and assign weight factors (positive numbers) to them as you want (check [scheduler.md](scheduler.md) for how to customize).



[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/scheduler_algorithm.md?pixel)]()

diff --git a/devel/testing.md b/devel/testing.md
new file mode 100644
index 00000000..45848f3b
--- /dev/null
+++ b/devel/testing.md
@@ -0,0 +1,230 @@
# Testing guide

Updated: 5/21/2016

**Table of Contents**


- [Testing guide](#testing-guide)
  - [Unit tests](#unit-tests)
    - [Run all unit tests](#run-all-unit-tests)
    - [Set go flags during unit tests](#set-go-flags-during-unit-tests)
    - [Run unit tests from certain packages](#run-unit-tests-from-certain-packages)
    - [Run specific unit test cases in a package](#run-specific-unit-test-cases-in-a-package)
    - [Stress running unit tests](#stress-running-unit-tests)
    - [Unit test coverage](#unit-test-coverage)
    - [Benchmark unit tests](#benchmark-unit-tests)
  - [Integration tests](#integration-tests)
    - [Install etcd dependency](#install-etcd-dependency)
    - [Etcd test data](#etcd-test-data)
    - [Run integration tests](#run-integration-tests)
    - [Run a specific integration test](#run-a-specific-integration-test)
  - [End-to-End tests](#end-to-end-tests)



This assumes you already read the [development guide](development.md) to
install go, godeps, and configure your git client. All command examples are
relative to the `kubernetes` root directory.

Before sending pull requests, you should at least make sure your changes have
passed both unit and integration tests.
+ +Kubernetes only merges pull requests when unit, integration, and e2e tests are +passing, so it is often a good idea to make sure the e2e tests work as well. + +## Unit tests + +* Unit tests should be fully hermetic + - Only access resources in the test binary. +* All packages and any significant files require unit tests. +* The preferred method of testing multiple scenarios or input is + [table driven testing](https://github.com/golang/go/wiki/TableDrivenTests) + - Example: [TestNamespaceAuthorization](../../test/integration/auth/auth_test.go) +* Unit tests must pass on OS X and Windows platforms. + - Tests using linux-specific features must be skipped or compiled out. + - Skipped is better, compiled out is required when it won't compile. +* Concurrent unit test runs must pass. +* See [coding conventions](coding-conventions.md). + +### Run all unit tests + +`make test` is the entrypoint for running the unit tests that ensures that +`GOPATH` is set up correctly. If you have `GOPATH` set up correctly, you can +also just use `go test` directly. + +```sh +cd kubernetes +make test # Run all unit tests. +``` + +### Set go flags during unit tests + +You can set [go flags](https://golang.org/cmd/go/) by setting the +`KUBE_GOFLAGS` environment variable. + +### Run unit tests from certain packages + +`make test` accepts packages as arguments; the `k8s.io/kubernetes` prefix is +added automatically to these: + +```sh +make test WHAT=pkg/api # run tests for pkg/api +``` + +To run multiple targets you need quotes: + +```sh +make test WHAT="pkg/api pkg/kubelet" # run tests for pkg/api and pkg/kubelet +``` + +In a shell, it's often handy to use brace expansion: + +```sh +make test WHAT=pkg/{api,kubelet} # run tests for pkg/api and pkg/kubelet +``` + +### Run specific unit test cases in a package + +You can set the test args using the `KUBE_TEST_ARGS` environment variable. 
+You can use this to pass the `-run` argument to `go test`, which accepts a +regular expression for the name of the test that should be run. + +```sh +# Runs TestValidatePod in pkg/api/validation with the verbose flag set +make test WHAT=pkg/api/validation KUBE_GOFLAGS="-v" KUBE_TEST_ARGS='-run ^TestValidatePod$' + +# Runs tests that match the regex ValidatePod|ValidateConfigMap in pkg/api/validation +make test WHAT=pkg/api/validation KUBE_GOFLAGS="-v" KUBE_TEST_ARGS="-run ValidatePod\|ValidateConfigMap$" +``` + +For other supported test flags, see the [golang +documentation](https://golang.org/cmd/go/#hdr-Description_of_testing_flags). + +### Stress running unit tests + +Running the same tests repeatedly is one way to root out flakes. +You can do this efficiently. + +```sh +# Have 2 workers run all tests 5 times each (10 total iterations). +make test PARALLEL=2 ITERATION=5 +``` + +For more advanced ideas please see [flaky-tests.md](flaky-tests.md). + +### Unit test coverage + +Currently, collecting coverage is only supported for the Go unit tests. + +To run all unit tests and generate an HTML coverage report, run the following: + +```sh +make test KUBE_COVER=y +``` + +At the end of the run, an HTML report will be generated with the path +printed to stdout. + +To run tests and collect coverage in only one package, pass its relative path +under the `kubernetes` directory as an argument, for example: + +```sh +make test WHAT=pkg/kubectl KUBE_COVER=y +``` + +Multiple arguments can be passed, in which case the coverage results will be +combined for all tests run. + +### Benchmark unit tests + +To run benchmark tests, you'll typically use something like: + +```sh +go test ./pkg/apiserver -benchmem -run=XXX -bench=BenchmarkWatch +``` + +This will do the following: + +1. `-run=XXX` is a regular expression filter on the name of test cases to run +2. 
`-bench=BenchmarkWatch` will run test methods with BenchmarkWatch in the name + * See `grep -nr BenchmarkWatch .` for examples +3. `-benchmem` enables memory allocation stats + +See `go help test` and `go help testflag` for additional info. + +## Integration tests + +* Integration tests should only access other resources on the local machine + - Most commonly etcd or a service listening on localhost. +* All significant features require integration tests. + - This includes kubectl commands +* The preferred method of testing multiple scenarios or inputs +is [table driven testing](https://github.com/golang/go/wiki/TableDrivenTests) + - Example: [TestNamespaceAuthorization](../../test/integration/auth/auth_test.go) +* Each test should create its own master, httpserver and config. + - Example: [TestPodUpdateActiveDeadlineSeconds](../../test/integration/pods/pods_test.go) +* See [coding conventions](coding-conventions.md). + +### Install etcd dependency + +Kubernetes integration tests require your `PATH` to include an +[etcd](https://github.com/coreos/etcd/releases) installation. Kubernetes +includes a script to help install etcd on your machine. + +```sh +# Install etcd and add to PATH + +# Option a) install inside kubernetes root +hack/install-etcd.sh # Installs in ./third_party/etcd +echo export PATH="\$PATH:$(pwd)/third_party/etcd" >> ~/.profile # Add to PATH + +# Option b) install manually +grep -E "image.*etcd" cluster/saltbase/etcd/etcd.manifest # Find version +# Install that version using yum/apt-get/etc +echo export PATH="\$PATH:" >> ~/.profile # Add to PATH +``` + +### Etcd test data + +Many tests start an etcd server internally, storing test data in the operating system's temporary directory. + +If you see test failures because the temporary directory does not have sufficient space, +or is on a volume with unpredictable write latency, you can override the test data directory +for those internal etcd instances with the `TEST_ETCD_DIR` environment variable. 
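For example, to point the internal etcd instances at a scratch directory on a fast local disk (the path below is illustrative):

```sh
# Create a scratch directory and export it for the test run.
export TEST_ETCD_DIR="${TMPDIR:-/tmp}/etcd-test-data"
mkdir -p "$TEST_ETCD_DIR"
echo "etcd test data dir: $TEST_ETCD_DIR"
# Then run the suite as usual, e.g.:
# make test-integration
```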

### Run integration tests

The integration tests are run using `make test-integration`.
The Kubernetes integration tests are written using the normal golang testing
package but expect to have a running etcd instance to connect to. The
`hack/test-integration.sh` script wraps `make test` and sets up an etcd instance
for the integration tests to use.

```sh
make test-integration  # Run all integration tests.
```

This script runs the golang tests in package
[`test/integration`](../../test/integration/).

### Run a specific integration test

You can also use the `KUBE_TEST_ARGS` environment variable with the
`hack/test-integration.sh` script to run a specific integration test case:

```sh
# Run integration test TestPodUpdateActiveDeadlineSeconds with the verbose flag set.
make test-integration KUBE_GOFLAGS="-v" KUBE_TEST_ARGS="-run ^TestPodUpdateActiveDeadlineSeconds$"
```

If you set `KUBE_TEST_ARGS`, the test case will be run with only the `v1` API
version and the watch cache test is skipped.

## End-to-End tests

Please refer to [End-to-End Testing in Kubernetes](e2e-tests.md).



[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/testing.md?pixel)]()

diff --git a/devel/update-release-docs.md b/devel/update-release-docs.md
new file mode 100644
index 00000000..1e0988db
--- /dev/null
+++ b/devel/update-release-docs.md
@@ -0,0 +1,115 @@
# Table of Contents



- [Table of Contents](#table-of-contents)
- [Overview](#overview)
- [Adding a new docs collection for a release](#adding-a-new-docs-collection-for-a-release)
- [Updating docs in an existing collection](#updating-docs-in-an-existing-collection)
  - [Updating docs on HEAD](#updating-docs-on-head)
  - [Updating docs in release branch](#updating-docs-in-release-branch)
  - [Updating docs in gh-pages branch](#updating-docs-in-gh-pages-branch)



# Overview

This document explains how to update the Kubernetes release docs hosted at http://kubernetes.io/docs/.

http://kubernetes.io is served using the [gh-pages
branch](https://github.com/kubernetes/kubernetes/tree/gh-pages) of the Kubernetes repo on GitHub.
Updating docs in that branch will update http://kubernetes.io.

There are two scenarios which require updating docs:
* Adding a new docs collection for a release.
* Updating docs in an existing collection.

# Adding a new docs collection for a release

Whenever a new release series (`release-X.Y`) is cut from `master`, we push the
corresponding set of docs to `http://kubernetes.io/vX.Y/docs`. The steps are as follows:

* Create a `_vX.Y` folder in the `gh-pages` branch.
* Add `vX.Y` as a valid collection in [_config.yml](https://github.com/kubernetes/kubernetes/blob/gh-pages/_config.yml).
* Create a new `_includes/nav_vX.Y.html` file with the navigation menu. This can
  be a copy of `_includes/nav_vX.Y-1.html` with links to new docs added and links
  to deleted docs removed. Update
  [_layouts/docwithnav.html](https://github.com/kubernetes/kubernetes/blob/gh-pages/_layouts/docwithnav.html)
  to include this new navigation html file.
Example PR: [#16143](https://github.com/kubernetes/kubernetes/pull/16143).
* [Pull docs from release branch](#updating-docs-in-gh-pages-branch) into the `_vX.Y`
  folder.

Once these changes have been submitted, you should be able to reach the docs at
`http://kubernetes.io/vX.Y/docs/` where you can test them.

To make `X.Y` the default version of docs:

* Update [_config.yml](https://github.com/kubernetes/kubernetes/blob/gh-pages/_config.yml)
  and [_docs/index.md](https://github.com/kubernetes/kubernetes/blob/gh-pages/_docs/index.md)
  to point to the new version. Example PR: [#16416](https://github.com/kubernetes/kubernetes/pull/16416).
* Update [_includes/docversionselector.html](https://github.com/kubernetes/kubernetes/blob/gh-pages/_includes/docversionselector.html)
  to make `vX.Y` the default version.
* Add "Disallow: /vX.Y-1/" to the existing [robots.txt](https://github.com/kubernetes/kubernetes/blob/gh-pages/robots.txt)
  file to hide old content from web crawlers and focus SEO on new docs. Example PR:
  [#16388](https://github.com/kubernetes/kubernetes/pull/16388).
* Regenerate [sitemap.xml](https://github.com/kubernetes/kubernetes/blob/gh-pages/sitemap.xml)
  so that it now contains `vX.Y` links. The sitemap can be regenerated using
  https://www.xml-sitemaps.com. Example PR: [#17126](https://github.com/kubernetes/kubernetes/pull/17126).
* Resubmit the updated sitemap file to [Google
  webmasters](https://www.google.com/webmasters/tools/sitemap-list?siteUrl=http://kubernetes.io/) for Google to index the new links.
* Update [_layouts/docwithnav.html](https://github.com/kubernetes/kubernetes/blob/gh-pages/_layouts/docwithnav.html)
  to include [_includes/archivedocnotice.html](https://github.com/kubernetes/kubernetes/blob/gh-pages/_includes/archivedocnotice.html)
  for `vX.Y-1` docs which need to be archived.
* Ping @thockin to update docs.k8s.io to redirect to `http://kubernetes.io/vX.Y/`.
[#18788](https://github.com/kubernetes/kubernetes/issues/18788). + +http://kubernetes.io/docs/ should now redirect to `http://kubernetes.io/vX.Y/`. + +# Updating docs in an existing collection + +The high-level steps to update docs in an existing collection are: + +1. Update docs on `HEAD` (the master branch). +2. Cherry-pick the change into the relevant release branch. +3. Update docs on `gh-pages`. + +## Updating docs on HEAD + +The [development guide](development.md) provides general instructions on how to contribute to the Kubernetes GitHub repo. +The [docs how-to guide](how-to-doc.md) provides conventions to follow while writing docs. + +## Updating docs in release branch + +Once docs have been updated in the master branch, the changes need to be +cherry-picked into the latest release branch. +The [cherry-pick guide](cherry-picks.md) has more details on how to cherry-pick your change. + +## Updating docs in gh-pages branch + +Once the release branch has all the relevant changes, we can pull the latest docs +into the `gh-pages` branch. +Run the following command in the `gh-pages` branch to update docs for release `X.Y`: + +``` +_tools/import_docs vX.Y _vX.Y release-X.Y release-X.Y +``` + +For example, to pull in docs for release 1.1, run: + +``` +_tools/import_docs v1.1 _v1.1 release-1.1 release-1.1 +``` + +Apart from copying over the docs, `_tools/import_docs` also does some post-processing +(like updating doc links to point to http://kubernetes.io/docs/ instead of the GitHub repo). +Note that we always pull in the docs from the release branch and not from master (pulling docs +from master requires some extra processing, like versioning the links and removing unversioned warnings). + +We delete all existing docs before pulling in new ones to ensure that deleted +docs go away. + +If the change added or deleted a doc, then update the corresponding `_includes/nav_vX.Y.html` file as well.
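Putting the steps in this section together, a complete refresh for release 1.1 might look like the following sketch. Only the `_tools/import_docs` invocation is taken from above; the branch handling and commit message are illustrative assumptions, not a prescribed workflow:

```shell
# Sketch: refresh the v1.1 docs collection in the gh-pages branch.
# Assumes a checkout of the kubernetes repo; names are illustrative.
git checkout gh-pages
git pull origin gh-pages

# Pull release-branch docs into the versioned collection (_v1.1).
_tools/import_docs v1.1 _v1.1 release-1.1 release-1.1

# If docs were added or deleted, also edit _includes/nav_v1.1.html now.

# Review the diff, then commit and open a PR against the gh-pages branch.
git add -A _v1.1 _includes/nav_v1.1.html
git commit -m "Pull in v1.1 docs from release-1.1"
```

As with any `gh-pages` change, send this as a normal PR so it gets reviewed before it goes live on kubernetes.io.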
+ + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/update-release-docs.md?pixel)]() + diff --git a/devel/updating-docs-for-feature-changes.md b/devel/updating-docs-for-feature-changes.md new file mode 100644 index 00000000..309b809d --- /dev/null +++ b/devel/updating-docs-for-feature-changes.md @@ -0,0 +1,76 @@ +# How to update docs for new Kubernetes features + +This document describes things to consider when updating Kubernetes docs for new features or changes to existing features (including removing features). + +## Who should read this doc? + +Anyone making user-facing changes to Kubernetes. This is especially important for API changes or anything impacting the getting-started experience. + +## What docs changes are needed when adding or updating a feature in Kubernetes? + +### When making API changes + +*e.g. adding Deployments* +* Always make sure docs for downstream effects are updated *(StatefulSet -> PVC, Deployment -> ReplicationController)* +* Add or update the corresponding *[Glossary](http://kubernetes.io/docs/reference/)* item +* Verify the guides / walkthroughs do not require any changes: + * **If your change will be recommended over the approaches shown in these guides, then they must be updated to reflect your change** + * [Hello Node](http://kubernetes.io/docs/hellonode/) + * [K8s101](http://kubernetes.io/docs/user-guide/walkthrough/) + * [K8S201](http://kubernetes.io/docs/user-guide/walkthrough/k8s201/) + * [Guest-book](https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/guestbook) + * [Thorough-walkthrough](http://kubernetes.io/docs/user-guide/) +* Verify the [landing page examples](http://kubernetes.io/docs/samples/) do not require any changes (those under "Recently updated samples") + * **If your change will be recommended over the approaches shown in the "Updated" examples, then they must be updated to reflect your change** + * If you are aware that your change will be recommended over the
approaches shown in non-"Updated" examples, create an Issue +* Verify the collection of docs under the "Guides" section does not require updates (you may need to use grep for this until our docs are better organized) + +### When making Tools changes + +*e.g. updating kube-dash or kubectl* +* If changing kubectl, verify the guides / walkthroughs do not require any changes: + * **If your change will be recommended over the approaches shown in these guides, then they must be updated to reflect your change** + * [Hello Node](http://kubernetes.io/docs/hellonode/) + * [K8s101](http://kubernetes.io/docs/user-guide/walkthrough/) + * [K8S201](http://kubernetes.io/docs/user-guide/walkthrough/k8s201/) + * [Guest-book](https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/guestbook) + * [Thorough-walkthrough](http://kubernetes.io/docs/user-guide/) +* If updating an existing tool + * Search for any docs about the tool and update them +* If adding a new tool for end users + * Add a new page under [Guides](http://kubernetes.io/docs/) +* **If removing a tool (e.g. kube-ui), make sure documentation that references it is updated appropriately!** + +### When making cluster setup changes + +*e.g. adding Multi-AZ support* +* Update the relevant [Administering Clusters](http://kubernetes.io/docs/) pages + +### When making Kubernetes binary changes + +*e.g. adding a flag, changing Pod GC behavior, etc.* +* Add or update a page under [Configuring Kubernetes](http://kubernetes.io/docs/) + +## Where do the docs live? + +1. Most external user-facing docs live in the [kubernetes/docs](https://github.com/kubernetes/kubernetes.github.io) repo + * Also see the *[general instructions](http://kubernetes.io/editdocs/)* for making changes to the docs website +2. Internal design and development docs live in the [kubernetes/kubernetes](https://github.com/kubernetes/kubernetes) repo + +## Who should help review docs changes?
+ +* cc *@kubernetes/docs* +* Changes to the [kubernetes/docs](https://github.com/kubernetes/kubernetes.github.io) repo must have both a Technical Review and a Docs Review + +## Tips for writing new docs + +* Try to keep new docs small and focused +* Document prerequisites (if they exist) +* Document what concepts will be covered in the document +* Include screenshots or pictures in documents for GUIs +* *TODO once we have a standard widget set we are happy with* - include diagrams to help describe complex ideas (not required yet) + + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/updating-docs-for-feature-changes.md?pixel)]() + diff --git a/devel/writing-a-getting-started-guide.md b/devel/writing-a-getting-started-guide.md new file mode 100644 index 00000000..b1d65d60 --- /dev/null +++ b/devel/writing-a-getting-started-guide.md @@ -0,0 +1,101 @@ +# Writing a Getting Started Guide + +This page gives some advice for anyone planning to write or update a Getting Started Guide for Kubernetes. +It also gives some guidelines that reviewers should follow when reviewing a pull request for a +guide. + +A Getting Started Guide gives instructions for creating a Kubernetes cluster on top of a particular +type of infrastructure. Infrastructure includes: the IaaS provider for VMs; +the node OS; inter-node networking; and the node Configuration Management system. +A guide refers to scripts, Configuration Management files, and/or binary assets such as RPMs. We call +the combination of all these things needed to run on a particular type of infrastructure a +**distro**. + +[The Matrix](../../docs/getting-started-guides/README.md) lists the distros. If there is already a guide +which is similar to the one you have planned, consider improving that one. + + +Distros fall into two categories: + - **versioned distros** are tested to work with a particular binary release of Kubernetes.
These come in a wide variety, reflecting a wide range of ideas and preferences in how to run a cluster. + - **development distros** are tested to work with the latest Kubernetes source code. But there are + relatively few of these, and the bar is much higher for creating one. They must support + fully automated cluster creation, deletion, and upgrade. + +There are different guidelines for each. + +## Versioned Distro Guidelines + +These guidelines say *what* to do. See the Rationale section for *why*. + - Send us a PR. + - Put the instructions in `docs/getting-started-guides/...`. Scripts go there too. This helps devs easily + search for uses of flags by guides. + - We may ask that you host binary assets or large amounts of code in our `contrib` directory or on your + own repo. + - Add or update a row in [The Matrix](../../docs/getting-started-guides/README.md). + - Clearly state in your guide doc the binary version of Kubernetes that you tested. + - Set up a cluster and run the [conformance tests](e2e-tests.md#conformance-tests) against it, and report the + results in your PR. + - Versioned distros should typically not modify or add code in `cluster/`. That is just scripts for developer + distros. + - When a new major or minor release of Kubernetes comes out, we may also release a new + conformance test, and require a new conformance test run to earn a conformance checkmark. + +If you have a cluster partially working, but doing all the above steps seems like too much work, +we still want to hear from you. We suggest you write a blog post or a Gist, and we will link to it on our wiki page. +Just file an issue or chat with us on [Slack](http://slack.kubernetes.io) and one of the committers will link to it from the wiki. + +## Development Distro Guidelines + +These guidelines say *what* to do. See the Rationale section for *why*. + - The main reason to add a new development distro is to support a new IaaS provider (VM and + network management).
This means implementing a new `pkg/cloudprovider/providers/$IAAS_NAME`. + - Development distros should use Saltstack for Configuration Management. + - Development distros need to support automated cluster creation, deletion, upgrading, etc. + This means writing scripts in `cluster/$IAAS_NAME`. + - All commits to the tip of this repo must not break any of the development distros. + - The author of a change is responsible for making the changes necessary on all the cloud providers if the + change affects any of them, and for reverting the change if it breaks any of the CIs. + - A development distro needs to have an organization that owns it. This organization needs to: + - Set up and maintain Continuous Integration that runs e2e frequently (multiple times per day) against the + distro at head, and that notifies all devs of breakage. + - Be reasonably available for questions and assist with + refactoring and feature additions that affect code for their IaaS. + +## Rationale + + - We want people to create Kubernetes clusters with whatever IaaS, Node OS, + configuration management tools, and so on, that they are familiar with. The + guidelines for **versioned distros** are designed for flexibility. + - We want developers to be able to work without understanding all the permutations of + IaaS, NodeOS, and configuration management. The guidelines for **developer distros** are designed + for consistency. + - We want users to have a uniform experience with Kubernetes whenever they follow instructions anywhere + in our GitHub repository. So, we ask that versioned distros pass a **conformance test** to make sure + they really work. + - We want to **limit the number of development distros** for several reasons. Developers should + only have to change a limited number of places to add a new feature.
Also, since we will + gate commits on passing CI for all distros, and since end-to-end tests are typically somewhat + flaky, it would be highly likely for there to be false positives and CI backlogs with many CI pipelines. + - We do not require versioned distros to do **CI** for several reasons. It is a steep + learning curve to understand our automated testing scripts. And it is considerable effort + to fully automate setup and teardown of a cluster, which is needed for CI. And, not everyone + has the time and money to run CI. We do not want to + discourage people from writing and sharing guides because of this. + - Versioned distro authors are free to run their own CI and let us know if there is breakage, but we + will not include them as commit hooks -- there cannot be so many commit checks that it is impossible + to pass them all. + - We prefer a single Configuration Management tool for development distros. If there were more + than one, the core developers would have to learn multiple tools and update config in multiple + places. **Saltstack** happens to be the one we picked when we started the project. We + welcome versioned distros that use any tool; there are already examples of + CoreOS Fleet, Ansible, and others. + - You can still run code from head or your own branch + if you use another Configuration Management tool -- you just have to do some manual steps + during testing and deployment. 
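To make the development-distro guidelines above concrete, here is a hypothetical skeleton of the places a new IaaS provider (call it `myiaas`) would touch. The directory layout follows the conventions of existing providers; the file names are illustrative assumptions, not prescriptive:

```shell
# Hypothetical skeleton for a new development distro "myiaas".
# Cloud-provider integration code lives in the main Go tree:
mkdir -p pkg/cloudprovider/providers/myiaas

# Cluster lifecycle scripts (invoked by kube-up.sh / kube-down.sh) live here:
mkdir -p cluster/myiaas
touch cluster/myiaas/config-default.sh   # default cluster size, VM settings
touch cluster/myiaas/util.sh             # provider-specific create/delete hooks
```

The real requirements remain those in the guidelines above (fully automated create/delete/upgrade, Saltstack configuration, and CI at head); this sketch only shows where such code conventionally lives.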
+ + + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/writing-a-getting-started-guide.md?pixel)]() + diff --git a/devel/writing-good-e2e-tests.md b/devel/writing-good-e2e-tests.md new file mode 100644 index 00000000..ab13aff2 --- /dev/null +++ b/devel/writing-good-e2e-tests.md @@ -0,0 +1,235 @@ +# Writing good e2e tests for Kubernetes # + +## Patterns and Anti-Patterns ## + +### Goals of e2e tests ### + +Beyond the obvious goal of providing end-to-end system test coverage, +there are a few less obvious goals that you should bear in mind when +designing, writing and debugging your end-to-end tests. In +particular, "flaky" tests, which pass most of the time but fail +intermittently for difficult-to-diagnose reasons, are extremely costly +in terms of blurring our regression signals and slowing down our +automated merge queue. Up-front time and effort designing your test +to be reliable is very well spent. Bear in mind that we have hundreds +of tests, each running in dozens of different environments, and if any +test in any test environment fails, we have to assume that we +potentially have some sort of regression. So if a significant number +of tests fail even only 1% of the time, basic statistics dictates that +we will almost never have a "green" regression indicator (for example, +300 independent tests that each pass 99% of the time produce an +all-green run only about 0.99^300 ≈ 5% of the time). Stated +another way, writing a test that is only 99% reliable is just about +useless in the harsh reality of a CI environment. In fact, it's worse +than useless, because not only does it fail to provide a reliable +regression indicator, it also costs a lot of subsequent debugging +time and delays merges. + +#### Debuggability #### + +If your test fails, it should report the reasons for the failure in +its output, in as much detail as possible. "Timeout" is not a useful error +message.
"Timed out after 60 seconds waiting for pod xxx to enter +running state, still in pending state" is much more useful to someone +trying to figure out why your test failed and what to do about it. +Specifically, +[assertion](https://onsi.github.io/gomega/#making-assertions) code +like the following generates rather useless errors: + +```go +Expect(err).NotTo(HaveOccurred()) +``` + +Rather, +[annotate](https://onsi.github.io/gomega/#annotating-assertions) your assertion with something like this: + +```go +Expect(err).NotTo(HaveOccurred(), "Failed to create %d foobars, only created %d", foobarsReqd, foobarsCreated) +``` + +On the other hand, overly verbose logging, particularly of non-error conditions, can make +it unnecessarily difficult to figure out whether a test failed and, if +so, why. So don't log lots of irrelevant stuff either. + +#### Ability to run in non-dedicated test clusters #### + +To reduce end-to-end delay and improve resource utilization when +running e2e tests, we try, where possible, to run large numbers of +tests in parallel against the same test cluster. This means that: + +1. You should avoid making any assumption (implicit or explicit) that +your test is the only thing running against the cluster. For example, +assuming that your test can run a pod on every node in a +cluster is not safe, as some other tests, running at the +same time as yours, might have saturated one or more nodes in the +cluster. Similarly, running a pod in the system namespace and +assuming that this will increase the count of pods in the system +namespace by one is not safe, as some other test might be creating or +deleting pods in the system namespace at the same time as your test. +If you do legitimately need to write a test like that, make sure to +label it ["\[Serial\]"](e2e-tests.md#kinds_of_tests) so that it's easy +to identify, and not run in parallel with any other tests. +1.
You should avoid doing things to the cluster that make it difficult +for other tests to reliably do what they're trying to do at the same +time. For example, rebooting nodes, disconnecting network interfaces, +or upgrading cluster software as part of your test is likely to +violate the assumptions that other tests might have made about a +reasonably stable cluster environment. If you need to write such +tests, please label them as +["\[Disruptive\]"](e2e-tests.md#kinds_of_tests) so that it's easy to +identify them, and not run them in parallel with other tests. +1. You should avoid making assumptions about the Kubernetes API that +are not part of the API specification, as your tests will break as +soon as these assumptions become invalid. For example, relying on +specific Events, Event reasons or Event messages will make your tests +very brittle. + +#### Speed of execution #### + +We have hundreds of e2e tests, some of which we run serially, one +after the other. If each test takes just a few minutes +to run, that very quickly adds up to many, many hours of total +execution time. We try to keep such total execution time down to a +few tens of minutes at most. Therefore, try (very hard) to keep the +execution time of your individual tests below 2 minutes, ideally +shorter than that. Concretely, adding inappropriately long 'sleep' +statements or other gratuitous waits to tests is a killer. If under +normal circumstances your pod enters the running state within 10 +seconds, and 99.9% of the time within 30 seconds, it would be +gratuitous to wait 5 minutes for this to happen. Rather, just fail +after 30 seconds, with a clear error message as to why your test +failed (e.g. "Pod x failed to become ready after 30 seconds; it +usually takes 10 seconds").
If you do have a truly legitimate reason +for waiting longer than that, or for writing a test which takes longer +than 2 minutes to run, comment very clearly in the code why this is +necessary, and label the test as +["\[Slow\]"](e2e-tests.md#kinds_of_tests), so that it's easy to +identify and avoid in test runs that are required to complete +in a timely manner (for example those that are run against every code +submission before it is allowed to be merged). +Note that completing within, say, 2 minutes only when the test +passes is not generally good enough. Your test should also fail in a +reasonable time. We have seen tests that, for example, wait up to 10 +minutes for each of several pods to become ready. Under good +conditions these tests might pass within a few seconds, but if the +pods never become ready (e.g. due to a system regression) they take a +very long time to fail and typically cause the entire test run to time +out, so that no results are produced. Again, this is a lot less +useful than a test that fails reliably within a minute or two when the +system is not working correctly. + +#### Resilience to relatively rare, temporary infrastructure glitches or delays #### + +Remember that your test will be run many thousands of +times, at different times of day and night, probably on different +cloud providers, under different load conditions. And often the +underlying state of these systems is stored in eventually consistent +data stores. So, for example, if a resource creation request is +theoretically asynchronous, even if you observe it to be practically +synchronous most of the time, write your test to assume that it's +asynchronous (e.g. make the "create" call, and poll or watch the +resource until it's in the correct state before proceeding). +Similarly, don't assume that API endpoints are 100% available. +They're not. Under high load conditions, API calls might temporarily +fail or time out.
In such cases it's appropriate to back off and retry +a few times before failing your test completely (in which case make +the error message very clear about what happened, e.g. "Retried +http://... 3 times - all failed with xxx"). Use the standard +retry mechanisms provided in the libraries detailed below. + +### Some concrete tools at your disposal ### + +Obviously most of the above goals apply to many tests, not just yours. +So we've developed a set of reusable test infrastructure, libraries +and best practices to help you do the right thing, or at least do +the same thing as other tests, so that if that turns out to be the +wrong thing, it can be fixed in one place, not hundreds. + +Here are a few pointers: + ++ [E2e Framework](../../test/e2e/framework/framework.go): + Familiarise yourself with this test framework and how to use it. + Amongst others, it automatically creates uniquely named namespaces + within which your tests can run to avoid name clashes, and reliably + automates cleaning up the mess after your test has completed (it + just deletes everything in the namespace). This helps to ensure + that tests do not leak resources. Note that deleting a namespace + (and by implication everything in it) is currently an expensive + operation. So the fewer resources you create, the less cleaning up + the framework needs to do, and the faster your test (and other + tests running concurrently with yours) will complete. Your tests + should always use this framework. Trying other home-grown + approaches to avoiding name clashes and resource leaks has proven + to be a very bad idea. ++ [E2e utils library](../../test/e2e/framework/util.go): + This handy library provides tons of reusable code for a host of + commonly needed test functionality, including waiting for resources + to enter specified states, safely and consistently retrying failed + operations, usefully reporting errors, and much more.
Make sure + that you're familiar with what's available there, and use it. + Likewise, if you come across a generally useful mechanism that's + not yet implemented there, add it so that others can benefit from + your brilliance. In particular, pay attention to the variety of + timeout- and retry-related constants at the top of that file. Always + try to reuse these constants rather than dream up your own + values. Even if the values there are not precisely what you would + like to use (timeout periods, retry counts etc.), the benefit of + having them be consistent and centrally configurable across our + entire test suite typically outweighs your personal preferences. ++ **Follow the examples of stable, well-written tests:** Some of our + existing end-to-end tests are better written and more reliable than + others. A few examples of well-written tests include: + [Replication Controllers](../../test/e2e/rc.go), + [Services](../../test/e2e/service.go), + [Reboot](../../test/e2e/reboot.go). ++ [Ginkgo Test Framework](https://github.com/onsi/ginkgo): This is the + test library and runner upon which our e2e tests are built. Before + you write or refactor a test, read the docs and make sure that you + understand how it works. In particular, be aware that every test is + uniquely identified and described (e.g. in test reports) by the + concatenation of its `Describe` clause and nested `It` clauses. + So for example `Describe("Pods",...).... It("should be scheduled + with cpu and memory limits")` produces a sane test identifier and + descriptor `Pods should be scheduled with cpu and memory limits`, + which makes it clear what's being tested, and hence what's not + working if it fails.
Other good examples include: + +``` + CAdvisor should be healthy on every node +``` + +and + +``` + Daemon set should run and stop complex daemon +``` + + On the contrary +(these are real examples), the following are less good test +descriptors: + +``` + KubeProxy should test kube-proxy +``` + +and + +``` +Nodes [Disruptive] Network when a node becomes unreachable +[replication controller] recreates pods scheduled on the +unreachable node AND allows scheduling of pods on a node after +it rejoins the cluster +``` + +An improvement might be + +``` +Unreachable nodes are evacuated and then repopulated upon rejoining [Disruptive] +``` + +Note that opening issues for specific better tooling is welcome, and +code implementing that tooling is even more welcome :-). + + + +[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/writing-good-e2e-tests.md?pixel)]() + diff --git a/developer-guides/vagrant.md b/developer-guides/vagrant.md deleted file mode 100755 index b53b0002..00000000 --- a/developer-guides/vagrant.md +++ /dev/null @@ -1,432 +0,0 @@ -## Getting started with Vagrant - -Running Kubernetes with Vagrant is an easy way to run/test/develop on your -local machine in an environment using the same setup procedures when running on -GCE or AWS cloud providers. This provider is not tested on a per PR basis, if -you experience bugs when testing from HEAD, please open an issue. - -### Prerequisites - -1. Install latest version >= 1.8.1 of vagrant from -http://www.vagrantup.com/downloads.html - -2. Install a virtual machine host. Examples: - 1. [Virtual Box](https://www.virtualbox.org/wiki/Downloads) - 2. [VMWare Fusion](https://www.vmware.com/products/fusion/) plus -[Vagrant VMWare Fusion provider](https://www.vagrantup.com/vmware) - 3. [Parallels Desktop](https://www.parallels.com/products/desktop/) -plus -[Vagrant Parallels provider](https://parallels.github.io/vagrant-parallels/) - -3. 
Get or build a -[binary release](../../../docs/getting-started-guides/binary_release.md) - -### Setup - -Setting up a cluster is as simple as running: - -```shell -export KUBERNETES_PROVIDER=vagrant -curl -sS https://get.k8s.io | bash -``` - -Alternatively, you can download -[Kubernetes release](https://github.com/kubernetes/kubernetes/releases) and -extract the archive. To start your local cluster, open a shell and run: - -```shell -cd kubernetes - -export KUBERNETES_PROVIDER=vagrant -./cluster/kube-up.sh -``` - -The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster -management scripts which variant to use. If you forget to set this, the -assumption is you are running on Google Compute Engine. - -By default, the Vagrant setup will create a single master VM (called -kubernetes-master) and one node (called kubernetes-node-1). Each VM will take 1 -GB, so make sure you have at least 2GB to 4GB of free memory (plus appropriate -free disk space). - -Vagrant will provision each machine in the cluster with all the necessary -components to run Kubernetes. The initial setup can take a few minutes to -complete on each machine. - -If you installed more than one Vagrant provider, Kubernetes will usually pick -the appropriate one. However, you can override which one Kubernetes will use by -setting the -[`VAGRANT_DEFAULT_PROVIDER`](https://docs.vagrantup.com/v2/providers/default.html) -environment variable: - -```shell -export VAGRANT_DEFAULT_PROVIDER=parallels -export KUBERNETES_PROVIDER=vagrant -./cluster/kube-up.sh -``` - -By default, each VM in the cluster is running Fedora. - -To access the master or any node: - -```shell -vagrant ssh master -vagrant ssh node-1 -``` - -If you are running more than one node, you can access the others by: - -```shell -vagrant ssh node-2 -vagrant ssh node-3 -``` - -Each node in the cluster installs the docker daemon and the kubelet. 
- -The master node instantiates the Kubernetes master components as pods on the -machine. - -To view the service status and/or logs on the kubernetes-master: - -```shell -[vagrant@kubernetes-master ~] $ vagrant ssh master -[vagrant@kubernetes-master ~] $ sudo su - -[root@kubernetes-master ~] $ systemctl status kubelet -[root@kubernetes-master ~] $ journalctl -ru kubelet - -[root@kubernetes-master ~] $ systemctl status docker -[root@kubernetes-master ~] $ journalctl -ru docker - -[root@kubernetes-master ~] $ tail -f /var/log/kube-apiserver.log -[root@kubernetes-master ~] $ tail -f /var/log/kube-controller-manager.log -[root@kubernetes-master ~] $ tail -f /var/log/kube-scheduler.log -``` - -To view the services on any of the nodes: - -```shell -[vagrant@kubernetes-master ~] $ vagrant ssh node-1 -[vagrant@kubernetes-master ~] $ sudo su - -[root@kubernetes-master ~] $ systemctl status kubelet -[root@kubernetes-master ~] $ journalctl -ru kubelet - -[root@kubernetes-master ~] $ systemctl status docker -[root@kubernetes-master ~] $ journalctl -ru docker -``` - -### Interacting with your Kubernetes cluster with Vagrant. - -With your Kubernetes cluster up, you can manage the nodes in your cluster with -the regular Vagrant commands. - -To push updates to new Kubernetes code after making source changes: - -```shell -./cluster/kube-push.sh -``` - -To stop and then restart the cluster: - -```shell -vagrant halt -./cluster/kube-up.sh -``` - -To destroy the cluster: - -```shell -vagrant destroy -``` - -Once your Vagrant machines are up and provisioned, the first thing to do is to -check that you can use the `kubectl.sh` script. 
- -You may need to build the binaries first, you can do this with `make` - -```shell -$ ./cluster/kubectl.sh get nodes -``` - -### Authenticating with your master - -When using the vagrant provider in Kubernetes, the `cluster/kubectl.sh` script -will cache your credentials in a `~/.kubernetes_vagrant_auth` file so you will -not be prompted for them in the future. - -```shell -cat ~/.kubernetes_vagrant_auth -``` - -```json -{ "User": "vagrant", - "Password": "vagrant", - "CAFile": "/home/k8s_user/.kubernetes.vagrant.ca.crt", - "CertFile": "/home/k8s_user/.kubecfg.vagrant.crt", - "KeyFile": "/home/k8s_user/.kubecfg.vagrant.key" -} -``` - -You should now be set to use the `cluster/kubectl.sh` script. For example try to -list the nodes that you have started with: - -```shell -./cluster/kubectl.sh get nodes -``` - -### Running containers - -You can use `cluster/kube-*.sh` commands to interact with your VM machines: - -```shell -$ ./cluster/kubectl.sh get pods -NAME READY STATUS RESTARTS AGE - -$ ./cluster/kubectl.sh get services -NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE - -$ ./cluster/kubectl.sh get deployments -CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS -``` - -To Start a container running nginx with a Deployment and three replicas: - -```shell -$ ./cluster/kubectl.sh run my-nginx --image=nginx --replicas=3 --port=80 -``` - -When listing the pods, you will see that three containers have been started and -are in Waiting state: - -```shell -$ ./cluster/kubectl.sh get pods -NAME READY STATUS RESTARTS AGE -my-nginx-3800858182-4e6pe 0/1 ContainerCreating 0 3s -my-nginx-3800858182-8ko0s 1/1 Running 0 3s -my-nginx-3800858182-seu3u 0/1 ContainerCreating 0 3s -``` - -When the provisioning is complete: - -```shell -$ ./cluster/kubectl.sh get pods -NAME READY STATUS RESTARTS AGE -my-nginx-3800858182-4e6pe 1/1 Running 0 40s -my-nginx-3800858182-8ko0s 1/1 Running 0 40s -my-nginx-3800858182-seu3u 1/1 Running 0 40s - -$ ./cluster/kubectl.sh get services -NAME 
CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE - -$ ./cluster/kubectl.sh get deployments -NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE -my-nginx 3 3 3 3 1m -``` - -We did not start any Services, hence there are none listed. But we see three -replicas displayed properly. Check the -[guestbook](https://github.com/kubernetes/kubernetes/tree/%7B%7Bpage.githubbranch%7D%7D/examples/guestbook) -application to learn how to create a Service. You can already play with scaling -the replicas with: - -```shell -$ ./cluster/kubectl.sh scale deployments my-nginx --replicas=2 -$ ./cluster/kubectl.sh get pods -NAME READY STATUS RESTARTS AGE -my-nginx-3800858182-4e6pe 1/1 Running 0 2m -my-nginx-3800858182-8ko0s 1/1 Running 0 2m -``` - -Congratulations! - -### Testing - -The following will run all of the end-to-end testing scenarios assuming you set -your environment: - -```shell -NUM_NODES=3 go run hack/e2e.go -v --build --up --test --down -``` - -### Troubleshooting - -#### I keep downloading the same (large) box all the time! - -By default the Vagrantfile will download the box from S3. You can change this -(and cache the box locally) by providing a name and an alternate URL when -calling `kube-up.sh` - -```shell -export KUBERNETES_BOX_NAME=choose_your_own_name_for_your_kuber_box -export KUBERNETES_BOX_URL=path_of_your_kuber_box -export KUBERNETES_PROVIDER=vagrant -./cluster/kube-up.sh -``` - -#### I am getting timeouts when trying to curl the master from my host! - -During provision of the cluster, you may see the following message: - -```shell -Validating node-1 -............. -Waiting for each node to be registered with cloud provider -error: couldn't read version from server: Get https://10.245.1.2/api: dial tcp 10.245.1.2:443: i/o timeout -``` - -Some users have reported VPNs may prevent traffic from being routed to the host -machine into the virtual machine network. 
- -To debug, first verify that the master is binding to the proper IP address: - -``` -$ vagrant ssh master -$ ifconfig | grep eth1 -C 2 -eth1: flags=4163  mtu 1500 -        inet 10.245.1.2  netmask 255.255.255.0  broadcast 10.245.1.255 -``` - -Then verify that your host machine has a network connection to a bridge that can -serve that address: - -```shell -$ ifconfig | grep 10.245.1 -C 2 - -vboxnet5: flags=4163 mtu 1500 - inet 10.245.1.1 netmask 255.255.255.0 broadcast 10.245.1.255 - inet6 fe80::800:27ff:fe00:5 prefixlen 64 scopeid 0x20 - ether 0a:00:27:00:00:05 txqueuelen 1000 (Ethernet) -``` - -If you do not see a response on your host machine, you will most likely need to -connect your host to the virtual network created by the virtualization provider. - -If you do see a network, but are still unable to ping the machine, check if your -VPN is blocking the request. - -#### I just created the cluster, but I am getting authorization errors! - -You probably have an incorrect `~/.kubernetes_vagrant_auth` file for the cluster -you are attempting to contact. - -```shell -rm ~/.kubernetes_vagrant_auth -``` - -After using kubectl.sh, make sure that the correct credentials are set: - -```shell -cat ~/.kubernetes_vagrant_auth -``` - -```json -{ - "User": "vagrant", - "Password": "vagrant" -} -``` - -#### I just created the cluster, but I do not see my container running! - -If this is your first time creating the cluster, the kubelet on each node -schedules a number of `docker pull` operations to fetch prerequisite images. This -can take some time and as a result may delay your initial pod getting -provisioned. - -#### I have Vagrant up but the nodes won't validate! - -Log on to one of the nodes (`vagrant ssh node-1`) and inspect the salt node -log (`sudo cat /var/log/salt/node`). - -#### I want to change the number of nodes! - -You can control the number of nodes that are instantiated via the environment -variable `NUM_NODES` on your host machine. 
If you plan to work with replicas, we -strongly encourage you to work with enough nodes to satisfy your largest -intended replica size. If you do not plan to work with replicas, you can save -some system resources by running with a single node. You do this by setting -`NUM_NODES` to 1 like so: - -```shell -export NUM_NODES=1 -``` - -#### I want my VMs to have more memory! - -You can control the memory allotted to virtual machines with the -`KUBERNETES_MEMORY` environment variable. Just set it to the number of megabytes -you would like the machines to have. For example: - -```shell -export KUBERNETES_MEMORY=2048 -``` - -If you need more granular control, you can set the amount of memory for the -master and nodes independently. For example: - -```shell -export KUBERNETES_MASTER_MEMORY=1536 -export KUBERNETES_NODE_MEMORY=2048 -``` - -#### I want to set proxy settings for my Kubernetes cluster bootstrapping! - -If you are behind a proxy, you need to install the Vagrant proxy plugin and set -the proxy settings: - -```shell -vagrant plugin install vagrant-proxyconf -export KUBERNETES_HTTP_PROXY=http://username:password@proxyaddr:proxyport -export KUBERNETES_HTTPS_PROXY=https://username:password@proxyaddr:proxyport -``` - -You can also specify addresses that bypass the proxy, for example: - -```shell -export KUBERNETES_NO_PROXY=127.0.0.1 -``` - -If you are using `sudo` to build Kubernetes, use the `-E` flag to pass in the -environment variables. For example, if running `make quick-release`, use: - -```shell -sudo -E make quick-release -``` - -#### I have repository access errors during VM provisioning! 
- -Sometimes VM provisioning may fail with errors that look like this: - -``` -Timeout was reached for https://mirrors.fedoraproject.org/metalink?repo=fedora-23&arch=x86_64 [Connection timed out after 120002 milliseconds] -``` - -You may use a custom Fedora repository URL to fix this: - -```shell -export CUSTOM_FEDORA_REPOSITORY_URL=https://download.fedoraproject.org/pub/fedora/ -``` - -#### I ran vagrant suspend and nothing works! - -`vagrant suspend` seems to mess up the network. It's not supported at this time. - -#### I want vagrant to sync folders via nfs! - -You can ensure that vagrant uses nfs to sync folders with virtual machines by -setting the KUBERNETES_VAGRANT_USE_NFS environment variable to 'true'. nfs is -faster than virtualbox or vmware's 'shared folders' and does not require guest -additions. See the -[vagrant docs](http://docs.vagrantup.com/v2/synced-folders/nfs.html) for details -on configuring nfs on the host. This setting will have no effect on the libvirt -provider, which uses nfs by default. For example: - -```shell -export KUBERNETES_VAGRANT_USE_NFS=true -``` - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/developer-guides/vagrant.md?pixel)]() - diff --git a/development.md b/development.md deleted file mode 100644 index 1349e003..00000000 --- a/development.md +++ /dev/null @@ -1,251 +0,0 @@ -# Development Guide - -This document is intended to be the canonical source of truth for things like -supported toolchain versions for building Kubernetes. If you find a -requirement that this doc does not capture, please -[submit an issue](https://github.com/kubernetes/kubernetes/issues) on github. If -you find other docs with references to requirements that are not simply links to -this doc, please [submit an issue](https://github.com/kubernetes/kubernetes/issues). - -This document is intended to be relative to the branch in which it is found. 
-It is guaranteed that requirements will change over time for the development -branch, but release branches of Kubernetes should not change. - -## Building Kubernetes with Docker - -Official releases are built using Docker containers. To build Kubernetes using -Docker please follow [these instructions](http://releases.k8s.io/HEAD/build-tools/README.md). - -## Building Kubernetes on a local OS/shell environment - -Many of the Kubernetes development helper scripts rely on a fairly up-to-date -GNU tools environment, so most recent Linux distros should work just fine -out-of-the-box. Note that Mac OS X ships with somewhat outdated BSD-based tools, -some of which may be incompatible in subtle ways, so we recommend -[replacing those with modern GNU tools](https://www.topbug.net/blog/2013/04/14/install-and-use-gnu-command-line-tools-in-mac-os-x/). - -### Go development environment - -Kubernetes is written in the [Go](http://golang.org) programming language. -To build Kubernetes without using Docker containers, you'll need a Go -development environment. Builds for Kubernetes 1.0 - 1.2 require Go version -1.4.2. Builds for Kubernetes 1.3 and higher require Go version 1.6.0. If you -haven't set up a Go development environment, please follow [these -instructions](http://golang.org/doc/code.html) to install the go tools. - -Set up your GOPATH and add a path entry for go binaries to your PATH. This is typically -added to your `~/.profile`: - -```sh -export GOPATH=$HOME/go -export PATH=$PATH:$GOPATH/bin -``` - -### Godep dependency management - -Kubernetes build and test scripts use [godep](https://github.com/tools/godep) to -manage dependencies. - -#### Install godep - -Ensure that [mercurial](http://mercurial.selenic.com/wiki/Download) is -installed on your system. (Some of godep's dependencies use the Mercurial -source control system.) Use `apt-get install mercurial` or `yum install -mercurial` on Linux, or [brew.sh](http://brew.sh) on OS X, or download directly -from mercurial. 
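A quick sanity check of the prerequisites above can save a confusing failure later. The following sketch (the version strings are assumptions taken from the requirements stated earlier in this doc) verifies the Go toolchain and mercurial:

```shell
# Verify the Go toolchain matches the documented requirement
# (1.6.x for Kubernetes 1.3+), and that mercurial is available for godep.
required_go="1.6"
have_go=$(go version | awk '{print $3}' | sed 's/^go//')  # e.g. "1.6.2"
case "$have_go" in
  "$required_go"*) echo "go $have_go OK" ;;
  *) echo "warning: need go ${required_go}.x, found $have_go" >&2 ;;
esac

command -v hg >/dev/null 2>&1 || echo "warning: mercurial (hg) not on PATH" >&2
```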
- -Install godep and go-bindata (may require sudo): - -```sh -go get -u github.com/tools/godep -go get -u github.com/jteeuwen/go-bindata/go-bindata -``` - -Note: -At this time, godep version >= v63 is known to work in the Kubernetes project. - -To check your version of godep: - -```sh -$ godep version -godep v74 (linux/amd64/go1.6.2) -``` - -Developers planning to manage dependencies in the `vendor/` tree may want to -explore alternative environment setups. See -[using godep to manage dependencies](godep.md). - -### Local build using make - -To build Kubernetes using your local Go development environment (this generates Linux -binaries): - -```sh - make -``` - -You may pass build options and packages to the script as necessary. For example, -to build with optimizations disabled so that source-level debugging tools can be used: - -```sh - make GOGCFLAGS="-N -l" -``` - -To build binaries for all platforms: - -```sh - make cross -``` - -### How to update the Go version used to test & build k8s - -The Kubernetes project tries to stay on the latest version of Go so it can -benefit from the improvements to the language over time and can easily -bump to a minor release version for security updates. - -Since Kubernetes is mostly built and tested in containers, there are a few -unique places you need to update the go version. - -- The image for cross compiling in [build-tools/build-image/cross/](../../build-tools/build-image/cross/): update the `VERSION` file and `Dockerfile`. -- Update [dockerized-e2e-runner.sh](https://github.com/kubernetes/test-infra/blob/master/jenkins/dockerized-e2e-runner.sh) to run a kubekins-e2e with the desired go version, which requires pushing [e2e-image](https://github.com/kubernetes/test-infra/tree/master/jenkins/e2e-image) and [test-image](https://github.com/kubernetes/test-infra/tree/master/jenkins/test-image) images that are `FROM` the desired go version. 
-- The docker image being run in [gotest-dockerized.sh](https://github.com/kubernetes/test-infra/tree/master/jenkins/gotest-dockerized.sh). -- The cross tag `KUBE_BUILD_IMAGE_CROSS_TAG` in [build-tools/common.sh](../../build-tools/common.sh) - -## Workflow - -Below, we outline one of the more common git workflows that core developers use. -Other git workflows are also valid. - -### Visual overview - -![Git workflow](git_workflow.png) - -### Fork the main repository - -1. Go to https://github.com/kubernetes/kubernetes -2. Click the "Fork" button (at the top right) - -### Clone your fork - -The commands below require that you have $GOPATH set ([$GOPATH -docs](https://golang.org/doc/code.html#GOPATH)). We highly recommend you put -Kubernetes' code into your GOPATH. Note: the commands below will not work if -there is more than one directory in your `$GOPATH`. - -```sh -mkdir -p $GOPATH/src/k8s.io -cd $GOPATH/src/k8s.io -# Replace "$YOUR_GITHUB_USERNAME" below with your github username -git clone https://github.com/$YOUR_GITHUB_USERNAME/kubernetes.git -cd kubernetes -git remote add upstream 'https://github.com/kubernetes/kubernetes.git' -``` - -### Create a branch and make changes - -```sh -git checkout -b my-feature -# Make your code changes -``` - -### Keeping your development fork in sync - -```sh -git fetch upstream -git rebase upstream/master -``` - -Note: If you have write access to the main repository at -github.com/kubernetes/kubernetes, you should modify your git configuration so -that you can't accidentally push to upstream: - -```sh -git remote set-url --push upstream no_push -``` - -### Committing changes to your fork - -Before committing any changes, please link/copy the pre-commit hook into your -.git directory. This will keep you from accidentally committing non-gofmt'd Go -code. This hook will also run a build and check whether documentation generation -scripts need to be executed. - -The hook requires both Godep and etcd on your `PATH`. 
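Since a missing binary only surfaces when the hook actually fires, it can help to confirm both dependencies resolve first (a sketch):

```shell
# Confirm the pre-commit hook's dependencies resolve on PATH.
for bin in godep etcd; do
  if command -v "$bin" >/dev/null 2>&1; then
    echo "$bin found at $(command -v "$bin")"
  else
    echo "$bin NOT FOUND on PATH" >&2
  fi
done
```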
- -```sh -cd kubernetes/.git/hooks/ -ln -s ../../hooks/pre-commit . -``` - -Then you can commit your changes and push them to your fork: - -```sh -git commit -git push -f origin my-feature -``` - -### Creating a pull request - -1. Visit https://github.com/$YOUR_GITHUB_USERNAME/kubernetes -2. Click the "Compare & pull request" button next to your "my-feature" branch. -3. Check out the pull request [process](pull-requests.md) for more details. - -**Note:** If you have write access, please refrain from using the GitHub UI for creating PRs, because GitHub will create the PR branch inside the main repository rather than inside your fork. - -### Getting a code review - -Once your pull request has been opened it will be assigned to one or more -reviewers. Those reviewers will do a thorough code review, looking for -correctness, bugs, opportunities for improvement, documentation and comments, -and style. - -Very small PRs are easy to review. Very large PRs are very difficult to -review. GitHub has a built-in code review tool, which is what most people use. -At the assigned reviewer's discretion, a PR may be switched to use -[Reviewable](https://reviewable.k8s.io) instead. Once a PR is switched to -Reviewable, please ONLY send or reply to comments through Reviewable. Mixing -code review tools can be very confusing. - -See [Faster Reviews](faster_reviews.md) for some thoughts on how to streamline -the review process. - -### When to retain commits and when to squash - -Upon merge, all git commits should represent meaningful milestones or units of -work. Use commits to add clarity to the development and review process. - -Before merging a PR, squash any "fix review feedback", "typo", and "rebased" -sorts of commits. It is not imperative that every commit in a PR compile and -pass tests independently, but it is worth striving for. For mass automated -fixups (e.g. 
automated doc formatting), use one or more commits for the -changes to tooling and a final commit to apply the fixup en masse. This makes -reviews much easier. - -## Testing - -Three basic commands let you run unit, integration and/or e2e tests: - -```sh -cd kubernetes -make test # Run every unit test -make test WHAT=pkg/util/cache GOFLAGS=-v # Run tests of a package verbosely -make test-integration # Run integration tests, requires etcd -make test-e2e # Run e2e tests -``` - -See the [testing guide](testing.md) and [end-to-end tests](e2e-tests.md) for additional information and scenarios. - -## Regenerating the CLI documentation - -```sh -hack/update-generated-docs.sh -``` - - - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/development.md?pixel)]() - diff --git a/e2e-node-tests.md b/e2e-node-tests.md deleted file mode 100644 index 5e5f5b49..00000000 --- a/e2e-node-tests.md +++ /dev/null @@ -1,231 +0,0 @@ -# Node End-To-End tests - -Node e2e tests are component tests meant for testing the Kubelet code on a custom host environment. - -Tests can be run either locally or against a host running on GCE. - -Node e2e tests are run as both pre- and post-submit tests by the Kubernetes project. - -*Note: Linux only. Mac and Windows unsupported.* - -*Note: There is no scheduler running. The e2e tests have to do manual scheduling, e.g. by using `framework.PodClient`.* - -# Running tests - -## Locally - -Why run tests *locally*? It is much faster than running them remotely. 
- -Prerequisites: -- [Install etcd](https://github.com/coreos/etcd/releases) on your PATH - - Verify etcd is installed correctly by running `which etcd` - - Or make etcd binary available and executable at `/tmp/etcd` -- [Install ginkgo](https://github.com/onsi/ginkgo) on your PATH - - Verify ginkgo is installed correctly by running `which ginkgo` - -From the Kubernetes base directory, run: - -```sh -make test-e2e-node -``` - -This will: run the *ginkgo* binary against the subdirectory *test/e2e_node*, which will in turn: - -- Ask for sudo access (needed for running some of the processes) -- Build the Kubernetes source code -- Pre-pull docker images used by the tests -- Start a local instance of *etcd* -- Start a local instance of *kube-apiserver* -- Start a local instance of *kubelet* -- Run the test using the locally started processes -- Output the test results to STDOUT -- Stop *kubelet*, *kube-apiserver*, and *etcd* - -## Remotely - -Why run tests *remotely*? Tests run in a customized, pristine environment that closely mimics the -pre- and post-submit testing performed by the project. 
- -Prerequisites: -- [join the googlegroup](https://groups.google.com/forum/#!forum/kubernetes-dev) -`kubernetes-dev@googlegroups.com` - - *This provides read access to the node test images.* - -- Set up a [Google Cloud Platform](https://cloud.google.com/) account and project with Google Compute Engine enabled -- Install and set up the [gcloud sdk](https://cloud.google.com/sdk/downloads) - - Verify the sdk is set up correctly by running `gcloud compute instances list` and `gcloud compute images list --project kubernetes-node-e2e-images` - -Run: - -```sh -make test-e2e-node REMOTE=true -``` - -This will: -- Build the Kubernetes source code -- Create a new GCE instance using the default test image - - Instance will be called **test-e2e-node-containervm-v20160321-image** -- Look up the instance's public IP address -- Copy a compressed archive file to the host containing the following binaries: - - ginkgo - - kubelet - - kube-apiserver - - e2e_node.test (this binary contains the actual tests to be run) -- Unzip the archive to a directory under **/tmp/gcloud** -- Run the tests using the `ginkgo` command - - Starts etcd, kube-apiserver, kubelet - - The ginkgo command is used because this supports more features than running the test binary directly -- Output the remote test results to STDOUT -- `scp` the log files back to the local host under /tmp/_artifacts/e2e-node-containervm-v20160321-image -- Stop the processes on the remote host -- **Leave the GCE instance running** - -**Note: Subsequent tests run using the same image will *reuse the existing host* instead of deleting it and -provisioning a new one. To delete the GCE instance after each test see -*[DELETE_INSTANCE](#delete-instance-after-tests-run)*.** - - -# Additional Remote Options - -## Run tests using different images - -This is useful if you want to run tests against a host using a different OS distro or container runtime than -provided by the default image. - -List the available test images using gcloud. 
- -```sh -make test-e2e-node LIST_IMAGES=true -``` - -This will output a list of the available images for the default image project. - -Then run: - -```sh -make test-e2e-node REMOTE=true IMAGES="" -``` - -## Run tests against a running GCE instance (not an image) - -This is useful if you have a host instance running already and want to run the tests there instead of on a new instance. - -```sh -make test-e2e-node REMOTE=true HOSTS="" -``` - -## Delete instance after tests run - -This is useful if you want to recreate the instance for each test run to trigger flakes related to starting the instance. - -```sh -make test-e2e-node REMOTE=true DELETE_INSTANCES=true -``` - -## Keep instance, test binaries, and *processes* around after tests run - -This is useful if you want to manually inspect or debug the kubelet process run as part of the tests. - -```sh -make test-e2e-node REMOTE=true CLEANUP=false -``` - -## Run tests using an image in another project - -This is useful if you want to create your own host image in another project and use it for testing. - -```sh -make test-e2e-node REMOTE=true IMAGE_PROJECT="" IMAGES="" -``` - -Setting up your own host image may require additional steps such as installing etcd or docker. See -[setup_host.sh](../../test/e2e_node/environment/setup_host.sh) for common steps to setup hosts to run node tests. - -## Create instances using a different instance name prefix - -This is useful if you want to create instances using a different name so that you can run multiple copies of the -test in parallel against different instances of the same image. 
- -```sh -make test-e2e-node REMOTE=true INSTANCE_PREFIX="my-prefix" -``` - -# Additional Test Options for both Remote and Local execution - -## Only run a subset of the tests - -To run tests matching a regex: - -```sh -make test-e2e-node REMOTE=true FOCUS="" -``` - -To run tests NOT matching a regex: - -```sh -make test-e2e-node REMOTE=true SKIP="" -``` - -## Run tests continually until they fail - -This is useful if you are trying to debug a flaky test failure. This will cause ginkgo to continually -run the tests until they fail. **Note: this will only perform test setup once (e.g. creating the instance) and is -less useful for catching flakes related to creating the instance from an image.** - -```sh -make test-e2e-node REMOTE=true RUN_UNTIL_FAILURE=true -``` - -## Run tests in parallel - -Running tests in parallel can usually shorten the test duration. By default node -e2e test runs with `--nodes=8` (see ginkgo flag -[--nodes](https://onsi.github.io/ginkgo/#parallel-specs)). You can use the -`PARALLELISM` option to change the parallelism. - -```sh -make test-e2e-node PARALLELISM=4 # run test with 4 parallel nodes -make test-e2e-node PARALLELISM=1 # run test sequentially -``` - -## Run tests with kubenet network plugin - -[kubenet](http://kubernetes.io/docs/admin/network-plugins/#kubenet) is -the default network plugin used by kubelet since Kubernetes 1.3. The -plugin requires [CNI](https://github.com/containernetworking/cni) and -[nsenter](http://man7.org/linux/man-pages/man1/nsenter.1.html). - -Currently, kubenet is enabled by default for Remote execution `REMOTE=true`, -but disabled for Local execution. **Note: kubenet is not supported for -local execution currently. This may cause network-related test results to -differ between Local and Remote execution. 
So if you want to run network-related -tests, Remote execution is recommended.** - -To enable/disable kubenet: - -```sh -make test-e2e-node TEST_ARGS="--disable-kubenet=false" # enable kubenet -make test-e2e-node TEST_ARGS="--disable-kubenet=true" # disable kubenet -``` - -## Additional QoS Cgroups Hierarchy level testing - -For testing with the QoS Cgroup Hierarchy enabled, you can pass the `--experimental-cgroups-per-qos` flag as an argument into Ginkgo using `TEST_ARGS`: - -```sh -make test-e2e-node TEST_ARGS="--experimental-cgroups-per-qos=true" -``` - -# Notes on tests run by the Kubernetes project during pre- and post-submit. - -The node e2e tests are run by the PR builder for each Pull Request and the results are published at -the bottom of the comments section. To re-run just the node e2e tests from the PR builder add the comment -`@k8s-bot node e2e test this issue: #` and **include a link to the test -failure logs if caused by a flake.** - -The PR builder runs tests against the images listed in [jenkins-pull.properties](../../test/e2e_node/jenkins/jenkins-pull.properties) - -The post submit tests run against the images listed in [jenkins-ci.properties](../../test/e2e_node/jenkins/jenkins-ci.properties) - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/e2e-node-tests.md?pixel)]() - diff --git a/e2e-tests.md b/e2e-tests.md deleted file mode 100644 index fc8f1995..00000000 --- a/e2e-tests.md +++ /dev/null @@ -1,719 +0,0 @@ -# End-to-End Testing in Kubernetes - -Updated: 5/3/2016 - -**Table of Contents** - - -- [End-to-End Testing in Kubernetes](#end-to-end-testing-in-kubernetes) - - [Overview](#overview) - - [Building and Running the Tests](#building-and-running-the-tests) - - [Cleaning up](#cleaning-up) - - [Advanced testing](#advanced-testing) - - [Bringing up a cluster for testing](#bringing-up-a-cluster-for-testing) - - [Federation e2e tests](#federation-e2e-tests) - - [Configuring federation e2e 
tests](#configuring-federation-e2e-tests) - - [Image Push Repository](#image-push-repository) - - [Build](#build) - - [Deploy federation control plane](#deploy-federation-control-plane) - - [Run the Tests](#run-the-tests) - - [Teardown](#teardown) - - [Shortcuts for test developers](#shortcuts-for-test-developers) - - [Debugging clusters](#debugging-clusters) - - [Local clusters](#local-clusters) - - [Testing against local clusters](#testing-against-local-clusters) - - [Version-skewed and upgrade testing](#version-skewed-and-upgrade-testing) - - [Kinds of tests](#kinds-of-tests) - - [Viper configuration and hierarchical test parameters](#viper-configuration-and-hierarchichal-test-parameters) - - [Conformance tests](#conformance-tests) - - [Defining Conformance Subset](#defining-conformance-subset) - - [Continuous Integration](#continuous-integration) - - [What is CI?](#what-is-ci) - - [What runs in CI?](#what-runs-in-ci) - - [Non-default tests](#non-default-tests) - - [The PR-builder](#the-pr-builder) - - [Adding a test to CI](#adding-a-test-to-ci) - - [Moving a test out of CI](#moving-a-test-out-of-ci) - - [Performance Evaluation](#performance-evaluation) - - [One More Thing](#one-more-thing) - - - -## Overview - -End-to-end (e2e) tests for Kubernetes provide a mechanism to test end-to-end -behavior of the system, and are the last signal to ensure end user operations -match developer specifications. Although unit and integration tests provide a -good signal, in a distributed system like Kubernetes it is not uncommon that a -minor change may pass all unit and integration tests, but cause unforeseen -changes at the system level. - -The primary objectives of the e2e tests are to ensure a consistent and reliable -behavior of the Kubernetes code base, and to catch hard-to-test bugs before -users do, when unit and integration tests are insufficient. 
- -The e2e tests in Kubernetes are built atop of -[Ginkgo](http://onsi.github.io/ginkgo/) and -[Gomega](http://onsi.github.io/gomega/). There are a host of features that this -Behavior-Driven Development (BDD) testing framework provides, and it is -recommended that the developer read the documentation prior to diving into the -tests. - -The purpose of *this* document is to serve as a primer for developers who are -looking to execute or add tests using a local development environment. - -Before writing new tests or making substantive changes to existing tests, you -should also read [Writing Good e2e Tests](writing-good-e2e-tests.md). - -## Building and Running the Tests - -There are a variety of ways to run e2e tests, but we aim to decrease the number -of ways to run e2e tests to a canonical way: `hack/e2e.go`. - -You can run an end-to-end test which will bring up a master and nodes, perform -some tests, and then tear everything down. Make sure you have followed the -getting started steps for your chosen cloud platform (which might involve -changing the `KUBERNETES_PROVIDER` environment variable to something other than -"gce"). - -To build Kubernetes, bring up a cluster, run tests, and tear everything down, use: - -```sh -go run hack/e2e.go -v --build --up --test --down -``` - -If you'd like to just perform one of these steps, here are some examples: - -```sh -# Build binaries for testing -go run hack/e2e.go -v --build - -# Create a fresh cluster. 
Deletes a cluster first, if it exists -go run hack/e2e.go -v --up - -# Run all tests -go run hack/e2e.go -v --test - -# Run tests matching the regex "\[Feature:Performance\]" -go run hack/e2e.go -v --test --test_args="--ginkgo.focus=\[Feature:Performance\]" - -# Conversely, exclude tests that match the regex "Pods.*env" -go run hack/e2e.go -v --test --test_args="--ginkgo.skip=Pods.*env" - -# Run tests in parallel, skip any that must be run serially -GINKGO_PARALLEL=y go run hack/e2e.go -v --test --test_args="--ginkgo.skip=\[Serial\]" - -# Run tests in parallel, skip any that must be run serially and keep the test namespace if test failed -GINKGO_PARALLEL=y go run hack/e2e.go -v --test --test_args="--ginkgo.skip=\[Serial\] --delete-namespace-on-failure=false" - -# Flags can be combined, and their actions will take place in this order: -# --build, --up, --test, --down -# -# You can also specify an alternative provider, such as 'aws' -# -# e.g.: -KUBERNETES_PROVIDER=aws go run hack/e2e.go -v --build --up --test --down - -# -ctl can be used to quickly call kubectl against your e2e cluster. Useful for -# cleaning up after a failed test or viewing logs. Use -v to avoid suppressing -# kubectl output. -go run hack/e2e.go -v -ctl='get events' -go run hack/e2e.go -v -ctl='delete pod foobar' -``` - -The tests are built into a single binary which can be used to deploy a -Kubernetes system or to run tests against an already-deployed Kubernetes system. -See `go run hack/e2e.go --help` (or the flag definitions in `hack/e2e.go`) for -more options, such as reusing an existing cluster. 
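Putting the steps above together, one common iteration loop against a cluster that is already up looks like this (a sketch using only flags documented above; the focus regex is just an example):

```shell
# Build once, then rerun only the test phase while iterating on a test;
# omitting --up/--down reuses the running cluster.
go run hack/e2e.go -v --build
go run hack/e2e.go -v --test --test_args="--ginkgo.focus=\[Feature:Performance\]"

# Tear everything down when finished.
go run hack/e2e.go -v --down
```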
- -### Cleaning up - -During a run, pressing `control-C` should result in an orderly shutdown, but if -something goes wrong and you still have some VMs running you can force a cleanup -with this command: - -```sh -go run hack/e2e.go -v --down -``` - -## Advanced testing - -### Bringing up a cluster for testing - -If you want, you may bring up a cluster in some other manner and run tests -against it. To do so, or to do other non-standard test things, you can pass -arguments into Ginkgo using `--test_args` (e.g. see above). For the purposes of -brevity, we will look at a subset of the options, which are listed below: - -``` ---ginkgo.dryRun=false: If set, ginkgo will walk the test hierarchy without -actually running anything. Best paired with -v. - ---ginkgo.failFast=false: If set, ginkgo will stop running a test suite after a -failure occurs. - ---ginkgo.failOnPending=false: If set, ginkgo will mark the test suite as failed -if any specs are pending. - ---ginkgo.focus="": If set, ginkgo will only run specs that match this regular -expression. - ---ginkgo.skip="": If set, ginkgo will only run specs that do not match this -regular expression. - ---ginkgo.trace=false: If set, default reporter prints out the full stack trace -when a failure occurs. - ---ginkgo.v=false: If set, default reporter prints out all specs as they begin. - ---host="": The host, or api-server, to connect to - ---kubeconfig="": Path to kubeconfig containing embedded authinfo. - ---prom-push-gateway="": The URL to prometheus gateway, so that metrics can be -pushed during e2es and scraped by prometheus. Typically something like -127.0.0.1:9091. - ---provider="": The name of the Kubernetes provider (gce, gke, local, vagrant, -etc.) - ---repo-root="../../": Root directory of kubernetes repository, for finding test -files. -``` - -Prior to running the tests, you may want to first create a simple auth file in -your home directory, e.g. 
`$HOME/.kube/config`, with the following: - -``` -{ - "User": "root", - "Password": "" -} -``` - -As mentioned earlier there are a host of other options that are available, but -they are left to the developer. - -**NOTE:** If you are running tests on a local cluster repeatedly, you may need -to periodically perform some manual cleanup: - - - `rm -rf /var/run/kubernetes` clears kube-generated credentials; sometimes -stale permissions can cause problems. - - - `sudo iptables -F` clears iptables rules left by the kube-proxy. - -### Federation e2e tests - -By default, `e2e.go` provisions a single Kubernetes cluster, and any `Feature:Federation` ginkgo tests will be skipped. - -Federation e2e testing involves bringing up multiple "underlying" Kubernetes clusters, -and deploying the federation control plane as a Kubernetes application on the underlying clusters. - -The federation e2e tests are still managed via `e2e.go`, but require some extra configuration items. - -#### Configuring federation e2e tests - -The following environment variables will enable federation e2e building, provisioning and testing. - -```sh -$ export FEDERATION=true -$ export E2E_ZONES="us-central1-a us-central1-b us-central1-f" -``` - -A Kubernetes cluster will be provisioned in each zone listed in `E2E_ZONES`. A zone can only appear once in the `E2E_ZONES` list. - -#### Image Push Repository - -Next, specify the docker repository where your CI images will be pushed. - -* **If `KUBERNETES_PROVIDER=gce` or `KUBERNETES_PROVIDER=gke`**: - - If you use the same GCP project where you run the e2e tests as the container image repository, - the `FEDERATION_PUSH_REPO_BASE` environment variable defaults to "gcr.io/${DEFAULT_GCP_PROJECT_NAME}". - You can skip ahead to the **Build** section. - - You can simply set your push repo base based on your project name, and the necessary repositories will be - auto-created when you first push your container images. 
- - ```sh - $ export FEDERATION_PUSH_REPO_BASE="gcr.io/${GCE_PROJECT_NAME}" - ``` - - Skip ahead to the **Build** section. - -* **For all other providers**: - - You'll be responsible for creating and managing access to the repositories manually. - - ```sh - $ export FEDERATION_PUSH_REPO_BASE="quay.io/colin_hom" - ``` - - Given this example, the `federation-apiserver` container image will be pushed to the repository - `quay.io/colin_hom/federation-apiserver`. - - The docker client on the machine running `e2e.go` must have push access for the following pre-existing repositories: - - * `${FEDERATION_PUSH_REPO_BASE}/federation-apiserver` - * `${FEDERATION_PUSH_REPO_BASE}/federation-controller-manager` - - These repositories must allow public read access, as the e2e node docker daemons will not have any credentials. If you're using - GCE/GKE as your provider, the repositories will have read-access by default. - -#### Build - -* Compile the binaries and build container images: - - ```sh - $ KUBE_RELEASE_RUN_TESTS=n KUBE_FASTBUILD=true go run hack/e2e.go -v -build - ``` - -* Push the federation container images - - ```sh - $ build-tools/push-federation-images.sh - ``` - -#### Deploy federation control plane - -The following command will create the underlying Kubernetes clusters in each of `E2E_ZONES`, and then provision the -federation control plane in the cluster occupying the last zone in the `E2E_ZONES` list. - -```sh -$ go run hack/e2e.go -v --up -``` - -#### Run the Tests - -This will run only the `Feature:Federation` e2e tests. You can omit the `ginkgo.focus` argument to run the entire e2e suite. 
-
-```sh
-$ go run hack/e2e.go -v --test --test_args="--ginkgo.focus=\[Feature:Federation\]"
-```
-
-#### Teardown
-
-```sh
-$ go run hack/e2e.go -v --down
-```
-
-#### Shortcuts for test developers
-
-* To speed up `e2e.go -up`, provision a single-node Kubernetes cluster in a single e2e zone:
-
-  `NUM_NODES=1 E2E_ZONES="us-central1-f"`
-
-  Keep in mind that some tests may require multiple underlying clusters and/or minimum compute resource availability.
-
-* You can quickly recompile the e2e testing framework via `go install ./test/e2e`. This will not do anything besides
-  allow you to verify that the go code compiles.
-
-* If you want to run your e2e testing framework without re-provisioning the e2e setup, you can do so via
-  `make WHAT=test/e2e/e2e.test` and then re-running the ginkgo tests.
-
-* If you're hacking around with the federation control plane deployment itself,
-  you can quickly re-deploy the federation control plane Kubernetes manifests without tearing any resources down.
-  To re-deploy the federation control plane after running `-up` for the first time:
-
-  ```sh
-  $ federation/cluster/federation-up.sh
-  ```
-
-### Debugging clusters
-
-If a cluster fails to initialize, or you'd like to better understand cluster
-state to debug a failed e2e test, you can use the `cluster/log-dump.sh` script
-to gather logs.
-
-This script requires that the cluster provider supports ssh. Assuming it does,
-running:
-
-```
-cluster/log-dump.sh <directory>
-```
-
-will ssh to the master and all nodes and download a variety of useful logs to
-the provided directory (which should already exist).
-
-The Google-run Jenkins builds automatically collect these logs for every
-build, saving them in the `artifacts` directory uploaded to GCS.
-
-### Local clusters
-
-It can be much faster to iterate on a local cluster instead of a cloud-based
-one. 
To start a local cluster, you can run:
-
-```sh
-# The PATH construction is needed because PATH is one of the special-cased
-# environment variables not passed by sudo -E
-sudo PATH=$PATH hack/local-up-cluster.sh
-```
-
-This will start a single-node Kubernetes cluster that runs pods using the local
-docker daemon. Press Control-C to stop the cluster.
-
-You can generate a valid kubeconfig file by following the instructions printed at
-the end of the aforementioned script.
-
-#### Testing against local clusters
-
-In order to run an E2E test against a locally running cluster, point the tests
-at a custom host directly:
-
-```sh
-export KUBECONFIG=/path/to/kubeconfig
-export KUBE_MASTER_IP="http://127.0.0.1:<port>"
-export KUBE_MASTER=local
-go run hack/e2e.go -v --test
-```
-
-To control the tests that are run:
-
-```sh
-go run hack/e2e.go -v --test --test_args="--ginkgo.focus=\"Secrets\""
-```
-
-### Version-skewed and upgrade testing
-
-We run version-skewed tests to check that newer versions of Kubernetes work
-similarly enough to older versions. The general strategy is to cover the following cases:
-
-1. One version of `kubectl` with another version of the cluster and tests (e.g.
-   that v1.2 and v1.4 `kubectl` don't break v1.3 tests running against a v1.3
-   cluster).
-1. A newer version of the Kubernetes master with older nodes and tests (e.g.
-   that upgrading a master to v1.3 with nodes at v1.2 still passes v1.2 tests).
-1. A newer version of the whole cluster with older tests (e.g. that a cluster
-   upgraded---master and nodes---to v1.3 still passes v1.2 tests).
-1. That an upgraded cluster functions the same as a brand-new cluster of the
-   same version (e.g. a cluster upgraded to v1.3 passes the same v1.3 tests as
-   a newly-created v1.3 cluster).
-
-[hack/jenkins/e2e-runner.sh](http://releases.k8s.io/HEAD/hack/jenkins/e2e-runner.sh) is
-the authoritative source on how to run version-skewed tests, but below is a
-quick-and-dirty tutorial.
- -```sh -# Assume you have two copies of the Kubernetes repository checked out, at -# ./kubernetes and ./kubernetes_old - -# If using GKE: -export KUBERNETES_PROVIDER=gke -export CLUSTER_API_VERSION=${OLD_VERSION} - -# Deploy a cluster at the old version; see above for more details -cd ./kubernetes_old -go run ./hack/e2e.go -v --up - -# Upgrade the cluster to the new version -# -# If using GKE, add --upgrade-target=${NEW_VERSION} -# -# You can target Feature:MasterUpgrade or Feature:ClusterUpgrade -cd ../kubernetes -go run ./hack/e2e.go -v --test --check_version_skew=false --test_args="--ginkgo.focus=\[Feature:MasterUpgrade\]" - -# Run old tests with new kubectl -cd ../kubernetes_old -go run ./hack/e2e.go -v --test --test_args="--kubectl-path=$(pwd)/../kubernetes/cluster/kubectl.sh" -``` - -If you are just testing version-skew, you may want to just deploy at one -version and then test at another version, instead of going through the whole -upgrade process: - -```sh -# With the same setup as above - -# Deploy a cluster at the new version -cd ./kubernetes -go run ./hack/e2e.go -v --up - -# Run new tests with old kubectl -go run ./hack/e2e.go -v --test --test_args="--kubectl-path=$(pwd)/../kubernetes_old/cluster/kubectl.sh" - -# Run old tests with new kubectl -cd ../kubernetes_old -go run ./hack/e2e.go -v --test --test_args="--kubectl-path=$(pwd)/../kubernetes/cluster/kubectl.sh" -``` - -## Kinds of tests - -We are working on implementing clearer partitioning of our e2e tests to make -running a known set of tests easier (#10548). Tests can be labeled with any of -the following labels, in order of increasing precedence (that is, each label -listed below supersedes the previous ones): - - - If a test has no labels, it is expected to run fast (under five minutes), be -able to be run in parallel, and be consistent. - - - `[Slow]`: If a test takes more than five minutes to run (by itself or in -parallel with many other tests), it is labeled `[Slow]`. 
This partition allows
-us to run almost all of our tests quickly in parallel, without waiting for the
-stragglers to finish.
-
- - `[Serial]`: If a test cannot be run in parallel with other tests (e.g. it
-takes too many resources or restarts nodes), it is labeled `[Serial]`, and
-should be run in serial as part of a separate suite.
-
- - `[Disruptive]`: If a test restarts components that might cause other tests
-to fail or break the cluster completely, it is labeled `[Disruptive]`. Any
-`[Disruptive]` test is also assumed to qualify for the `[Serial]` label, but
-need not be labeled as both. These tests are not run against soak clusters to
-avoid restarting components.
-
- - `[Flaky]`: If a test is found to be flaky and we have decided that it's too
-hard to fix in the short term (e.g. it's going to take a full engineer-week), it
-receives the `[Flaky]` label until it is fixed. The `[Flaky]` label should be
-used very sparingly, and should be accompanied by a reference to the issue for
-de-flaking the test, because while a test remains labeled `[Flaky]`, it is not
-monitored closely in CI. `[Flaky]` tests are by default not run, unless a
-`focus` or `skip` argument is explicitly given.
-
- - `[Feature:.+]`: If a test has non-default requirements to run or targets
-some non-core functionality, and thus should not be run as part of the standard
-suite, it receives a `[Feature:.+]` label, e.g. `[Feature:Performance]` or
-`[Feature:Ingress]`. `[Feature:.+]` tests are not run in our core suites,
-instead running in custom suites. If a feature is experimental or alpha and is
-not enabled by default due to being incomplete or potentially subject to
-breaking changes, it does *not* block the merge-queue, and thus should run in
-some separate test suites owned by the feature owner(s)
-(see [Continuous Integration](#continuous-integration) below).
-
-### Viper configuration and hierarchical test parameters
-
-Going forward, e2e test configuration will increasingly be defined using viper, and decreasingly via flags.
-
-Flags in general fall apart once tests become sufficiently complicated. So, even if we could use another flag library, it wouldn't be ideal.
-
-To use viper, rather than flags, to configure your tests:
-
-- Just add "e2e.json" to the current directory you are in, and define parameters in it, e.g. `"kubeconfig":"/tmp/x"`.
-
-Note that advanced testing parameters, and hierarchically defined parameters, are only defined in viper. To see what they are, you can dive into [TestContextType](../../test/e2e/framework/test_context.go).
-
-In time, we intend to add or autogenerate a sample viper configuration that includes all e2e parameters, to ship with Kubernetes.
-
-### Conformance tests
-
-Finally, `[Conformance]` tests represent a subset of the e2e tests we expect to
-pass on **any** Kubernetes cluster. The `[Conformance]` label does not supersede
-any other labels.
-
-As each new release of Kubernetes provides new functionality, the subset of
-tests necessary to demonstrate conformance grows with each release. Conformance
-is thus considered versioned, with the same backwards compatibility guarantees
-as laid out in [our versioning policy](../design/versioning.md#supported-releases).
-Conformance tests for a given version should be run off of the release branch
-that corresponds to that version. Thus `v1.2` conformance tests would be run
-from the head of the `release-1.2` branch. For example:
-
- - A v1.3 development cluster should pass v1.1 and v1.2 conformance tests
-
- - A v1.2 cluster should pass v1.1 and v1.2 conformance tests
-
- - A v1.1 cluster should pass v1.0 and v1.1 conformance tests, and fail v1.2
-conformance tests
-
-Conformance tests are designed to be run with no cloud provider configured. 
-Conformance tests can be run against clusters that have not been created with
-`hack/e2e.go`; just provide a kubeconfig with the appropriate endpoint and
-credentials.
-
-```sh
-# setup for conformance tests
-export KUBECONFIG=/path/to/kubeconfig
-export KUBERNETES_CONFORMANCE_TEST=y
-export KUBERNETES_PROVIDER=skeleton
-
-# run all conformance tests
-go run hack/e2e.go -v --test --test_args="--ginkgo.focus=\[Conformance\]"
-
-# run all parallel-safe conformance tests in parallel
-GINKGO_PARALLEL=y go run hack/e2e.go -v --test --test_args="--ginkgo.focus=\[Conformance\] --ginkgo.skip=\[Serial\]"
-
-# ... and finish up with remaining tests in serial
-go run hack/e2e.go -v --test --test_args="--ginkgo.focus=\[Serial\].*\[Conformance\]"
-```
-
-### Defining Conformance Subset
-
-It is impossible to define the entire space of Conformance tests without knowing
-the future, so instead, we define the complement of conformance tests, below
-(`Please update this with companion PRs as necessary`):
-
- - A conformance test cannot test cloud-provider-specific features (e.g. GCE
-monitoring, S3 Bucketing, ...)
-
- - A conformance test cannot rely on any particular non-standard file system
-permissions granted to containers or users (e.g. sharing writable host /tmp with
-a container)
-
- - A conformance test cannot rely on any binaries that are not required for the
-Linux kernel or for a kubelet to run (e.g. git)
-
- - A conformance test cannot test a feature which obviously cannot be supported
-on a broad range of platforms (e.g. testing of multiple disk mounts, GPUs, high
-density)
-
-## Continuous Integration
-
-A quick overview of how we run e2e CI on Kubernetes.
-
-### What is CI?
-
-We run a battery of `e2e` tests against `HEAD` of the master branch on a
-continuous basis, and block merges via the [submit
-queue](http://submit-queue.k8s.io/) on a subset of those tests if they fail (the
-subset is defined in the
-[munger config](https://github.com/kubernetes/contrib/blob/master/mungegithub/mungers/submit-queue.go)
-via the `jenkins-jobs` flag; note we also block on `kubernetes-build` and
-`kubernetes-test-go` jobs for build, unit, and integration tests).
-
-CI results can be found at [ci-test.k8s.io](http://ci-test.k8s.io), e.g.
-[ci-test.k8s.io/kubernetes-e2e-gce/10594](http://ci-test.k8s.io/kubernetes-e2e-gce/10594).
-
-### What runs in CI?
-
-We run all default tests (those that aren't marked `[Flaky]` or `[Feature:.+]`)
-against GCE and GKE. To minimize the time from regression-to-green-run, we
-partition tests across different jobs:
-
- - `kubernetes-e2e-<provider>` runs all non-`[Slow]`, non-`[Serial]`,
-non-`[Disruptive]`, non-`[Flaky]`, non-`[Feature:.+]` tests in parallel.
-
- - `kubernetes-e2e-<provider>-slow` runs all `[Slow]`, non-`[Serial]`,
-non-`[Disruptive]`, non-`[Flaky]`, non-`[Feature:.+]` tests in parallel.
-
- - `kubernetes-e2e-<provider>-serial` runs all `[Serial]` and `[Disruptive]`,
-non-`[Flaky]`, non-`[Feature:.+]` tests in serial.
-
-We also run non-default tests if the tests exercise general-availability ("GA")
-features that require a special environment to run in, e.g.
-`kubernetes-e2e-gce-scalability` and `kubernetes-kubemark-gce`, which test
-Kubernetes performance.
-
-#### Non-default tests
-
-There are many `[Feature:.+]` tests that we don't run in CI. These tests are for
-features that are experimental (often in the `experimental` API), and aren't
-enabled by default.
-
-### The PR-builder
-
-We also run a battery of tests against every PR before we merge it. These tests
-are equivalent to `kubernetes-gce`: they run all non-`[Slow]`, non-`[Serial]`,
-non-`[Disruptive]`, non-`[Flaky]`, non-`[Feature:.+]` tests in parallel. 
These
-tests are considered "smoke tests" to give a decent signal that the PR doesn't
-break most functionality. Results for your PR can be found at
-[pr-test.k8s.io](http://pr-test.k8s.io), e.g.
-[pr-test.k8s.io/20354](http://pr-test.k8s.io/20354) for #20354.
-
-### Adding a test to CI
-
-As mentioned above, prior to adding a new test, it is a good idea to perform a
-`-ginkgo.dryRun=true` on the system, in order to see if a behavior is already
-being tested, or to determine if it may be possible to augment an existing set
-of tests for a specific use case.
-
-If a behavior does not currently have coverage and a developer wishes to add a
-new e2e test, navigate to the ./test/e2e directory and create a new test using
-the existing suite as a guide.
-
-TODO(#20357): Create a self-documented example which has been disabled, but can
-be copied to create new tests and outlines the capabilities and libraries used.
-
-When writing a test, consult the [kinds of tests](#kinds_of_tests) section above
-to determine how your test should be marked (e.g. `[Slow]`, `[Serial]`;
-remember, by default we assume a test can run in parallel with other tests!).
-
-When first adding a test it should *not* go straight into CI, because failures
-block ordinary development. A test should only be added to CI after it has been
-running in some non-CI suite long enough to establish a track record showing
-that the test does not fail when run against *working* software. Note also that
-tests running in CI are generally running on a well-loaded cluster, so must
-contend for resources; see above about [kinds of tests](#kinds_of_tests).
-
-Generally, a feature starts as `experimental`, and will be run in some suite
-owned by the team developing the feature. If a feature is in beta or GA, it
-*should* block the merge-queue. In moving from experimental to beta or GA, tests
-that are expected to pass by default should simply remove the `[Feature:.+]`
-label, and will be incorporated into our core suites. 
If tests are not expected
-to pass by default (e.g. they require a special environment such as added
-quota), they should remain with the `[Feature:.+]` label, and the suites that
-run them should be incorporated into the
-[munger config](https://github.com/kubernetes/contrib/blob/master/mungegithub/mungers/submit-queue.go)
-via the `jenkins-jobs` flag.
-
-Occasionally, we'll want to add tests to better exercise features that are
-already GA. These tests also shouldn't go straight to CI. They should begin by
-being marked as `[Flaky]` to be run outside of CI, and once a track-record for
-them is established, they may be promoted out of `[Flaky]`.
-
-### Moving a test out of CI
-
-If we have determined that a test is known-flaky and cannot be fixed in the
-short-term, we may move it out of CI indefinitely. This move should be used
-sparingly, as it effectively means that we have no coverage of that test. When a
-test is demoted, it should be marked `[Flaky]`, with a comment accompanying the
-label that references an issue opened to fix the test.
-
-## Performance Evaluation
-
-Another benefit of the e2e tests is the ability to create reproducible loads on
-the system, which can then be used to measure the responsiveness or analyze
-other characteristics of the system. For example, the density tests load the
-system to 30, 50, and 100 pods per node and measure the different
-characteristics of the system, such as throughput, api-latency, etc.
-
-For a good overview of how we analyze performance data, please read the
-following [post](http://blog.kubernetes.io/2015/09/kubernetes-performance-measurements-and.html).
-
-For developers who are interested in doing their own performance analysis, we
-recommend setting up [prometheus](http://prometheus.io/) for data collection,
-and using [promdash](http://prometheus.io/docs/visualization/promdash/) to
-visualize the data. 
There also exists the option of pushing your own metrics in
-from the tests using a
-[prom-push-gateway](http://prometheus.io/docs/instrumenting/pushing/).
-Containers for all of these components can be found
-[here](https://hub.docker.com/u/prom/).
-
-For more accurate measurements, you may wish to set up prometheus external to
-kubernetes in an environment where it can access the major system components
-(api-server, controller-manager, scheduler). This is especially useful when
-attempting to gather metrics in a load-balanced api-server environment, because
-all api-servers can be analyzed independently as well as collectively. On
-startup, a configuration file is passed to prometheus that specifies the
-endpoints that prometheus will scrape, as well as the sampling interval.
-
-```
-#prometheus.conf
-job: {
-  name: "kubernetes"
-  scrape_interval: "1s"
-  target_group: {
-    # apiserver(s)
-    target: "http://localhost:8080/metrics"
-    # scheduler
-    target: "http://localhost:10251/metrics"
-    # controller-manager
-    target: "http://localhost:10252/metrics"
-  }
-}
-```
-
-Once prometheus is scraping the kubernetes endpoints, that data can then be
-plotted using promdash, and alerts can be created against the assortment of
-metrics that kubernetes provides.
-
-## One More Thing
-
-You should also know the [testing conventions](coding-conventions.md#testing-conventions).
-
-**HAPPY TESTING!**
-
-
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/e2e-tests.md?pixel)]()
-
diff --git a/faster_reviews.md b/faster_reviews.md
deleted file mode 100644
index 85568d3f..00000000
--- a/faster_reviews.md
+++ /dev/null
@@ -1,218 +0,0 @@
-# How to get faster PR reviews
-
-Most of what is written here is not at all specific to Kubernetes, but it bears
-being written down in the hope that it will occasionally remind people of "best
-practices" around code reviews.
-
-You've just had a brilliant idea on how to make Kubernetes better. 
Let's call -that idea "Feature-X". Feature-X is not even that complicated. You have a pretty -good idea of how to implement it. You jump in and implement it, fixing a bunch -of stuff along the way. You send your PR - this is awesome! And it sits. And -sits. A week goes by and nobody reviews it. Finally someone offers a few -comments, which you fix up and wait for more review. And you wait. Another -week or two goes by. This is horrible. - -What went wrong? One particular problem that comes up frequently is this - your -PR is too big to review. You've touched 39 files and have 8657 insertions. When -your would-be reviewers pull up the diffs they run away - this PR is going to -take 4 hours to review and they don't have 4 hours right now. They'll get to it -later, just as soon as they have more free time (ha!). - -Let's talk about how to avoid this. - -## 0. Familiarize yourself with project conventions - -* [Development guide](development.md) -* [Coding conventions](coding-conventions.md) -* [API conventions](api-conventions.md) -* [Kubectl conventions](kubectl-conventions.md) - -## 1. Don't build a cathedral in one PR - -Are you sure Feature-X is something the Kubernetes team wants or will accept, or -that it is implemented to fit with other changes in flight? Are you willing to -bet a few days or weeks of work on it? If you have any doubt at all about the -usefulness of your feature or the design - make a proposal doc (in -docs/proposals; for example [the QoS proposal](http://prs.k8s.io/11713)) or a -sketch PR (e.g., just the API or Go interface) or both. Write or code up just -enough to express the idea and the design and why you made those choices, then -get feedback on this. Be clear about what type of feedback you are asking for. -Now, if we ask you to change a bunch of facets of the design, you won't have to -re-write it all. - -## 2. Smaller diffs are exponentially better - -Small PRs get reviewed faster and are more likely to be correct than big ones. 
-Let's face it - attention wanes over time. If your PR takes 60 minutes to -review, I almost guarantee that the reviewer's eye for detail is not as keen in -the last 30 minutes as it was in the first. This leads to multiple rounds of -review when one might have sufficed. In some cases the review is delayed in its -entirety by the need for a large contiguous block of time to sit and read your -code. - -Whenever possible, break up your PRs into multiple commits. Making a series of -discrete commits is a powerful way to express the evolution of an idea or the -different ideas that make up a single feature. There's a balance to be struck, -obviously. If your commits are too small they become more cumbersome to deal -with. Strive to group logically distinct ideas into separate commits. - -For example, if you found that Feature-X needed some "prefactoring" to fit in, -make a commit that JUST does that prefactoring. Then make a new commit for -Feature-X. Don't lump unrelated things together just because you didn't think -about prefactoring. If you need to, fork a new branch, do the prefactoring -there and send a PR for that. If you can explain why you are doing seemingly -no-op work ("it makes the Feature-X change easier, I promise") we'll probably be -OK with it. - -Obviously, a PR with 25 commits is still very cumbersome to review, so use -common sense. - -## 3. Multiple small PRs are often better than multiple commits - -If you can extract whole ideas from your PR and send those as PRs of their own, -you can avoid the painful problem of continually rebasing. Kubernetes is a -fast-moving codebase - lock in your changes ASAP, and make merges be someone -else's problem. - -Obviously, we want every PR to be useful on its own, so you'll have to use -common sense in deciding what can be a PR vs. what should be a commit in a larger -PR. 
Rule of thumb - if this commit or set of commits is directly related to -Feature-X and nothing else, it should probably be part of the Feature-X PR. If -you can plausibly imagine someone finding value in this commit outside of -Feature-X, try it as a PR. - -Don't worry about flooding us with PRs. We'd rather have 100 small, obvious PRs -than 10 unreviewable monoliths. - -## 4. Don't rename, reformat, comment, etc in the same PR - -Often, as you are implementing Feature-X, you find things that are just wrong. -Bad comments, poorly named functions, bad structure, weak type-safety. You -should absolutely fix those things (or at least file issues, please) - but not -in this PR. See the above points - break unrelated changes out into different -PRs or commits. Otherwise your diff will have WAY too many changes, and your -reviewer won't see the forest because of all the trees. - -## 5. Comments matter - -Read up on GoDoc - follow those general rules. If you're writing code and you -think there is any possible chance that someone might not understand why you did -something (or that you won't remember what you yourself did), comment it. If -you think there's something pretty obvious that we could follow up on, add a -TODO. Many code-review comments are about this exact issue. - -## 5. Tests are almost always required - -Nothing is more frustrating than doing a review, only to find that the tests are -inadequate or even entirely absent. Very few PRs can touch code and NOT touch -tests. If you don't know how to test Feature-X - ask! We'll be happy to help -you design things for easy testing or to suggest appropriate test cases. - -## 6. Look for opportunities to generify - -If you find yourself writing something that touches a lot of modules, think hard -about the dependencies you are introducing between packages. Can some of what -you're doing be made more generic and moved up and out of the Feature-X package? 
-Do you need to use a function or type from an otherwise unrelated package? If -so, promote! We have places specifically for hosting more generic code. - -Likewise if Feature-X is similar in form to Feature-W which was checked in last -month and it happens to exactly duplicate some tricky stuff from Feature-W, -consider prefactoring core logic out and using it in both Feature-W and -Feature-X. But do that in a different commit or PR, please. - -## 7. Fix feedback in a new commit - -Your reviewer has finally sent you some feedback on Feature-X. You make a bunch -of changes and ... what? You could patch those into your commits with git -"squash" or "fixup" logic. But that makes your changes hard to verify. Unless -your whole PR is pretty trivial, you should instead put your fixups into a new -commit and re-push. Your reviewer can then look at that commit on its own - so -much faster to review than starting over. - -We might still ask you to clean up your commits at the very end, for the sake -of a more readable history, but don't do this until asked, typically at the -point where the PR would otherwise be tagged LGTM. - -General squashing guidelines: - -* Sausage => squash - - When there are several commits to fix bugs in the original commit(s), address -reviewer feedback, etc. Really we only want to see the end state and commit -message for the whole PR. - -* Layers => don't squash - - When there are independent changes layered upon each other to achieve a single -goal. For instance, writing a code munger could be one commit, applying it could -be another, and adding a precommit check could be a third. One could argue they -should be separate PRs, but there's really no way to test/review the munger -without seeing it applied, and there needs to be a precommit check to ensure the -munged output doesn't immediately get out of date. - -A commit, as much as possible, should be a single logical change. 
Each commit
-should always have a good title line (<70 characters) and include an additional
-description paragraph describing the intended change in more detail. Do not link
-pull requests by `#` in a commit description, because GitHub creates lots of
-spam. Instead, reference other PRs via the PR your commit is in.
-
-## 8. KISS, YAGNI, MVP, etc
-
-Sometimes we need to remind each other of core tenets of software design - Keep
-It Simple, You Aren't Gonna Need It, Minimum Viable Product, and so on. Adding
-features "because we might need it later" is antithetical to software that
-ships. Add the things you need NOW and (ideally) leave room for things you
-might need later - but don't implement them now.
-
-## 9. Push back
-
-We understand that it is hard to imagine, but sometimes we make mistakes. It's
-OK to push back on changes requested during a review. If you have a good reason
-for doing something a certain way, you are absolutely allowed to debate the
-merits of a requested change. You might be overruled, but you might also
-prevail. We're mostly pretty reasonable people. Mostly.
-
-## 10. I'm still getting stalled - help?!
-
-So, you've done all that and you still aren't getting any PR love? Here are some
-things you can do that might help kick a stalled process along:
-
- * Make sure that your PR has an assigned reviewer (assignee in GitHub). If
-this is not the case, reply to the PR comment stream asking for one to be
-assigned.
-
- * Ping the assignee (@username) on the PR comment stream asking for an
-estimate of when they can get to it.
-
- * Ping the assignee by email (many of us have email addresses that are well
-published or are the same as our GitHub handle @google.com or @redhat.com).
-
- * Ping the [team](https://github.com/orgs/kubernetes/teams) (via @team-name)
-that works in the area you're submitting code. 
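One informal way to find the people who work in the area you're touching (an illustrative sketch, not an official process) is to ask git who has committed to the same paths recently. The commands below demonstrate `git shortlog -sn` on a throwaway repository, with `alice` and `bob` as made-up committers:

```shell
# Build a throwaway repo with a couple of fake committers, purely to
# demonstrate the ranking; in a real checkout you would skip this setup.
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git -c user.name=alice -c user.email=a@example.com commit -q --allow-empty -m one
git -c user.name=alice -c user.email=a@example.com commit -q --allow-empty -m two
git -c user.name=bob   -c user.email=b@example.com commit -q --allow-empty -m three

# Rank committers by number of commits; the top names are natural
# candidates to ping for review.
git shortlog -sn HEAD | head -5
```

In a real repository you would typically limit the ranking to the files you changed, e.g. `git shortlog -sn -- <path>`.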
- -If you think you have fixed all the issues in a round of review, and you haven't -heard back, you should ping the reviewer (assignee) on the comment stream with a -"please take another look" (PTAL) or similar comment indicating you are done and -you think it is ready for re-review. In fact, this is probably a good habit for -all PRs. - -One phenomenon of open-source projects (where anyone can comment on any issue) -is the dog-pile - your PR gets so many comments from so many people it becomes -hard to follow. In this situation you can ask the primary reviewer (assignee) -whether they want you to fork a new PR to clear out all the comments. Remember: -you don't HAVE to fix every issue raised by every person who feels like -commenting, but you should at least answer reasonable comments with an -explanation. - -## Final: Use common sense - -Obviously, none of these points are hard rules. There is no document that can -take the place of common sense and good taste. Use your best judgment, but put -a bit of thought into how your work can be made easier to review. If you do -these things your PRs will flow much more easily. - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/faster_reviews.md?pixel)]() - diff --git a/flaky-tests.md b/flaky-tests.md deleted file mode 100644 index 9656bd5f..00000000 --- a/flaky-tests.md +++ /dev/null @@ -1,194 +0,0 @@ -# Flaky tests - -Any test that fails occasionally is "flaky". Since our merges only proceed when -all tests are green, and we have a number of different CI systems running the -tests in various combinations, even a small percentage of flakes results in a -lot of pain for people waiting for their PRs to merge. - -Therefore, it's very important that we write tests defensively. Situations that -"almost never happen" happen with some regularity when run thousands of times in -resource-constrained environments. 
Since flakes can often be quite hard to -reproduce while still being common enough to block merges occasionally, it's -additionally important that the test logs be useful for narrowing down exactly -what caused the failure. - -Note that flakes can occur in unit tests, integration tests, or end-to-end -tests, but probably occur most commonly in end-to-end tests. - -## Filing issues for flaky tests - -Because flakes may be rare, it's very important that all relevant logs be -discoverable from the issue. - -1. Search for the test name. If you find an open issue and you're 90% sure the - flake is exactly the same, add a comment instead of making a new issue. -2. If you make a new issue, you should title it with the test name, prefixed by - "e2e/unit/integration flake:" (whichever is appropriate) -3. Reference any old issues you found in step one. Also, make a comment in the - old issue referencing your new issue, because people monitoring only their - email do not see the backlinks github adds. Alternatively, tag the person or - people who most recently worked on it. -4. Paste, in block quotes, the entire log of the individual failing test, not - just the failure line. -5. Link to durable storage with the rest of the logs. This means (for all the - tests that Google runs) the GCS link is mandatory! The Jenkins test result - link is nice but strictly optional: not only does it expire more quickly, - it's not accessible to non-Googlers. - -## Finding filed flaky test cases - -Find flaky tests issues on GitHub under the [kind/flake issue label][flake]. -There are significant numbers of flaky tests reported on a regular basis and P2 -flakes are under-investigated. Fixing flakes is a quick way to gain expertise -and community goodwill. 
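The title convention in step 2 above can be sketched as a tiny helper. This is hypothetical illustration code, not part of the Kubernetes tree; the function name is our own:

```go
package main

import "fmt"

// flakeIssueTitle builds an issue title following the convention above:
// the failing test's name, prefixed by "e2e flake:", "unit flake:", or
// "integration flake:", whichever is appropriate.
func flakeIssueTitle(kind, testName string) string {
	return fmt.Sprintf("%s flake: %s", kind, testName)
}

func main() {
	fmt.Println(flakeIssueTitle("e2e", "Kubectl client Simple pod should support exec"))
}
```

Titles in this shape make step 1 (searching for an existing issue by test name) much more reliable.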
- -[flake]: https://github.com/kubernetes/kubernetes/issues?q=is%3Aopen+is%3Aissue+label%3Akind%2Fflake - -## Expectations when a flaky test is assigned to you - -Note that we won't randomly assign these issues to you unless you've opted in or -you're part of a group that has opted in. We are more than happy to accept help -from anyone in fixing these, but due to the severity of the problem when merges -are blocked, we need reasonably quick turn-around time on test flakes. Therefore -we have the following guidelines: - -1. If a flaky test is assigned to you, it's more important than anything else -   you're doing unless you can get a special dispensation (in which case it will -   be reassigned). If you have too many flaky tests assigned to you, or you -   have such a dispensation, then it's *still* your responsibility to find new -   owners (this may just mean giving stuff back to the relevant Team or SIG Lead). -2. You should make a reasonable effort to reproduce it. Somewhere between an -   hour and half a day of concentrated effort is "reasonable". It is perfectly -   reasonable to ask for help! -3. If you can reproduce it (or it's obvious from the logs what happened), you -   should then be able to fix it, or in the case where someone is clearly more -   qualified to fix it, reassign it with very clear instructions. -4. PRs that fix or help debug flakes may have the P0 priority set to get them -   through the merge queue as fast as possible. -5. Once you have made a change that you believe fixes a flake, it is conservative -   to keep the issue for the flake open and see if it manifests again after the -   change is merged. -6. If you can't reproduce a flake: __don't just close it!__ Every time a flake comes -   back, at least 2 hours of merge time is wasted. So we need to make monotonic -   progress towards narrowing it down every time a flake occurs. If you can't -   figure it out from the logs, add log messages that would have helped you figure -   it out.
If you make changes to make a flake more reproducible, please link - your pull request to the flake you're working on. -7. If a flake has been open, could not be reproduced, and has not manifested in - 3 months, it is reasonable to close the flake issue with a note saying - why. - -# Reproducing unit test flakes - -Try the [stress command](https://godoc.org/golang.org/x/tools/cmd/stress). - -First, install it: - -``` -$ go install golang.org/x/tools/cmd/stress -``` - -Then build your test binary: - -``` -$ go test -c -race -``` - -Then run it under stress: - -``` -$ stress ./package.test -test.run=FlakyTest -``` - -It runs the command and writes output to `/tmp/gostress-*` files when it fails. -It periodically reports with run counts. Be careful with tests that use the -`net/http/httptest` package; they could exhaust the available ports on your -system! - -# Hunting flaky unit tests in Kubernetes - -Sometimes unit tests are flaky. This means that due to (usually) race -conditions, they will occasionally fail, even though most of the time they pass. - -We have a goal of 99.9% flake-free tests. This means that there is only one -flake in one thousand runs of a test. - -Running a test 1000 times on your own machine can be tedious and time-consuming. -Fortunately, there is a better way to achieve this using Kubernetes. - -_Note: these instructions are mildly hacky for now; as we get run-once semantics -and better logging, they will improve._ - -There is a testing image `brendanburns/flake` up on Docker Hub. We will use -this image to test our fix.
- -Create a replication controller with the following config: - -```yaml -apiVersion: v1 -kind: ReplicationController -metadata: - name: flakecontroller -spec: - replicas: 24 - template: - metadata: - labels: - name: flake - spec: - containers: - - name: flake - image: brendanburns/flake - env: - - name: TEST_PACKAGE - value: pkg/tools - - name: REPO_SPEC - value: https://github.com/kubernetes/kubernetes -``` - -Note that we omit the labels and the selector fields of the replication -controller, because they will be populated from the labels field of the pod -template by default. - -```sh -kubectl create -f ./controller.yaml -``` - -This will spin up 24 instances of the test. They will run to completion, then -exit, and the kubelet will restart them, accumulating more and more runs of the -test. - -You can examine the recent runs of the test by calling `docker ps -a` and -looking for tasks that exited with non-zero exit codes. Unfortunately, `docker -ps -a` only keeps around the exit status of the last 15-20 containers with the -same image, so you have to check them frequently. - -You can use this script to automate checking for failures, assuming your cluster -is running on GCE and has four nodes: - -```sh -echo "" > output.txt -for i in {1..4}; do - echo "Checking kubernetes-node-${i}" - echo "kubernetes-node-${i}:" >> output.txt - gcloud compute ssh "kubernetes-node-${i}" --command="sudo docker ps -a" >> output.txt -done -grep "Exited ([^0])" output.txt -``` - -Eventually you will have sufficient runs for your purposes. At that point you -can delete the replication controller by running: - -```sh -kubectl delete replicationcontroller flakecontroller -``` - -If you do a final check for flakes with `docker ps -a`, ignore tasks that -exited -1, since that's what happens when you stop the replication controller. - -Happy flake hunting!
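Once you have caught a flake this way, the underlying bug is usually a data race. A minimal, hypothetical illustration (not from the Kubernetes tree): the unsynchronized counter below can lose increments when run under `stress` or `-race`, while the mutex-protected version is deterministic.

```go
package main

import (
	"fmt"
	"sync"
)

// racyCount is the classic flake: goroutines increment a shared counter
// without synchronization, so runs under load (or `stress`, or -race)
// occasionally lose updates and return less than n.
func racyCount(n int) int {
	counter := 0
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			counter++ // data race: unsynchronized read-modify-write
		}()
	}
	wg.Wait()
	return counter // may be < n
}

// safeCount fixes the flake with a mutex; the result is always n.
func safeCount(n int) int {
	counter := 0
	var mu sync.Mutex
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			counter++
			mu.Unlock()
		}()
	}
	wg.Wait()
	return counter
}

func main() {
	fmt.Println("safe:", safeCount(1000)) // prints "safe: 1000"
}
```

Running the racy version under `stress ./package.test -test.run=...` or `go test -race` is exactly the kind of reproduction step 2 above asks for.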
- - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/flaky-tests.md?pixel)]() - diff --git a/generating-clientset.md b/generating-clientset.md deleted file mode 100644 index cbb6141c..00000000 --- a/generating-clientset.md +++ /dev/null @@ -1,41 +0,0 @@ -# Generation and release cycle of clientset - -Client-gen is an automatic tool that generates [clientset](../../docs/proposals/client-package-structure.md#high-level-client-sets) based on API types. This doc introduces the use of client-gen, and the release cycle of the generated clientsets. - -## Using client-gen - -The workflow includes three steps: - -1. Marking API types with tags: in `pkg/apis/${GROUP}/${VERSION}/types.go`, mark the types (e.g., Pods) that you want to generate clients for with the `// +genclient=true` tag. If the resource associated with the type is not namespace scoped (e.g., PersistentVolume), you need to append the `nonNamespaced=true` tag as well. - -2. - - a. If you are developing in the k8s.io/kubernetes repository, you just need to run hack/update-codegen.sh. - - - b. If you are running client-gen outside of k8s.io/kubernetes, you need to use the command line argument `--input` to specify the groups and versions of the APIs you want to generate clients for; client-gen will then look into `pkg/apis/${GROUP}/${VERSION}/types.go` and generate clients for the types you have marked with the `genclient` tags. For example, to generate a clientset named "my_release" including clients for api/v1 objects and extensions/v1beta1 objects, you need to run: - -``` -$ client-gen --input="api/v1,extensions/v1beta1" --clientset-name="my_release" -``` - -3. ***Adding expansion methods***: client-gen only generates the common methods, such as CRUD. You can manually add additional methods through the expansion interface. For example, this [file](../../pkg/client/clientset_generated/release_1_5/typed/core/v1/pod_expansion.go) adds additional methods to Pod's client.
As a convention, we put the expansion interface and its methods in file ${TYPE}_expansion.go. In most cases, you don't want to remove existing expansion files. So to make life easier, instead of creating a new clientset from scratch, ***you can copy and rename an existing clientset (so that all the expansion files are copied)***, and then run client-gen. - -## Output of client-gen - -- clientset: the clientset will be generated at `pkg/client/clientset_generated/` by default, and you can change the path via the `--clientset-path` command line argument. - -- Individual typed clients and client for group: They will be generated at `pkg/client/clientset_generated/${clientset_name}/typed/generated/${GROUP}/${VERSION}/` - -## Released clientsets - -If you are contributing code to k8s.io/kubernetes, try to use the release_X_Y clientset in this [directory](../../pkg/client/clientset_generated/). - -If you need a stable Go client to build your own project, please refer to the [client-go repository](https://github.com/kubernetes/client-go). - -We are migrating k8s.io/kubernetes to use client-go as well, see issue [#35159](https://github.com/kubernetes/kubernetes/issues/35159). - - [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/generating-clientset.md?pixel)]() - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/generating-clientset.md?pixel)]() - diff --git a/getting-builds.md b/getting-builds.md deleted file mode 100644 index 86563390..00000000 --- a/getting-builds.md +++ /dev/null @@ -1,52 +0,0 @@ -# Getting Kubernetes Builds - -You can use [hack/get-build.sh](http://releases.k8s.io/HEAD/hack/get-build.sh) -to get a build or to use as a reference on how to get the most recent builds -with curl. With `get-build.sh` you can grab the most recent stable build, the -most recent release candidate, or the most recent build to pass our ci and gce -e2e tests (essentially a nightly build). 
- -Run `./hack/get-build.sh -h` for its usage. - -To get a build at a specific version (v1.1.1) use: - -```console -./hack/get-build.sh v1.1.1 -``` - -To get the latest stable release: - -```console -./hack/get-build.sh release/stable -``` - -Use the "-v" option to print the version number of a build without retrieving -it. For example, the following prints the version number for the latest ci -build: - -```console -./hack/get-build.sh -v ci/latest -``` - -You can also use the gsutil tool to explore the Google Cloud Storage release -buckets. Here are some examples: - -```sh -gsutil cat gs://kubernetes-release-dev/ci/latest.txt # output the latest ci version number -gsutil cat gs://kubernetes-release-dev/ci/latest-green.txt # output the latest ci version number that passed gce e2e -gsutil ls gs://kubernetes-release-dev/ci/v0.20.0-29-g29a55cc/ # list the contents of a ci release -gsutil ls gs://kubernetes-release/release # list all official releases and rcs -``` - -## Install `gsutil` - -Example installation: - -```console -$ curl -sSL https://storage.googleapis.com/pub/gsutil.tar.gz | sudo tar -xz -C /usr/local/src -$ sudo ln -s /usr/local/src/gsutil/gsutil /usr/bin/gsutil -``` - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/getting-builds.md?pixel)]() - diff --git a/git_workflow.png b/git_workflow.png deleted file mode 100644 index 80a66248..00000000 Binary files a/git_workflow.png and /dev/null differ diff --git a/go-code.md b/go-code.md deleted file mode 100644 index 2af055f4..00000000 --- a/go-code.md +++ /dev/null @@ -1,32 +0,0 @@ -# Kubernetes Go Tools and Tips - -Kubernetes is one of the largest open source Go projects, so good tooling and a solid understanding of -Go is critical to Kubernetes development. This document provides a collection of resources, tools -and tips that our developers have found useful.
- -## Recommended Reading - -- [Kubernetes Go development environment](development.md#go-development-environment) -- [The Go Spec](https://golang.org/ref/spec) - The Go Programming Language - Specification. -- [Go Tour](https://tour.golang.org/welcome/2) - Official Go tutorial. -- [Effective Go](https://golang.org/doc/effective_go.html) - A good collection of Go advice. -- [Kubernetes Code conventions](coding-conventions.md) - Style guide for Kubernetes code. -- [Three Go Landmines](https://gist.github.com/lavalamp/4bd23295a9f32706a48f) - Surprising behavior in the Go language. These have caused real bugs! - -## Recommended Tools - -- [godep](https://github.com/tools/godep) - Used for Kubernetes dependency management. See also [Kubernetes godep and dependency management](development.md#godep-and-dependency-management) -- [Go Version Manager](https://github.com/moovweb/gvm) - A handy tool for managing Go versions. -- [godepq](https://github.com/google/godepq) - A tool for analyzing go import trees. - -## Go Tips - -- [Godoc bookmarklet](https://gist.github.com/timstclair/c891fb8aeb24d663026371d91dcdb3fc) - navigate from a github page to the corresponding godoc page. -- Consider making a separate Go tree for each project, which can make overlapping dependency management much easier. Remember to set the `$GOPATH` correctly! Consider [scripting](https://gist.github.com/timstclair/17ca792a20e0d83b06dddef7d77b1ea0) this. -- Emacs users - setup [go-mode](https://github.com/dominikh/go-mode.el) - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/go-code.md?pixel)]() - diff --git a/godep.md b/godep.md deleted file mode 100644 index ddd6c5b1..00000000 --- a/godep.md +++ /dev/null @@ -1,123 +0,0 @@ -# Using godep to manage dependencies - -This document is intended to show a way for managing `vendor/` tree dependencies -in Kubernetes. 
If you are not planning on managing `vendor` dependencies, see -[Godep dependency management](development.md#godep-dependency-management). - -## Alternate GOPATH for installing and using godep - -There are many ways to build and host Go binaries. Here is one way to get -utilities like `godep` installed: - -Create a new GOPATH just for your Go tools and install godep: - -```sh -export GOPATH=$HOME/go-tools -mkdir -p $GOPATH -go get -u github.com/tools/godep -``` - -Add `$GOPATH/bin` to your path. Typically you'd add this to your ~/.profile: - -```sh -export GOPATH=$HOME/go-tools -export PATH=$PATH:$GOPATH/bin -``` - -## Using godep - -Here's a quick walkthrough of one way to use godep to add or update a -Kubernetes dependency into `vendor/`. For more details, please see the -instructions in [godep's documentation](https://github.com/tools/godep). - -1) Devote a directory to this endeavor: - -_Devoting a separate directory is not strictly required, but it is helpful to -separate dependency updates from other changes._ - -```sh -export KPATH=$HOME/code/kubernetes -mkdir -p $KPATH/src/k8s.io -cd $KPATH/src/k8s.io -git clone https://github.com/$YOUR_GITHUB_USERNAME/kubernetes.git # assumes your fork is 'kubernetes' -# Or copy your existing local repo here. IMPORTANT: making a symlink doesn't work. -``` - -2) Set up your GOPATH. - -```sh -# This will *not* let your local builds see packages that exist elsewhere on your system. -export GOPATH=$KPATH -``` - -3) Populate your new GOPATH. - -```sh -cd $KPATH/src/k8s.io/kubernetes -godep restore -``` - -4) Next, you can either add a new dependency or update an existing one. - -To add a new dependency is simple (if a bit slow): - -```sh -cd $KPATH/src/k8s.io/kubernetes -DEP=example.com/path/to/dependency -godep get $DEP/... -# Now change code in Kubernetes to use the dependency. -./hack/godep-save.sh -``` - -To update an existing dependency is a bit more complicated.
Godep has an -`update` command, but none of us can figure out how to actually make it work. -Instead, this procedure seems to work reliably: - -```sh -cd $KPATH/src/k8s.io/kubernetes -DEP=example.com/path/to/dependency -# NB: For the next step, $DEP is assumed be the repo root. If it is actually a -# subdir of the repo, use the repo root here. This is required to keep godep -# from getting angry because `godep restore` left the tree in a "detached head" -# state. -rm -rf $KPATH/src/$DEP # repo root -godep get $DEP/... -# Change code in Kubernetes, if necessary. -rm -rf Godeps -rm -rf vendor -./hack/godep-save.sh -git checkout -- $(git status -s | grep "^ D" | awk '{print $2}' | grep ^Godeps) -``` - -_If `go get -u path/to/dependency` fails with compilation errors, instead try -`go get -d -u path/to/dependency` to fetch the dependencies without compiling -them. This is unusual, but has been observed._ - -After all of this is done, `git status` should show you what files have been -modified and added/removed. Make sure to `git add` and `git rm` them. It is -commonly advised to make one `git commit` which includes just the dependency -update and Godeps files, and another `git commit` that includes changes to -Kubernetes code to use the new/updated dependency. These commits can go into a -single pull request. - -5) Before sending your PR, it's a good idea to sanity check that your -Godeps.json file and the contents of `vendor/ `are ok by running `hack/verify-godeps.sh` - -_If `hack/verify-godeps.sh` fails after a `godep update`, it is possible that a -transitive dependency was added or removed but not updated by godeps. It then -may be necessary to perform a `hack/godep-save.sh` to pick up the transitive -dependency changes._ - -It is sometimes expedient to manually fix the /Godeps/Godeps.json file to -minimize the changes. However without great care this can lead to failures -with `hack/verify-godeps.sh`. This must pass for every PR. 
- -6) If you updated the Godeps, please also update `Godeps/LICENSES` by running -`hack/update-godep-licenses.sh`. - - - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/godep.md?pixel)]() - diff --git a/gubernator-images/filterpage.png b/gubernator-images/filterpage.png deleted file mode 100644 index 2d08bd8e..00000000 Binary files a/gubernator-images/filterpage.png and /dev/null differ diff --git a/gubernator-images/filterpage1.png b/gubernator-images/filterpage1.png deleted file mode 100644 index 838cb0fa..00000000 Binary files a/gubernator-images/filterpage1.png and /dev/null differ diff --git a/gubernator-images/filterpage2.png b/gubernator-images/filterpage2.png deleted file mode 100644 index 63da782e..00000000 Binary files a/gubernator-images/filterpage2.png and /dev/null differ diff --git a/gubernator-images/filterpage3.png b/gubernator-images/filterpage3.png deleted file mode 100644 index 33066d78..00000000 Binary files a/gubernator-images/filterpage3.png and /dev/null differ diff --git a/gubernator-images/skipping1.png b/gubernator-images/skipping1.png deleted file mode 100644 index a5dea440..00000000 Binary files a/gubernator-images/skipping1.png and /dev/null differ diff --git a/gubernator-images/skipping2.png b/gubernator-images/skipping2.png deleted file mode 100644 index b133347e..00000000 Binary files a/gubernator-images/skipping2.png and /dev/null differ diff --git a/gubernator-images/testfailures.png b/gubernator-images/testfailures.png deleted file mode 100644 index 1b331248..00000000 Binary files a/gubernator-images/testfailures.png and /dev/null differ diff --git a/gubernator.md b/gubernator.md deleted file mode 100644 index 3fd2e445..00000000 --- a/gubernator.md +++ /dev/null @@ -1,142 +0,0 @@ -# Gubernator - -*This document is oriented at developers who want to use Gubernator to debug while developing for Kubernetes.* - - - -- [Gubernator](#gubernator) - - [What is Gubernator?](#what-is-gubernator) - - 
[Gubernator Features](#gubernator-features) - - [Test Failures list](#test-failures-list) - - [Log Filtering](#log-filtering) - - [Gubernator for Local Tests](#gubernator-for-local-tests) - - [Future Work](#future-work) - - - -## What is Gubernator? - -[Gubernator](https://k8s-gubernator.appspot.com/) is a webpage for viewing and filtering Kubernetes -test results. - -Gubernator simplifies the debugging process and makes it easier to track down failures by automating many -steps commonly taken in searching through logs, and by offering tools to filter through logs to find relevant lines. -Gubernator automates the steps of finding the failed tests, displaying relevant logs, and determining the -failed pods and the corresponding pod UID, namespace, and container ID. -It also allows for filtering of the log files to display relevant lines based on selected keywords, and -allows for multiple logs to be woven together by timestamp. - -Gubernator runs on Google App Engine and fetches logs stored on Google Cloud Storage. - -## Gubernator Features - -### Test Failures list - -Issues made by k8s-merge-robot will post a link to a page listing the failed tests. -Each failed test comes with the corresponding error log from a junit file and a link -to filter logs for that test. - -Based on the message logged in the junit file, the pod name may be displayed. - -![alt text](gubernator-images/testfailures.png) - -[Test Failures List Example](https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/11721) - -### Log Filtering - -The log filtering page comes with checkboxes and textboxes to aid in filtering. Filtered keywords will be bolded -and lines including keywords will be highlighted. Up to four lines around the line of interest will also be displayed. - -![alt text](gubernator-images/filterpage.png) - -If fewer than 100 lines are skipped, the "... skipping xx lines ..." message can be clicked to expand and show -the hidden lines.
- -Before expansion: -![alt text](gubernator-images/skipping1.png) -After expansion: -![alt text](gubernator-images/skipping2.png) - -If the pod name was displayed in the Test Failures list, it will automatically be included in the filters. -If it is not found in the error message, it can be manually entered into the textbox. Once a pod name -is entered, the Pod UID, Namespace, and ContainerID may be automatically filled in as well. These can be -altered as well. To apply the filter, check off the options corresponding to the filter. - -![alt text](gubernator-images/filterpage1.png) - -To add a filter, type the term to be filtered into the textbox labeled "Add filter:" and press enter. -Additional filters will be displayed as checkboxes under the textbox. - -![alt text](gubernator-images/filterpage3.png) - -To choose which logs to view check off the checkboxes corresponding to the logs of interest. If multiple logs are -included, the "Weave by timestamp" option can weave the selected logs together based on the timestamp in each line. 
- -![alt text](gubernator-images/filterpage2.png) - -[Log Filtering Example 1](https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubelet-gce-e2e-ci/5535/nodelog?pod=pod-configmaps-b5b876cb-3e1e-11e6-8956-42010af0001d&junit=junit_03.xml&wrap=on&logfiles=%2Fkubernetes-jenkins%2Flogs%2Fkubelet-gce-e2e-ci%2F5535%2Fartifacts%2Ftmp-node-e2e-7a5a3b40-e2e-node-coreos-stable20160622-image%2Fkube-apiserver.log&logfiles=%2Fkubernetes-jenkins%2Flogs%2Fkubelet-gce-e2e-ci%2F5535%2Fartifacts%2Ftmp-node-e2e-7a5a3b40-e2e-node-coreos-stable20160622-image%2Fkubelet.log&UID=on&poduid=b5b8a59e-3e1e-11e6-b358-42010af0001d&ns=e2e-tests-configmap-oi12h&cID=tmp-node-e2e-7a5a3b40-e2e-node-coreos-stable20160622-image) - -[Log Filtering Example 2](https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/11721/nodelog?pod=client-containers-a53f813c-503e-11e6-88dd-0242ac110003&junit=junit_19.xml&wrap=on) - - -### Gubernator for Local Tests - -*Currently Gubernator can only be used with remote node e2e tests.* - -**NOTE: Using Gubernator with local tests will publicly upload your test logs to Google Cloud Storage** - -To use Gubernator to view logs from local test runs, set the GUBERNATOR tag to true. -A URL link to view the test results will be printed to the console. -Please note that running with the Gubernator tag will bypass the user confirmation for uploading to GCS. - -```console - -$ make test-e2e-node REMOTE=true GUBERNATOR=true -... -================================================================ -Running gubernator.sh - -Gubernator linked below: -k8s-gubernator.appspot.com/build/yourusername-g8r-logs/logs/e2e-node/timestamp -``` - -The gubernator.sh script can be run after running a remote node e2e test for the same effect. - -```console -$ ./test/e2e_node/gubernator.sh -Do you want to run gubernator.sh and upload logs publicly to GCS? [y/n]y -...
-Gubernator linked below: -k8s-gubernator.appspot.com/build/yourusername-g8r-logs/logs/e2e-node/timestamp -``` - -## Future Work - -Gubernator provides a framework for debugging failures and introduces useful features. -There is still a lot of room for more features and growth to make the debugging process more efficient. - -How to contribute (see https://github.com/kubernetes/test-infra/blob/master/gubernator/README.md) - -* Extend GUBERNATOR flag to all local tests - -* More accurate identification of pod name, container ID, etc. - * Change content of logged strings for failures to include more information - * Better regex in Gubernator - -* Automate discovery of more keywords - * Volume Name - * Disk Name - * Pod IP - -* Clickable API objects in the displayed lines in order to add them as filters - -* Construct story of pod's lifetime - * Have concise view of what a pod went through from when pod was started to failure - -* Improve UI - * Have separate folders of logs in rows instead of in one long column - * Improve interface for adding additional features (maybe instead of textbox and checkbox, have chips) - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/gubernator.md?pixel)]() - diff --git a/how-to-doc.md b/how-to-doc.md deleted file mode 100644 index 891969d7..00000000 --- a/how-to-doc.md +++ /dev/null @@ -1,205 +0,0 @@ -# Document Conventions - -Updated: 11/3/2015 - -*This document is oriented at users and developers who want to write documents -for Kubernetes.* - -**Table of Contents** - - -- [Document Conventions](#document-conventions) - - [General Concepts](#general-concepts) - - [How to Get a Table of Contents](#how-to-get-a-table-of-contents) - - [How to Write Links](#how-to-write-links) - - [How to Include an Example](#how-to-include-an-example) - - [Misc.](#misc) - - [Code formatting](#code-formatting) - - [Syntax Highlighting](#syntax-highlighting) - - [Headings](#headings) - - [What Are 
Mungers?](#what-are-mungers) - - [Auto-added Mungers](#auto-added-mungers) - - [Generate Analytics](#generate-analytics) -- [Generated documentation](#generated-documentation) - - - -## General Concepts - -Each document needs to be munged to ensure its format is correct, links are -valid, etc. To munge a document, simply run `hack/update-munge-docs.sh`. We -verify that all documents have been munged using `hack/verify-munge-docs.sh`. -The scripts for munging documents are called mungers; see the -[mungers section](#what-are-mungers) below if you're curious about how mungers -are implemented or if you want to write one. - -## How to Get a Table of Contents - -Instead of writing a table of contents by hand, insert the following code in your -md file: - -``` - - -``` - -After running `hack/update-munge-docs.sh`, you'll see a table of contents -generated for you, layered based on the headings. - -## How to Write Links - -It's important to follow the rules when writing links. It helps us correctly -version documents for each release. - -Use inline links instead of URLs at all times. When you add internal links to -`docs/` or `examples/`, use relative links; otherwise, use -`http://releases.k8s.io/HEAD/`. For example, avoid using: - -``` -[GCE](https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/gce.md) # note that it's under docs/ -[Kubernetes package](../../pkg/) # note that it's under pkg/ -http://kubernetes.io/ # external link -``` - -Instead, use: - -``` -[GCE](../getting-started-guides/gce.md) # note that it's under docs/ -[Kubernetes package](http://releases.k8s.io/HEAD/pkg/) # note that it's under pkg/ -[Kubernetes](http://kubernetes.io/) # external link -``` - -The above example generates the following links: -[GCE](../getting-started-guides/gce.md), -[Kubernetes package](http://releases.k8s.io/HEAD/pkg/), and -[Kubernetes](http://kubernetes.io/).
- -## How to Include an Example - -While writing examples, you may want to show the content of certain example -files (e.g. [pod.yaml](../../test/fixtures/doc-yaml/user-guide/pod.yaml)). In this case, insert the -following code in the md file: - -``` - - -``` - -Note that you should replace `path/to/file` with the relative path to the -example file. Then `hack/update-munge-docs.sh` will generate a code block with -the content of the specified file, and a link to download it. This way, you save -the time of copying and pasting; better still, the content won't become -out-of-date every time you update the example file. - -For example, the following: - -``` - - -``` - -generates the following after `hack/update-munge-docs.sh`: - - - -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: nginx - labels: - app: nginx -spec: - containers: - - name: nginx - image: nginx - ports: - - containerPort: 80 -``` - -[Download example](../../test/fixtures/doc-yaml/user-guide/pod.yaml?raw=true) - - -## Misc. - -### Code formatting - -Wrap a span of code with single backticks (`` ` ``). To format multiple lines of -code as its own code block, use triple backticks (```` ``` ````). - -### Syntax Highlighting - -Adding syntax highlighting to code blocks improves readability. To do so, in -your fenced block, add an optional language identifier. Some useful identifiers -include `yaml`, `console` (for console output), and `sh` (for shell quote -format). Note that in a console output, put `$ ` at the beginning of each -command and put nothing at the beginning of the output.
Here's an example of a -console code block: - -``` -```console - -$ kubectl create -f test/fixtures/doc-yaml/user-guide/pod.yaml -pod "foo" created - -``` -``` - -which renders as: - -```console -$ kubectl create -f test/fixtures/doc-yaml/user-guide/pod.yaml -pod "foo" created -``` - -### Headings - -Add a single `#` before the document title to create a title heading, and add -`##` to the next level of section title, and so on. Note that the number of `#` -will determine the size of the heading. - -## What Are Mungers? - -Mungers are like gofmt for md docs, which we use to format documents. To use one, -simply place - -``` - - -``` - -in your md files. Note that xxxx is the placeholder for a specific munger. -Appropriate content will be generated and inserted between two brackets after -you run `hack/update-munge-docs.sh`. See -[munger document](http://releases.k8s.io/HEAD/cmd/mungedocs/) for more details. - -## Auto-added Mungers - -After running `hack/update-munge-docs.sh`, you may see some code / mungers in -your md file that are auto-added. You don't have to add them manually. It's -recommended to just read this section as a reference instead of messing with -the following mungers. - -### Generate Analytics - -The ANALYTICS munger inserts a Google Analytics link for this page. - -``` - - -``` - -# Generated documentation - -Some documents can be generated automatically. Run `hack/generate-docs.sh` to -populate your repository with these generated documents, and a list of the files -it generates is placed in `.generated_docs`. To reduce merge conflicts, we do -not want to check these documents in; however, to make the link checker in the -munger happy, we check in a placeholder. `hack/update-generated-docs.sh` puts a -placeholder in the location where each generated document would go, and -`hack/verify-generated-docs.sh` verifies that the placeholder is in place.
- - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/how-to-doc.md?pixel)]() - diff --git a/instrumentation.md b/instrumentation.md deleted file mode 100644 index b73221a9..00000000 --- a/instrumentation.md +++ /dev/null @@ -1,52 +0,0 @@ -## Instrumenting Kubernetes with a new metric - -The following is a step-by-step guide for adding a new metric to the Kubernetes -code base. - -We use the Prometheus monitoring system's golang client library for -instrumenting our code. Once you've picked out a file that you want to add a -metric to, you should: - -1. Import "github.com/prometheus/client_golang/prometheus". - -2. Create a top-level var to define the metric. For this, you have to: - - 1. Pick the type of metric. Use a Gauge for things you want to set to a -particular value, a Counter for things you want to increment, or a Histogram or -Summary for histograms/distributions of values (typically for latency). -Histograms are better if you're going to aggregate the values across jobs, while -summaries are better if you just want the job to give you a useful summary of -the values. - 2. Give the metric a name and description. - 3. Pick whether you want to distinguish different categories of things using -labels on the metric. If so, add "Vec" to the name of the type of metric you -want and add a slice of the label names to the definition. - - https://github.com/kubernetes/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/apiserver/apiserver.go#L53 - https://github.com/kubernetes/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/kubelet/metrics/metrics.go#L31 - -3. Register the metric so that prometheus will know to export it. - - https://github.com/kubernetes/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/kubelet/metrics/metrics.go#L74 - https://github.com/kubernetes/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/apiserver/apiserver.go#L78 - -4. 
Use the metric by calling the appropriate method for your metric type (Set, -Inc/Add, or Observe, respectively for Gauge, Counter, or Histogram/Summary), -first calling WithLabelValues if your metric has any labels - - https://github.com/kubernetes/kubernetes/blob/3ce7fe8310ff081dbbd3d95490193e1d5250d2c9/pkg/kubelet/kubelet.go#L1384 - https://github.com/kubernetes/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/apiserver/apiserver.go#L87 - - -These are the metric type definitions if you're curious to learn about them or -need more information: - -https://github.com/prometheus/client_golang/blob/master/prometheus/gauge.go -https://github.com/prometheus/client_golang/blob/master/prometheus/counter.go -https://github.com/prometheus/client_golang/blob/master/prometheus/histogram.go -https://github.com/prometheus/client_golang/blob/master/prometheus/summary.go - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/instrumentation.md?pixel)]() - diff --git a/issues.md b/issues.md deleted file mode 100644 index fe9e94d9..00000000 --- a/issues.md +++ /dev/null @@ -1,59 +0,0 @@ -## GitHub Issues for the Kubernetes Project - -A quick overview of how we will review and prioritize incoming issues at -https://github.com/kubernetes/kubernetes/issues - -### Priorities - -We use GitHub issue labels for prioritization. The absence of a priority label -means the bug has not been reviewed and prioritized yet. - -We try to apply these priority labels consistently across the entire project, -but if you notice an issue that you believe to be incorrectly prioritized, -please do let us know and we will evaluate your counter-proposal. - -- **priority/P0**: Must be actively worked on as someone's top priority right -now. Stuff is burning. If it's not being actively worked on, someone is expected -to drop what they're doing immediately to work on it. 
Team leaders are
-responsible for making sure that all P0's in their area are being actively
-worked on. Examples include user-visible bugs in core features, broken builds or
-tests, and critical security issues.
-
-- **priority/P1**: Must be staffed and worked on either currently, or very soon,
-ideally in time for the next release.
-
-- **priority/P2**: There appears to be general agreement that this would be good
-to have, but we may not have anyone available to work on it right now or in the
-immediate future. Community contributions would be most welcome in the meantime
-(although it might take a while to get them reviewed if reviewers are fully
-occupied with higher-priority issues, for example immediately before a release).
-
-- **priority/P3**: Possibly useful, but not yet enough support to actually get
-it done. These are mostly place-holders for potentially good ideas, so that they
-don't get completely forgotten, and can be referenced/deduped every time they
-come up.
-
-### Milestones
-
-We additionally use milestones, based on minor version, for determining if a bug
-should be fixed for the next release. These milestones will be especially
-scrutinized as we get to the weeks just before a release. We can release a new
-version of Kubernetes once they are empty. We will have two milestones per minor
-release.
-
-- **vX.Y**: The list of bugs that will be merged for that milestone once ready.
-
-- **vX.Y-candidate**: The list of bugs that we might merge for that milestone. A
-bug shouldn't be in this milestone for more than a day or two towards the end of
-a milestone. It should be triaged either into vX.Y, or moved out of the release
-milestones.
-
-The above priority scheme still applies. P0 and P1 issues are work we feel must
-get done before release. P2 and P3 issues are work we would merge into the
-release if it gets done, but we wouldn't block the release on it.
A few days -before release, we will probably move all P2 and P3 bugs out of that milestone -in bulk. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/issues.md?pixel)]() - diff --git a/kubectl-conventions.md b/kubectl-conventions.md deleted file mode 100644 index 1e94b3ba..00000000 --- a/kubectl-conventions.md +++ /dev/null @@ -1,411 +0,0 @@ -# Kubectl Conventions - -Updated: 8/27/2015 - -**Table of Contents** - - -- [Kubectl Conventions](#kubectl-conventions) - - [Principles](#principles) - - [Command conventions](#command-conventions) - - [Create commands](#create-commands) - - [Rules for extending special resource alias - "all"](#rules-for-extending-special-resource-alias---all) - - [Flag conventions](#flag-conventions) - - [Output conventions](#output-conventions) - - [Documentation conventions](#documentation-conventions) - - [Command implementation conventions](#command-implementation-conventions) - - [Generators](#generators) - - - -## Principles - -* Strive for consistency across commands - -* Explicit should always override implicit - - * Environment variables should override default values - - * Command-line flags should override default values and environment variables - - * `--namespace` should also override the value specified in a specified -resource - -## Command conventions - -* Command names are all lowercase, and hyphenated if multiple words. - -* kubectl VERB NOUNs for commands that apply to multiple resource types. - -* Command itself should not have built-in aliases. - -* NOUNs may be specified as `TYPE name1 name2` or `TYPE/name1 TYPE/name2` or -`TYPE1,TYPE2,TYPE3/name1`; TYPE is omitted when only a single type is expected. - -* Resource types are all lowercase, with no hyphens; both singular and plural -forms are accepted. - -* NOUNs may also be specified by one or more file arguments: `-f file1 -f file2 -...` - -* Resource types may have 2- or 3-letter aliases. 
-
-* Business logic should be decoupled from the command framework, so that it can
-be reused independently of kubectl, cobra, etc.
-  * Ideally, commonly needed functionality would be implemented server-side in
-order to avoid problems typical of "fat" clients and to make it readily
-available to non-Go clients.
-
-* Commands that generate resources, such as `run` or `expose`, should obey
-specific conventions; see [generators](#generators).
-
-* A command group (e.g., `kubectl config`) may be used to group related
-non-standard commands, such as custom generators, mutations, and computations.
-
-
-### Create commands
-
-`kubectl create <resource>` commands fill the gap between "I want to try
-Kubernetes, but I don't know or care what gets created" (`kubectl run`) and "I
-want to create exactly this" (author yaml and run `kubectl create -f`). They
-provide an easy way to create a valid object without having to know the vagaries
-of particular kinds, nested fields, and object key typos that are ignored by the
-yaml/json parser. Because editing an already created object is easier than
-authoring one from scratch, these commands only need to have enough parameters
-to create a valid object and set common immutable fields. They should default as
-much as is reasonably possible. Once such a valid object is created, it can be
-further manipulated using `kubectl edit` or the eventual `kubectl set` commands.
-
-`kubectl create <resource> <special-case>` commands help in cases where you need
-to perform non-trivial configuration generation/transformation tailored for a
-common use case. `kubectl create secret` is a good example: there's a `generic`
-flavor with keys mapping to files, then there's a `docker-registry` flavor that
-is tailored for creating an image pull secret, and there's a `tls` flavor for
-creating tls secrets. You create these as separate commands to get distinct
-flags and separate help that is tailored for the particular usage.
- - -### Rules for extending special resource alias - "all" - -Here are the rules to add a new resource to the `kubectl get all` output. - -* No cluster scoped resources - -* No namespace admin level resources (limits, quota, policy, authorization -rules) - -* No resources that are potentially unrecoverable (secrets and pvc) - -* Resources that are considered "similar" to #3 should be grouped -the same (configmaps) - - -## Flag conventions - -* Flags are all lowercase, with words separated by hyphens - -* Flag names and single-character aliases should have the same meaning across -all commands - -* Flag descriptions should start with an uppercase letter and not have a -period at the end of a sentence - -* Command-line flags corresponding to API fields should accept API enums -exactly (e.g., `--restart=Always`) - -* Do not reuse flags for different semantic purposes, and do not use different -flag names for the same semantic purpose -- grep for `"Flags()"` before adding a -new flag - -* Use short flags sparingly, only for the most frequently used options, prefer -lowercase over uppercase for the most common cases, try to stick to well known -conventions for UNIX commands and/or Docker, where they exist, and update this -list when adding new short flags - - * `-f`: Resource file - * also used for `--follow` in `logs`, but should be deprecated in favor of `-F` - * `-n`: Namespace scope - * `-l`: Label selector - * also used for `--labels` in `expose`, but should be deprecated - * `-L`: Label columns - * `-c`: Container - * also used for `--client` in `version`, but should be deprecated - * `-i`: Attach stdin - * `-t`: Allocate TTY - * `-w`: Watch (currently also used for `--www` in `proxy`, but should be deprecated) - * `-p`: Previous - * also used for `--pod` in `exec`, but deprecated - * also used for `--patch` in `patch`, but should be deprecated - * also used for `--port` in `proxy`, but should be deprecated - * `-P`: Static file prefix in `proxy`, but should be 
deprecated
-  * `-r`: Replicas
-  * `-u`: Unix socket
-  * `-v`: Verbose logging level
-
-
-* `--dry-run`: Don't modify the live state; simulate the mutation and display
-the output. All mutations should support it.
-
-* `--local`: Don't contact the server; just do local read, transformation,
-generation, etc., and display the output
-
-* `--output-version=...`: Convert the output to a different API group/version
-
-* `--short`: Output a compact summary of normal output; the format is subject
-to change and is optimized for reading, not parsing.
-
-* `--validate`: Validate the resource schema
-
-## Output conventions
-
-* By default, output is intended for humans rather than programs
-  * However, affordances are made for simple parsing of `get` output
-
-* Only errors should be directed to stderr
-
-* `get` commands should output one row per resource, and one resource per row
-
-  * Column titles and values should not contain spaces in order to facilitate
-commands that break lines into fields: cut, awk, etc. Instead, use `-` as the
-word separator.
-
-  * By default, `get` output should fit within about 80 columns
-
-    * Eventually we could perhaps auto-detect width
-    * `-o wide` may be used to display additional columns
-
-  * The first column should be the resource name, titled `NAME` (may change this
-to an abbreviation of resource type)
-
-  * NAMESPACE should be displayed as the first column when --all-namespaces is
-specified
-
-  * The last default column should be time since creation, titled `AGE`
-
-  * `-Lkey` should append a column containing the value of the label with key
-`key`, with `<none>` if not present
-
-  * json, yaml, Go template, and jsonpath template formats should be supported
-and encouraged for subsequent processing
-
-  * Users should use --api-version or --output-version to ensure the output
-uses the version they expect
-
-
-* `describe` commands may output on multiple lines and may include information
-from related resources, such as events.
Describe should add additional
-information from related resources that a normal user may need to know - if a
-user would always run "describe resource1" and then immediately want to run a
-"get type2" or "describe resource2", consider including that info. Examples:
-persistent volume claims for pods that reference claims, events for most
-resources, and nodes and the pods scheduled on them. When fetching related
-resources, a targeted field selector should be used in favor of client-side
-filtering of related resources.
-
-* For fields that can be explicitly unset (booleans, integers, structs), the
-output should say `<unset>`. Likewise, for arrays `<none>` should be used; for
-external IP, `<nodes>` should be used; for load balancer, `<pending>` should be
-used. Lastly, `<unknown>` should be used where an unrecognized field type was
-specified.
-
-* Mutations should output TYPE/name verbed by default, where TYPE is singular;
-`-o name` may be used to just display TYPE/name, which may be used to specify
-resources in other commands
-
-## Documentation conventions
-
-* Commands are documented using Cobra; docs are then auto-generated by
-`hack/update-generated-docs.sh`.
-
-  * Use should contain a short usage string for the most common use case(s), not
-an exhaustive specification
-
-  * Short should contain a one-line explanation of what the command does
-    * Short descriptions should start with an uppercase letter and not
-      have a period at the end of a sentence
-    * Short descriptions should (if possible) start with a first person
-      (singular present tense) verb
-
-  * Long may contain multiple lines, including additional information about
-input, output, commonly used flags, etc.
-    * Long descriptions should use proper grammar, start with an uppercase
-      letter and have a period at the end of a sentence
-
-  * Example should contain examples
-    * Start commands with `$`
-    * A comment should precede each example command, and should begin with `#`
-
-* Use "FILENAME" for filenames
-
-* Use "TYPE" for the particular flavor of resource type accepted by kubectl,
-rather than "RESOURCE" or "KIND"
-
-* Use "NAME" for resource names
-
-## Command implementation conventions
-
-For every command there should be a `NewCmd<CommandName>` function that creates
-the command and returns a pointer to a `cobra.Command`, which can later be added
-to other parent commands to compose the structure tree. There should also be a
-`<CommandName>Config` struct with a variable for every flag and argument
-declared by the command (and any other variable required for the command to
-run). This makes tests and mocking easier. The struct ideally exposes three
-methods:
-
-* `Complete`: Completes the struct fields with values that may or may not be
-directly provided by the user, for example, by flag pointers, by the `args`
-slice, by using the Factory, etc.
-
-* `Validate`: performs validation on the struct fields and returns appropriate
-errors.
-
-* `Run`: runs the actual logic of the command, assuming that the struct is
-complete with all required values, and that they are valid.
-
-Sample command skeleton:
-
-```go
-// MineRecommendedName is the recommended command name for kubectl mine.
-const MineRecommendedName = "mine"
-
-// Long command description and examples.
-var (
-	mineLong = templates.LongDesc(`
-		mine which is described here
-		with lots of details.`)
-
-	mineExample = templates.Examples(`
-		# Run my command's first action
-		kubectl mine first_action
-
-		# Run my command's second action on latest stuff
-		kubectl mine second_action --flag`)
-)
-
-// MineConfig contains all the options for running the mine cli command.
-type MineConfig struct {
-	mineLatest bool
-}
-
-// NewCmdMine implements the kubectl mine command.
-func NewCmdMine(parent, name string, f *cmdutil.Factory, out io.Writer) *cobra.Command {
-	opts := &MineConfig{}
-
-	cmd := &cobra.Command{
-		Use:     fmt.Sprintf("%s [--latest]", name),
-		Short:   "Run my command",
-		Long:    mineLong,
-		Example: fmt.Sprintf(mineExample, parent+" "+name),
-		Run: func(cmd *cobra.Command, args []string) {
-			if err := opts.Complete(f, cmd, args, out); err != nil {
-				cmdutil.CheckErr(err)
-			}
-			if err := opts.Validate(); err != nil {
-				cmdutil.CheckErr(cmdutil.UsageError(cmd, err.Error()))
-			}
-			if err := opts.RunMine(); err != nil {
-				cmdutil.CheckErr(err)
-			}
-		},
-	}
-
-	cmd.Flags().BoolVar(&opts.mineLatest, "latest", false, "Use latest stuff")
-	return cmd
-}
-
-// Complete completes all the required options for mine.
-func (o *MineConfig) Complete(f *cmdutil.Factory, cmd *cobra.Command, args []string, out io.Writer) error {
-	return nil
-}
-
-// Validate validates all the required options for mine.
-func (o MineConfig) Validate() error {
-	return nil
-}
-
-// RunMine implements all the necessary functionality for mine.
-func (o MineConfig) RunMine() error {
-	return nil
-}
-```
-
-The `Run` method should contain the business logic of the command;
-as noted in [command conventions](#command-conventions), ideally that logic
-should exist server-side so any client could take advantage of it. Note that
-this is not a mandatory structure and not every command is implemented this way,
-but it is a nice convention, so try to comply with it. As an example, have
-a look at how [kubectl logs](../../pkg/kubectl/cmd/logs.go) is implemented.
-
-## Generators
-
-Generators are kubectl commands that generate resources based on a set of inputs
-(other resources, flags, or a combination of both).
-
-The point of generators is:
-
-* to enable users using kubectl in a scripted fashion to pin to a particular
-behavior which may change in the future. Explicit use of a generator will always
-guarantee that the expected behavior stays the same.
-
-* to enable potential expansion of the generated resources for scenarios other
-than just creation, similar to how -f is supported for most general-purpose
-commands.
-
-Generator commands should obey the following conventions:
-
-* A `--generator` flag should be defined. Users then can choose between
-different generators, if the command supports them (for example, `kubectl run`
-currently supports generators for pods, jobs, replication controllers, and
-deployments), or between different versions of a generator so that users
-depending on a specific behavior may pin to that version (for example, `kubectl
-expose` currently supports two different versions of a service generator).
-
-* Generation should be decoupled from creation. A generator should implement the
-`kubectl.StructuredGenerator` interface and have no dependencies on cobra or the
-Factory.
See, for example, how the first version of the namespace generator is -defined: - -```go -// NamespaceGeneratorV1 supports stable generation of a namespace -type NamespaceGeneratorV1 struct { - // Name of namespace - Name string -} - -// Ensure it supports the generator pattern that uses parameters specified during construction -var _ StructuredGenerator = &NamespaceGeneratorV1{} - -// StructuredGenerate outputs a namespace object using the configured fields -func (g *NamespaceGeneratorV1) StructuredGenerate() (runtime.Object, error) { - if err := g.validate(); err != nil { - return nil, err - } - namespace := &api.Namespace{} - namespace.Name = g.Name - return namespace, nil -} - -// validate validates required fields are set to support structured generation -func (g *NamespaceGeneratorV1) validate() error { - if len(g.Name) == 0 { - return fmt.Errorf("name must be specified") - } - return nil -} -``` - -The generator struct (`NamespaceGeneratorV1`) holds the necessary fields for -namespace generation. It also satisfies the `kubectl.StructuredGenerator` -interface by implementing the `StructuredGenerate() (runtime.Object, error)` -method which configures the generated namespace that callers of the generator -(`kubectl create namespace` in our case) need to create. - -* `--dry-run` should output the resource that would be created, without -creating it. - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/kubectl-conventions.md?pixel)]() - diff --git a/kubemark-guide.md b/kubemark-guide.md deleted file mode 100755 index e914226d..00000000 --- a/kubemark-guide.md +++ /dev/null @@ -1,212 +0,0 @@ -# Kubemark User Guide - -## Introduction - -Kubemark is a performance testing tool which allows users to run experiments on -simulated clusters. The primary use case is scalability testing, as simulated -clusters can be much bigger than the real ones. 
The objective is to expose
-problems with the master components (API server, controller manager or
-scheduler) that appear only on bigger clusters (e.g. small memory leaks).
-
-This document serves as a primer to understand what Kubemark is, what it is not,
-and how to use it.
-
-## Architecture
-
-At a very high level, a Kubemark cluster consists of two parts: real master
-components and a set of “Hollow” Nodes. The prefix “Hollow” means an
-implementation/instantiation of a component with all “moving” parts mocked out.
-The best example is HollowKubelet, which pretends to be an ordinary Kubelet, but
-does not start anything, nor mount any volumes - it just lies that it does. More
-detailed design and implementation details are at the end of this document.
-
-Currently the master components run on a dedicated machine (or machines), and
-HollowNodes run on an ‘external’ Kubernetes cluster. Compared with running the
-master components on the external cluster, this design has the slight advantage
-of completely isolating master resources from everything else.
-
-## Requirements
-
-To run Kubemark you need a Kubernetes cluster (called the `external cluster`)
-for running all your HollowNodes and a dedicated machine for the master. The
-master machine has to be directly routable from the HollowNodes. You also need
-access to a Docker repository.
-
-Currently the scripts are written to be easily usable on GCE, but it should be
-relatively straightforward to port them to different providers or bare metal.
-
-## Common use cases and helper scripts
-
-The common workflow for Kubemark is:
-- starting a Kubemark cluster (on GCE)
-- running e2e tests on the Kubemark cluster
-- monitoring test execution and debugging problems
-- turning down the Kubemark cluster
-
-The descriptions below include comments that should help anyone who wants to
-port Kubemark to a different provider.
-
-### Starting a Kubemark cluster
-
-To start a Kubemark cluster on GCE you need to create an external Kubernetes
-cluster (it can be GCE, GKE or anything else) by yourself, make sure that your
-kubeconfig points to it by default, build a Kubernetes release (e.g. by running
-`make quick-release`) and run the `test/kubemark/start-kubemark.sh` script.
-This script will create a VM for the master components, Pods for the
-HollowNodes, and do all the setup necessary to let them talk to each other. It
-will use the configuration stored in `cluster/kubemark/config-default.sh` - you
-can tweak it however you want, but note that some features may not be
-implemented yet, as the implementation of Hollow components/mocks will probably
-be lagging behind the ‘real’ one. For performance tests the interesting
-variables are `NUM_NODES` and `MASTER_SIZE`. After the start-kubemark script is
-finished you’ll have a ready Kubemark cluster; a kubeconfig file for talking to
-the Kubemark cluster is stored in `test/kubemark/kubeconfig.kubemark`.
-
-Currently we run each HollowNode with a limit of 0.05 of a CPU core and ~60MB
-of memory which, taking into account the default cluster add-ons and FluentD
-running on the ‘external’ cluster, allows running ~17.5 HollowNodes per core.
-
-#### Behind-the-scenes details
-
-The start-kubemark script does quite a lot of things:
-
-- Creates a master machine called hollow-cluster-master and a PD for it (*uses
-gcloud, should be easy to do outside of GCE*)
-
-- Creates a firewall rule which opens port 443\* on the master machine (*uses
-gcloud, should be easy to do outside of GCE*)
-
-- Builds a Docker image for HollowNode from the current repository and pushes it
-to the Docker repository (*GCR for us, using scripts from
-`cluster/gce/util.sh` - it may get tricky outside of GCE*)
-
-- Generates certificates and kubeconfig files, writes a kubeconfig locally to
-`test/kubemark/kubeconfig.kubemark` and creates a Secret which stores the
-kubeconfig for HollowKubelet/HollowProxy use (*uses gcloud to transfer files to
-the master, should be easy to do outside of GCE*).
-
-- Creates a ReplicationController for the HollowNodes and starts them up. (*will
-work exactly the same everywhere as long as MASTER_IP is populated
-correctly, but you’ll need to update the Docker image address if you’re not
-using GCR and the default image name*)
-
-- Waits until all HollowNodes are in the Running phase (*will work exactly the
-same everywhere*)
-
-\* Port 443 is a secured port on the master machine which is used for all
-external communication with the API server. In the last sentence *external*
-means all traffic coming from other machines, including all the Nodes, not only
-from outside of the cluster. Currently local components, i.e. the
-ControllerManager and Scheduler, talk to the API server using the insecure port
-8080.
-
-### Running e2e tests on Kubemark cluster
-
-To run a standard e2e test on the Kubemark cluster created in the previous step,
-execute the `test/kubemark/run-e2e-tests.sh` script. It will configure ginkgo to
-use the Kubemark cluster instead of a real one and start an e2e test. This
-script should not need any changes to work on other cloud providers.
-
-By default (if nothing is passed to it) the script will run a Density '30 pods
-per node' test. If you want to run a different e2e test you just need to provide
-the flags you want to be passed to the `hack/ginkgo-e2e.sh` script, e.g.
-`--ginkgo.focus="Load"` to run the Load test.
-
-By default, at the end of each test, it will delete namespaces and everything
-under them (e.g. events, replication controllers) on the Kubemark master, which
-takes a lot of time. Such work isn't needed in most cases: if you delete your
-Kubemark cluster after running `run-e2e-tests.sh`; if you don't care about
-namespace deletion performance, specifically related to etcd; etc. There is a
-flag that enables you to avoid namespace deletion: `--delete-namespace=false`.
-Adding the flag should let you see in the logs: `Found DeleteNamespace=false,
-skipping namespace deletion!`
-
-### Monitoring test execution and debugging problems
-
-Run-e2e-tests prints the same output on Kubemark as on an ordinary e2e cluster,
-but if you need to dig deeper you need to learn how to debug HollowNodes and how
-the master machine (currently) differs from an ordinary one.
-
-If you need to debug the master machine you can do similar things as you do on
-an ordinary master. The difference between the Kubemark setup and an ordinary
-setup is that in Kubemark etcd is run as a plain Docker container, and all
-master components are run as normal processes. There’s no Kubelet overseeing
-them. Logs are stored in exactly the same place, i.e. the `/var/logs/`
-directory. Because the binaries are not supervised by anything they won't be
-restarted in the case of a crash.
-
-To help you with debugging from inside the cluster, the startup script puts a
-`~/configure-kubectl.sh` script on the master. It downloads the `gcloud` and
-`kubectl` tools and configures kubectl to work on the unsecured master port
-(useful if there are problems with security). After the script is run you can
-use the kubectl command from the master machine to play with the cluster.
-
-Debugging HollowNodes is a bit more tricky, because if you experience a problem
-on one of them you need to learn which hollow-node pod corresponds to a given
-HollowNode known by the master. During self-registration HollowNodes provide
-their cluster IPs as Names, which means that if you need to find a HollowNode
-named `10.2.4.5` you just need to find a Pod in the external cluster with this
-cluster IP. There’s a helper script
-`test/kubemark/get-real-pod-for-hollow-node.sh` that does this for you.
-
-When you have a Pod name you can use `kubectl logs` on the external cluster to
-get its logs, or use a `kubectl describe pod` call to find the external Node on
-which this particular HollowNode is running so you can ssh to it.
-
-E.g. say you want to see the logs of the HollowKubelet on which pod `my-pod` is
-running. To do so you can execute:
-
-```
-$ kubectl --kubeconfig=kubernetes/test/kubemark/kubeconfig.kubemark describe pod my-pod
-```
-
-which outputs the pod description, including the line:
-
-```
-Node: 1.2.3.4/1.2.3.4
-```
-
-To find the `hollow-node` pod corresponding to node `1.2.3.4` you use the
-aforementioned script:
-
-```
-$ kubernetes/test/kubemark/get-real-pod-for-hollow-node.sh 1.2.3.4
-```
-
-which will output the line:
-
-```
-hollow-node-1234
-```
-
-Now you just use an ordinary kubectl command to get the logs:
-
-```
-kubectl --namespace=kubemark logs hollow-node-1234
-```
-
-All those things should work exactly the same on all cloud providers.
-
-### Turning down Kubemark cluster
-
-On GCE you just need to execute the `test/kubemark/stop-kubemark.sh` script,
-which will delete the HollowNode ReplicationController and all the resources for
-you. On other providers you’ll need to delete all this stuff by yourself.
-
-## Some current implementation details
-
-The Kubemark master uses exactly the same binaries as ordinary Kubernetes does.
-This means that it will never be out of date.
On the other hand, HollowNodes use
-an existing fake for the Kubelet (called SimpleKubelet), which mocks its runtime
-manager with `pkg/kubelet/dockertools/fake_manager.go`, where most of the logic
-sits. Because there’s no easy way of mocking other managers (e.g. the
-VolumeManager), they are not supported in Kubemark (e.g. we can’t schedule Pods
-with volumes in them yet).
-
-As time passes, more fakes will probably be plugged into HollowNodes, but
-it’s crucial to keep them as simple as possible, to allow running a big number
-of HollowNodes on a single core.
-
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/kubemark-guide.md?pixel)]()
-
diff --git a/local-cluster/docker.md b/local-cluster/docker.md
deleted file mode 100644
index 78768f80..00000000
--- a/local-cluster/docker.md
+++ /dev/null
@@ -1,269 +0,0 @@
-**Stop. This guide has been superseded by [Minikube](https://github.com/kubernetes/minikube) which is the recommended method of running Kubernetes on your local machine.**
-
-
-The following instructions show you how to set up a simple, single node Kubernetes cluster using Docker.
-
-Here's a diagram of what the final result will look like:
-
-![Kubernetes Single Node on Docker](k8s-singlenode-docker.png)
-
-## Prerequisites
-
-**Note: These steps have not been tested with the [Docker For Mac or Docker For Windows beta programs](https://blog.docker.com/2016/03/docker-for-mac-windows-beta/).**
-
-1. You need to have Docker version >= "1.10" installed on the machine.
-2. Enable mount propagation. Hyperkube runs in a container which has to mount volumes for other containers, for example in the case of persistent storage. The required steps depend on the init system.
-
-   - In the case of **systemd**, change MountFlags in the Docker unit file to shared.
-
- ```shell
- DOCKER_CONF=$(systemctl cat docker | head -1 | awk '{print $2}')
- sed -i.bak 's/^\(MountFlags=\).*/\1shared/' $DOCKER_CONF
- systemctl daemon-reload
- systemctl restart docker
- ```
-
- **Otherwise**, manually set the mount point used by Hyperkube to be shared:
-
- ```shell
- mkdir -p /var/lib/kubelet
- mount --bind /var/lib/kubelet /var/lib/kubelet
- mount --make-shared /var/lib/kubelet
- ```
-
-
-### Run it
-
-1. Decide which Kubernetes version to use. Set the `${K8S_VERSION}` variable to a version of Kubernetes >= "v1.2.0".
-
- - If you'd like to use the current **stable** version of Kubernetes, run the following:
-
- ```sh
- export K8S_VERSION=$(curl -sS https://storage.googleapis.com/kubernetes-release/release/stable.txt)
- ```
-
- and for the **latest** available version (including unstable releases):
-
- ```sh
- export K8S_VERSION=$(curl -sS https://storage.googleapis.com/kubernetes-release/release/latest.txt)
- ```
-
-2. Start Hyperkube
-
- ```shell
- export ARCH=amd64
- docker run -d \
-     --volume=/sys:/sys:rw \
-     --volume=/var/lib/docker/:/var/lib/docker:rw \
-     --volume=/var/lib/kubelet/:/var/lib/kubelet:rw,shared \
-     --volume=/var/run:/var/run:rw \
-     --net=host \
-     --pid=host \
-     --privileged \
-     --name=kubelet \
-     gcr.io/google_containers/hyperkube-${ARCH}:${K8S_VERSION} \
-     /hyperkube kubelet \
-         --hostname-override=127.0.0.1 \
-         --api-servers=http://localhost:8080 \
-         --config=/etc/kubernetes/manifests \
-         --cluster-dns=10.0.0.10 \
-         --cluster-domain=cluster.local \
-         --allow-privileged --v=2
- ```
-
- > Note that `--cluster-dns` and `--cluster-domain` are used to deploy DNS; omit them if DNS is not needed.
-
- > If you would like to mount an external device as a volume, add `--volume=/dev:/dev` to the command above.
It may, however, cause some problems described in [#18230](https://github.com/kubernetes/kubernetes/issues/18230).
-
- > Architectures other than `amd64` are experimental and sometimes unstable, but feel free to try them out! Valid values: `arm`, `arm64` and `ppc64le`. ARM is available with Kubernetes version `v1.3.0-alpha.2` and higher. ARM 64-bit and PowerPC 64 little-endian are available with `v1.3.0-alpha.3` and higher. Track progress on multi-arch support [here](https://github.com/kubernetes/kubernetes/issues/17981).
-
- > If you are behind a proxy, you need to pass the proxy setup to curl in the containers to pull the certificates. Create a .curlrc under the /root folder (because the containers run as root) with the following line:
-
- ```
- proxy = :
- ```
-
- This actually runs the kubelet, which in turn runs a [pod](http://kubernetes.io/docs/user-guide/pods/) that contains the other master components.
-
- **SECURITY WARNING:** services exposed via Kubernetes using Hyperkube are available on the host node's public network interface / IP address. Because of this, this guide is not suitable for any host node/server that is directly internet accessible. Refer to [#21735](https://github.com/kubernetes/kubernetes/issues/21735) for additional info.
-
-### Download `kubectl`
-
-At this point you should have a running Kubernetes cluster. You can test it out
-by downloading the kubectl binary for `${K8S_VERSION}` (in this example: `{{page.version}}.0`).
-
-
-Downloads:
-
- - `linux/amd64`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/amd64/kubectl
- - `linux/386`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/386/kubectl
- - `linux/arm`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/arm/kubectl
- - `linux/arm64`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/arm64/kubectl
- - `linux/ppc64le`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/ppc64le/kubectl
- - `OS X/amd64`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/darwin/amd64/kubectl
- - `OS X/386`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/darwin/386/kubectl
- - `windows/amd64`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/windows/amd64/kubectl.exe
- - `windows/386`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/windows/386/kubectl.exe
-
-The generic download path is:
-
-```
-http://storage.googleapis.com/kubernetes-release/release/${K8S_VERSION}/bin/${GOOS}/${GOARCH}/${K8S_BINARY}
-```
-
-An example install with `linux/amd64`:
-
-```
-curl -sSL "https://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/amd64/kubectl" > /usr/bin/kubectl
-chmod +x /usr/bin/kubectl
-```
-
-On OS X, to make the API server accessible locally, set up an SSH tunnel:
-
-```shell
-docker-machine ssh `docker-machine active` -N -L 8080:localhost:8080
-```
-
-Setting up an SSH tunnel works for remote Docker hosts as well.
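
To make the generic download path above concrete, here is a small sketch that expands the variables into a full URL (the version shown is just an example, substitute your own):

```shell
# Build a kubectl download URL from the components described above.
K8S_VERSION=v1.2.0   # example version
GOOS=linux
GOARCH=amd64
K8S_BINARY=kubectl
echo "http://storage.googleapis.com/kubernetes-release/release/${K8S_VERSION}/bin/${GOOS}/${GOARCH}/${K8S_BINARY}"
# prints http://storage.googleapis.com/kubernetes-release/release/v1.2.0/bin/linux/amd64/kubectl
```

Piping the resulting URL to `curl` is equivalent to the explicit install example above.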
-
-(Optional) Create a Kubernetes cluster configuration:
-
-```shell
-kubectl config set-cluster test-doc --server=http://localhost:8080
-kubectl config set-context test-doc --cluster=test-doc
-kubectl config use-context test-doc
-```
-
-### Test it out
-
-List the nodes in your cluster by running:
-
-```shell
-kubectl get nodes
-```
-
-This should print:
-
-```shell
-NAME        STATUS    AGE
-127.0.0.1   Ready     1h
-```
-
-### Run an application
-
-```shell
-kubectl run nginx --image=nginx --port=80
-```
-
-Now run `docker ps`; you should see nginx running. You may need to wait a few minutes for the image to get pulled.
-
-### Expose it as a service
-
-```shell
-kubectl expose deployment nginx --port=80
-```
-
-Run the following command to obtain the cluster-local IP of the service we just created:
-
-```shell{% raw %}
-ip=$(kubectl get svc nginx --template={{.spec.clusterIP}})
-echo $ip
-{% endraw %}```
-
-Hit the webserver with this IP:
-
-```shell{% raw %}
-curl $ip
-{% endraw %}```
-
-On OS X, since Docker is running inside a VM, run the following command instead:
-
-```shell
-docker-machine ssh `docker-machine active` curl $ip
-```
-
-## Deploy a DNS
-
-Read the [documentation for manually deploying a DNS](http://kubernetes.io/docs/getting-started-guides/docker-multinode/#deploy-dns-manually-for-v12x) for instructions.
-
-### Turning down your cluster
-
-1. Delete the nginx service and deployment:
-
-If you plan on re-creating your nginx deployment and service, you will need to clean them up first.
-
-```shell
-kubectl delete service,deployments nginx
-```
-
-2. Delete all the containers including the kubelet:
-
-```shell
-docker rm -f kubelet
-docker rm -f `docker ps | grep k8s | awk '{print $1}'`
-```
-
-3.
Clean up the filesystem:
-
-On OS X, first ssh into the Docker VM:
-
-```shell
-docker-machine ssh `docker-machine active`
-```
-
-```shell
-grep /var/lib/kubelet /proc/mounts | awk '{print $2}' | sudo xargs -n1 umount
-sudo rm -rf /var/lib/kubelet
-```
-
-### Troubleshooting
-
-#### Node is in `NotReady` state
-
-If you see your node as `NotReady`, it's possible that your OS does not have memcg enabled.
-
-1. Your kernel should support memory accounting. Ensure that the
-following configs are turned on in your Linux kernel:
-
-```shell
-CONFIG_RESOURCE_COUNTERS=y
-CONFIG_MEMCG=y
-```
-
-2. Enable memory accounting in the kernel, at boot, with command-line
-parameters as follows:
-
-```shell
-GRUB_CMDLINE_LINUX="cgroup_enable=memory"
-```
-
-NOTE: The above is specifically for GRUB2.
-You can check the command-line parameters passed to your kernel by looking at the
-output of /proc/cmdline:
-
-```shell
-$ cat /proc/cmdline
-BOOT_IMAGE=/boot/vmlinuz-3.18.4-aufs root=/dev/sda5 ro cgroup_enable=memory
-```
-
-## Support Level
-
-
-IaaS Provider        | Config. Mgmt | OS     | Networking | Conforms | Support Level
--------------------- | ------------ | ------ | ---------- | ---------| ----------------------------
-Docker Single Node   | custom       | N/A    | local      |          | Project ([@brendandburns](https://github.com/brendandburns))
-
-
-
-## Further reading
-
-Please see the [Kubernetes docs](http://kubernetes.io/docs) for more details on administering
-and using a Kubernetes cluster.
-
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/local-cluster/docker.md?pixel)]()
-
diff --git a/local-cluster/k8s-singlenode-docker.png b/local-cluster/k8s-singlenode-docker.png
deleted file mode 100644
index 5ebf8126..00000000
Binary files a/local-cluster/k8s-singlenode-docker.png and /dev/null differ
diff --git a/local-cluster/local.md b/local-cluster/local.md
deleted file mode 100644
index 60bd5a8f..00000000
--- a/local-cluster/local.md
+++ /dev/null
@@ -1,125 +0,0 @@
-**Stop. This guide has been superseded by [Minikube](https://github.com/kubernetes/minikube), which is the recommended way to run Kubernetes on your local machine.**
-
-### Requirements
-
-#### Linux
-
-Not running Linux? Consider running Linux in a local virtual machine with [vagrant](https://www.vagrantup.com/), or on a cloud provider like Google Compute Engine.
-
-#### Docker
-
-You need at least [Docker](https://docs.docker.com/installation/#installation)
-1.8.3. Ensure the Docker daemon is running and can be contacted (try `docker
-ps`). Some of the Kubernetes components need to run as root, which normally
-works fine with Docker.
-
-#### etcd
-
-You need [etcd](https://github.com/coreos/etcd/releases) installed and available in your ``$PATH``.
-
-#### go
-
-You need [go](https://golang.org/doc/install) version 1.4 or later installed and in your ``$PATH``.
-
-### Starting the cluster
-
-First, you need to [download Kubernetes](http://kubernetes.io/docs/getting-started-guides/binary_release/). Then open a separate terminal tab
-and run the following (since you need sudo access to start/stop Kubernetes daemons, it is easiest to run the entire script as root):
-
-```shell
-cd kubernetes
-hack/local-up-cluster.sh
-```
-
-This will build and start a lightweight local cluster, consisting of a master
-and a single node. Type Control-C to shut it down.
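
Before running the script, it can help to sanity-check the requirements above. A minimal sketch (the `check_bins` helper is hypothetical, not part of the Kubernetes tree):

```shell
# check_bins: print a "missing:" line for each required command not on $PATH.
check_bins() {
  for bin in "$@"; do
    command -v "$bin" >/dev/null 2>&1 || echo "missing: $bin"
  done
}

# local-up-cluster.sh needs etcd, go, and docker available:
check_bins etcd go docker
```

No output means everything the script needs is on your `$PATH`.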
-
-You can use the cluster/kubectl.sh script to interact with the local cluster. hack/local-up-cluster.sh will
-print the commands to run to point kubectl at the local cluster.
-
-
-### Running a container
-
-Your cluster is running, and you want to start running containers!
-
-You can now use any of the cluster/kubectl.sh commands to interact with your local setup.
-
-```shell
-export KUBERNETES_PROVIDER=local
-cluster/kubectl.sh get pods
-cluster/kubectl.sh get services
-cluster/kubectl.sh get deployments
-cluster/kubectl.sh run my-nginx --image=nginx --replicas=2 --port=80
-
-## begin wait for provision to complete; you can monitor the docker pull by opening a new terminal
-  sudo docker images
-  ## you should see it pulling the nginx image; once the above command returns it
-  sudo docker ps
-  ## you should see your container running!
-  exit
-## end wait
-
-## create a service for nginx, which serves on port 80
-cluster/kubectl.sh expose deployment my-nginx --port=80 --name=my-nginx
-
-## introspect Kubernetes!
-cluster/kubectl.sh get pods
-cluster/kubectl.sh get services
-cluster/kubectl.sh get deployments
-
-## Test the nginx service with the IP/port from the "get services" command
-curl http://10.X.X.X:80/
-```
-
-### Running a user defined pod
-
-Note the difference between a [container](http://kubernetes.io/docs/user-guide/containers/)
-and a [pod](http://kubernetes.io/docs/user-guide/pods/). Since you only asked for the former, Kubernetes will create a wrapper pod for you.
-However, you cannot view the nginx start page on localhost. To verify that nginx is running, you need to run `curl` within the docker container (try `docker exec`).
-
-You can control the specification of a pod via a user-defined manifest, and reach nginx through your browser on the port specified therein:
-
-```shell
-cluster/kubectl.sh create -f test/fixtures/doc-yaml/user-guide/pod.yaml
-```
-
-Congratulations!
-
-### FAQs
-
-#### I cannot reach service IPs on the network.
-
-Some firewall software that uses iptables may not interact well with
-kubernetes. If you have trouble around networking, try disabling any
-firewall or other iptables-using systems first. Also, you can check
-whether SELinux is blocking anything by running a command such as `journalctl --since yesterday | grep avc`.
-
-By default the IP range for service cluster IPs is 10.0.*.* - depending on your
-docker installation, this may conflict with IPs for containers. If you find
-containers running with IPs in this range, edit hack/local-up-cluster.sh and
-change the service-cluster-ip-range flag to something else.
-
-#### I changed Kubernetes code, how do I run it?
-
-```shell
-cd kubernetes
-hack/build-go.sh
-hack/local-up-cluster.sh
-```
-
-#### kubectl claims to start a container but `get pods` and `docker ps` don't show it.
-
-One or more of the Kubernetes daemons might've crashed. Tail the [logs](http://kubernetes.io/docs/admin/cluster-troubleshooting/#looking-at-logs) of each in /tmp.
-
-```shell
-$ ls /tmp/kube*.log
-$ tail -f /tmp/kube-apiserver.log
-```
-
-#### The pods fail to connect to the services by host names
-
-The local-up-cluster.sh script doesn't start a DNS service. A similar situation is described [here](http://issue.k8s.io/6667). You can start one manually.
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/local-cluster/local.md?pixel)]()
-
diff --git a/local-cluster/vagrant.md b/local-cluster/vagrant.md
deleted file mode 100644
index 0f0fe91c..00000000
--- a/local-cluster/vagrant.md
+++ /dev/null
@@ -1,397 +0,0 @@
-Running Kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/develop on your local machine (Linux, Mac OS X).
-
-### Prerequisites
-
-1. Install the latest version (>= 1.7.4) of [Vagrant](http://www.vagrantup.com/downloads.html)
-2. Install one of:
-   1. The latest version of [Virtual Box](https://www.virtualbox.org/wiki/Downloads)
-   2.
[VMWare Fusion](https://www.vmware.com/products/fusion/) version 5 or greater, as well as the appropriate [Vagrant VMWare Fusion provider](https://www.vagrantup.com/vmware)
-   3. [VMWare Workstation](https://www.vmware.com/products/workstation/) version 9 or greater, as well as the [Vagrant VMWare Workstation provider](https://www.vagrantup.com/vmware)
-   4. [Parallels Desktop](https://www.parallels.com/products/desktop/) version 9 or greater, as well as the [Vagrant Parallels provider](https://parallels.github.io/vagrant-parallels/)
-   5. libvirt with KVM, with hardware virtualization support enabled, as well as [vagrant-libvirt](https://github.com/pradels/vagrant-libvirt). Fedora provides an official RPM, so it is possible to use `yum install vagrant-libvirt`
-
-### Setup
-
-Setting up a cluster is as simple as running:
-
-```sh
-export KUBERNETES_PROVIDER=vagrant
-curl -sS https://get.k8s.io | bash
-```
-
-Alternatively, you can download a [Kubernetes release](https://github.com/kubernetes/kubernetes/releases) and extract the archive. To start your local cluster, open a shell and run:
-
-```sh
-cd kubernetes
-
-export KUBERNETES_PROVIDER=vagrant
-./cluster/kube-up.sh
-```
-
-The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster management scripts which variant to use. If you forget to set this, the assumption is that you are running on Google Compute Engine.
-
-By default, the Vagrant setup will create a single master VM (called kubernetes-master) and one node (called kubernetes-node-1). Each VM will take 1 GB, so make sure you have at least 2 GB to 4 GB of free memory (plus appropriate free disk space).
-
-If you'd like more than one node, set the `NUM_NODES` environment variable to the number you want:
-
-```sh
-export NUM_NODES=3
-```
-
-Vagrant will provision each machine in the cluster with all the necessary components to run Kubernetes. The initial setup can take a few minutes to complete on each machine.
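
As a rough rule of thumb for the memory guidance above (about 1 GB per VM, one master plus `NUM_NODES` nodes), you can estimate the requirement like so. This is a back-of-the-envelope sketch, not part of the cluster scripts:

```shell
# Rough estimate: 1 GB for the master VM plus 1 GB per node VM.
NUM_NODES=3
echo "VMs: $((NUM_NODES + 1)), approx. memory needed: $(( (NUM_NODES + 1) * 1024 )) MB"
# prints: VMs: 4, approx. memory needed: 4096 MB
```

Compare the result against your free memory before raising `NUM_NODES`.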
- -If you installed more than one Vagrant provider, Kubernetes will usually pick the appropriate one. However, you can override which one Kubernetes will use by setting the [`VAGRANT_DEFAULT_PROVIDER`](https://docs.vagrantup.com/v2/providers/default.html) environment variable: - -```sh -export VAGRANT_DEFAULT_PROVIDER=parallels -export KUBERNETES_PROVIDER=vagrant -./cluster/kube-up.sh -``` - -By default, each VM in the cluster is running Fedora. - -To access the master or any node: - -```sh -vagrant ssh master -vagrant ssh node-1 -``` - -If you are running more than one node, you can access the others by: - -```sh -vagrant ssh node-2 -vagrant ssh node-3 -``` - -Each node in the cluster installs the docker daemon and the kubelet. - -The master node instantiates the Kubernetes master components as pods on the machine. - -To view the service status and/or logs on the kubernetes-master: - -```console -[vagrant@kubernetes-master ~] $ vagrant ssh master -[vagrant@kubernetes-master ~] $ sudo su - -[root@kubernetes-master ~] $ systemctl status kubelet -[root@kubernetes-master ~] $ journalctl -ru kubelet - -[root@kubernetes-master ~] $ systemctl status docker -[root@kubernetes-master ~] $ journalctl -ru docker - -[root@kubernetes-master ~] $ tail -f /var/log/kube-apiserver.log -[root@kubernetes-master ~] $ tail -f /var/log/kube-controller-manager.log -[root@kubernetes-master ~] $ tail -f /var/log/kube-scheduler.log -``` - -To view the services on any of the nodes: - -```console -[vagrant@kubernetes-master ~] $ vagrant ssh node-1 -[vagrant@kubernetes-master ~] $ sudo su - -[root@kubernetes-master ~] $ systemctl status kubelet -[root@kubernetes-master ~] $ journalctl -ru kubelet - -[root@kubernetes-master ~] $ systemctl status docker -[root@kubernetes-master ~] $ journalctl -ru docker -``` - -### Interacting with your Kubernetes cluster with Vagrant. - -With your Kubernetes cluster up, you can manage the nodes in your cluster with the regular Vagrant commands. 
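
Since the nodes follow the kubernetes-node-N naming convention described above, the regular Vagrant commands are easy to script. A sketch that prints the command to run against every node (it only echoes, so nothing is executed; `NUM_NODES=2` is a hypothetical value):

```shell
# Print a `vagrant ssh` command for each node in the cluster.
NUM_NODES=2
for i in $(seq 1 "$NUM_NODES"); do
  echo vagrant ssh "node-$i" -c 'sudo systemctl status kubelet'
done
```

Drop the `echo` to actually run the commands against a live cluster.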
-
-To push updates to new Kubernetes code after making source changes:
-
-```sh
-./cluster/kube-push.sh
-```
-
-To stop and then restart the cluster:
-
-```sh
-vagrant halt
-./cluster/kube-up.sh
-```
-
-To destroy the cluster:
-
-```sh
-vagrant destroy
-```
-
-Once your Vagrant machines are up and provisioned, the first thing to do is to check that you can use the `kubectl.sh` script.
-
-You may need to build the binaries first; you can do this with `make`:
-
-```console
-$ ./cluster/kubectl.sh get nodes
-
-NAME         LABELS
-10.245.1.4
-10.245.1.5
-10.245.1.3
-```
-
-### Authenticating with your master
-
-When using the vagrant provider in Kubernetes, the `cluster/kubectl.sh` script will cache your credentials in a `~/.kubernetes_vagrant_auth` file so you will not be prompted for them in the future.
-
-```sh
-cat ~/.kubernetes_vagrant_auth
-```
-
-```json
-{ "User": "vagrant",
-  "Password": "vagrant",
-  "CAFile": "/home/k8s_user/.kubernetes.vagrant.ca.crt",
-  "CertFile": "/home/k8s_user/.kubecfg.vagrant.crt",
-  "KeyFile": "/home/k8s_user/.kubecfg.vagrant.key"
-}
-```
-
-You should now be set to use the `cluster/kubectl.sh` script. For example, try to list the nodes that you have started with:
-
-```sh
-./cluster/kubectl.sh get nodes
-```
-
-### Running containers
-
-Your cluster is running; you can list the nodes in it:
-
-```sh
-$ ./cluster/kubectl.sh get nodes
-
-NAME         LABELS
-10.245.2.4
-10.245.2.3
-10.245.2.2
-```
-
-Now start running some containers!
-
-You can now use any of the `cluster/kube-*.sh` commands to interact with your VM machines.
-Before you start a container there will be no pods, services, or replication controllers.
-
-```sh
-$ ./cluster/kubectl.sh get pods
-NAME   READY   STATUS   RESTARTS   AGE
-
-$ ./cluster/kubectl.sh get services
-NAME   CLUSTER_IP   EXTERNAL_IP   PORT(S)   SELECTOR   AGE
-
-$ ./cluster/kubectl.sh get replicationcontrollers
-CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR   REPLICAS
-```
-
-Start a container running nginx with a replication controller and three replicas:
-
-```sh
-$ ./cluster/kubectl.sh run my-nginx --image=nginx --replicas=3 --port=80
-```
-
-When listing the pods, you will see that three containers have been started and are in Pending state:
-
-```sh
-$ ./cluster/kubectl.sh get pods
-NAME             READY   STATUS    RESTARTS   AGE
-my-nginx-5kq0g   0/1     Pending   0          10s
-my-nginx-gr3hh   0/1     Pending   0          10s
-my-nginx-xql4j   0/1     Pending   0          10s
-```
-
-You need to wait for the provisioning to complete; you can monitor the nodes by doing:
-
-```sh
-$ vagrant ssh node-1 -c 'sudo docker images'
-kubernetes-node-1:
-    REPOSITORY          TAG       IMAGE ID       CREATED        VIRTUAL SIZE
-                                  96864a7d2df3   26 hours ago   204.4 MB
-    google/cadvisor     latest    e0575e677c50   13 days ago    12.64 MB
-    kubernetes/pause    latest    6c4579af347b   8 weeks ago    239.8 kB
-```
-
-Once the docker image for nginx has been downloaded, the container will start and you can list it:
-
-```sh
-$ vagrant ssh node-1 -c 'sudo docker ps'
-kubernetes-node-1:
-    CONTAINER ID   IMAGE                     COMMAND               CREATED          STATUS          PORTS                    NAMES
-    dbe79bf6e25b   nginx:latest              "nginx"               21 seconds ago   Up 19 seconds                            k8s--mynginx.8c5b8a3a--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--fcfa837f
-    fa0e29c94501   kubernetes/pause:latest   "/pause"              8 minutes ago    Up 8 minutes    0.0.0.0:8080->80/tcp     k8s--net.a90e7ce4--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--baf5b21b
-    aa2ee3ed844a   google/cadvisor:latest    "/usr/bin/cadvisor"   38 minutes ago   Up 38 minutes                            k8s--cadvisor.9e90d182--cadvisor_-_agent.file--4626b3a2
-    65a3a926f357   kubernetes/pause:latest   "/pause"              39 minutes ago   Up 39 minutes   0.0.0.0:4194->8080/tcp
k8s--net.c5ba7f0e--cadvisor_-_agent.file--342fd561
-```
-
-Going back to listing the pods, services and replicationcontrollers, you now have:
-
-```sh
-$ ./cluster/kubectl.sh get pods
-NAME             READY   STATUS    RESTARTS   AGE
-my-nginx-5kq0g   1/1     Running   0          1m
-my-nginx-gr3hh   1/1     Running   0          1m
-my-nginx-xql4j   1/1     Running   0          1m
-
-$ ./cluster/kubectl.sh get services
-NAME   CLUSTER_IP   EXTERNAL_IP   PORT(S)   SELECTOR   AGE
-
-$ ./cluster/kubectl.sh get replicationcontrollers
-CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR       REPLICAS   AGE
-my-nginx     my-nginx       nginx      run=my-nginx   3          1m
-```
-
-We did not start any services, hence there are none listed. But we see three replicas displayed properly.
-
-See [running your first containers](http://kubernetes.io/docs/user-guide/simple-nginx/) to learn how to create a service.
-
-You can already play with scaling the replicas:
-
-```sh
-$ ./cluster/kubectl.sh scale rc my-nginx --replicas=2
-$ ./cluster/kubectl.sh get pods
-NAME             READY   STATUS    RESTARTS   AGE
-my-nginx-5kq0g   1/1     Running   0          2m
-my-nginx-gr3hh   1/1     Running   0          2m
-```
-
-Congratulations!
-
-## Troubleshooting
-
-#### I keep downloading the same (large) box all the time!
-
-By default the Vagrantfile will download the box from S3. You can change this (and cache the box locally) by providing a name and an alternate URL when calling `kube-up.sh`:
-
-```sh
-export KUBERNETES_BOX_NAME=choose_your_own_name_for_your_kuber_box
-export KUBERNETES_BOX_URL=path_of_your_kuber_box
-export KUBERNETES_PROVIDER=vagrant
-./cluster/kube-up.sh
-```
-
-#### I am getting timeouts when trying to curl the master from my host!
-
-During provisioning of the cluster, you may see the following message:
-
-```sh
-Validating node-1
-.............
-Waiting for each node to be registered with cloud provider
-error: couldn't read version from server: Get https://10.245.1.2/api: dial tcp 10.245.1.2:443: i/o timeout
-```
-
-Some users have reported that VPN software may prevent traffic from being routed from the host machine into the virtual machine network.
-
-To debug, first verify that the master is binding to the proper IP address:
-
-```sh
-$ vagrant ssh master
-$ ifconfig | grep eth1 -C 2
-eth1: flags=4163  mtu 1500  inet 10.245.1.2  netmask
-255.255.255.0  broadcast 10.245.1.255
-```
-
-Then verify that your host machine has a network connection to a bridge that can serve that address:
-
-```sh
-$ ifconfig | grep 10.245.1 -C 2
-
-vboxnet5: flags=4163  mtu 1500
-    inet 10.245.1.1  netmask 255.255.255.0  broadcast 10.245.1.255
-    inet6 fe80::800:27ff:fe00:5  prefixlen 64  scopeid 0x20
-    ether 0a:00:27:00:00:05  txqueuelen 1000  (Ethernet)
-```
-
-If you do not see a response on your host machine, you will most likely need to connect your host to the virtual network created by the virtualization provider.
-
-If you do see a network, but are still unable to ping the machine, check if your VPN is blocking the request.
-
-#### I just created the cluster, but I am getting authorization errors!
-
-You probably have an incorrect ~/.kubernetes_vagrant_auth file for the cluster you are attempting to contact.
-
-```sh
-rm ~/.kubernetes_vagrant_auth
-```
-
-After using kubectl.sh, make sure that the correct credentials are set:
-
-```sh
-cat ~/.kubernetes_vagrant_auth
-```
-
-```json
-{
-  "User": "vagrant",
-  "Password": "vagrant"
-}
-```
-
-#### I just created the cluster, but I do not see my container running!
-
-If this is your first time creating the cluster, the kubelet on each node schedules a number of docker pull requests to fetch prerequisite images. This can take some time and, as a result, may delay your initial pod getting provisioned.
-
-#### I have brought Vagrant up but the nodes cannot validate!
-
-Log on to one of the nodes (`vagrant ssh node-1`) and inspect the salt minion log (`sudo cat /var/log/salt/minion`).
-
-#### I want to change the number of nodes!
-
-You can control the number of nodes that are instantiated via the environment variable `NUM_NODES` on your host machine. If you plan to work with replicas, we strongly encourage you to work with enough nodes to satisfy your largest intended replica size. If you do not plan to work with replicas, you can save some system resources by running with a single node. You do this by setting `NUM_NODES` to 1 like so:
-
-```sh
-export NUM_NODES=1
-```
-
-#### I want my VMs to have more memory!
-
-You can control the memory allotted to virtual machines with the `KUBERNETES_MEMORY` environment variable.
-Just set it to the number of megabytes you would like the machines to have. For example:
-
-```sh
-export KUBERNETES_MEMORY=2048
-```
-
-If you need more granular control, you can set the amount of memory for the master and nodes independently. For example:
-
-```sh
-export KUBERNETES_MASTER_MEMORY=1536
-export KUBERNETES_NODE_MEMORY=2048
-```
-
-#### I want to set proxy settings for my Kubernetes cluster bootstrapping!
-
-If you are behind a proxy, you need to install the vagrant proxy plugin and set the proxy settings:
-
-```sh
-vagrant plugin install vagrant-proxyconf
-export VAGRANT_HTTP_PROXY=http://username:password@proxyaddr:proxyport
-export VAGRANT_HTTPS_PROXY=https://username:password@proxyaddr:proxyport
-```
-
-Optionally you can specify addresses to not proxy, for example:
-
-```sh
-export VAGRANT_NO_PROXY=127.0.0.1
-```
-
-If you are using sudo to build kubernetes (for example, `make quick-release`), you need to run `sudo -E make quick-release` to pass the environment variables through.
-
-#### I ran vagrant suspend and nothing works!
-
-`vagrant suspend` seems to mess up the network. This is not supported at this time.
-
-#### I want vagrant to sync folders via nfs!
-
-You can ensure that vagrant uses NFS to sync folders with virtual machines by setting the KUBERNETES_VAGRANT_USE_NFS environment variable to 'true'. NFS is faster than VirtualBox or VMware's 'shared folders' and does not require guest additions. See the [vagrant docs](http://docs.vagrantup.com/v2/synced-folders/nfs.html) for details on configuring NFS on the host. This setting has no effect on the libvirt provider, which uses NFS by default. For example:
-
-```sh
-export KUBERNETES_VAGRANT_USE_NFS=true
-```
-
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/local-cluster/vagrant.md?pixel)]()
-
diff --git a/logging.md b/logging.md
deleted file mode 100644
index 1241ee7f..00000000
--- a/logging.md
+++ /dev/null
@@ -1,36 +0,0 @@
-## Logging Conventions
-
-The following are the conventions for which glog levels to use.
-[glog](http://godoc.org/github.com/golang/glog) is globally preferred to
-[log](http://golang.org/pkg/log/) for better runtime control.
-
-* glog.Errorf() - Always an error
-
-* glog.Warningf() - Something unexpected, but probably not an error
-
-* glog.Infof() has multiple levels:
-  * glog.V(0) - Generally useful for this to ALWAYS be visible to an operator
-    * Programmer errors
-    * Logging extra info about a panic
-    * CLI argument handling
-  * glog.V(1) - A reasonable default log level if you don't want verbosity.
-    * Information about config (listening on X, watching Y)
-    * Errors that repeat frequently and relate to conditions that can be corrected (pod detected as unhealthy)
-  * glog.V(2) - Useful steady-state information about the service and important log messages that may correlate to significant changes in the system. This is the recommended default log level for most systems.
-    * Logging HTTP requests and their status code
-    * System state changing (killing pod)
-    * Controller state change events (starting pods)
-    * Scheduler log messages
-  * glog.V(3) - Extended information about changes
-    * More info about system state changes
-  * glog.V(4) - Debug-level verbosity (for now)
-    * Logging in particularly thorny parts of code where you may want to come back later and check it
-
-As per the comments, the practical default level is V(2). Developer and QE
-environments may wish to run at V(3) or V(4). If you wish to change the log
-level, you can pass in `-v=X` where X is the desired maximum level to log.
-
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/logging.md?pixel)]()
-
diff --git a/mesos-style.md b/mesos-style.md
deleted file mode 100644
index 81554ce8..00000000
--- a/mesos-style.md
+++ /dev/null
@@ -1,218 +0,0 @@
-# Building Mesos/Omega-style frameworks on Kubernetes
-
-## Introduction
-
-We have observed two different cluster management architectures, which can be
-categorized as "Borg-style" and "Mesos/Omega-style." In the remainder of this
-document, we will abbreviate the latter as "Mesos-style." Although out-of-the-box
-Kubernetes uses a Borg-style architecture, it can also be configured in a
-Mesos-style architecture, and in fact can support both styles at the same time.
-This document describes the two approaches and how to deploy a
-Mesos-style architecture on Kubernetes.
-
-As an aside, the converse is also true: one can deploy a Borg/Kubernetes-style
-architecture on Mesos.
-
-This document is NOT intended to provide a comprehensive comparison of Borg and
-Mesos. For example, we omit discussion of the tradeoffs between scheduling with
-full knowledge of cluster state vs. scheduling using the "offer" model. That
-issue is discussed in some detail in the Omega paper.
-(See [references](#references) below.)
-
-
-## What is a Borg-style architecture?
- -A Borg-style architecture is characterized by: - -* a single logical API endpoint for clients, where some amount of processing is -done on requests, such as admission control and applying defaults - -* generic (non-application-specific) collection abstractions described -declaratively, - -* generic controllers/state machines that manage the lifecycle of the collection -abstractions and the containers spawned from them - -* a generic scheduler - -For example, Borg's primary collection abstraction is a Job, and every -application that runs on Borg--whether it's a user-facing service like the GMail -front-end, a batch job like a MapReduce, or an infrastructure service like -GFS--must represent itself as a Job. Borg has corresponding state machine logic -for managing Jobs and their instances, and a scheduler that's responsible for -assigning the instances to machines. - -The flow of a request in Borg is: - -1. Client submits a collection object to the Borgmaster API endpoint - -1. Admission control, quota, applying defaults, etc. run on the collection - -1. If the collection is admitted, it is persisted, and the collection state -machine creates the underlying instances - -1. The scheduler assigns a hostname to the instance, and tells the Borglet to -start the instance's container(s) - -1. Borglet starts the container(s) - -1. The instance state machine manages the instances and the collection state -machine manages the collection during their lifetimes - -Out-of-the-box Kubernetes has *workload-specific* abstractions (ReplicaSet, Job, -DaemonSet, etc.) and corresponding controllers, and in the future may have -[workload-specific schedulers](../../docs/proposals/multiple-schedulers.md), -e.g. different schedulers for long-running services vs. short-running batch. But -these abstractions, controllers, and schedulers are not *application-specific*. - -The usual request flow in Kubernetes is very similar, namely - -1. Client submits a collection object (e.g. 
ReplicaSet, Job, ...) to the API -server - -1. Admission control, quota, applying defaults, etc. run on the collection - -1. If the collection is admitted, it is persisted, and the corresponding -collection controller creates the underlying pods - -1. Admission control, quota, applying defaults, etc. runs on each pod; if there -are multiple schedulers, one of the admission controllers will write the -scheduler name as an annotation based on a policy - -1. If a pod is admitted, it is persisted - -1. The appropriate scheduler assigns a nodeName to the instance, which triggers -the Kubelet to start the pod's container(s) - -1. Kubelet starts the container(s) - -1. The controller corresponding to the collection manages the pod and the -collection during their lifetime - -In the Borg model, application-level scheduling and cluster-level scheduling are -handled by separate components. For example, a MapReduce master might request -Borg to create a job with a certain number of instances with a particular -resource shape, where each instance corresponds to a MapReduce worker; the -MapReduce master would then schedule individual units of work onto those -workers. - -## What is a Mesos-style architecture? - -Mesos is fundamentally designed to support multiple application-specific -"frameworks." A framework is composed of a "framework scheduler" and a -"framework executor." We will abbreviate "framework scheduler" as "framework" -since "scheduler" means something very different in Kubernetes (something that -just assigns pods to nodes). - -Unlike Borg and Kubernetes, where there is a single logical endpoint that -receives all API requests (the Borgmaster and API server, respectively), in -Mesos every framework is a separate API endpoint. 
Mesos does not have any -standard set of collection abstractions, controllers/state machines, or -schedulers; the logic for all of these things is contained in each -[application-specific framework](http://mesos.apache.org/documentation/latest/frameworks/) -individually. (Note that the notion of application-specific does sometimes blur -into the realm of workload-specific, for example -[Chronos](https://github.com/mesos/chronos) is a generic framework for batch -jobs. However, regardless of what set of Mesos frameworks you are using, the key -properties remain: each framework is its own API endpoint with its own -client-facing and internal abstractions, state machines, and scheduler). - -A Mesos framework can integrate application-level scheduling and cluster-level -scheduling into a single component. - -Note: Although Mesos frameworks expose their own API endpoints to clients, they -consume a common infrastructure via a common API endpoint for controlling tasks -(launching, detecting failure, etc.) and learning about available cluster -resources. More details -[here](http://mesos.apache.org/documentation/latest/scheduler-http-api/). - -## Building a Mesos-style framework on Kubernetes - -Implementing the Mesos model on Kubernetes boils down to enabling -application-specific collection abstractions, controllers/state machines, and -scheduling. There are just three steps: - -* Use API plugins to create API resources for your new application-specific -collection abstraction(s) - -* Implement controllers for the new abstractions (and for managing the lifecycle -of the pods the controllers generate) - -* Implement a scheduler with the application-specific scheduling logic - -Note that the last two can be combined: a Kubernetes controller can do the -scheduling for the pods it creates, by writing node name to the pods when it -creates them. 
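The "controller that also schedules" combination can be sketched in a few lines. This is a toy illustration only, not real client-go or Kubernetes API code: the `FooSet` and `Pod` types and the round-robin placement are invented for the example.

```go
package main

import "fmt"

// Pod and FooSet are stand-ins for an application-specific collection
// abstraction and the pods it generates; they are not real API objects.
type Pod struct {
	Name     string
	NodeName string // written by the controller itself, so no separate scheduler runs
}

type FooSet struct {
	Name     string
	Replicas int
}

// reconcile creates the underlying pods for a FooSet and assigns nodes
// round-robin at creation time, combining the controller and scheduler
// roles in one component, Mesos-framework style.
func reconcile(fs FooSet, nodes []string) []Pod {
	pods := make([]Pod, 0, fs.Replicas)
	for i := 0; i < fs.Replicas; i++ {
		pods = append(pods, Pod{
			Name:     fmt.Sprintf("%s-%d", fs.Name, i),
			NodeName: nodes[i%len(nodes)],
		})
	}
	return pods
}

func main() {
	for _, p := range reconcile(FooSet{Name: "fooset", Replicas: 3}, []string{"node-a", "node-b"}) {
		fmt.Printf("%s -> %s\n", p.Name, p.NodeName)
	}
}
```

Because the node name is set before the pod is persisted, the Kubelet on that node would pick it up directly and no cluster-level scheduler is involved.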
- -Once you've done this, you end up with an architecture that is extremely similar -to the Mesos-style--the Kubernetes controller is effectively a Mesos framework. -The remaining differences are: - -* In Kubernetes, all API operations go through a single logical endpoint, the -API server (we say logical because the API server can be replicated). In -contrast, in Mesos, API operations go to a particular framework. However, the -Kubernetes API plugin model makes this difference fairly small. - -* In Kubernetes, application-specific admission control, quota, defaulting, etc. -rules can be implemented in the API server rather than in the controller. Of -course you can choose to make these operations be no-ops for your -application-specific collection abstractions, and handle them in your controller. - -* On the node level, Mesos allows application-specific executors, whereas -Kubernetes only has executors for Docker and rkt containers. - -The end-to-end flow is: - -1. Client submits an application-specific collection object to the API server - -2. The API server plugin for that collection object forwards the request to the -API server that handles that collection type - -3. Admission control, quota, applying defaults, etc. runs on the collection -object - -4. If the collection is admitted, it is persisted - -5. The collection controller sees the collection object and in response creates -the underlying pods and chooses which nodes they will run on by setting node -name - -6. Kubelet sees the pods with node name set and starts the container(s) - -7. The collection controller manages the pods and the collection during their -lifetimes - -*Note: if the controller and scheduler are separated, then step 5 breaks -down into multiple steps:* - -(5a) collection controller creates pods with empty node name. - -(5b) API server admission control, quota, defaulting, etc. 
runs on the
-pods; one of the admission controller steps writes the scheduler name as an
-annotation on each pod (see pull request `#18262` for more details).
-
-(5c) The corresponding application-specific scheduler chooses a node and
-writes node name, which triggers the Kubelet to start the pod's container(s).
-
-As a final note, the Kubernetes model allows multiple levels of iterative
-refinement of runtime abstractions, as long as the lowest level is the pod. For
-example, clients of application Foo might create a `FooSet` which is picked up
-by the FooController, which in turn creates `BatchFooSet` and `ServiceFooSet`
-objects, which are picked up by the BatchFoo controller and ServiceFoo
-controller respectively, which in turn create pods. In between each of these
-steps there is an opportunity for object-specific admission control, quota, and
-defaulting to run in the API server, though these can instead be handled by the
-controllers.
-
-## References
-
-Mesos is described [here](https://www.usenix.org/legacy/event/nsdi11/tech/full_papers/Hindman_new.pdf).
-Omega is described [here](http://research.google.com/pubs/pub41684.html).
-Borg is described [here](http://research.google.com/pubs/pub43438.html).
-
-
-
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/mesos-style.md?pixel)]()
-
diff --git a/node-performance-testing.md b/node-performance-testing.md
deleted file mode 100644
index d6bb657f..00000000
--- a/node-performance-testing.md
+++ /dev/null
@@ -1,127 +0,0 @@
-# Measuring Node Performance
-
-This document outlines the issues and pitfalls of measuring node performance, as
-well as the tools available.
-
-## Cluster Set-up
-
-There are lots of factors which can affect node performance numbers, so care
-must be taken in setting up the cluster to make the intended measurements. In
-addition to taking the following steps into consideration, it is important to
-document precisely which setup was used.
For example, performance can vary
-wildly from commit to commit, so it is very important to **document which commit
-or version** of Kubernetes was used, which Docker version was used, etc.
-
-### Addon pods
-
-Be aware of which addon pods are running on which nodes. By default Kubernetes
-runs 8 addon pods, plus another 2 per node (`fluentd-elasticsearch` and
-`kube-proxy`) in the `kube-system` namespace. The addon pods can be disabled for
-more consistent results, but doing so can also have performance implications.
-
-For example, Heapster polls each node regularly to collect stats data. Disabling
-Heapster will hide the performance cost of serving those stats in the Kubelet.
-
-#### Disabling Add-ons
-
-Disabling addons is simple. Just ssh into the Kubernetes master and move the
-addon from `/etc/kubernetes/addons/` to a backup location. More details
-[here](../../cluster/addons/).
-
-### Which / how many pods?
-
-Performance will vary a lot between a node with 0 pods and a node with 100 pods.
-In many cases you'll want to make measurements with several different numbers of
-pods. On a single-node cluster, scaling a replication controller makes this
-easy; just make sure the system reaches a steady state before starting the
-measurement, e.g. `kubectl scale replicationcontroller pause --replicas=100`
-
-In most cases pause pods will yield the most consistent measurements, since the
-system will not be affected by pod load. However, in some special cases
-Kubernetes has been tuned to optimize pods that are not doing anything, such as
-the cAdvisor housekeeping (stats gathering). In these cases, performing a very
-light task (such as a simple network ping) can make a difference.
-
-Finally, you should also consider which features your pods should be using. For
-example, if you want to measure performance with probing, you should obviously
-use pods with liveness or readiness probes configured. Likewise for volumes,
-number of containers, etc.
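The "reach a steady state before measuring" advice can be automated. The helper below is a hypothetical sketch, not part of any Kubernetes tooling: `waitForSteadyState` and the fake readings are invented for illustration, and in practice the sample function would poll the API server for the number of running pods.

```go
package main

import "fmt"

// waitForSteadyState polls sample() until it returns the same value
// `stable` times in a row, or gives up after maxPolls samples. It returns
// the last value seen and whether steady state was reached.
func waitForSteadyState(sample func() int, stable int, maxPolls int) (int, bool) {
	last, streak := sample(), 1
	for i := 1; i < maxPolls; i++ {
		v := sample()
		if v == last {
			streak++
		} else {
			last, streak = v, 1
		}
		if streak >= stable {
			return last, true
		}
	}
	return last, false
}

func main() {
	// Fake readings standing in for repeated "how many pods are running" polls.
	readings := []int{37, 80, 98, 100, 100, 100, 100}
	i := 0
	sample := func() int {
		if i < len(readings) {
			v := readings[i]
			i++
			return v
		}
		return readings[len(readings)-1]
	}
	v, ok := waitForSteadyState(sample, 3, 20)
	fmt.Println(v, ok)
}
```

A real version would also sleep between polls; the point is simply not to start the measurement while the replica count is still ramping up.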
-
-### Other Tips
-
-**Number of nodes** - On the one hand, it can be easier to manage logs, pods,
-environment etc. with a single node to worry about. On the other hand, having
-multiple nodes will let you gather more data in parallel for more robust
-sampling.
-
-## E2E Performance Test
-
-There is an end-to-end test for collecting overall resource usage of node
-components: [kubelet_perf.go](../../test/e2e/kubelet_perf.go). To
-run the test, simply make sure you have an e2e cluster running (`go run
-hack/e2e.go -up`) and [set up](#cluster-set-up) correctly.
-
-Run the test with `go run hack/e2e.go -v -test
---test_args="--ginkgo.focus=resource\susage\stracking"`. You may also wish to
-customise the number of pods or other parameters of the test (remember to rerun
-`make WHAT=test/e2e/e2e.test` after you do).
-
-## Profiling
-
-The Kubelet installs the [go pprof handlers](https://golang.org/pkg/net/http/pprof/),
-which can be queried for CPU profiles:
-
-```console
-$ kubectl proxy &
-Starting to serve on 127.0.0.1:8001
-$ curl -G "http://localhost:8001/api/v1/proxy/nodes/${NODE}:10250/debug/pprof/profile?seconds=${DURATION_SECONDS}" > $OUTPUT
-$ KUBELET_BIN=_output/dockerized/bin/linux/amd64/kubelet
-$ go tool pprof -web $KUBELET_BIN $OUTPUT
-```
-
-`pprof` can also provide heap usage, from the `/debug/pprof/heap` endpoint
-(e.g. `http://localhost:8001/api/v1/proxy/nodes/${NODE}:10250/debug/pprof/heap`).
-
-More information on go profiling can be found
-[here](http://blog.golang.org/profiling-go-programs).
-
-## Benchmarks
-
-Before jumping through all the hoops to measure a live Kubernetes node in a real
-cluster, it is worth considering whether the data you need can be gathered
-through a benchmark test.
Go provides a really simple benchmarking mechanism:
-just add a unit test of the form:
-
-```go
-// In foo_test.go
-func BenchmarkFoo(b *testing.B) {
-	b.StopTimer()
-	setupFoo() // Perform any global setup
-	b.StartTimer()
-	for i := 0; i < b.N; i++ {
-		foo() // Functionality to measure
-	}
-}
-```
-
-Then:
-
-```console
-$ go test -bench=. -benchtime=${SECONDS}s foo_test.go
-```
-
-More details on benchmarking [here](https://golang.org/pkg/testing/).
-
-## TODO
-
-- (taotao) Measuring docker performance
-- Expand cluster set-up section
-- (vishh) Measuring disk usage
-- (yujuhong) Measuring memory usage
-- Add section on monitoring kubelet metrics (e.g. with prometheus)
-
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/node-performance-testing.md?pixel)]()
-
diff --git a/on-call-build-cop.md b/on-call-build-cop.md
deleted file mode 100644
index 15c71e5d..00000000
--- a/on-call-build-cop.md
+++ /dev/null
@@ -1,151 +0,0 @@
-## Kubernetes "Github and Build-cop" Rotation
-
-### Prerequisites
-
-* Ensure you have [write access to http://github.com/kubernetes/kubernetes](https://github.com/orgs/kubernetes/teams/kubernetes-maintainers)
- * Test your admin access by e.g. adding a label to an issue.
-
-### Traffic sources and responsibilities
-
-* GitHub Kubernetes [issues](https://github.com/kubernetes/kubernetes/issues)
-and [pulls](https://github.com/kubernetes/kubernetes/pulls): Your job is to be
-the first responder to all new issues and PRs. If you are not equipped to do
-this (which is fine!), it is your job to seek guidance!
-
- * Support issues should be closed and redirected to Stackoverflow (see example
-response below).
-
- * All incoming issues should be tagged with a team label
-(team/{api,ux,control-plane,node,cluster,csi,redhat,mesosphere,gke,release-infra,test-infra,none});
-for issues that overlap teams, you can use multiple team labels.
-
-   * There is a related concept of "Github teams" which allow you to @ mention
-a set of people; feel free to @ mention a Github team if you wish, but this is
-not a substitute for adding a team/* label, which is required.
-
-   * [Google teams](https://github.com/orgs/kubernetes/teams?utf8=%E2%9C%93&query=goog-)
-   * [Redhat teams](https://github.com/orgs/kubernetes/teams?utf8=%E2%9C%93&query=rh-)
-   * [SIGs](https://github.com/orgs/kubernetes/teams?utf8=%E2%9C%93&query=sig-)
-
- * If the issue is reporting broken builds, broken e2e tests, or other
-obvious P0 issues, label the issue with priority/P0 and assign it to someone.
-This is the only situation in which you should add a priority/* label.
-   * Non-P0 issues do not need a reviewer assigned initially.
-
- * Assign any issues related to Vagrant to @derekwaynecarr (and @mention him
-in the issue).
-
- * All incoming PRs should be assigned a reviewer,
-   * unless it is a WIP (Work in Progress), RFC (Request for Comments), or design proposal.
-   * An auto-assigner [should do this for you](https://github.com/kubernetes/kubernetes/pull/12365/files).
-   * When in doubt, choose a TL or team maintainer of the most relevant team; they can delegate.
-
- * Keep in mind that you can @ mention people in an issue/PR to bring it to
-their attention without assigning it to them. You can also @ mention github
-teams, such as @kubernetes/goog-ux or @kubernetes/kubectl.
-
- * If you need help triaging an issue or PR, consult with (or assign it to)
-@brendandburns, @thockin, @bgrant0607, @quinton-hoole, @davidopp, @dchen1107,
-@lavalamp (all U.S. Pacific Time) or @fgrzadkowski (Central European Time).
-
- * At the beginning of your shift, please add team/* labels to any issues that
-have fallen through the cracks and don't have one. Likewise, be fair to the next
-person in rotation: try to ensure that every issue that gets filed while you are
-on duty is handled. The Github query to find issues with no team/* label is
-[here](https://github.com/kubernetes/kubernetes/issues?utf8=%E2%9C%93&q=is%3Aopen+is%3Aissue+-label%3Ateam%2Fcontrol-plane+-label%3Ateam%2Fmesosphere+-label%3Ateam%2Fredhat+-label%3Ateam%2Frelease-infra+-label%3Ateam%2Fnone+-label%3Ateam%2Fnode+-label%3Ateam%2Fcluster+-label%3Ateam%2Fux+-label%3Ateam%2Fapi+-label%3Ateam%2Ftest-infra+-label%3Ateam%2Fgke+-label%3A"team%2FCSI-API+Machinery+SIG"+-label%3Ateam%2Fhuawei+-label%3Ateam%2Fsig-aws).
-
-Example response for support issues:
-
-```code
-Please re-post your question to
-[stackoverflow](http://stackoverflow.com/questions/tagged/kubernetes).
-
-We are trying to consolidate the channels to which questions for help/support
-are posted so that we can improve our efficiency in responding to your requests,
-and to make it easier for you to find answers to frequently asked questions and
-how to address common use cases.
-
-We regularly see messages posted in multiple forums, with the full response
-thread only in one place or, worse, spread across multiple forums. Also, the
-large volume of support issues on github is making it difficult for us to use
-issues to identify real bugs.
-
-The Kubernetes team scans stackoverflow on a regular basis, and will try to
-ensure your questions don't go unanswered.
-
-Before posting a new question, please search stackoverflow for answers to
-similar questions, and also familiarize yourself with:
-
- * [user guide](http://kubernetes.io/docs/user-guide/)
- * [troubleshooting guide](http://kubernetes.io/docs/admin/cluster-troubleshooting/)
-
-Again, thanks for using Kubernetes.
-
-The Kubernetes Team
-```
-
-### Build-copping
-
-* The [merge-bot submit queue](http://submit-queue.k8s.io/)
-([source](https://github.com/kubernetes/contrib/tree/master/mungegithub/mungers/submit-queue.go))
-should auto-merge all eligible PRs for you once they've passed all the relevant
-checks mentioned below and all
-[critical e2e tests](https://goto.google.com/k8s-test/view/Critical%20Builds/)
-are passing. If the merge-bot has been disabled for some reason, or tests are
-failing, you might need to do some manual merging to get things back on track.
-
-* Once a day or so, look at the
-[flaky test builds](https://goto.google.com/k8s-test/view/Flaky/); if they are
-timing out, clusters are failing to start, or tests are consistently failing
-(instead of just flaking), file an issue to get things back on track.
-
-* Jobs that are not in [critical e2e tests](https://goto.google.com/k8s-test/view/Critical%20Builds/)
-or [flaky test builds](https://goto.google.com/k8s-test/view/Flaky/) are not
-your responsibility to monitor. The `Test owner:` in the job description will be
-automatically emailed if the job is failing.
-
-* If you are on call, ensure that PRs conforming to the following
-prerequisites are being merged at a reasonable rate:
-
- * [Have been LGTMd](https://github.com/kubernetes/kubernetes/labels/lgtm)
- * Pass Travis and Jenkins per-PR tests.
- * Author has signed the CLA, if applicable.
-
-
-* Although the shift schedule shows you as being scheduled Monday to Monday,
-  working on the weekend is neither expected nor encouraged. Enjoy your time
-  off.
-
-* When the build is broken, roll back the PRs responsible ASAP.
-
-* When E2E tests are unstable, a "merge freeze" may be instituted. During a
-merge freeze:
-
- * Oncall should slowly merge LGTMd changes throughout the day while monitoring
-E2E to ensure stability.
-
- * Ideally the E2E run should be green, but some tests are flaky and can fail
-randomly (not as a result of a particular change).
- * If a large number of tests fail, or tests that normally pass fail, that -is an indication that one or more of the PR(s) in that build might be -problematic (and should be reverted). - * Use the Test Results Analyzer to see individual test history over time. - - -* Flake mitigation - - * Tests that flake (fail a small percentage of the time) need an issue filed -against them. Please read [this](flaky-tests.md#filing-issues-for-flaky-tests); -the build cop is expected to file issues for any flaky tests they encounter. - - * It's reasonable to manually merge PRs that fix a flake or otherwise mitigate it. - -### Contact information - -[@k8s-oncall](https://github.com/k8s-oncall) will reach the current person on -call. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/on-call-build-cop.md?pixel)]() - diff --git a/on-call-rotations.md b/on-call-rotations.md deleted file mode 100644 index a6535e82..00000000 --- a/on-call-rotations.md +++ /dev/null @@ -1,43 +0,0 @@ -## Kubernetes On-Call Rotations - -### Kubernetes "first responder" rotations - -Kubernetes has generated a lot of public traffic: email, pull-requests, bugs, -etc. So much traffic that it's becoming impossible to keep up with it all! This -is a fantastic problem to have. In order to be sure that SOMEONE, but not -EVERYONE on the team is paying attention to public traffic, we have instituted -two "first responder" rotations, listed below. Please read this page before -proceeding to the pages linked below, which are specific to each rotation. - -Please also read our [notes on OSS collaboration](collab.md), particularly the -bits about hours. Specifically, each rotation is expected to be active primarily -during work hours, less so off hours. - -During regular workday work hours of your shift, your primary responsibility is -to monitor the traffic sources specific to your rotation. 
You can check traffic
-in the evenings if you feel so inclined, but it is not expected to be as highly
-focused as work hours. For weekends, you should check traffic very occasionally
-(e.g. once or twice a day). Again, it is not expected to be as highly focused as
-workdays. It is assumed that over time, everyone will get weekday and weekend
-shifts, so the workload will balance out.
-
-If you cannot serve your shift, and you know this ahead of time, it is your
-responsibility to find someone to cover and to change the rotation. If you have
-an emergency, your responsibilities fall on the primary of the other rotation,
-who acts as your secondary. If you need help to cover all of the tasks, ask
-partners with on-call rotations (e.g.,
-[Redhat](https://github.com/orgs/kubernetes/teams/rh-oncall)).
-
-If you are not on duty you DO NOT need to do these things. You are free to focus
-on "real work".
-
-Note that Kubernetes will occasionally enter code slush/freeze, prior to
-milestones. When it does, there might be changes in the instructions (assigning
-milestones, for instance).
-
-* [Github and Build Cop Rotation](on-call-build-cop.md)
-* [User Support Rotation](on-call-user-support.md)
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/on-call-rotations.md?pixel)]()
-
diff --git a/on-call-user-support.md b/on-call-user-support.md
deleted file mode 100644
index a111c6fe..00000000
--- a/on-call-user-support.md
+++ /dev/null
@@ -1,89 +0,0 @@
-## Kubernetes "User Support" Rotation
-
-### Traffic sources and responsibilities
-
-* [StackOverflow](http://stackoverflow.com/questions/tagged/kubernetes) and
-[ServerFault](http://serverfault.com/questions/tagged/google-kubernetes):
-Respond to any thread that has no responses and is more than 6 hours old (over
-time we will lengthen this timeout to allow community responses). If you are not
-equipped to respond, it is your job to redirect to someone who can.
-
- * [Query for unanswered Kubernetes StackOverflow questions](http://stackoverflow.com/search?q=%5Bkubernetes%5D+answers%3A0)
- * [Query for unanswered Kubernetes ServerFault questions](http://serverfault.com/questions/tagged/google-kubernetes?sort=unanswered&pageSize=15)
- * Direct poorly formulated questions to [stackoverflow's tips about how to ask](http://stackoverflow.com/help/how-to-ask)
- * Direct off-topic questions to [stackoverflow's policy](http://stackoverflow.com/help/on-topic)
-
-* [Slack](https://kubernetes.slack.com) ([registration](http://slack.k8s.io)):
-Your job is to be on Slack, watching for questions and answering or redirecting
-as needed. Also check out the [Slack Archive](http://kubernetes.slackarchive.io/).
-
-* [Email/Groups](https://groups.google.com/forum/#!forum/google-containers):
-Respond to any thread that has no responses and is more than 6 hours old (over
-time we will lengthen this timeout to allow community responses). If you are not
-equipped to respond, it is your job to redirect to someone who can.
-
-* [Legacy] [IRC](irc://irc.freenode.net/#google-containers)
-(irc.freenode.net #google-containers): watch IRC for questions and try to
-redirect users to Slack. Also check out the
-[IRC logs](https://botbot.me/freenode/google-containers/).
-
-In general, try to direct support questions to:
-
-1. Documentation, such as the [user guide](../user-guide/README.md) and
-[troubleshooting guide](http://kubernetes.io/docs/troubleshooting/)
-
-2. Stackoverflow
-
-If you see questions on a forum other than Stackoverflow, try to redirect them
-to Stackoverflow. Example response:
-
-```code
-Please re-post your question to
-[stackoverflow](http://stackoverflow.com/questions/tagged/kubernetes).
- -We are trying to consolidate the channels to which questions for help/support -are posted so that we can improve our efficiency in responding to your requests, -and to make it easier for you to find answers to frequently asked questions and -how to address common use cases. - -We regularly see messages posted in multiple forums, with the full response -thread only in one place or, worse, spread across multiple forums. Also, the -large volume of support issues on github is making it difficult for us to use -issues to identify real bugs. - -The Kubernetes team scans stackoverflow on a regular basis, and will try to -ensure your questions don't go unanswered. - -Before posting a new question, please search stackoverflow for answers to -similar questions, and also familiarize yourself with: - - * [user guide](http://kubernetes.io/docs/user-guide/) - * [troubleshooting guide](http://kubernetes.io/docs/troubleshooting/) - -Again, thanks for using Kubernetes. - -The Kubernetes Team -``` - -If you answer a question (in any of the above forums) that you think might be -useful for someone else in the future, *please add it to one of the FAQs in the -wiki*: - -* [User FAQ](https://github.com/kubernetes/kubernetes/wiki/User-FAQ) -* [Developer FAQ](https://github.com/kubernetes/kubernetes/wiki/Developer-FAQ) -* [Debugging FAQ](https://github.com/kubernetes/kubernetes/wiki/Debugging-FAQ). - -Getting it into the FAQ is more important than polish. Please indicate the date -it was added, so people can judge the likelihood that it is out-of-date (and -please correct any FAQ entries that you see contain out-of-date information). - -### Contact information - -[@k8s-support-oncall](https://github.com/k8s-support-oncall) will reach the -current person on call. 
-
-
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/on-call-user-support.md?pixel)]()
-
diff --git a/owners.md b/owners.md
deleted file mode 100644
index 217585ce..00000000
--- a/owners.md
+++ /dev/null
@@ -1,100 +0,0 @@
-# Owners files
-
-_Note_: This is a design for a feature that is not yet implemented. See the [contrib PR](https://github.com/kubernetes/contrib/issues/1389) for the current progress.
-
-## Overview
-
-We want to establish owners for different parts of the code in the Kubernetes codebase. These owners
-will serve as the approvers for code to be submitted to these parts of the repository. Notably, owners
-are not necessarily expected to do the first code review for all commits to these areas, but they are
-required to approve changes before they can be merged.
-
-**Note** The Kubernetes project has a hiatus on adding new approvers to OWNERS files. At this time we are [adding more reviewers](https://github.com/kubernetes/kubernetes/pulls?utf8=%E2%9C%93&q=is%3Apr%20%22Curating%20owners%3A%22%20) to take the load off of the current set of approvers, and once we have had a chance to flesh this out for a release we will begin adding new approvers again. Adding new approvers is planned for after the Kubernetes 1.6.0 release.
-
-## High Level flow
-
-### Step One: A PR is submitted
-
-After a PR is submitted, the automated kubernetes PR robot will append a message to the PR indicating the owners
-that are required for the PR to be submitted.
-
-Subsequently, a user can also request the approval message from the robot by writing:
-
-```
-@k8s-bot approvers
-```
-
-into a comment.
-
-In either case, the automation replies with an annotation that indicates
-the owners required to approve. The annotation is a comment that is applied to the PR.
-This comment will say:
-
-```
-Approval is required from <owner-1> OR <owner-2>, AND <owner-3> OR <owner-4>, AND ...
-```
-
-The set of required owners is drawn from the OWNERS files in the repository (see below).
For each file
-there should be multiple different OWNERS; these owners are listed in the `OR` clause(s). Because
-it is possible that a PR may cover different directories, with disjoint sets of OWNERS, a PR may require
-approval from more than one person; this is where the `AND` clauses come from.
-
-`<owner-id>` should be the github user id of the owner _without_ a leading `@` symbol, to prevent the owner
-from being cc'd into the PR by email.
-
-### Step Two: A PR is LGTM'd
-
-Once a PR is reviewed and LGTM'd it is eligible for submission. However, for it to be submitted,
-an owner of every file changed in the PR has to 'approve' the PR. A user is an owner for a
-file if they are included in the OWNERS hierarchy (see below) for that file.
-
-Owner approval comes in two forms:
-
- * An owner adds a comment to the PR saying "I approve" or "approved"
- * An owner is the original author of the PR
-
-In the case of a comment-based approval, the same rules as for the 'lgtm' label apply. If the PR is
-changed by pushing new commits to the PR, the previous approval is invalidated, and the owner(s) must
-approve again. Because of this, it is recommended that PR authors squash their PRs prior to getting approval
-from owners.
-
-### Step Three: A PR is merged
-
-Once a PR is LGTM'd and all required owners have approved, it is eligible for merge. The merge bot takes care of
-the actual merging.
-
-## Design details
-
-We need to build new features into the existing github munger in order to accomplish this. Additionally,
-we need to add owners files to the repository.
-
-### Approval Munger
-
-We need to add a munger that adds comments to PRs indicating whose approval they require. This munger will
-look for PRs that do not have approvers already present in the comments, or where approvers have been
-requested, and add an appropriate comment to the PR.
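The OR/AND clause computation the munger needs can be sketched as follows. This is a hypothetical illustration of the rule described above: the owner names and file paths are made up, and a real munger would resolve owners from OWNERS files in the repository rather than take a map.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// requiredClauses maps each changed file to its OR-clause (any one of the
// file's owners may approve) and deduplicates, so the PR needs one approval
// per distinct clause (the AND between clauses).
func requiredClauses(changedFiles []string, owners map[string][]string) []string {
	seen := map[string]bool{}
	var clauses []string
	for _, f := range changedFiles {
		set := append([]string(nil), owners[f]...)
		sort.Strings(set)
		clause := strings.Join(set, " OR ")
		if !seen[clause] {
			seen[clause] = true
			clauses = append(clauses, clause)
		}
	}
	sort.Strings(clauses)
	return clauses
}

func main() {
	// Invented owners map for illustration.
	owners := map[string][]string{
		"pkg/api/types.go":    {"alice", "bob"},
		"pkg/api/helpers.go":  {"alice", "bob"},
		"pkg/kubelet/main.go": {"carol"},
	}
	files := []string{"pkg/api/types.go", "pkg/api/helpers.go", "pkg/kubelet/main.go"}
	fmt.Println(strings.Join(requiredClauses(files, owners), ", AND "))
}
```

Two files sharing the same owner set collapse into one clause, which is why a PR touching many files in one directory still needs only one approval.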
-
-
-### Status Munger
-
-GitHub has a [status api](https://developer.github.com/v3/repos/statuses/). We will add a status munger that pushes a status onto a PR indicating its approval state. This status will only be approved if the relevant
-approvers have approved the PR.
-
-### Requiring approval status
-
-Github has the ability to [require status checks prior to merging](https://help.github.com/articles/enabling-required-status-checks/).
-
-Once we have the status check munger described above implemented, we will add this required status check
-to our main branch as well as any release branches.
-
-### Adding owners files
-
-In each directory in the repository we may add an OWNERS file. This file will contain the github OWNERS
-for that directory. OWNERSHIP is hierarchical, so if a directory does not contain an OWNERS file, its
-parent's OWNERS file is used instead. There will be a top-level OWNERS file to back-stop the system.
-
-Obviously, changing the OWNERS file requires OWNERS permission.
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/owners.md?pixel)]()
-
diff --git a/pr_workflow.dia b/pr_workflow.dia
deleted file mode 100644
index 753a284b..00000000
Binary files a/pr_workflow.dia and /dev/null differ
diff --git a/pr_workflow.png b/pr_workflow.png
deleted file mode 100644
index 0e2bd5d6..00000000
Binary files a/pr_workflow.png and /dev/null differ
diff --git a/profiling.md b/profiling.md
deleted file mode 100644
index f50537f1..00000000
--- a/profiling.md
+++ /dev/null
@@ -1,46 +0,0 @@
-# Profiling Kubernetes
-
-This document explains how to plug in the profiler and how to profile Kubernetes services.
-
-## Profiling library
-
-Go comes with the built-in 'net/http/pprof' profiling library and profiling web service. The service works by binding the debug/pprof/ subtree on a running webserver to the profiler. Reading from subpages of debug/pprof returns pprof-formatted profiles of the running binary.
The output can be processed offline by the tool of choice, or used as input to the handy 'go tool pprof', which can graphically represent the result.
-
-## Adding profiling to services: the APIserver
-
-TL;DR: Add the lines:
-
-```go
-m.mux.HandleFunc("/debug/pprof/", pprof.Index)
-m.mux.HandleFunc("/debug/pprof/profile", pprof.Profile)
-m.mux.HandleFunc("/debug/pprof/symbol", pprof.Symbol)
-```
-
-to the init(c *Config) method in 'pkg/master/master.go' and import the 'net/http/pprof' package.
-
-In most use cases it's enough to do `import _ "net/http/pprof"`, which automatically registers the handlers on the default HTTP server. A slight inconvenience is that the APIserver uses the default server for intra-cluster communication, so plugging the profiler into it is not really useful. In 'pkg/kubelet/server/server.go' more servers are created and started as separate goroutines. The one that is usually serving external traffic is secureServer. The handler for this traffic is defined in 'pkg/master/master.go' and stored in the Handler variable. It is created from an HTTP multiplexer, so the only thing that needs to be done is adding the profiler handler functions to this multiplexer. This is exactly what the lines after TL;DR do.
-
-## Connecting to the profiler
-
-Even with the profiler running, I found it not really straightforward to use 'go tool pprof' with it. The problem is that, at least for dev purposes, the certificates generated for the APIserver are not signed by anyone trusted, and because secureServer serves only secure traffic, it isn't straightforward to connect to the service. The best workaround I found is to create an ssh tunnel from the open unsecured port on kubernetes_master to some external server, and use this server as a proxy. To save everyone looking for the correct ssh flags, it is done by running:
-
-```sh
-ssh kubernetes_master -L<local-port>:localhost:8080
-```
-
-or an analogous one for your cloud provider. Afterwards you can e.g.
run - -```sh -go tool pprof http://localhost:/debug/pprof/profile -``` - -to get a 30-second CPU profile. - -## Contention profiling - -To enable contention profiling, add the line `rt.SetBlockProfileRate(1)` in addition to the `m.mux.HandleFunc(...)` lines added before (`rt` stands for `runtime` in `master.go`). This enables the 'debug/pprof/block' subpage, which can be used as input to `go tool pprof`. - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/profiling.md?pixel)]() - diff --git a/pull-requests.md b/pull-requests.md deleted file mode 100644 index 888d7320..00000000 --- a/pull-requests.md +++ /dev/null @@ -1,105 +0,0 @@ - - -- [Pull Request Process](#pull-request-process) -- [Life of a Pull Request](#life-of-a-pull-request) - - [Before sending a pull request](#before-sending-a-pull-request) - - [Release Notes](#release-notes) - - [Reviewing pre-release notes](#reviewing-pre-release-notes) - - [Visual overview](#visual-overview) -- [Other notes](#other-notes) -- [Automation](#automation) - - - -# Pull Request Process - -An overview of how pull requests are managed for Kubernetes. This document -assumes the reader has already followed the [development guide](development.md) -to set up their environment. - -# Life of a Pull Request - -Except in the last few weeks of a milestone, when we need to reduce churn and stabilize, we aim to always be accepting pull requests. - -Merging PRs is managed either manually by the [on call](on-call-rotations.md) or automatically by the [github "munger"](https://github.com/kubernetes/contrib/tree/master/mungegithub) submit-queue plugin. 
- -There are several requirements for the submit-queue to work: -* The author must have signed the CLA (the "cla: yes" label is added to the PR) -* No changes can have been made since the last lgtm label was applied -* k8s-bot must have reported that the GCE E2E build and test steps passed (Jenkins unit/integration, Jenkins e2e) - -Additionally, for infrequent or new contributors, we require the on call to apply the "ok-to-merge" label manually. This is gated by the [whitelist](https://github.com/kubernetes/contrib/blob/master/mungegithub/whitelist.txt). - -## Before sending a pull request - -The following will save time for both you and your reviewer: - -* Enable [pre-commit hooks](development.md#committing-changes-to-your-fork) and verify they pass. -* Verify `make verify` passes. -* Verify `make test` passes. -* Verify `make test-integration` passes. - -## Release Notes - -This section applies only to pull requests on the master branch. -For cherry-pick PRs, see the [Cherrypick instructions](cherry-picks.md). - -1. All pull requests are initiated with a `release-note-label-needed` label. -1. For a PR to be ready to merge, the `release-note-label-needed` label must be removed and one of the other `release-note-*` labels must be added. -1. `release-note-none` is a valid option if the PR does not need to be mentioned - at release time. -1. `release-note` labeled PRs generate a release note using the PR title by - default OR the release-note block in the PR template if filled in. - * See the [PR template](../../.github/PULL_REQUEST_TEMPLATE.md) for more - details. - * PR titles and body comments are mutable and can be modified at any time - prior to the release to reflect a release-note-friendly message. - -The only exception to these rules is when a PR is not a cherry-pick and is -targeted directly at a non-master branch. In this case, a `release-note-*` -label is required for that non-master PR. 
- -### Reviewing pre-release notes - -At any time, you can see what the release notes will look like on any branch. -(NOTE: This only works on Linux for now) - -``` -$ git pull https://github.com/kubernetes/release -$ RELNOTES=$PWD/release/relnotes -$ cd /to/your/kubernetes/repo -$ $RELNOTES -man # for details on how to use the tool -# Show release notes from the last release on a branch to HEAD -$ $RELNOTES --branch=master -``` - -## Visual overview - -![PR workflow](pr_workflow.png) - -# Other notes - -Pull requests that are purely support questions will be closed and -redirected to [stackoverflow](http://stackoverflow.com/questions/tagged/kubernetes). -We do this to consolidate help/support questions into a single channel, -improve efficiency in responding to requests and make FAQs easier -to find. - -Pull requests older than 2 weeks will be closed. Exceptions can be made -for PRs that have active review comments, or that are awaiting other dependent PRs. -Closed pull requests are easy to recreate, and little work is lost by closing a pull -request that subsequently needs to be reopened. We want to limit the total number of PRs in flight to: -* Maintain a clean project -* Remove old PRs that would be difficult to rebase as the underlying code has changed over time -* Encourage code velocity - - -# Automation - -We use a variety of automation to manage pull requests. 
This automation is described in detail -[elsewhere](automation.md). - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/pull-requests.md?pixel)]() - diff --git a/running-locally.md b/running-locally.md deleted file mode 100644 index 327d685e..00000000 --- a/running-locally.md +++ /dev/null @@ -1,170 +0,0 @@ -Getting started locally ------------------------ - -**Table of Contents** - -- [Requirements](#requirements) - - [Linux](#linux) - - [Docker](#docker) - - [etcd](#etcd) - - [go](#go) - - [OpenSSL](#openssl) -- [Clone the repository](#clone-the-repository) -- [Starting the cluster](#starting-the-cluster) -- [Running a container](#running-a-container) -- [Running a user defined pod](#running-a-user-defined-pod) -- [Troubleshooting](#troubleshooting) - - [I cannot reach service IPs on the network.](#i-cannot-reach-service-ips-on-the-network) - - [I cannot create a replication controller with replica size greater than 1! What gives?](#i-cannot-create-a-replication-controller-with-replica-size-greater-than-1--what-gives) - - [I changed Kubernetes code, how do I run it?](#i-changed-kubernetes-code-how-do-i-run-it) - - [kubectl claims to start a container but `get pods` and `docker ps` don't show it.](#kubectl-claims-to-start-a-container-but-get-pods-and-docker-ps-dont-show-it) - - [The pods fail to connect to the services by host names](#the-pods-fail-to-connect-to-the-services-by-host-names) - -### Requirements - -#### Linux - -Not running Linux? Consider running [Minikube](http://kubernetes.io/docs/getting-started-guides/minikube/), or on a cloud provider like [Google Compute Engine](../getting-started-guides/gce.md). - -#### Docker - -You need [Docker](https://docs.docker.com/installation/#installation) version 1.3 or later. Ensure the Docker daemon is running and can be contacted (try `docker ps`). Some of the Kubernetes components need to run as root, which normally works fine with Docker. 
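As a quick sanity check, something like the following can be used to confirm the daemon is reachable (a sketch; the exact messages are illustrative, and a permission error from `docker ps` usually means you need sudo or membership in the docker group):

```shell
# Check that the docker client is installed and that the daemon answers.
if command -v docker >/dev/null 2>&1; then
  docker ps >/dev/null 2>&1 \
    && echo "docker daemon reachable" \
    || echo "docker installed, but daemon not reachable"
else
  echo "docker not found on PATH"
fi
```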
- -#### etcd - -You need [etcd](https://github.com/coreos/etcd/releases) installed and available in your ``$PATH``. - -#### go - -You need [go](https://golang.org/doc/install) installed (see [here](development.md#go-versions) for supported versions) and available in your ``$PATH``. - -#### OpenSSL - -You need [OpenSSL](https://www.openssl.org/) installed. If you do not have the `openssl` command available, you may see the following error in `/tmp/kube-apiserver.log`: - -``` -server.go:333] Invalid Authentication Config: open /tmp/kube-serviceaccount.key: no such file or directory -``` - -### Clone the repository - -In order to run Kubernetes you must have the Kubernetes code on the local machine. Cloning this repository is sufficient. - -```$ git clone --depth=1 https://github.com/kubernetes/kubernetes.git``` - -The `--depth=1` parameter is optional and will ensure a smaller download. - -### Starting the cluster - -In a separate tab of your terminal, run the following (since one needs sudo access to start/stop Kubernetes daemons, it is easier to run the entire script as root): - -```sh -cd kubernetes -hack/local-up-cluster.sh -``` - -This will build and start a lightweight local cluster, consisting of a master -and a single node. Type Control-C to shut it down. - -If you've already compiled the Kubernetes components, then you can avoid rebuilding them with this script by using the `-O` flag. - -```sh -./hack/local-up-cluster.sh -O -``` - -You can use the cluster/kubectl.sh script to interact with the local cluster. hack/local-up-cluster.sh will -print the commands to run to point kubectl at the local cluster. - - -### Running a container - -Your cluster is running, and you want to start running containers! - -You can now use any of the cluster/kubectl.sh commands to interact with your local setup. 
- -```sh -cluster/kubectl.sh get pods -cluster/kubectl.sh get services -cluster/kubectl.sh get replicationcontrollers -cluster/kubectl.sh run my-nginx --image=nginx --replicas=2 --port=80 - - -## begin wait for provision to complete, you can monitor the docker pull by opening a new terminal - sudo docker images - ## you should see it pulling the nginx image, once the above command returns it - sudo docker ps - ## you should see your container running! - exit -## end wait - -## introspect Kubernetes! -cluster/kubectl.sh get pods -cluster/kubectl.sh get services -cluster/kubectl.sh get replicationcontrollers -``` - - -### Running a user defined pod - -Note the difference between a [container](../user-guide/containers.md) -and a [pod](../user-guide/pods.md). Since you only asked for the former, Kubernetes will create a wrapper pod for you. -However you cannot view the nginx start page on localhost. To verify that nginx is running you need to run `curl` within the docker container (try `docker exec`). - -You can control the specifications of a pod via a user defined manifest, and reach nginx through your browser on the port specified therein: - -```sh -cluster/kubectl.sh create -f test/fixtures/doc-yaml/user-guide/pod.yaml -``` - -Congratulations! - -### Troubleshooting - -#### I cannot reach service IPs on the network. - -Some firewall software that uses iptables may not interact well with -kubernetes. If you have trouble around networking, try disabling any -firewall or other iptables-using systems, first. Also, you can check -if SELinux is blocking anything by running a command such as `journalctl --since yesterday | grep avc`. - -By default the IP range for service cluster IPs is 10.0.*.* - depending on your -docker installation, this may conflict with IPs for containers. If you find -containers running with IPs in this range, edit hack/local-cluster-up.sh and -change the service-cluster-ip-range flag to something else. 
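If you suspect such a conflict, one way to check is to list the IP of every running container and look for addresses in the default range (a sketch assuming a reasonably recent `docker inspect`; adjust the range to match your configuration):

```shell
# Print each running container's name and IP, then flag any that fall
# inside the default service cluster IP range (10.0.*.*).
for c in $(docker ps -q); do
  docker inspect --format '{{.Name}} {{.NetworkSettings.IPAddress}}' "$c"
done | grep ' 10\.0\.' \
  && echo "conflict: container IPs overlap the service IP range"
```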
- -#### I cannot create a replication controller with replica size greater than 1! What gives? - -You are running a single-node setup. This has the limitation of only supporting a single replica of a given pod. If you are interested in running with larger replica sizes, we encourage you to try the local vagrant setup or one of the cloud providers. - -#### I changed Kubernetes code, how do I run it? - -```sh -cd kubernetes -make -hack/local-up-cluster.sh -``` - -#### kubectl claims to start a container but `get pods` and `docker ps` don't show it. - -One or more of the Kubernetes daemons might've crashed. Tail the logs of each in /tmp. - -#### The pods fail to connect to the services by host names - -To start the DNS service, you need to set the following variables: - -```sh -KUBE_ENABLE_CLUSTER_DNS=true -KUBE_DNS_SERVER_IP="10.0.0.10" -KUBE_DNS_DOMAIN="cluster.local" -KUBE_DNS_REPLICAS=1 -``` - -To learn more about the DNS service, see [here](http://issue.k8s.io/6667). Related documents can be found [here](../../build-tools/kube-dns/#how-do-i-configure-it). - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/running-locally.md?pixel)]() - diff --git a/scheduler.md b/scheduler.md deleted file mode 100755 index b1cfea7a..00000000 --- a/scheduler.md +++ /dev/null @@ -1,72 +0,0 @@ -# The Kubernetes Scheduler - -The Kubernetes scheduler runs as a process alongside the other master -components such as the API server. Its interface to the API server is to watch -for Pods with an empty PodSpec.NodeName, and for each Pod, it posts a Binding -indicating where the Pod should be scheduled. - -## The scheduling process - -``` - +-------+ - +---------------+ node 1| - | +-------+ - | - +----> | Apply pred. 
filters - | | - | | +-------+ - | +----+---------->+node 2 | - | | +--+----+ - | watch | | - | | | +------+ - | +---------------------->+node 3| -+--+---------------+ | +--+---+ -| Pods in apiserver| | | -+------------------+ | | - | | - | | - +------------V------v--------+ - | Priority function | - +-------------+--------------+ - | - | node 1: p=2 - | node 2: p=5 - v - select max{node priority} = node 2 - -``` - -The Scheduler tries to find a node for each Pod, one at a time. -- First it applies a set of "predicates" to filter out inappropriate nodes. For example, if the PodSpec specifies resource requests, then the scheduler will filter out nodes that don't have at least that much resources available (computed as the capacity of the node minus the sum of the resource requests of the containers that are already running on the node). -- Second, it applies a set of "priority functions" -that rank the nodes that weren't filtered out by the predicate check. For example, it tries to spread Pods across nodes and zones while at the same time favoring the least (theoretically) loaded nodes (where "load" - in theory - is measured as the sum of the resource requests of the containers running on the node, divided by the node's capacity). -- Finally, the node with the highest priority is chosen (or, if there are multiple such nodes, then one of them is chosen at random). The code for this main scheduling loop is in the function `Schedule()` in [plugin/pkg/scheduler/generic_scheduler.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/generic_scheduler.go) - -## Scheduler extensibility - -The scheduler is extensible: the cluster administrator can choose which of the pre-defined -scheduling policies to apply, and can add new ones. 
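The two-phase loop described above (predicate filtering, then priority ranking) can be sketched in schematic Go. This is an illustration of the control flow only; the types are simplified stand-ins, not the actual scheduler interfaces:

```go
package main

import "fmt"

// Node is a simplified stand-in with a single resource dimension.
type Node struct {
	Name     string
	Capacity int
	Used     int
}

// A Predicate filters out nodes that cannot run the pod at all.
type Predicate func(podRequest int, n Node) bool

// A Priority scores the nodes that survived filtering (higher is better).
type Priority func(podRequest int, n Node) int

// schedule mirrors the two-phase flow: apply the predicate, then pick the
// highest-priority survivor (ties broken arbitrarily here).
func schedule(podRequest int, nodes []Node, pred Predicate, prio Priority) (string, bool) {
	best, bestScore, found := "", -1, false
	for _, n := range nodes {
		if !pred(podRequest, n) {
			continue // filtered out by the predicate phase
		}
		if s := prio(podRequest, n); s > bestScore {
			best, bestScore, found = n.Name, s, true
		}
	}
	return best, found
}

func main() {
	nodes := []Node{{"node1", 10, 9}, {"node2", 10, 5}, {"node3", 10, 7}}
	fits := func(req int, n Node) bool { return n.Capacity-n.Used >= req }
	leastLoaded := func(req int, n Node) int { return n.Capacity - n.Used - req }
	name, ok := schedule(2, nodes, fits, leastLoaded)
	fmt.Println(name, ok) // node1 is filtered out; node2 has the most free room
}
```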
- -### Policies (Predicates and Priorities) - -The built-in predicates and priorities are -defined in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/predicates/predicates.go) and -[plugin/pkg/scheduler/algorithm/priorities/priorities.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/priorities/priorities.go), respectively. - -### Modifying policies - -The policies that are applied when scheduling can be chosen in one of two ways. Normally, -the policies used are selected by the functions `defaultPredicates()` and `defaultPriorities()` in -[plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). -However, the choice of policies can be overridden by passing the command-line flag `--policy-config-file` to the scheduler, pointing to a JSON file specifying which scheduling policies to use. See [examples/scheduler-policy-config.json](../../examples/scheduler-policy-config.json) for an example -config file. (Note that the config file format is versioned; the API is defined in [plugin/pkg/scheduler/api](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/api/)). -Thus to add a new scheduling policy, you should modify predicates.go or priorities.go, and either register the policy in `defaultPredicates()` or `defaultPriorities()`, or use a policy config file. 
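For illustration, a `--policy-config-file` payload might look like the following sketch. The schema is versioned, and the policy names here are taken from the predicates and priorities named elsewhere in these docs; check examples/scheduler-policy-config.json and the registered names for your version before relying on it:

```json
{
  "kind": "Policy",
  "apiVersion": "v1",
  "predicates": [
    {"name": "PodFitsHostPorts"},
    {"name": "MatchNodeSelector"}
  ],
  "priorities": [
    {"name": "LeastRequestedPriority", "weight": 1},
    {"name": "BalancedResourceAllocation", "weight": 1}
  ]
}
```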
- -## Exploring the code - -If you want to get a global picture of how the scheduler works, you can start in -[plugin/cmd/kube-scheduler/app/server.go](http://releases.k8s.io/HEAD/plugin/cmd/kube-scheduler/app/server.go) - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/scheduler.md?pixel)]() - diff --git a/scheduler_algorithm.md b/scheduler_algorithm.md deleted file mode 100755 index 28c6c2bc..00000000 --- a/scheduler_algorithm.md +++ /dev/null @@ -1,44 +0,0 @@ -# Scheduler Algorithm in Kubernetes - -For each unscheduled Pod, the Kubernetes scheduler tries to find a node across the cluster according to a set of rules. A general introduction to the Kubernetes scheduler can be found at [scheduler.md](scheduler.md). In this document, the algorithm of how to select a node for the Pod is explained. There are two steps before a destination node of a Pod is chosen. The first step is filtering all the nodes and the second is ranking the remaining nodes to find a best fit for the Pod. - -## Filtering the nodes - -The purpose of filtering the nodes is to filter out the nodes that do not meet certain requirements of the Pod. For example, if the free resource on a node (measured by the capacity minus the sum of the resource requests of all the Pods that already run on the node) is less than the Pod's required resource, the node should not be considered in the ranking phase so it is filtered out. Currently, there are several "predicates" implementing different filtering policies, including: - -- `NoDiskConflict`: Evaluate if a pod can fit due to the volumes it requests, and those that are already mounted. Currently supported volumes are: AWS EBS, GCE PD, and Ceph RBD. Only Persistent Volume Claims for those supported types are checked. Persistent Volumes added directly to pods are not evaluated and are not constrained by this policy. 
-- `NoVolumeZoneConflict`: Evaluate if the volumes a pod requests are available on the node, given the Zone restrictions. -- `PodFitsResources`: Check if the free resource (CPU and Memory) meets the requirement of the Pod. The free resource is measured by the capacity minus the sum of requests of all Pods on the node. To learn more about the resource QoS in Kubernetes, please check the [QoS proposal](../design/resource-qos.md). -- `PodFitsHostPorts`: Check if any HostPort required by the Pod is already occupied on the node. -- `HostName`: Filter out all nodes except the one specified in the PodSpec's NodeName field. -- `MatchNodeSelector`: Check if the labels of the node match the labels specified in the Pod's `nodeSelector` field and, as of Kubernetes v1.2, also match the `scheduler.alpha.kubernetes.io/affinity` pod annotation if present. See [here](../user-guide/node-selection/) for more details on both. -- `MaxEBSVolumeCount`: Ensure that the number of attached ElasticBlockStore volumes does not exceed a maximum value (by default, 39, since Amazon recommends a maximum of 40 with one of those 40 reserved for the root volume -- see [Amazon's documentation](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/volume_limits.html#linux-specific-volume-limits)). The maximum value can be controlled by setting the `KUBE_MAX_PD_VOLS` environment variable. -- `MaxGCEPDVolumeCount`: Ensure that the number of attached GCE PersistentDisk volumes does not exceed a maximum value (by default, 16, which is the maximum GCE allows -- see [GCE's documentation](https://cloud.google.com/compute/docs/disks/persistent-disks#limits_for_predefined_machine_types)). The maximum value can be controlled by setting the `KUBE_MAX_PD_VOLS` environment variable. -- `CheckNodeMemoryPressure`: Check if a pod can be scheduled on a node reporting a memory pressure condition. Currently, no ``BestEffort`` pods should be placed on a node under memory pressure, as they get automatically evicted by the kubelet. 
-- `CheckNodeDiskPressure`: Check if a pod can be scheduled on a node reporting a disk pressure condition. Currently, no pods should be placed on a node under disk pressure, as they get automatically evicted by the kubelet. - -The details of the above predicates can be found in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/predicates/predicates.go). All predicates mentioned above can be used in combination to perform a sophisticated filtering policy. Kubernetes uses some, but not all, of these predicates by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). - -## Ranking the nodes - -The filtered nodes are considered suitable to host the Pod, and often more than one node remains. Kubernetes prioritizes the remaining nodes to find the "best" one for the Pod. The prioritization is performed by a set of priority functions. For each remaining node, a priority function gives a score on a scale from 0 to 10, with 10 meaning "most preferred" and 0 "least preferred". Each priority function is weighted by a positive number, and the final score of each node is calculated by adding up all the weighted scores. For example, suppose there are two priority functions, `priorityFunc1` and `priorityFunc2`, with weighting factors `weight1` and `weight2` respectively; then the final score of some NodeA is: - - finalScoreNodeA = (weight1 * priorityFunc1) + (weight2 * priorityFunc2) - -After the scores of all nodes are calculated, the node with the highest score is chosen as the host of the Pod. If more than one node ties for the highest score, one of them is chosen at random. 
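As a concrete instance of the weighted-sum formula above (the weights and per-function scores here are made-up numbers for illustration):

```go
package main

import "fmt"

// finalScore implements the weighted sum described above:
// final = w1*s1 + w2*s2 + ... for one node, where each s is a
// priority-function score on the 0-10 scale.
func finalScore(weights, scores []int) int {
	total := 0
	for i := range weights {
		total += weights[i] * scores[i]
	}
	return total
}

func main() {
	// Two priority functions with weights 1 and 2; nodeA scores 8 and 3,
	// nodeB scores 5 and 6.
	nodeA := finalScore([]int{1, 2}, []int{8, 3}) // 1*8 + 2*3 = 14
	nodeB := finalScore([]int{1, 2}, []int{5, 6}) // 1*5 + 2*6 = 17
	fmt.Println(nodeA, nodeB) // nodeB wins with the higher total
}
```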
- -Currently, the Kubernetes scheduler provides some practical priority functions, including: - -- `LeastRequestedPriority`: The node is prioritized based on the fraction of the node that would be free if the new Pod were scheduled onto the node. (In other words, (capacity - sum of requests of all Pods already on the node - request of Pod that is being scheduled) / capacity). CPU and memory are equally weighted. The node with the highest free fraction is the most preferred. Note that this priority function has the effect of spreading Pods across the nodes with respect to resource consumption. -- `BalancedResourceAllocation`: This priority function tries to put the Pod on a node such that the CPU and Memory utilization rate is balanced after the Pod is deployed. -- `SelectorSpreadPriority`: Spread Pods by minimizing the number of Pods belonging to the same service, replication controller, or replica set on the same node. If zone information is present on the nodes, the priority will be adjusted so that pods are spread across zones and nodes. -- `CalculateAntiAffinityPriority`: Spread Pods by minimizing the number of Pods belonging to the same service on nodes with the same value for a particular label. -- `ImageLocalityPriority`: Nodes are prioritized based on the locality of the images requested by a pod: nodes that already hold a larger total size of the images required by the pod are preferred over nodes that hold few or none of those images. -- `NodeAffinityPriority`: (Kubernetes v1.2) Implements `preferredDuringSchedulingIgnoredDuringExecution` node affinity; see [here](../user-guide/node-selection/) for more details. - -The details of the above priority functions can be found in [plugin/pkg/scheduler/algorithm/priorities](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/priorities/). Kubernetes uses some, but not all, of these priority functions by default. 
You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). Similarly to predicates, you can combine the above priority functions and assign weight factors (positive numbers) to them as you want (check [scheduler.md](scheduler.md) for how to customize). - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/scheduler_algorithm.md?pixel)]() - diff --git a/testing.md b/testing.md deleted file mode 100644 index 45848f3b..00000000 --- a/testing.md +++ /dev/null @@ -1,230 +0,0 @@ -# Testing guide - -Updated: 5/21/2016 - -**Table of Contents** - - -- [Testing guide](#testing-guide) - - [Unit tests](#unit-tests) - - [Run all unit tests](#run-all-unit-tests) - - [Set go flags during unit tests](#set-go-flags-during-unit-tests) - - [Run unit tests from certain packages](#run-unit-tests-from-certain-packages) - - [Run specific unit test cases in a package](#run-specific-unit-test-cases-in-a-package) - - [Stress running unit tests](#stress-running-unit-tests) - - [Unit test coverage](#unit-test-coverage) - - [Benchmark unit tests](#benchmark-unit-tests) - - [Integration tests](#integration-tests) - - [Install etcd dependency](#install-etcd-dependency) - - [Etcd test data](#etcd-test-data) - - [Run integration tests](#run-integration-tests) - - [Run a specific integration test](#run-a-specific-integration-test) - - [End-to-End tests](#end-to-end-tests) - - - -This assumes you have already read the [development guide](development.md) to -install go, godeps, and configure your git client. All command examples are -relative to the `kubernetes` root directory. - -Before sending pull requests you should at least make sure your changes have -passed both unit and integration tests. 
- -Kubernetes only merges pull requests when unit, integration, and e2e tests are -passing, so it is often a good idea to make sure the e2e tests work as well. - -## Unit tests - -* Unit tests should be fully hermetic - - Only access resources in the test binary. -* All packages and any significant files require unit tests. -* The preferred method of testing multiple scenarios or input is - [table driven testing](https://github.com/golang/go/wiki/TableDrivenTests) - - Example: [TestNamespaceAuthorization](../../test/integration/auth/auth_test.go) -* Unit tests must pass on OS X and Windows platforms. - - Tests using linux-specific features must be skipped or compiled out. - - Skipped is better, compiled out is required when it won't compile. -* Concurrent unit test runs must pass. -* See [coding conventions](coding-conventions.md). - -### Run all unit tests - -`make test` is the entrypoint for running the unit tests that ensures that -`GOPATH` is set up correctly. If you have `GOPATH` set up correctly, you can -also just use `go test` directly. - -```sh -cd kubernetes -make test # Run all unit tests. -``` - -### Set go flags during unit tests - -You can set [go flags](https://golang.org/cmd/go/) by setting the -`KUBE_GOFLAGS` environment variable. - -### Run unit tests from certain packages - -`make test` accepts packages as arguments; the `k8s.io/kubernetes` prefix is -added automatically to these: - -```sh -make test WHAT=pkg/api # run tests for pkg/api -``` - -To run multiple targets you need quotes: - -```sh -make test WHAT="pkg/api pkg/kubelet" # run tests for pkg/api and pkg/kubelet -``` - -In a shell, it's often handy to use brace expansion: - -```sh -make test WHAT=pkg/{api,kubelet} # run tests for pkg/api and pkg/kubelet -``` - -### Run specific unit test cases in a package - -You can set the test args using the `KUBE_TEST_ARGS` environment variable. 
-You can use this to pass the `-run` argument to `go test`, which accepts a -regular expression for the name of the test that should be run. - -```sh -# Runs TestValidatePod in pkg/api/validation with the verbose flag set -make test WHAT=pkg/api/validation KUBE_GOFLAGS="-v" KUBE_TEST_ARGS='-run ^TestValidatePod$' - -# Runs tests that match the regex ValidatePod|ValidateConfigMap in pkg/api/validation -make test WHAT=pkg/api/validation KUBE_GOFLAGS="-v" KUBE_TEST_ARGS="-run ValidatePod\|ValidateConfigMap$" -``` - -For other supported test flags, see the [golang -documentation](https://golang.org/cmd/go/#hdr-Description_of_testing_flags). - -### Stress running unit tests - -Running the same tests repeatedly is one way to root out flakes. -You can do this efficiently. - -```sh -# Have 2 workers run all tests 5 times each (10 total iterations). -make test PARALLEL=2 ITERATION=5 -``` - -For more advanced ideas please see [flaky-tests.md](flaky-tests.md). - -### Unit test coverage - -Currently, collecting coverage is only supported for the Go unit tests. - -To run all unit tests and generate an HTML coverage report, run the following: - -```sh -make test KUBE_COVER=y -``` - -At the end of the run, an HTML report will be generated with the path -printed to stdout. - -To run tests and collect coverage in only one package, pass its relative path -under the `kubernetes` directory as an argument, for example: - -```sh -make test WHAT=pkg/kubectl KUBE_COVER=y -``` - -Multiple arguments can be passed, in which case the coverage results will be -combined for all tests run. - -### Benchmark unit tests - -To run benchmark tests, you'll typically use something like: - -```sh -go test ./pkg/apiserver -benchmem -run=XXX -bench=BenchmarkWatch -``` - -This will do the following: - -1. `-run=XXX` is a regular expression filter on the name of test cases to run -2. 
`-bench=BenchmarkWatch` will run test methods with BenchmarkWatch in the name - * See `grep -nr BenchmarkWatch .` for examples -3. `-benchmem` enables memory allocation stats - -See `go help test` and `go help testflag` for additional info. - -## Integration tests - -* Integration tests should only access other resources on the local machine - - Most commonly etcd or a service listening on localhost. -* All significant features require integration tests. - - This includes kubectl commands -* The preferred method of testing multiple scenarios or inputs -is [table driven testing](https://github.com/golang/go/wiki/TableDrivenTests) - - Example: [TestNamespaceAuthorization](../../test/integration/auth/auth_test.go) -* Each test should create its own master, httpserver and config. - - Example: [TestPodUpdateActiveDeadlineSeconds](../../test/integration/pods/pods_test.go) -* See [coding conventions](coding-conventions.md). - -### Install etcd dependency - -Kubernetes integration tests require your `PATH` to include an -[etcd](https://github.com/coreos/etcd/releases) installation. Kubernetes -includes a script to help install etcd on your machine. - -```sh -# Install etcd and add to PATH - -# Option a) install inside kubernetes root -hack/install-etcd.sh # Installs in ./third_party/etcd -echo export PATH="\$PATH:$(pwd)/third_party/etcd" >> ~/.profile # Add to PATH - -# Option b) install manually -grep -E "image.*etcd" cluster/saltbase/etcd/etcd.manifest # Find version -# Install that version using yum/apt-get/etc -echo export PATH="\$PATH:" >> ~/.profile # Add to PATH -``` - -### Etcd test data - -Many tests start an etcd server internally, storing test data in the operating system's temporary directory. - -If you see test failures because the temporary directory does not have sufficient space, -or is on a volume with unpredictable write latency, you can override the test data directory -for those internal etcd instances with the `TEST_ETCD_DIR` environment variable. 
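For example (the directory here is an arbitrary illustration; any writable path on a fast local disk works):

```shell
# Use a dedicated scratch directory for the etcd instances started by the
# integration tests, instead of the system temp dir.
mkdir -p /var/tmp/etcd-test-data
TEST_ETCD_DIR=/var/tmp/etcd-test-data make test-integration
```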
- -### Run integration tests - -The integration tests are run using `make test-integration`. -The Kubernetes integration tests are written using the normal golang testing -package but expect to have a running etcd instance to connect to. The `test-integration.sh` script wraps `make test` and sets up an etcd instance -for the integration tests to use. - -```sh -make test-integration # Run all integration tests. -``` - -This script runs the golang tests in package -[`test/integration`](../../test/integration/). - -### Run a specific integration test - -You can also use the `KUBE_TEST_ARGS` environment variable with the `hack/test-integration.sh` script to run a specific integration test case: - -```sh -# Run integration test TestPodUpdateActiveDeadlineSeconds with the verbose flag set. -make test-integration KUBE_GOFLAGS="-v" KUBE_TEST_ARGS="-run ^TestPodUpdateActiveDeadlineSeconds$" -``` - -If you set `KUBE_TEST_ARGS`, the test case will be run with only the `v1` API -version and the watch cache test is skipped. - -## End-to-End tests - -Please refer to [End-to-End Testing in Kubernetes](e2e-tests.md). 
- - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/testing.md?pixel)]() - diff --git a/update-release-docs.md b/update-release-docs.md deleted file mode 100644 index 1e0988db..00000000 --- a/update-release-docs.md +++ /dev/null @@ -1,115 +0,0 @@ -# Table of Contents - - - -- [Table of Contents](#table-of-contents) -- [Overview](#overview) -- [Adding a new docs collection for a release](#adding-a-new-docs-collection-for-a-release) -- [Updating docs in an existing collection](#updating-docs-in-an-existing-collection) - - [Updating docs on HEAD](#updating-docs-on-head) - - [Updating docs in release branch](#updating-docs-in-release-branch) - - [Updating docs in gh-pages branch](#updating-docs-in-gh-pages-branch) - - - -# Overview - -This document explains how to update the kubernetes release docs hosted at http://kubernetes.io/docs/. - -http://kubernetes.io is served using the [gh-pages -branch](https://github.com/kubernetes/kubernetes/tree/gh-pages) of the kubernetes repo on github. -Updating docs in that branch will update http://kubernetes.io. - -There are 2 scenarios which require updating docs: -* Adding a new docs collection for a release. -* Updating docs in an existing collection. - -# Adding a new docs collection for a release - -Whenever a new release series (`release-X.Y`) is cut from `master`, we push the -corresponding set of docs to `http://kubernetes.io/vX.Y/docs`. The steps are as follows: - -* Create a `_vX.Y` folder in the `gh-pages` branch. -* Add `vX.Y` as a valid collection in [_config.yml](https://github.com/kubernetes/kubernetes/blob/gh-pages/_config.yml). -* Create a new `_includes/nav_vX.Y.html` file with the navigation menu. This can - be a copy of `_includes/nav_vX.Y-1.html` with links to new docs added and links - to deleted docs removed. Update [_layouts/docwithnav.html](https://github.com/kubernetes/kubernetes/blob/gh-pages/_layouts/docwithnav.html) - to include this new navigation html file. 
Example PR: [#16143](https://github.com/kubernetes/kubernetes/pull/16143). -* [Pull docs from release branch](#updating-docs-in-gh-pages-branch) into the `_vX.Y` - folder. - -Once these changes have been submitted, you should be able to reach the docs at -`http://kubernetes.io/vX.Y/docs/` where you can test them. - -To make `X.Y` the default version of docs: - -* Update [_config.yml](https://github.com/kubernetes/kubernetes/blob/gh-pages/_config.yml) - and [_docs/index.md](https://github.com/kubernetes/kubernetes/blob/gh-pages/_docs/index.md) - to point to the new version. Example PR: [#16416](https://github.com/kubernetes/kubernetes/pull/16416). -* Update [_includes/docversionselector.html](https://github.com/kubernetes/kubernetes/blob/gh-pages/_includes/docversionselector.html) - to make `vX.Y` the default version. -* Add "Disallow: /vX.Y-1/" to the existing [robots.txt](https://github.com/kubernetes/kubernetes/blob/gh-pages/robots.txt) - file to hide old content from web crawlers and focus SEO on the new docs. Example PR: - [#16388](https://github.com/kubernetes/kubernetes/pull/16388). -* Regenerate [sitemap.xml](https://github.com/kubernetes/kubernetes/blob/gh-pages/sitemap.xml) - so that it contains the `vX.Y` links. The sitemap can be regenerated using - https://www.xml-sitemaps.com. Example PR: [#17126](https://github.com/kubernetes/kubernetes/pull/17126). -* Resubmit the updated sitemap file to [Google - webmasters](https://www.google.com/webmasters/tools/sitemap-list?siteUrl=http://kubernetes.io/) for Google to index the new links. -* Update [_layouts/docwithnav.html](https://github.com/kubernetes/kubernetes/blob/gh-pages/_layouts/docwithnav.html) - to include [_includes/archivedocnotice.html](https://github.com/kubernetes/kubernetes/blob/gh-pages/_includes/archivedocnotice.html) - for the `vX.Y-1` docs which need to be archived. -* Ping @thockin to update docs.k8s.io to redirect to `http://kubernetes.io/vX.Y/`. 
[#18788](https://github.com/kubernetes/kubernetes/issues/18788). - -http://kubernetes.io/docs/ should now be redirecting to `http://kubernetes.io/vX.Y/`. - -# Updating docs in an existing collection - -The high-level steps to update docs in an existing collection are: - -1. Update docs on `HEAD` (master branch). -2. Cherrypick the change into the relevant release branch. -3. Update docs on `gh-pages`. - -## Updating docs on HEAD - -The [Development guide](development.md) provides general instructions on how to contribute to the kubernetes github repo. -The [Docs how-to guide](how-to-doc.md) provides conventions to follow while writing docs. - -## Updating docs in release branch - -Once docs have been updated in the master branch, the changes need to be -cherrypicked into the latest release branch. -The [Cherrypick guide](cherry-picks.md) has more details on how to cherrypick your change. - -## Updating docs in gh-pages branch - -Once the release branch has all the relevant changes, we can pull the latest docs -into the `gh-pages` branch. -Run the following command in the `gh-pages` branch to update docs for release `X.Y`: - -``` -_tools/import_docs vX.Y _vX.Y release-X.Y release-X.Y -``` - -For example, to pull in docs for release 1.1, run: - -``` -_tools/import_docs v1.1 _v1.1 release-1.1 release-1.1 -``` - -Apart from copying over the docs, `_tools/import_docs` also does some post-processing -(like updating the links to docs to point to http://kubernetes.io/docs/ instead of pointing to the github repo). -Note that we always pull in the docs from the release branch and not from master (pulling docs -from master requires some extra processing, like versioning the links and removing unversioned warnings). - -We delete all existing docs before pulling in new ones to ensure that deleted -docs go away. - -If the change added or deleted a doc, then update the corresponding `_includes/nav_vX.Y.html` file as well. 
- - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/update-release-docs.md?pixel)]() - diff --git a/updating-docs-for-feature-changes.md b/updating-docs-for-feature-changes.md deleted file mode 100644 index 309b809d..00000000 --- a/updating-docs-for-feature-changes.md +++ /dev/null @@ -1,76 +0,0 @@ -# How to update docs for new Kubernetes features - -This document describes things to consider when updating Kubernetes docs for new features or changes to existing features (including removing features). - -## Who should read this doc? - -Anyone making user-facing changes to Kubernetes. This is especially important for API changes or anything impacting the getting started experience. - -## What docs changes are needed when adding or updating a feature in Kubernetes? - -### When making API changes - -*e.g. adding Deployments* -* Always make sure docs for downstream effects are updated *(StatefulSet -> PVC, Deployment -> ReplicationController)* -* Add or update the corresponding *[Glossary](http://kubernetes.io/docs/reference/)* item -* Verify the guides / walkthroughs do not require any changes: - * **If your change will be recommended over the approaches shown in these guides, then they must be updated to reflect your change** - * [Hello Node](http://kubernetes.io/docs/hellonode/) - * [K8s101](http://kubernetes.io/docs/user-guide/walkthrough/) - * [K8S201](http://kubernetes.io/docs/user-guide/walkthrough/k8s201/) - * [Guest-book](https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/guestbook) - * [Thorough-walkthrough](http://kubernetes.io/docs/user-guide/) -* Verify the [landing page examples](http://kubernetes.io/docs/samples/) do not require any changes (those under "Recently updated samples") - * **If your change will be recommended over the approaches shown in the "Updated" examples, then they must be updated to reflect your change** - * If you are aware that your change will be recommended over the approaches 
shown in non-"Updated" examples, create an Issue -* Verify the collection of docs under the "Guides" section does not require updates (you may need to use grep for this until our docs are more organized) - -### When making Tools changes - -*e.g. updating kube-dash or kubectl* -* If changing kubectl, verify the guides / walkthroughs do not require any changes: - * **If your change will be recommended over the approaches shown in these guides, then they must be updated to reflect your change** - * [Hello Node](http://kubernetes.io/docs/hellonode/) - * [K8s101](http://kubernetes.io/docs/user-guide/walkthrough/) - * [K8S201](http://kubernetes.io/docs/user-guide/walkthrough/k8s201/) - * [Guest-book](https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/guestbook) - * [Thorough-walkthrough](http://kubernetes.io/docs/user-guide/) -* If updating an existing tool - * Search for any docs about the tool and update them -* If adding a new tool for end users - * Add a new page under [Guides](http://kubernetes.io/docs/) -* **If removing a tool (kube-ui), make sure documentation that references it is updated appropriately!** - -### When making cluster setup changes - -*e.g. adding Multi-AZ support* -* Update the relevant [Administering Clusters](http://kubernetes.io/docs/) pages - -### When making Kubernetes binary changes - -*e.g. adding a flag, changing Pod GC behavior, etc.* -* Add or update a page under [Configuring Kubernetes](http://kubernetes.io/docs/) - -## Where do the docs live? - -1. Most external user-facing docs live in the [kubernetes/docs](https://github.com/kubernetes/kubernetes.github.io) repo - * Also see the *[general instructions](http://kubernetes.io/editdocs/)* for making changes to the docs website -2. Internal design and development docs live in the [kubernetes/kubernetes](https://github.com/kubernetes/kubernetes) repo - -## Who should help review docs changes? 
- -* cc *@kubernetes/docs* -* Changes to the [kubernetes/docs](https://github.com/kubernetes/kubernetes.github.io) repo must have both a Technical Review and a Docs Review - -## Tips for writing new docs - -* Try to keep new docs small and focused -* Document prerequisites (if they exist) -* Document what concepts will be covered in the document -* Include screenshots or pictures in documents for GUIs -* *TODO once we have a standard widget set we are happy with* - include diagrams to help describe complex ideas (not required yet) - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/updating-docs-for-feature-changes.md?pixel)]() - diff --git a/writing-a-getting-started-guide.md b/writing-a-getting-started-guide.md deleted file mode 100644 index b1d65d60..00000000 --- a/writing-a-getting-started-guide.md +++ /dev/null @@ -1,101 +0,0 @@ -# Writing a Getting Started Guide - -This page gives some advice for anyone planning to write or update a Getting Started Guide for Kubernetes. -It also gives some guidelines which reviewers should follow when reviewing a pull request for a -guide. - -A Getting Started Guide provides instructions on how to create a Kubernetes cluster on top of a particular -type(s) of infrastructure. Infrastructure includes: the IaaS provider for VMs; -the node OS; inter-node networking; and the node Configuration Management system. -A guide refers to scripts, Configuration Management files, and/or binary assets such as RPMs. We call -the combination of all these things needed to run on a particular type of infrastructure a -**distro**. - -[The Matrix](../../docs/getting-started-guides/README.md) lists the distros. If there is already a guide -which is similar to the one you have planned, consider improving that one. - - -Distros fall into two categories: - - **versioned distros** are tested to work with a particular binary release of Kubernetes. 
These - come in a wide variety, reflecting a wide range of ideas and preferences in how to run a cluster. - - **development distros** are tested to work with the latest Kubernetes source code. But, there are - relatively few of these and the bar is much higher for creating one. They must support - fully automated cluster creation, deletion, and upgrade. - -There are different guidelines for each. - -## Versioned Distro Guidelines - -These guidelines say *what* to do. See the Rationale section for *why*. - - Send us a PR. - - Put the instructions in `docs/getting-started-guides/...`. Scripts go there too. This helps devs easily - search for uses of flags by guides. - - We may ask that you host binary assets or large amounts of code in our `contrib` directory or on your - own repo. - - Add or update a row in [The Matrix](../../docs/getting-started-guides/README.md). - - Clearly state the binary version of Kubernetes that you tested in your guide doc. - - Set up a cluster and run the [conformance tests](e2e-tests.md#conformance-tests) against it, and report the - results in your PR. - - Versioned distros should typically not modify or add code in `cluster/`. That contains scripts for development - distros. - - When a new major or minor release of Kubernetes comes out, we may also release a new - conformance test, and require a new conformance test run to earn a conformance checkmark. - -If you have a cluster partially working, but doing all the above steps seems like too much work, -we still want to hear from you. We suggest you write a blog post or a Gist, and we will link to it on our wiki page. -Just file an issue or chat with us on [Slack](http://slack.kubernetes.io) and one of the committers will link to it from the wiki. - -## Development Distro Guidelines - -These guidelines say *what* to do. See the Rationale section for *why*. - - The main reason to add a new development distro is to support a new IaaS provider (VM and - network management). 
This means implementing a new `pkg/cloudprovider/providers/$IAAS_NAME`. - - Development distros should use Saltstack for Configuration Management. - - Development distros need to support automated cluster creation, deletion, upgrading, etc. - This means writing scripts in `cluster/$IAAS_NAME`. - - All commits to the tip of this repo must not break any of the development distros. - - The author of the change is responsible for making the changes necessary on all the cloud-providers if the - change affects any of them, and reverting the change if it breaks any of the CIs. - - A development distro needs to have an organization which owns it. This organization needs to: - - Set up and maintain Continuous Integration that runs e2e frequently (multiple times per day) against the - distro at head, and which notifies all devs of breakage. - - Be reasonably available for questions and assist with - refactoring and feature additions that affect code for their IaaS. - -## Rationale - - - We want people to create Kubernetes clusters with whatever IaaS, Node OS, - configuration management tools, and so on, which they are familiar with. The - guidelines for **versioned distros** are designed for flexibility. - - We want developers to be able to work without understanding all the permutations of - IaaS, NodeOS, and configuration management. The guidelines for **development distros** are designed - for consistency. - - We want users to have a uniform experience with Kubernetes whenever they follow instructions anywhere - in our Github repository. So, we ask that versioned distros pass a **conformance test** to make sure - they really work. - - We want to **limit the number of development distros** for several reasons. Developers should - only have to change a limited number of places to add a new feature. 
Also, since we will - gate commits on passing CI for all distros, and since end-to-end tests are typically somewhat - flaky, it would be highly likely for there to be false positives and CI backlogs with many CI pipelines. - - We do not require versioned distros to do **CI** for several reasons. It is a steep - learning curve to understand our automated testing scripts. And it is considerable effort - to fully automate setup and teardown of a cluster, which is needed for CI. And, not everyone - has the time and money to run CI. We do not want to - discourage people from writing and sharing guides because of this. - - Versioned distro authors are free to run their own CI and let us know if there is breakage, but we - will not include them as commit hooks -- there cannot be so many commit checks that it is impossible - to pass them all. - - We prefer a single Configuration Management tool for development distros. If there were more - than one, the core developers would have to learn multiple tools and update config in multiple - places. **Saltstack** happens to be the one we picked when we started the project. We - welcome versioned distros that use any tool; there are already examples of - CoreOS Fleet, Ansible, and others. - - You can still run code from head or your own branch - if you use another Configuration Management tool -- you just have to do some manual steps - during testing and deployment. 
- - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/writing-a-getting-started-guide.md?pixel)]() - diff --git a/writing-good-e2e-tests.md b/writing-good-e2e-tests.md deleted file mode 100644 index ab13aff2..00000000 --- a/writing-good-e2e-tests.md +++ /dev/null @@ -1,235 +0,0 @@ -# Writing good e2e tests for Kubernetes # - -## Patterns and Anti-Patterns ## - -### Goals of e2e tests ### - -Beyond the obvious goal of providing end-to-end system test coverage, -there are a few less obvious goals that you should bear in mind when -designing, writing and debugging your end-to-end tests. In -particular, "flaky" tests, which pass most of the time but fail -intermittently for difficult-to-diagnose reasons, are extremely costly -in terms of blurring our regression signals and slowing down our -automated merge queue. Up-front time and effort designing your test -to be reliable is very well spent. Bear in mind that we have hundreds -of tests, each running in dozens of different environments, and if any -test in any test environment fails, we have to assume that we -potentially have some sort of regression. So if a significant number -of tests fail even only 1% of the time, basic statistics dictate that -we will almost never have a "green" regression indicator. Stated -another way, writing a test that is only 99% reliable is just about -useless in the harsh reality of a CI environment. In fact, it's worse -than useless, because not only does it not provide a reliable -regression indicator, but it also costs a lot of subsequent debugging -time, and delayed merges. - -#### Debuggability #### - -If your test fails, it should provide as detailed a reason as possible -for the failure in its output. "Timeout" is not a useful error -message. "Timed out after 60 seconds waiting for pod xxx to enter -running state, still in pending state" is much more useful to someone -trying to figure out why your test failed and what to do about it. 
-Specifically, -[assertion](https://onsi.github.io/gomega/#making-assertions) code -like the following generates rather useless errors: - -``` -Expect(err).NotTo(HaveOccurred()) -``` - -Rather, -[annotate](https://onsi.github.io/gomega/#annotating-assertions) your assertion with something like this: - -``` -Expect(err).NotTo(HaveOccurred(), "Failed to create %d foobars, only created %d", foobarsReqd, foobarsCreated) -``` - -On the other hand, overly verbose logging, particularly of non-error conditions, can make -it unnecessarily difficult to figure out whether a test failed and, if -so, why. So don't log lots of irrelevant stuff either. - -#### Ability to run in non-dedicated test clusters #### - -To reduce end-to-end delay and improve resource utilization when -running e2e tests, we try, where possible, to run large numbers of -tests in parallel against the same test cluster. This means that: - -1. You should avoid making any assumption (implicit or explicit) that -your test is the only thing running against the cluster. For example, -making the assumption that your test can run a pod on every node in a -cluster is not a safe assumption, as some other tests, running at the -same time as yours, might have saturated one or more nodes in the -cluster. Similarly, running a pod in the system namespace, and -assuming that that will increase the count of pods in the system -namespace by one is not safe, as some other test might be creating or -deleting pods in the system namespace at the same time as your test. -If you do legitimately need to write a test like that, make sure to -label it ["\[Serial\]"](e2e-tests.md#kinds_of_tests) so that it's easy -to identify, and not run in parallel with any other tests. -1. You should avoid doing things to the cluster that make it difficult -for other tests to reliably do what they're trying to do, at the same -time. 
For example, rebooting nodes, disconnecting network interfaces, -or upgrading cluster software as part of your test is likely to -violate the assumptions that other tests might have made about a -reasonably stable cluster environment. If you need to write such -tests, please label them as -["\[Disruptive\]"](e2e-tests.md#kinds_of_tests) so that it's easy to -identify them, and not run them in parallel with other tests. -1. You should avoid making assumptions about the Kubernetes API that -are not part of the API specification, as your tests will break as -soon as these assumptions become invalid. For example, relying on -specific Events, Event reasons or Event messages will make your tests -very brittle. - -#### Speed of execution #### - -We have hundreds of e2e tests, some of which we run serially, one -after the other. If each test takes just a few minutes -to run, that very quickly adds up to many, many hours of total -execution time. We try to keep such total execution time down to a -few tens of minutes at most. Therefore, try (very hard) to keep the -execution time of your individual tests below 2 minutes, ideally -shorter than that. Concretely, adding inappropriately long 'sleep' -statements or other gratuitous waits to tests is a killer. If under -normal circumstances your pod enters the running state within 10 -seconds, and 99.9% of the time within 30 seconds, it would be -gratuitous to wait 5 minutes for this to happen. Rather, just fail -after 30 seconds, with a clear error message as to why your test -failed (e.g. "Pod x failed to become ready after 30 seconds; it -usually takes 10 seconds"). 
If you do have a truly legitimate reason -for waiting longer than that, or writing a test which takes longer -than 2 minutes to run, comment very clearly in the code why this is -necessary, and label the test as -["\[Slow\]"](e2e-tests.md#kinds_of_tests), so that it's easy to -identify and avoid in test runs that are required to complete -timeously (for example those that are run against every code -submission before it is allowed to be merged). -Note that completing within, say, 2 minutes only when the test -passes is not generally good enough. Your test should also fail in a -reasonable time. We have seen tests that, for example, wait up to 10 -minutes for each of several pods to become ready. Under good -conditions these tests might pass within a few seconds, but if the -pods never become ready (e.g. due to a system regression) they take a -very long time to fail and typically cause the entire test run to time -out, so that no results are produced. Again, this is a lot less -useful than a test that fails reliably within a minute or two when the -system is not working correctly. - -#### Resilience to relatively rare, temporary infrastructure glitches or delays #### - -Remember that your test will be run many thousands of -times, at different times of day and night, probably on different -cloud providers, under different load conditions. And often the -underlying state of these systems is stored in eventually consistent -data stores. So, for example, if a resource creation request is -theoretically asynchronous, even if you observe it to be practically -synchronous most of the time, write your test to assume that it's -asynchronous (e.g. make the "create" call, and poll or watch the -resource until it's in the correct state before proceeding). -Similarly, don't assume that API endpoints are 100% available. -They're not. Under high load conditions, API calls might temporarily -fail or time-out. 
In such cases it's appropriate to back off and retry -a few times before failing your test completely (in which case make -the error message very clear about what happened, e.g. "Retried -http://... 3 times - all failed with xxx"). Use the standard -retry mechanisms provided in the libraries detailed below. - -### Some concrete tools at your disposal ### - -Obviously most of the above goals apply to many tests, not just yours. -So we've developed a set of reusable test infrastructure, libraries -and best practices to help you to do the right thing, or at least do -the same thing as other tests, so that if that turns out to be the -wrong thing, it can be fixed in one place, not hundreds, to be the -right thing. - -Here are a few pointers: - -+ [E2e Framework](../../test/e2e/framework/framework.go): - Familiarise yourself with this test framework and how to use it. - Among other things, it automatically creates uniquely named namespaces - within which your tests can run to avoid name clashes, and reliably - automates cleaning up the mess after your test has completed (it - just deletes everything in the namespace). This helps to ensure - that tests do not leak resources. Note that deleting a namespace - (and by implication everything in it) is currently an expensive - operation. So the fewer resources you create, the less cleaning up - the framework needs to do, and the faster your test (and other - tests running concurrently with yours) will complete. Your tests - should always use this framework. Trying other home-grown - approaches to avoiding name clashes and resource leaks has proven - to be a very bad idea. -+ [E2e utils library](../../test/e2e/framework/util.go): - This handy library provides tons of reusable code for a host of - commonly needed test functionality, including waiting for resources - to enter specified states, safely and consistently retrying failed - operations, usefully reporting errors, and much more. 
Make sure - that you're familiar with what's available there, and use it. - Likewise, if you come across a generally useful mechanism that's - not yet implemented there, add it so that others can benefit from - your brilliance. In particular, pay attention to the variety of - timeout- and retry-related constants at the top of that file. Always - try to reuse these constants rather than try to dream up your own - values. Even if the values there are not precisely what you would - like to use (timeout periods, retry counts etc), the benefit of - having them be consistent and centrally configurable across our - entire test suite typically outweighs your personal preferences. -+ **Follow the examples of stable, well-written tests:** Some of our - existing end-to-end tests are better written and more reliable than - others. A few examples of well-written tests include: - [Replication Controllers](../../test/e2e/rc.go), - [Services](../../test/e2e/service.go), - [Reboot](../../test/e2e/reboot.go). -+ [Ginkgo Test Framework](https://github.com/onsi/ginkgo): This is the - test library and runner upon which our e2e tests are built. Before - you write or refactor a test, read the docs and make sure that you - understand how it works. In particular, be aware that every test is - uniquely identified and described (e.g. in test reports) by the - concatenation of its `Describe` clause and nested `It` clauses. - So for example `Describe("Pods",...).... It("should be scheduled - with cpu and memory limits")` produces a sane test identifier and - descriptor `Pods should be scheduled with cpu and memory limits`, - which makes it clear what's being tested, and hence what's not - working if it fails. 
Other good examples include: - -``` - CAdvisor should be healthy on every node -``` - -and - -``` - Daemon set should run and stop complex daemon -``` - - By contrast -(these are real examples), the following are less good test -descriptors: - -``` - KubeProxy should test kube-proxy -``` - -and - -``` -Nodes [Disruptive] Network when a node becomes unreachable -[replication controller] recreates pods scheduled on the -unreachable node AND allows scheduling of pods on a node after -it rejoins the cluster -``` - -An improvement might be: - -``` -Unreachable nodes are evacuated and then repopulated upon rejoining [Disruptive] -``` - -Note that opening issues for specific better tooling is welcome, and -code implementing that tooling is even more welcome :-). - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/writing-good-e2e-tests.md?pixel)]() - -- cgit v1.2.3