| author | Thomas Stromberg <tstromberg@google.com> | 2019-04-09 08:03:30 -0700 |
|---|---|---|
| committer | Thomas Stromberg <tstromberg@google.com> | 2019-04-09 08:03:30 -0700 |
| commit | fe38881b69ee956c5280dca003c080b1a0117fb0 | |
| tree | 818193ab072382907ddbe5525bdd8c359abc1886 /contributors | |
| parent | 99250d74722c9bfeda644dd4c0fc1b6db27006fa | |
| parent | 98185d86c18bbd1e5330c110a31c3398bc148cbb | |
Rebase
Diffstat (limited to 'contributors')
| -rw-r--r-- | contributors/design-proposals/scheduling/nodeaffinity.md | 37 |
| -rw-r--r-- | contributors/design-proposals/storage/csi-migration.md | 45 |
| -rw-r--r-- | contributors/devel/development.md | 97 |
| -rw-r--r-- | contributors/devel/sig-api-machinery/generating-clientset.md | 2 |
| -rw-r--r-- | contributors/devel/sig-architecture/api-conventions.md | 4 |
| -rw-r--r-- | contributors/devel/sig-release/cherry-picks.md | 8 |
| -rw-r--r-- | contributors/guide/github-workflow.md | 93 |
7 files changed, 169 insertions, 117 deletions
diff --git a/contributors/design-proposals/scheduling/nodeaffinity.md b/contributors/design-proposals/scheduling/nodeaffinity.md
index ae167ce5..31fb520a 100644
--- a/contributors/design-proposals/scheduling/nodeaffinity.md
+++ b/contributors/design-proposals/scheduling/nodeaffinity.md
@@ -144,12 +144,36 @@ Hopefully this won't cause too much confusion.
 
 ## Examples
 
-**TODO: fill in this section**
-
-* Run this pod on a node with an Intel or AMD CPU
-
-* Run this pod on a node in availability zone Z
-
+Run a pod on a node with an Intel or AMD CPU and in availability zone Z:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pod-with-node-affinity
+spec:
+  affinity:
+    nodeAffinity:
+      requiredDuringSchedulingIgnoredDuringExecution:
+        nodeSelectorTerms:
+        - matchExpressions:
+          - key: kubernetes.io/arch
+            operator: In
+            values:
+            - intel
+            - amd64
+      preferredDuringSchedulingIgnoredDuringExecution:
+      - weight: 1
+        preference:
+          matchExpressions:
+          - key: failure-domain.kubernetes.io/zone
+            operator: In
+            values:
+            - Z
+  containers:
+  - name: pod-with-node-affinity
+    image: tomcat:8
+```
 
 ## Backward compatibility
 
@@ -240,3 +264,4 @@ The main related issue is [#341](https://github.com/kubernetes/kubernetes/issues
 Issue [#367](https://github.com/kubernetes/kubernetes/issues/367) is also related.
 
 Those issues reference other related issues.
+
diff --git a/contributors/design-proposals/storage/csi-migration.md b/contributors/design-proposals/storage/csi-migration.md
index ee6dc464..2dc03170 100644
--- a/contributors/design-proposals/storage/csi-migration.md
+++ b/contributors/design-proposals/storage/csi-migration.md
@@ -237,8 +237,7 @@ with the in-tree plugin, the VolumeAttachment object becomes orphaned.
 
 ### In-line Volumes
 
 In-line controller calls are a special case because there is no PV. In this case
In this case -we will forward the in-tree volume source to CSI attach as-is and it will be -copied to a new field in the VolumeAttachment object +we will translate the volume source and copy it to the field VolumeAttachment.Spec.Source.VolumeAttachmentSource.InlineVolumeSource. The VolumeAttachment name must be made with the CSI Translated version of the VolumeSource in order for it to be discoverable by Detach and WaitForAttach @@ -259,12 +258,12 @@ type VolumeAttachmentSource struct { // +optional PersistentVolumeName *string `json:"persistentVolumeName,omitempty" protobuf:"bytes,1,opt,name=persistentVolumeName"` - // Allows CSI migration code to copy an inline volume - // source from a pod to the VolumeAttachment to support shimming of - // in-tree inline volumes to a CSI backend. - // This field is alpha-level and is only honored by servers that enable the CSIMigration feature. + // Translated VolumeSource from a pod to a CSIPersistentVolumeSource + // to support shimming of in-tree inline volumes to a CSI backend. + // This field is alpha-level and is only honored by servers that + // enable the CSIMigration feature. // +optional - InlineVolumeSource *v1.VolumeSource `json:"inlineVolumeSource,omitempty protobuf:"bytes,2,opt,name=inlineVolumeSource"` + InlineCSIVolumeSource *v1.CSIPersistentVolumeSource `json:"inlineCSIVolumeSource,omitempty" protobuf:"bytes,2,opt,name=inlineCSIVolumeSource"` } ``` @@ -292,12 +291,40 @@ existing Pods in the ADC. TODO: Design ### Raw Block +In the OperationGenerator, `GenerateMapVolumeFunc`, `GenerateUnmapVolumeFunc` and +`GenerateUnmapDeviceFunc` are used to prepare and mount/umount block devices. At the +beginning of each API, we will check whether migration is enabled for the plugin. If +enabled, volume spec will be translated from the in-tree spec to out-of-tree spec using +CSI as the persistence volume source. 
-TODO: Design
+
+Caveat: the original spec needs to be used when setting the state of `actualStateOfWorld`
+where it is used before the translation.
 
 ### Volume Reconstruction
 
-TODO: Design
+Volume Reconstruction is currently a routine in the reconciler that runs on the
+nodes when a Kubelet restarts and loses its cached state (`desiredState` and
+`actualState`). It is kicked off in `syncStates()` in
+`pkg/kubelet/volumemanager/reconciler/reconciler.go` and attempts to reconstruct
+a volume based on the mount path on the host machine.
+
+When CSI Migration is turned on and the reconstruction code finds a CSI-mounted
+volume, we currently do not know whether it was mounted as a
+native CSI volume or migrated from in-tree. To solve this issue we will save a
+`migratedVolume` boolean in the `saveVolumeData` function when the `NewMounter`
+is created during the `MountVolume` call for that particular volume in the
+Operation generator.
+
+When the Kubelet is restarted and loses state, it will call
+`reconstructVolume`; we can then `loadVolumeData` and determine whether that CSI
+volume was migrated or not, as well as get the information about the original
+plugin requested. With that information we should be able to call
+`ReconstructVolumeOperation` with the correct in-tree plugin to get the original
+in-tree spec that we can then pass to the rest of volume reconstruction. The
+rest of the volume reconstruction code will then pass this in-tree spec to
+the `desiredState`, `actualState`, and `operationGenerator`, and the volume will
+go through the standard volume pathways and the standard migrated
+volume lifecycles described above in the "Pre-Provisioned Volumes" section.
 
 ### Volume Limit
 
diff --git a/contributors/devel/development.md b/contributors/devel/development.md
index 0c2fa44b..c2bee309 100644
--- a/contributors/devel/development.md
+++ b/contributors/devel/development.md
@@ -120,12 +120,6 @@ tools].
 Kubernetes build system requires `rsync` command present in the development
 platform.
 
-### etcd
-
-Kubernetes maintains state in [`etcd`][etcd-latest], a distributed key store.
-
-Please [install it locally][etcd-install] to run local integration tests.
-
 ### Go
 
 Kubernetes is written in [Go](http://golang.org). If you don't have a Go
@@ -160,6 +154,97 @@ images. This requires pushing the [e2e][e2e-image] and [test][test-image] images
 that are `FROM` the desired Go version.
 
 - The cross tag `KUBE_BUILD_IMAGE_CROSS_TAG` in [build/common.sh].
+
+### Quick Start
+
+The following section is a quick start on how to build Kubernetes locally; for more detailed information, see [kubernetes/build](https://git.k8s.io/kubernetes/build/README.md).
+The best way to validate your current setup is to build a small part of Kubernetes. This way you can address issues without waiting for the full build to complete. To build a specific part of Kubernetes, use the `WHAT` environment variable to let the build scripts know you want to build only a certain package/executable.
+
+```sh
+make WHAT=cmd/{$package_you_want}
+```
+
+*Note:* This applies to all top level folders under kubernetes/cmd.
+
+So for the cli, you can run:
+
+```sh
+make WHAT=cmd/kubectl
+```
+
+If everything checks out, you will have an executable in the `_output/bin` directory to play around with.
+
+*Note:* If you are using `CDPATH`, you must either start it with a leading colon, or unset the variable. The make rules and scripts to build require the current directory to come first on the CD search path in order to properly navigate between directories.
+
+```sh
+cd $working_dir/kubernetes
+make
+```
+
+To remove the limit on the number of errors the Go compiler reports (default
+limit is 10 errors):
+```sh
+make GOGCFLAGS="-e"
+```
+
+To build with optimizations disabled (enables use of source debug tools):
+
+```sh
+make GOGCFLAGS="-N -l"
+```
+
+To build binaries for all platforms:
+
+```sh
+make cross
+```
+
+#### Install etcd
+
+```sh
+cd $working_dir/kubernetes
+
+# Installs in ./third_party/etcd
+hack/install-etcd.sh
+
+# Add to PATH
+echo export PATH="\$PATH:$working_dir/kubernetes/third_party/etcd" >> ~/.profile
+```
+
+#### Test
+
+```sh
+cd $working_dir/kubernetes
+
+# Run all the presubmit verifications. Then, run a specific update script (hack/update-*.sh)
+# for each failed verification. For example:
+# hack/update-gofmt.sh (to make sure all files are correctly formatted, usually needed when you add new files)
+# hack/update-bazel.sh (to update bazel build related files, usually needed when you add or remove imports)
+make verify
+
+# Alternatively, run all update scripts to avoid fixing verification failures one by one.
+make update
+
+# Run every unit test
+make test
+
+# Run package tests verbosely
+make test WHAT=./pkg/api/helper GOFLAGS=-v
+
+# Run integration tests, requires etcd
+# For more info, visit https://git.k8s.io/community/contributors/devel/sig-testing/testing.md#integration-tests
+make test-integration
+
+# Run e2e tests by building test binaries, turning up a test cluster, running all tests, and tearing the cluster down
+# Equivalent to: go run hack/e2e.go -- -v --build --up --test --down
+# Note: running all e2e tests takes a LONG time! To run specific e2e tests, visit:
+# ./sig-testing/e2e-tests.md#building-kubernetes-and-running-the-tests
+make test-e2e
+```
+
+See the [testing guide](./sig-testing/testing.md) and [end-to-end tests](./sig-testing/e2e-tests.md)
+for additional information and scenarios.
+
+Run `make help` for additional information on these make targets.
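To give a concrete sense of the unit-test style that `make test WHAT=...` exercises, here is a minimal, self-contained sketch of a table-driven test. The helper and its names are hypothetical, and it is written as a standalone program rather than a `_test.go` file for brevity:

```go
package main

import "fmt"

// joinQualifiedName is a hypothetical helper standing in for code under test:
// it joins a namespace and a name into "namespace/name" form.
func joinQualifiedName(namespace, name string) string {
	if namespace == "" {
		return name
	}
	return namespace + "/" + name
}

func main() {
	// Table-driven cases, the dominant unit-test style across the codebase.
	cases := []struct{ namespace, name, want string }{
		{"kube-system", "dns", "kube-system/dns"},
		{"", "node-1", "node-1"},
	}
	for _, c := range cases {
		got := joinQualifiedName(c.namespace, c.name)
		if got != c.want {
			panic(fmt.Sprintf("joinQualifiedName(%q, %q) = %q, want %q", c.namespace, c.name, got, c.want))
		}
	}
	fmt.Println("all cases pass")
}
```

In a real package, the loop body would live in a `TestXxx(t *testing.T)` function and report failures with `t.Errorf` instead of `panic`.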
 
 #### Dependency management
 
diff --git a/contributors/devel/sig-api-machinery/generating-clientset.md b/contributors/devel/sig-api-machinery/generating-clientset.md
index bf12e92c..a5619b97 100644
--- a/contributors/devel/sig-api-machinery/generating-clientset.md
+++ b/contributors/devel/sig-api-machinery/generating-clientset.md
@@ -1,6 +1,6 @@
 # Generation and release cycle of clientset
 
-Client-gen is an automatic tool that generates [clientset](../design-proposals/api-machinery/client-package-structure.md#high-level-client-sets) based on API types. This doc introduces the use of client-gen, and the release cycle of the generated clientsets.
+Client-gen is an automatic tool that generates [clientset](/contributors/design-proposals/api-machinery/client-package-structure.md#high-level-client-sets) based on API types. This doc introduces the use of client-gen, and the release cycle of the generated clientsets.
 
 ## Using client-gen
 
diff --git a/contributors/devel/sig-architecture/api-conventions.md b/contributors/devel/sig-architecture/api-conventions.md
index 24ae0ebf..be47d470 100644
--- a/contributors/devel/sig-architecture/api-conventions.md
+++ b/contributors/devel/sig-architecture/api-conventions.md
@@ -379,7 +379,7 @@ Some resources in the v1 API contain fields called **`phase`**, and associated
 `message`, `reason`, and other status fields. The pattern of using `phase` is
 deprecated. Newer API types should use conditions instead. Phase was
 essentially a state-machine enumeration field, that contradicted [system-design
-principles](../design-proposals/architecture/principles.md#control-logic) and
+principles](../../design-proposals/architecture/principles.md#control-logic) and
 hampered evolution, since [adding new enum values breaks backward
 compatibility](api_changes.md).
 Rather than encouraging clients to infer implicit properties from phases, we
 prefer to explicitly expose the individual
@@ -404,7 +404,7 @@ only provided with reasonable effort, and is not guaranteed to not be lost.
 
 Status information that may be large (especially proportional in size to
 collections of other resources, such as lists of references to other objects --
 see below) and/or rapidly changing, such as
-[resource usage](../design-proposals/scheduling/resources.md#usage-data), should be put into separate
+[resource usage](../../design-proposals/scheduling/resources.md#usage-data), should be put into separate
 objects, with possibly a reference from the original object. This helps to
 ensure that GETs and watch remain reasonably efficient for the majority of
 clients, which may not need that data.
 
diff --git a/contributors/devel/sig-release/cherry-picks.md b/contributors/devel/sig-release/cherry-picks.md
index 097fef85..934111a3 100644
--- a/contributors/devel/sig-release/cherry-picks.md
+++ b/contributors/devel/sig-release/cherry-picks.md
@@ -40,9 +40,11 @@ branches.
 
 * Milestones on cherry-pick PRs should be the milestone for the target release
   branch (for example, milestone 1.11 for a cherry-pick onto release-1.11).
-  * You can find the current release team members in the
-    [appropriate release folder](https://git.k8s.io/sig-release/releases) for the target release.
-    You may cc them with `<@githubusername>` on your cherry-pick PR.
+  * During code freeze, to get the attention of the current release team on a
+    cherry-pick, see the [appropriate release folder](https://git.k8s.io/sig-release/releases)
+    for the target release's team contact information. You may cc them with
+    `<@githubusername>` on your cherry-pick PR.
+  * For prior branches, check the [patch release schedule](https://git.k8s.io/sig-release/releases/patch-releases.md),
+    which includes contact information for the patch release team.
 
 ## Cherry-pick Review
 
diff --git a/contributors/guide/github-workflow.md b/contributors/guide/github-workflow.md
index cef4e0a3..01ae6e80 100644
--- a/contributors/guide/github-workflow.md
+++ b/contributors/guide/github-workflow.md
@@ -22,7 +22,7 @@ Define a local working directory:
 # You must follow exactly this pattern,
 # neither `$GOPATH/src/github.com/${your github profile name}/`
 # nor any other pattern will work.
-working_dir=$GOPATH/src/k8s.io
+export working_dir=$GOPATH/src/k8s.io
 ```
 
 > If you already do Go development on github, the `k8s.io` directory
@@ -31,7 +31,7 @@ working_dir=$GOPATH/src/k8s.io
 Set `user` to match your github profile name:
 
 ```sh
-user={your github profile name}
+export user={your github profile name}
 ```
 
 Both `$working_dir` and `$user` are mentioned in the figure above.
@@ -74,95 +74,8 @@ git checkout -b myfeature
 
 Then edit code on the `myfeature` branch.
 
 #### Build
-The following section is a quick start on how to build Kubernetes locally, for more detailed information you can see [kubernetes/build](https://git.k8s.io/kubernetes/build/README.md).
-The best way to validate your current setup is to build a small part of Kubernetes. This way you can address issues without waiting for the full build to complete. To build a specific part of Kubernetes use the `WHAT` environment variable to let the build scripts know you want to build only a certain package/executable.
-```sh
-make WHAT=cmd/{$package_you_want}
-```
-
-*Note:* This applies to all top level folders under kubernetes/cmd.
-
-So for the cli, you can run:
-
-```sh
-make WHAT=cmd/kubectl
-```
-
-If everything checks out you will have an executable in the `_output/bin` directory to play around with.
-
-*Note:* If you are using `CDPATH`, you must either start it with a leading colon, or unset the variable. The make rules and scripts to build require the current directory to come first on the CD search path in order to properly navigate between directories.
-
-```sh
-cd $working_dir/kubernetes
-make
-```
-
-To remove the limit on the number of errors the Go compiler reports (default
-limit is 10 errors):
-```sh
-make GOGCFLAGS="-e"
-```
-
-To build with optimizations disabled (enables use of source debug tools):
-
-```sh
-make GOGCFLAGS="-N -l"
-```
-
-To build binaries for all platforms:
-
-```sh
-make cross
-```
-
-#### Install etcd
-
-```sh
-cd $working_dir/kubernetes
-
-# Installs in ./third_party/etcd
-hack/install-etcd.sh
-
-# Add to PATH
-echo export PATH="\$PATH:$working_dir/kubernetes/third_party/etcd" >> ~/.profile
-```
-
-#### Test
-
-```sh
-cd $working_dir/kubernetes
-
-# Run all the presubmission verification. Then, run a specific update script (hack/update-*.sh)
-# for each failed verification. For example:
-# hack/update-gofmt.sh (to make sure all files are correctly formatted, usually needed when you add new files)
-# hack/update-bazel.sh (to update bazel build related files, usually needed when you add or remove imports)
-make verify
-
-# Alternatively, run all update scripts to avoid fixing verification failures one by one.
-make update
-
-# Run every unit test
-make test
-
-# Run package tests verbosely
-make test WHAT=./pkg/api/helper GOFLAGS=-v
-
-# Run integration tests, requires etcd
-# For more info, visit https://git.k8s.io/community/contributors/devel/sig-testing/testing.md#integration-tests
-make test-integration
-
-# Run e2e tests by building test binaries, turn up a test cluster, run all tests, and tear the cluster down
-# Equivalent to: go run hack/e2e.go -- -v --build --up --test --down
-# Note: running all e2e tests takes a LONG time! To run specific e2e tests, visit:
-# https://git.k8s.io/community/contributors/devel/sig-testing/e2e-tests.md#building-kubernetes-and-running-the-tests
-make test-e2e
-```
-
-See the [testing guide](/contributors/devel/sig-testing/testing.md) and [end-to-end tests](/contributors/devel/sig-testing/e2e-tests.md)
-for additional information and scenarios.
-
-Run `make help` for additional information on these make targets.
+This workflow is process-specific; for quick start build instructions for [kubernetes/kubernetes](https://git.k8s.io/kubernetes) please [see here](/contributors/devel/development.md#building-kubernetes-on-a-local-osshell-environment).
 
 ### 4 Keep your branch in sync