Verified Commit e7494177 authored by Maarten de Waard
Merge branch 'master' of github.com:helm/charts

parents 2b230510 14d04359
Showing 460 additions and 51 deletions
@@ -14,7 +14,7 @@ jobs:
shellcheck -x test/repo-sync.sh
lint-charts:
docker:
- image: gcr.io/kubernetes-charts-ci/test-image:v2.0.5
- image: gcr.io/kubernetes-charts-ci/test-image:v3.2.0
steps:
- checkout
- run:
@@ -22,7 +22,7 @@ jobs:
command: |
git remote add k8s https://github.com/helm/charts
git fetch k8s master
chart_test.sh --config test/.testenv --no-install
ct lint --config test/ct.yaml
sync:
docker:
- image: google/cloud-sdk
@@ -35,6 +35,6 @@ even continue reviewing your changes.
#### Checklist
[Place an '[x]' (no spaces) in all applicable fields. Please remove unrelated fields.]
- [ ] [DCO](https://www.helm.sh/blog/helm-dco/index.html) signed
- [ ] [DCO](https://github.com/helm/charts/blob/master/CONTRIBUTING.md#sign-your-work) signed
- [ ] Chart Version bumped
- [ ] Variables are documented in the README.md
approvers:
- lachie83
- linki
- mgoodness
- prydonius
- sameersbn
- seanknox
- viglesiasce
- foxish
- unguiculus
@@ -12,3 +9,10 @@ approvers:
- mattfarina
- davidkarlsen
- paulczar
- cpanato
- jlegrone
emeritus:
- linki
- mgoodness
- seanknox
@@ -60,17 +60,17 @@ Note: We use the same [workflow](https://github.com/kubernetes/community/blob/ma
## Owning and Maintaining A Chart
Individual charts can be maintained by one or more members of the Kubernetes community. When someone maintains a chart they have the access to merge changes to that chart. To have merge access to a chart someone needs to:
Individual charts can be maintained by one or more users of GitHub. When someone maintains a chart they have the access to merge changes to that chart. To have merge access to a chart someone needs to:
1. Be listed on the chart, in the `Chart.yaml` file, as a maintainer. If you need sponsors and have contributed to the chart, please reach out to the existing maintainers, or if you are having trouble connecting with them, please reach out to one of the [OWNERS](OWNERS) of the charts repository.
1. Be invited (and accept your invite) as a read-only collaborator on [this repo](https://github.com/helm/charts). This is required for @k8s-ci-robot [PR comment interaction](https://github.com/kubernetes/community/blob/master/contributors/guide/pull-requests.md).
1. An OWNERS file needs to be added to a chart. That OWNERS file should list the maintainers' GitHub login names for both the reviewers and approvers sections. For an example see the [Drupal chart](stable/drupal/OWNERS). The `OWNERS` file should also be appended to the `.helmignore` file.
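For illustration, a minimal chart `OWNERS` file might look like the following (the GitHub handle is a placeholder; see the linked Drupal chart for a real example):

```yaml
# OWNERS — maintainers' GitHub login names (placeholder handle)
approvers:
  - example-maintainer
reviewers:
  - example-maintainer
```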
Once these two steps are done a chart approver can merge pull requests following the directions in the [REVIEW_GUIDELINES.md](REVIEW_GUIDELINES.md) file.
Once these three steps are done a chart approver can merge pull requests following the directions in the [REVIEW_GUIDELINES.md](REVIEW_GUIDELINES.md) file.
## Trusted Collaborator
The `pull-charts-e2e` test run, that installs a chart to test it, is required before a pull request can be merged. These tests run automatically for members of the Helm Org and for chart OWNERS, listed in OWNERS files. For regular contributors who are trusted, in a manner similar to Kubernetes community members, we have trusted collaborators. These individuals can have their tests run automatically as well as mark other pull requests as ok to test by adding a comment of `/ok-to-test` on pull requests.
The `pull-charts-e2e` test run, that installs a chart to test it, is required before a pull request can be merged. These tests run automatically for members of the Helm Org and for chart [repository collaborators](https://help.github.com/articles/adding-outside-collaborators-to-repositories-in-your-organization/). For regular contributors who are trusted, in a manner similar to Kubernetes community members, we have trusted collaborators. These individuals can have their tests run automatically as well as mark other pull requests as ok to test by adding a comment of `/ok-to-test` on pull requests.
There are two paths to becoming a trusted collaborator. One only needs to follow one of them.
@@ -16,6 +16,17 @@ Note, if a reviewer who is not an approver in an OWNERS file leaves a comment of
Chart releases must be immutable. Any change to a chart warrants a chart version bump even if it is only changes to the documentation.
## Versioning
The chart `version` should follow [semver](https://semver.org/).
Stable charts should start at `1.0.0` (for maintainability, don't create new PRs for stable charts solely to meet this criterion, but when reviewing PRs take the opportunity to ensure that it is met).
Any breaking (backwards incompatible) changes to a chart should:
1. Bump the MAJOR version
2. In the README, under a section called "Upgrading", describe the manual steps necessary to upgrade to the new (specified) MAJOR version
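As an illustration (chart name and versions are hypothetical), a breaking change would be reflected in `Chart.yaml` like so:

```yaml
# Chart.yaml — MAJOR version bump for a backwards-incompatible change
name: myapp
version: 2.0.0   # previous release was 1.4.3
appVersion: "3.0.0"
```

The chart's README would then gain an "Upgrading" section describing the manual steps needed to move from 1.x to 2.0.0.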
## Chart Metadata
The `Chart.yaml` should be as complete as possible. The following fields are mandatory:
@@ -37,17 +48,17 @@ Stable charts should not depend on charts in incubator.
Resources and labels should follow some conventions. The standard resource metadata (`metadata.labels` and `spec.template.metadata.labels`) should be this:
```yaml
name: {{ template "myapp.fullname" . }}
name: {{ include "myapp.fullname" . }}
labels:
app: {{ template "myapp.name" . }}
chart: {{ template "myapp.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
app.kubernetes.io/name: {{ include "myapp.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
helm.sh/chart: {{ include "myapp.chart" . }}
```
If a chart has multiple components, a `component` label should be added (e. g. `component: server`). The resource name should get the component as suffix (e. g. `name: {{ template "myapp.fullname" . }}-server`).
If a chart has multiple components, an `app.kubernetes.io/component` label should be added (e.g. `app.kubernetes.io/component: server`). The resource name should get the component as a suffix (e.g. `name: {{ include "myapp.fullname" . }}-server`).
Note that templates have to be namespaced. With Helm 2.7+, `helm create` does this out-of-the-box. The `app` label should use the `name` template, not `fullname` as is still the case with older charts.
Note that templates have to be namespaced. With Helm 2.7+, `helm create` does this out-of-the-box. The `app.kubernetes.io/name` label should use the `name` template, not `fullname` as is still the case with older charts.
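For reference, the `name` and `fullname` templates in a chart's `_helpers.tpl` look roughly like this, shown here for a hypothetical `myapp` chart (a sketch of what `helm create` generates, simplified):

```yaml
{{/* Expand the name of the chart. */}}
{{- define "myapp.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{/* Create a default fully qualified app name, truncated to the 63-char DNS limit. */}}
{{- define "myapp.fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
```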
### Deployments, StatefulSets, DaemonSets Selectors
@@ -56,13 +67,13 @@ Note that templates have to be namespaced. With Helm 2.7+, `helm create` does th
```yaml
selector:
matchLabels:
app: {{ template "myapp.name" . }}
release: {{ .Release.Name }}
app.kubernetes.io/name: {{ include "myapp.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
```
If a chart has multiple components, a `component` label should be added to the selector (see above).
`spec.selector.matchLabels` defined in `Deployments`/`StatefulSets`/`DaemonSets` `>=v1/beta2` **must not** contain `chart` label or any label containing a version of the chart, because the selector is immutable.
`spec.selector.matchLabels` defined in `Deployments`/`StatefulSets`/`DaemonSets` `>=v1/beta2` **must not** contain `helm.sh/chart` label or any label containing a version of the chart, because the selector is immutable.
The chart label string contains the version, so if it is specified, whenever the Chart.yaml version changes, Helm's attempt to change this immutable field would cause the upgrade to fail.
#### Fixing Selectors
@@ -70,39 +81,39 @@ The chart label string contains the version, so if it is specified, whenever the
##### For Deployments, StatefulSets, DaemonSets apps/v1beta1 or extensions/v1beta1
- If it does not specify `spec.selector.matchLabels`, set it
- Remove `chart` label in `spec.selector.matchLabels` if it exists
- Remove `helm.sh/chart` label in `spec.selector.matchLabels` if it exists
- Bump patch version of the Chart
##### For Deployments, StatefulSets, DaemonSets >=apps/v1beta2
- Remove `chart` label in `spec.selector.matchLabels` if it exists
- Remove `helm.sh/chart` label in `spec.selector.matchLabels` if it exists
- Bump major version of the Chart as it is a breaking change
### Service Selectors
Label selectors for services must have both `app` and `release` labels.
Label selectors for services must have both `app.kubernetes.io/name` and `app.kubernetes.io/instance` labels.
```yaml
selector:
app: {{ template "myapp.name" . }}
release: {{ .Release.Name }}
app.kubernetes.io/name: {{ include "myapp.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
```
If a chart has multiple components, a `component` label should be added to the selector (see above).
If a chart has multiple components, an `app.kubernetes.io/component` label should be added to the selector (see above).
### Persistence Labels
### StatefulSet
In case of a `Statefulset`, `spec.volumeClaimTemplates.metadata.labels` must have both `app` and `release` labels, and **must not** contain `chart` label or any label containing a version of the chart, because `spec.volumeClaimTemplates` is immutable.
In case of a `StatefulSet`, `spec.volumeClaimTemplates.metadata.labels` must have both `app.kubernetes.io/name` and `app.kubernetes.io/instance` labels, and **must not** contain the `helm.sh/chart` label or any label containing a version of the chart, because `spec.volumeClaimTemplates` is immutable.
```yaml
labels:
app: {{ template "myapp.name" . }}
release: {{ .Release.Name }}
app.kubernetes.io/name: {{ include "myapp.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
```
If a chart has multiple components, a `component` label should be added to the selector (see above).
If a chart has multiple components, an `app.kubernetes.io/component` label should be added to the selector (see above).
### PersistentVolumeClaim
@@ -159,7 +170,7 @@ volumes:
- name: data
{{- if .Values.persistence.enabled }}
persistentVolumeClaim:
claimName: {{ .Values.persistence.existingClaim | default (include "fullname" .) }}
claimName: {{ .Values.persistence.existingClaim | default (include "myapp.fullname" .) }}
{{- else }}
emptyDir: {}
{{- end -}}
@@ -172,12 +183,12 @@ volumes:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: {{ template "fullname" . }}
name: {{ include "myapp.fullname" . }}
labels:
app: {{ template "name" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
app.kubernetes.io/name: {{ include "myapp.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
helm.sh/chart: {{ include "myapp.chart" . }}
spec:
accessModes:
- {{ .Values.persistence.accessMode | quote }}
@@ -217,18 +228,18 @@ autoscaling:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: {{ include "myapp.fullname" . }}
labels:
app: {{ template "helm-chart.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
component: "{{ .Values.name }}"
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
name: {{ template "helm-chart.fullname" . }}
app.kubernetes.io/name: {{ include "myapp.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
helm.sh/chart: {{ include "myapp.chart" . }}
app.kubernetes.io/component: "{{ .Values.name }}"
spec:
scaleTargetRef:
apiVersion: apps/v1beta1
apiVersion: apps/v1
kind: Deployment
name: {{ template "helm-chart.fullname" . }}
name: {{ include "myapp.fullname" . }}
minReplicas: {{ .Values.autoscaling.minReplicas }}
maxReplicas: {{ .Values.autoscaling.maxReplicas }}
metrics:
@@ -271,12 +282,12 @@ ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ include "fullname" }}
name: {{ include "myapp.fullname" . }}
labels:
app: {{ include "name" . }}
chart: {{ include "chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
app.kubernetes.io/name: {{ include "myapp.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
helm.sh/chart: {{ include "myapp.chart" . }}
{{- with .Values.ingress.annotations }}
annotations:
{{ toYaml . | indent 4 }}
@@ -299,7 +310,7 @@ spec:
paths:
- path: {{ .Values.ingress.path }}
backend:
serviceName: {{ include "fullname" }}
serviceName: {{ include "myapp.fullname" . }}
servicePort: http
{{- end }}
{{- end }}
@@ -338,3 +349,13 @@ While reviewing Charts that contain workloads such as [Deployments](https://kube
10. As much as possible, complex pre-app setup should be configured using [init containers](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/).
More [configuration](https://kubernetes.io/docs/concepts/configuration/overview/) best practices.
## Tests
This repository follows a [test procedure](https://github.com/helm/charts/blob/master/test/README.md). This allows the charts of this repository to be tested according to several rules (linting, semver checking, deployment testing, etc.) for every Pull Request.
The `ci` directory of a given Chart allows testing different use cases, by allowing you to define different sets of values overriding `values.yaml`, one file per set. See the [documentation](https://github.com/helm/charts/blob/master/test/README.md#providing-custom-test-values) for more information.
This directory MUST exist with at least one test file in it.
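For example, a chart could provide a file such as `ci/default-values.yaml` (the file name and values here are illustrative) that overrides `values.yaml` for one test case:

```yaml
# ci/default-values.yaml — exercise the chart with persistence disabled
persistence:
  enabled: false
replicaCount: 1
```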
# Kubernetes Community Code of Conduct
# Community Code of Conduct
Please refer to our [Kubernetes Community Code of Conduct](https://git.k8s.io/community/code-of-conduct.md)
Helm follows the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).
name: artifactory
home: https://www.jfrog.com/artifactory/
version: 5.2.1
appVersion: 5.2.0
description: DEPRECATED Universal Repository Manager supporting all major packaging formats, build tools and CI servers.
keywords:
- artifactory
- jfrog
sources:
- https://bintray.com/jfrog/product/JFrog-Artifactory-Pro/view
- https://github.com/JFrogDev
icon: https://raw.githubusercontent.com/JFrogDev/artifactory-dcos/master/images/jfrog_med.png
## Deprecated following https://github.com/helm/charts/blob/master/PROCESSES.md#deprecating-a-chart
## Chart is now maintained in https://github.com/jfrog/charts
deprecated: true
# JFrog Artifactory Helm Chart - DEPRECATED
**This chart is deprecated! You can find the new chart in:**
- **Sources:** https://github.com/jfrog/charts
- **Charts repository:** https://charts.jfrog.io
```bash
helm repo add jfrog https://charts.jfrog.io
```
## Prerequisites Details
* Artifactory Pro trial license [get one from here](https://www.jfrog.com/artifactory/free-trial/)
## Todo
* Implement Support of Reverse proxy for Docker Repo using Nginx
* Smarter upscaling/downscaling
## Chart Details
This chart will do the following:
* Deploy Artifactory OSS by default, or Artifactory Pro if the Pro image is configured
## Installing the Chart
To install the chart with the release name `my-release`:
```bash
$ helm install --name my-release incubator/artifactory
```
Note: by default this runs Artifactory OSS. To run Artifactory Pro, uncomment the `image` in `values.yaml` or use the following command:
```bash
$ helm install --name my-release --set image=docker.bintray.io/jfrog/artifactory-pro incubator/artifactory
```
## Deleting the Chart
Deleting the release does not cascade to deleting any associated persistent volumes. To delete the release:
```
$ helm delete my-release
```
## Configuration
The following table lists the configurable parameters of the artifactory chart and their default values.
| Parameter | Description | Default |
|---------------------------|-----------------------------------|----------------------------------------------------------|
| `image`                   | Container image name              | `docker.bintray.io/jfrog/artifactory-oss`                |
| `imageTag`                | Container image tag               | `5.2.0`                                                  |
| `imagePullPolicy`         | Container pull policy             | `Always`                                                 |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.
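For example (the values shown here are illustrative):

```bash
$ helm install --name my-release \
    --set imageTag=5.2.0,httpPort=8081 \
    incubator/artifactory
```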
## Useful links
https://www.jfrog.com
https://www.jfrog.com/confluence/
#### THIS CHART IS DEPRECATED! ####
Get the Artifactory URL to visit by running these commands in the same shell:
{{- if contains "NodePort" .Values.ServiceType }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT/
{{- else if contains "LoadBalancer" .Values.ServiceType }}
**** NOTE: It may take a few minutes for the LoadBalancer IP to be available. ****
**** You can watch the status of it by running 'kubectl get svc -w {{ template "fullname" . }}' ****
export SERVICE_IP=$(kubectl get svc {{ template "fullname" . }} --namespace {{ .Release.Namespace }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
echo http://$SERVICE_IP:{{ .Values.httpPort }}/
{{- else if contains "ClusterIP" .Values.ServiceType }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ template "fullname" . }}" -o jsonpath="{.items[0].metadata.name}")
echo http://127.0.0.1:{{ .Values.httpPort }}
kubectl port-forward --namespace {{ .Release.Namespace }} $POD_NAME {{ .Values.httpPort }}:{{ .Values.httpPort }}
{{- end }}
Default credential for Artifactory:
user: admin
password: password
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{define "name"}}{{default "artifactory" .Values.nameOverride | trunc 24 }}{{end}}
{{/*
Create a default fully qualified app name.
We truncate at 24 chars because some Kubernetes name fields are limited to this
(by the DNS naming spec).
*/}}
{{define "fullname"}}
{{- $name := default "artifactory" .Values.nameOverride -}}
{{printf "%s-%s" .Release.Name $name | trunc 24 -}}
{{end}}
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: {{template "fullname" .}}
labels:
app: {{ template "fullname" . }}
heritage: "{{ .Release.Service }}"
release: "{{ .Release.Name }}"
chart: "{{.Chart.Name}}-{{.Chart.Version}}"
spec:
replicas: {{default 1 .Values.replicaCount}}
template:
metadata:
labels:
app: {{template "fullname" .}}
release: {{.Release.Name | quote }}
spec:
containers:
- name: {{ template "fullname" . }}
image: "{{ .Values.image}}:{{ .Values.imageTag}}"
imagePullPolicy: {{ default "IfNotPresent" .Values.imagePullPolicy }}
resources:
{{ toYaml .Values.resources | indent 10 }}
ports:
- containerPort: 8081
name: http
volumeMounts:
- name: etc
mountPath: /var/opt/jfrog/artifactory/etc
- name: logs
mountPath: /var/opt/jfrog/artifactory/logs
- name: data
mountPath: /var/opt/jfrog/artifactory/data
volumes:
- name: data
- name: logs
- name: etc
\ No newline at end of file
apiVersion: v1
kind: Service
metadata:
name: {{template "fullname" .}}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{.Chart.Name}}-{{.Chart.Version}}"
app: {{template "fullname" .}}
spec:
ports:
- port: {{default 8081 .Values.httpPort}}
targetPort: 8081
protocol: TCP
name: http
selector:
app: {{template "fullname" .}}
type: {{.Values.ServiceType}}
\ No newline at end of file
# Default values for Artifactory.
# This is a YAML-formatted file.
# Declare name/value pairs to be passed into your templates.
# name: value
Name: artifactory
Component: "Artifactory"
## Uncomment following line if you want to run Artifactory-Pro
# image: "docker.bintray.io/jfrog/artifactory-pro"
image: "docker.bintray.io/jfrog/artifactory-oss"
imageTag: "5.2.0"
imagePullPolicy: "Always"
replicaCount: 1
httpPort: 8081
## Kubernetes configuration
## For minikube, set this to NodePort, elsewhere use LoadBalancer
##
ServiceType: ClusterIP
resources:
requests:
memory: 2048Mi
cpu: 200m
## Persist data to a persistent volume
persistence:
enabled: true
storageClass: generic
accessMode: ReadWriteOnce
size: 8Gi
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
apiVersion: v1
name: aws-alb-ingress-controller
description: A Helm chart for AWS ALB Ingress Controller
version: 0.1.7
appVersion: "v1.1.2"
engine: gotpl
home: https://github.com/kubernetes-sigs/aws-alb-ingress-controller
sources:
- https://github.com/kubernetes-sigs/aws-alb-ingress-controller
keywords:
- aws
- ingress
maintainers:
- name: bigkraig
email: kraig.amador@ticketmaster.com
- name: M00nF1sh
email: yyyng@amazon.com
approvers:
- bigkraig
- M00nF1sh
reviewers:
- bigkraig
- M00nF1sh
# aws-alb-ingress-controller
[aws-alb-ingress-controller](https://github.com/kubernetes-sigs/aws-alb-ingress-controller) satisfies Kubernetes ingress resources by provisioning Application Load Balancers.
## TL;DR:
```bash
helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
helm install incubator/aws-alb-ingress-controller --set clusterName=MyClusterName --set autoDiscoverAwsRegion=true --set autoDiscoverAwsVpcID=true
```
## Introduction
This chart bootstraps an alb-ingress-controller deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
## Prerequisites
- Kubernetes 1.9+ with Beta APIs enabled
## Enable helm incubator repository
```bash
helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
```
## Installing the Chart
To install the chart with the release name `my-release` into `kube-system`:
```bash
helm install incubator/aws-alb-ingress-controller --set clusterName=MyClusterName --set autoDiscoverAwsRegion=true --set autoDiscoverAwsVpcID=true --name my-release --namespace kube-system
```
The command deploys alb-ingress-controller on the Kubernetes cluster in the default configuration. The [configuration](#configuration) section lists the parameters that can be configured during installation.
> **Tip**: List all releases using `helm list`
## Uninstalling the Chart
To uninstall/delete the `my-release` deployment:
```console
$ helm delete my-release
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Configuration
The following table lists the configurable parameters of the alb-ingress-controller chart and their default values.
| Parameter | Description | Default |
| ------------------------- | -------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------- |
| `clusterName` | (REQUIRED) Resources created by the ALB Ingress controller will be prefixed with this string | N/A |
| `awsRegion` | AWS region of k8s cluster, required if ec2metadata is unavailable from controller pod | `us-west-2 ` |
| `autoDiscoverAwsRegion`   | auto discover `awsRegion` from ec2metadata; omit `awsRegion` when this is set to true                          | `false`                                                                   |
| `awsVpcID` | AWS VPC ID of k8s cluster, required if ec2metadata is unavailable from controller pod | `vpc-xxx` |
| `autoDiscoverAwsVpcID`    | auto discover `awsVpcID` from ec2metadata; omit `awsVpcID` when this is set to true                            | `false`                                                                   |
| `image.repository` | controller container image repository | `894847497797.dkr.ecr.us-west-2.amazonaws.com/aws-alb-ingress-controller` |
| `image.tag` | controller container image tag | `v1.0.1` |
| `image.pullPolicy` | controller container image pull policy | `IfNotPresent` |
| `enableReadinessProbe` | enable readinessProbe on controller pod |`false` |
| `enableLivenessProbe` | enable livenessProbe on controller pod | `false` |
| `extraEnv` | map of environment variables to be injected into the controller pod | `{}` |
| `nodeSelector` | node labels for controller pod assignment | `{}` |
| `tolerations` | controller pod toleration for taints | `{}` |
| `podAnnotations` | annotations to be added to controller pod | `{}` |
| `podLabels` | labels to be added to controller pod | `{}` |
| `resources` | controller pod resource requests & limits | `{}` |
| `rbac.create` | If true, create & use RBAC resources | `true` |
| `rbac.serviceAccountName` | ServiceAccount ALB ingress controller will use (ignored if rbac.create=true) | `default` |
| `scope.ingressClass` | If provided, the ALB ingress controller will only act on Ingress resources annotated with this class | `alb` |
| `scope.singleNamespace` | If true, the ALB ingress controller will only act on Ingress resources in a single namespace | `false` (watch all namespaces) |
| `scope.watchNamespace` | If scope.singleNamespace=true, the ALB ingress controller will only act on Ingress resources in this namespace | `""` (namespace of the ALB ingress controller) |
```bash
helm install incubator/aws-alb-ingress-controller --set clusterName=MyClusterName --set autoDiscoverAwsRegion=true --set autoDiscoverAwsVpcID=true --name my-release --namespace kube-system
```
Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
```bash
helm install incubator/aws-alb-ingress-controller --name my-release -f values.yaml
```
> **Tip**: You can use the default [values.yaml](values.yaml)
> **Tip**: If you use `aws-alb-ingress-controller` as the release name, the generated pod name will be shorter (e.g. `aws-alb-ingress-controller-66cc9fb67c-7mg4w` instead of `my-release-aws-alb-ingress-controller-66cc9fb67c-7mg4w`)
\ No newline at end of file
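A minimal values file for the installation commands above might look like this (the cluster name is a placeholder):

```yaml
# values.yaml — minimal required configuration (illustrative)
clusterName: MyClusterName
autoDiscoverAwsRegion: true
autoDiscoverAwsVpcID: true
```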
To verify that alb-ingress-controller has started, run:
kubectl --namespace={{ .Release.Namespace }} get pods -l "app.kubernetes.io/name={{ include "aws-alb-ingress-controller.name" . }},app.kubernetes.io/instance={{ .Release.Name }}"
An example Ingress that makes use of the controller:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/subnets: subnet-a4f0098e,subnet-457ed533,subnet-95c904cd
name: example
namespace: foo
spec:
rules:
- host: www.example.com
http:
paths:
- path: /
backend:
serviceName: exampleService
servicePort: 80
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "aws-alb-ingress-controller.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "aws-alb-ingress-controller.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "aws-alb-ingress-controller.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}