Verified Commit f1d1bc49, authored by Maarten de Waard

Merge branch 'master' into k3s

parents 3bbc048d 394de7e4
@@ -152,7 +152,7 @@ testinfra:
   script:
     - *debug_information
     - cd ansible/
-    - pytest -v -m 'testinfra' --connection=ansible --ansible-inventory=${CLUSTER_DIR}/inventory.yml --hosts='ansible://*'
+    - pytest -v -s -m 'testinfra' --connection=ansible --ansible-inventory=${CLUSTER_DIR}/inventory.yml --hosts='ansible://*'
   only:
     changes:
       - .gitlab-ci.yml
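The `-s` flag added here disables pytest's output capturing, so the diagnostic `print()` calls in the testinfra tests (such as the kube-bench failure details printed further down in this commit) appear directly in the CI job log. A minimal sketch of the difference, using a hypothetical test file:

# test_output_demo.py -- hypothetical example, not part of this repository.
def test_prints_are_visible():
    # With plain `pytest` this output is captured and only shown when the
    # test fails; with `pytest -s` it is always written to the terminal.
    print("kube-bench diagnostics would appear here")
    assert True
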
@@ -11,6 +11,10 @@
 * [ ] Add app to `docs/installation_instructions.md`
 * [ ] Add app to `docs/testing_instructions.md`

 ## Etc
+* [ ] Make sure to use an `existingClaim` for persistent volumes
+
+## Tests
+* [ ] Add behave feature (`tests/behave/feature`)
@@ -57,13 +57,17 @@ services:
     cluster_cidr: 10.42.0.0/16
     image: ''
     service_cluster_ip_range: 10.43.0.0/16
+    extra_args:
+      feature-gates: 'RotateKubeletServerCertificate=true'
   kubelet:
     cluster_dns_server: 10.43.0.10
     cluster_domain: cluster.local
     extra_args:
       containerized: 'true'
-      eviction-hard: "memory.available<100Mi,nodefs.available<1Gi,imagefs.available<1Gi"
-      eviction-minimum-reclaim: "memory.available=0Mi,nodefs.available=0Mi,imagefs.available=0Gi"
+      eviction-hard: 'memory.available<100Mi,nodefs.available<1Gi,imagefs.available<1Gi'
+      eviction-minimum-reclaim: 'memory.available=0Mi,nodefs.available=0Mi,imagefs.available=0Gi'
+      protect-kernel-defaults: 'true'
+      feature-gates: 'RotateKubeletServerCertificate=true'
     extra_binds:
       # Make local storage work with persistent volumes that use `subpath`
       # see https://open.greenhost.net/openappstack/openappstack/issues/236
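A hedged sketch of how these kubelet hardening flags could be checked from a testinfra test. It assumes RKE's convention of running the kubelet in a Docker container named `kubelet`; the container name and inspect format are assumptions, not part of this change:

import pytest


@pytest.mark.testinfra
def test_kubelet_hardening_flags(host):
    # Assumption: RKE runs the kubelet in a container named "kubelet".
    # Inspect its command-line arguments for the flags configured above.
    cmd = host.run("docker inspect kubelet --format '{{json .Args}}'")
    assert 'protect-kernel-defaults=true' in cmd.stdout
    assert 'memory.available<100Mi' in cmd.stdout
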
@@ -28,6 +28,25 @@
   when: rke_version.stdout != rke.version
   become: true

+# https://rancher.com/docs/rancher/v2.x/en/security/hardening-2.3.3/#1-1-rancher-rke-kubernetes-cluster-host-configuration
+- name: Configure sysctl for kubelet
+  sysctl:
+    name: "{{ item.name }}"
+    value: "{{ item.value }}"
+  loop:
+    - name: vm.overcommit_memory
+      value: 1
+    - name: vm.panic_on_oom
+      value: 0
+    - name: kernel.panic
+      value: 10
+    - name: kernel.panic_on_oops
+      value: 1
+    - name: kernel.keys.root_maxkeys
+      value: 1000000
+    - name: kernel.keys.root_maxbytes
+      value: 25000000
+  become: true
+
 - name: Deploy rke cluster configuration file
   tags:
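These kernel parameters could be verified with testinfra's `sysctl` module; a minimal sketch (the test name is illustrative):

import pytest


@pytest.mark.testinfra
def test_kubelet_sysctls(host):
    # Verify the values set by the "Configure sysctl for kubelet" task above.
    assert host.sysctl("vm.overcommit_memory") == 1
    assert host.sysctl("vm.panic_on_oom") == 0
    assert host.sysctl("kernel.panic") == 10
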
@@ -106,3 +106,28 @@ backup the whole `/var/lib/OpenAppStack/` directory.

 Restore instructions will follow, please [reach out to us](https://openappstack.net/contact.html)
 if you need assistance.
+
+## Change the IP of your cluster
+
+If your cluster needs to migrate to another IP address, use these steps to
+make OpenAppStack and `rke` adopt it:
+
+* `rke etcd snapshot-save --config /var/lib/OpenAppStack/rke/cluster.yml --name test`
+* Change the IP in `/var/lib/OpenAppStack/rke/cluster.yml`
+* `/usr/local/bin/rke up --config=/var/lib/OpenAppStack/rke/cluster.yml`
+* `rke etcd snapshot-restore --config /var/lib/OpenAppStack/rke/cluster.yml --name test`
+* `/usr/local/bin/rke up --config=/var/lib/OpenAppStack/rke/cluster.yml`
+
+## Delete evicted pods
+
+If your cluster's disk usage goes over 80%, Kubernetes [taints](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/)
+the node with `DiskPressure` and starts evicting pods. This is pointless in a
+single-node setup, but it happens anyway: hundreds of pods can end up in the
+`Evicted` state and still show up after the `DiskPressure` condition has
+recovered. See also the [out of resource handling with kubelet](https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/)
+documentation.
+
+You can delete all evicted pods with this command:
+
+    kubectl get pods --all-namespaces -ojson | jq -r '.items[] | select(.status.reason!=null) | select(.status.reason | contains("Evicted")) | .metadata.name + " " + .metadata.namespace' | xargs -n2 -l bash -c 'kubectl delete pods $0 --namespace=$1'
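The same cleanup can be sketched with the official Kubernetes Python client, as an alternative to the `jq` one-liner above (assumes `pip install kubernetes` and a working kubeconfig):

from kubernetes import client, config

# Load credentials from the default kubeconfig location.
config.load_kube_config()
v1 = client.CoreV1Api()

# Delete every pod whose status reason marks it as evicted.
for pod in v1.list_pod_for_all_namespaces().items:
    if pod.status.reason == "Evicted":
        v1.delete_namespaced_pod(pod.metadata.name, pod.metadata.namespace)
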
@@ -11,7 +11,7 @@ spec:
   releaseName: nc
   chart:
     git: https://open.greenhost.net/openappstack/nextcloud
-    ref: c77183d3661a4886e15d370904ca0cfc9e4da982
+    ref: e06a57b75be3a281680ecf1c1253094eea1ecabf
     path: .
   valuesFrom:
     - secretKeyRef:
import pytest
import json
import pprint


@pytest.mark.testinfra
def test_os_release(host):
    system_info = host.system_info
    assert system_info.release == '10'


@pytest.mark.testinfra
def test_kubernetes_setup(host):
    """
    Kube-bench checks if the setup conforms to the CIS security benchmark.
    Not all tests are relevant for the Rancher/RKE setup because, for
    example, 1.1 checks the rights of config files that do not exist in our
    system.
    """
    # Instantiate PrettyPrinter to get readable test output if it fails
    pp = pprint.PrettyPrinter()
    # Only run these tests
    # 1. Master tests
    # 1.1: Ignore, because it's about config files we don't have
    # 1.2: Ignore 1.2.32 - 1.2.35 because you don't need an encryption
    #      provider in a single node setup
    tests = []
    tests += ['1.2.{}'.format(x) for x in range(1, 5)]
    # Skip 1.2.6 (TLS settings), because we have only 1 node
    tests += ['1.2.{}'.format(x) for x in range(7, 15)]
    # TODO: Add PodSecurityPolicy so 1.2.16 can succeed
    tests += ['1.2.{}'.format(x) for x in range(17, 32)]
    # 1.3: Controller manager, all tests added
    tests += ['1.3.{}'.format(x) for x in range(1, 7)]
    # 1.4: Scheduler, all tests added
    tests += ['1.4.{}'.format(x) for x in range(1, 2)]
    # 2. Etcd, all tests added
    tests += ['2.{}'.format(x) for x in range(1, 7)]
    # 3. Control plane configuration, all tests added
    tests += ['3.1.1', '3.2.1', '3.2.2']
    # 4. Node tests
    # 4.1: Ignore, because it's about config files we don't have
    # 4.2:
    #   4.2.8: can't fix because we can't unset it
    #   4.2.10: seems related to TLS connections between nodes, so is not
    #   relevant for us at the moment
    tests += ['4.2.{}'.format(x) for x in range(1, 7)]
    tests += ['4.2.9']
    tests += ['4.2.{}'.format(x) for x in range(11, 13)]
    # 5: Kubernetes policies, not added for now because they are especially
    #    relevant for multi-user clusters.
    result_data = []
    check_arg = ",".join(tests)
    result = host.run(" ".join([
        "docker",
        "run",
        "--pid=host",
        "-v",
        "/etc:/etc:ro",
        "-v",
        "/var:/var:ro",
        "-t",
        "aquasec/kube-bench:latest",
        "--version=1.15",
        '--check="{}"'.format(check_arg),
        "--noremediations",
        "--noresults",
        "--nosummary",
        "--json"]), capture_output=True)
    if result.rc != 0:
        print("Docker run failed: ")
        print(result.stderr)
    # kube-bench doesn't give perfectly valid JSON as output. It gives 1 line
    # of valid JSON per test
    for line in result.stdout.splitlines():
        output_data = json.loads(line)
        if output_data['total_fail'] > 0:
            print("Failed tests: ")
            for test_output in output_data['tests']:
                if test_output['fail'] > 0:
                    print("Section {}".format(test_output['section']))
                    for test_result in test_output['results']:
                        if test_result['status'] == 'FAIL':
                            pp.pprint(test_result)
        result_data.append(output_data)
    for data in result_data:
        assert data['total_fail'] == 0
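For reference, a minimal, hypothetical kube-bench output line containing just the fields the test above relies on (real output carries more fields; this is an assumption for illustration only):

import json

# One line of kube-bench --json output, reduced to the keys used above.
example_line = (
    '{"total_fail": 1, "tests": [{"section": "1.2", "fail": 1,'
    ' "results": [{"status": "FAIL"}]}]}'
)
data = json.loads(example_line)
assert data['total_fail'] == 1
assert data['tests'][0]['results'][0]['status'] == 'FAIL'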