Compare revisions: stackspin/dashboard
Commits on Source (276)
Showing with 763 additions and 302 deletions
@@ -9,13 +9,13 @@ stages:
- package-helm-chart
- release-helm-chart
image: node:18-alpine
image: node:20-alpine
variables:
CHART_NAME: stackspin-dashboard
CHART_DIR: deployment/helmchart/
build-project:
yarn:
stage: build-project
before_script: []
script:
@@ -25,13 +25,14 @@ build-project:
- echo "REACT_APP_API_URL=/api/v1" > .env
- echo "EXTEND_ESLINT=true" >> .env
- yarn build
- mv build web-build
- mkdir docker
- mv build docker/html
- echo "Build successful"
artifacts:
expire_in: 1 hour
name: web-build
paths:
- frontend/web-build
- frontend/docker/html
.kaniko-build:
script:
@@ -42,6 +43,8 @@ build-project:
build-frontend-image:
stage: build-image
needs:
- yarn
image:
# We need a shell to provide the registry credentials, so we need to use the
# kaniko debug image (https://github.com/GoogleContainerTools/kaniko#debug-image)
@@ -49,7 +52,7 @@ build-frontend-image:
entrypoint: [""]
variables:
KANIKO_BUILD_IMAGENAME: dashboard
DIRECTORY: frontend/web-build
DIRECTORY: frontend/docker
before_script:
- cp deployment/Dockerfile $DIRECTORY
- cp deployment/nginx.conf $DIRECTORY
@@ -58,6 +61,7 @@ build-frontend-image:
build-backend-image:
stage: build-image
needs: []
variables:
KANIKO_BUILD_IMAGENAME: dashboard-backend
DIRECTORY: backend
@@ -71,6 +75,7 @@ build-backend-image:
build-test-image:
stage: build-image
needs: []
variables:
KANIKO_BUILD_IMAGENAME: cypress-test
DIRECTORY: tests
......
# Changelog
## [0.8.4]
## 0.13.3
- Fix creating app roles for users created via the CLI.
## 0.13.2
- Update hydra client library to v2 and adapt to its changed API.
- Change the JWT identity claim, because it is now strictly required to be a
string, and we previously put a JSON object in it.
## 0.13.1
- Add the `cryptography` python library as dependency of the backend. This is
necessary for sha256_password and caching_sha2_password auth methods.
## 0.13.0
- Fix the initial recovery flow created automatically for new users, which was
broken by the kratos client lib upgrade.
- Add support for serving arbitrary files from the frontend pod, provided by a
persistent volume claim. Reorganize the assets to make it easier to include
custom icons this way.
- Add support for theming. Customizing colours, logo and background image is
now particularly easy, but mostly anything can be changed through a custom
stylesheet.
- Only show Stackspin version info to admin users.
## 0.12.4
- Prevent database connections from being shared between worker processes.
That sharing may cause intermittent database errors.
## 0.12.3
- Fix broken kratos hooks tracking last recovery and login times.
- Upgrade python to 3.13.
## 0.12.2
- Fix consent page for `ENFORCE_2FA` instances.
## 0.12.1
- Add back `jinja2-base64-filters` to backend for templating kustomizations
during app installation.
## 0.12.0
- Add basic support for WebAuthn as second-factor authentication.
- Only show app version numbers in the dashboard tiles to admin users.
- Do not serve Dockerfile and nginx config from frontend.
- Start renovating python dependencies of the backend. Upgrade many direct and
indirect dependencies.
## 0.11.1
- Fix password reset form in case no email address is pre-filled.
## 0.11.0
- Allow pre-filling user's email address in a link to the password (re)set
form. This is useful when creating new user accounts.
- Fix user provisioning after installing new apps.
## 0.10.5
- Look up users from Kratos by email address using the proper (new) API
mechanism for that, instead of iterating over all users.
- Compare email addresses case-insensitively to deal with Stackspin apps
changing the case of email address strings.
- Fix broken user accounts when created via the flask CLI.
- Replace slightly off-spec usage of `__repr__` by `__str__`.
## 0.10.4
- Disable Zulip accounts when deleting users, because Zulip doesn't allow
deleting accounts via SCIM.
## 0.10.3
- Fix setting successful provisioning status.
## 0.10.2
- Fine-tune logging levels, and introduce a new environment variable
`LOG_LEVEL` to set the log level at runtime.
- Track when a user's full name has been changed, and only include the name in
the SCIM provisioning call when it has changed, or for newly provisioned
users.
## 0.10.1
- Watch dashboard configmaps with lists of apps and oauthclients, and
reload config on changes. This also makes sure that we always load the config
at dashboard start-up, even when there are no (SCIM-supporting) apps
installed.
## 0.10.0
- Include new "System resources" module with basic stats.
- Implement basic (manual/static) SCIM functionality for automatic user provisioning.
- Implement dynamic (i.e., arbitrary apps) SCIM functionality, tested and
tailored for Nextcloud and Zulip.
- Upgrade to tailwind v3, and update several other javascript dependencies.
- Make info modals slightly wider, to make sure you can see the full contents
also for slightly larger fonts. In particular, this fixes a partially
invisible reset link.
- Add a CLI command for deleting older unused accounts.
- Add logo for Gitea.
## 0.9.2
- Fix saving user properties, which was broken because of the partial tags
implementation.
## 0.9.1
- Do not autocomplete totp input field.
- Allow removing user app roles from CLI.
## 0.9.0
- Improve user listing: show label for admin users, show last login and
password reset times, improved layout.
- Fix rare bug in frontend's idea of admin status in the face of custom apps.
- Prepare backend for user tags.
## 0.8.4
- Allow enforcing 2fa.
- Add button for admin users to reset 2FA of users. Also improve UX of this and
@@ -10,25 +137,25 @@
- Fix css of demo sign-up.
- Upgrade to python 3.12.
## [0.8.3]
## 0.8.3
- Introduce backend code for resetting 2FA, and add cli command for that.
- Upgrade Kratos api library `ory-kratos-client` to 1.0.0.
- Patch our usage of Kratos api pagination of identities list.
## [0.8.2]
## 0.8.2
- End the Kratos session in prelogout. This makes sure that we end the "SSO
session" also when logging out from an app. We used to rely on hydra's
post-logout url to get at the kratos logout, but apps sometimes override that
url via an oidc parameter.
## [0.8.1]
## 0.8.1
- Add a couple of attributes to our OIDC tokens to support our switch to
another Nextcloud app for OIDC.
## [0.8.0]
## 0.8.0
- Add feature to easily edit app permissions for multiple users at once.
- Change the way secrets are created for apps, creating them in the stackspin
@@ -42,33 +169,33 @@
connections.
- Fix listing of Velero in app permissions when batch-creating users.
## [0.7.6]
## 0.7.6
- Add Forgejo metadata for use as custom app.
## [0.7.5]
## 0.7.5
- Add Jitsi and Mattermost metadata for use as custom apps.
## [0.7.4]
## 0.7.4
- Make the sign-in UI less wide.
## [0.7.3]
## 0.7.3
Only changes to the helm chart.
## [0.7.2]
## 0.7.2
- Apply Stackspin styling to the login component. This covers the login pages,
recovery page, and profile/authentication settings.
## [0.7.1]
## 0.7.1
- Load the flask_migrate flask extension in dev/cli mode so we may run `flask
db` commands from the cli again.
## [0.7.0]
## 0.7.0
- Improve the UX of the dashboard tiles: add help texts in modals, add a
status dropdown with version info, add alerts before and after automatic
@@ -81,28 +208,28 @@ Only changes to the helm chart.
This was fixed by upgrading Kratos; we're happy to see that the default
Kratos behaviour was changed in this regard.
## [0.6.7]
## 0.6.7
Only changes to the helm chart.
## [0.6.6]
## 0.6.6
Only changes to the helm chart.
## [0.6.5]
## 0.6.5
- Further improve (error) message handling. In particular, show feedback when
saving profile settings. Some of the previous error message changes have been
reverted pending further consideration of the design.
- Disable changing the email address as this is not supported right now.
## [0.6.4]
## 0.6.4
- Fix error messages that were not shown, in particular when providing wrong
credentials when logging in. We redesigned the error handling, considering
that these messages may be translated later on.
## [0.6.3]
## 0.6.3
- Add support for Hedgedoc.
- Add a button for admins for creating a recovery link for a user.
@@ -113,11 +240,11 @@ Only changes to the helm chart.
- Show the user UUID in user modal.
- Only show installed apps when configuring roles.
## [0.6.2]
## 0.6.2
- Fix submit button label in the form for verifying your TOTP code.
## [0.6.1]
## 0.6.1
- Add TOTP as second factor authentication. Please note that you'll need to set
a `backend.dashboardUrl` value instead of the old `backend.loginPanelUrl` one
@@ -125,13 +252,13 @@ Only changes to the helm chart.
- Create a new backend endpoint for providing some environment variables to the
frontend, with the URLs of the Kratos and Hydra APIs.
## [0.6.0]
## 0.6.0
- Make it easier to add apps, by reading apps and oauthclients from configmaps
at startup.
- Reset alembic migration history.
## [0.5.2]
## 0.5.2
- Fix login welcome message
- Clarify "set new password" button (#94)
@@ -139,11 +266,11 @@ Only changes to the helm chart.
entered (#96)
- Fix access checking for monitoring (#105)
## [0.5.1]
## 0.5.1
- Fix bug of missing "Monitoring" app access when creating a new user.
- Add Velero to the list of installable apps, but hide it from the dashboard
## [0.5.0]
## 0.5.0
- Merge dashboard-backend repository into this repository, released as 0.5.0
@@ -114,7 +114,7 @@ You need to do this once for every cluster you want to use as a development cluster
Before running the frontend in native mode:
* Make sure you have nodejs installed. You may want to use [Node Version
Manager](https://github.com/nvm-sh/nvm) to make it easy to install several
version side by side.
versions side by side.
* Install necessary javascript dependencies (will be placed in
`frontend/node_modules`) using `./dev.sh frontend setup`.
......
FROM python:3.12-slim
FROM python:3.13-slim
# set "app" as the working directory from which CMD, RUN, ADD references
WORKDIR /app
......
from apscheduler.schedulers.background import BackgroundScheduler
from flask import Flask, jsonify
from flask_cors import CORS
from flask_jwt_extended import JWTManager
import flask_migrate
from jsonschema.exceptions import ValidationError
from NamedAtomicLock import NamedAtomicLock
import threading
import traceback
from werkzeug.exceptions import BadRequest
# These imports are required
@@ -12,11 +15,16 @@ from cliapp import cli
from web import web
from areas import users
from areas import apps
from areas.apps.apps import *
from areas import auth
from areas import resources
from areas import roles
from areas import tags
from cliapp import cliapp
import config
import helpers.kubernetes
import helpers.provision
import helpers.threads
from web import login
from database import db
@@ -42,6 +50,7 @@ import os
import sys
# Configure logging.
log_level = logging.getLevelName(config.LOG_LEVEL or 'INFO')
from logging.config import dictConfig
dictConfig({
'version': 1,
@@ -52,15 +61,17 @@ dictConfig({
'class': 'logging.StreamHandler',
'stream': 'ext://flask.logging.wsgi_errors_stream',
'formatter': 'default',
'level': log_level,
}},
'root': {
'level': 'INFO',
'handlers': ['wsgi'],
'level': log_level,
},
# Loggers are created also by alembic, flask_migrate, etc. Without this
# setting, those loggers seem to be ignored.
'disable_existing_loggers': False,
})
logging.getLogger("kubernetes.client.rest").setLevel(logging.WARNING)
app = Flask(__name__)
@@ -69,12 +80,13 @@ app.config["SQLALCHEMY_DATABASE_URI"] = SQLALCHEMY_DATABASE_URI
app.config["SQLALCHEMY_ENGINE_OPTIONS"] = {'pool_pre_ping': True}
app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = SQLALCHEMY_TRACK_MODIFICATIONS
app.logger.setLevel(logging.INFO)
cors = CORS(app)
db.init_app(app)
with app.app_context():
provisioner = helpers.provision.Provision()
def init_routines():
with app.app_context():
# We have reset the alembic migration history at Stackspin version 2.2.
@@ -98,14 +110,53 @@ def init_routines():
app.logger.info(f"upgrade failed: {type(e)}: {e}")
sys.exit(2)
# We need this app context in order to talk to the database, which is managed
# by flask-sqlalchemy, which assumes a flask app context.
def reload():
# We need this app context in order to talk to the database, which is managed
# by flask-sqlalchemy, which assumes a flask app context.
with app.app_context():
app.logger.info("Reloading dashboard config from cluster resources.")
# Load the list of apps from a configmap and store any missing ones in the
# database.
app_slugs = cluster_config.populate_apps()
# Same for the list of oauthclients.
cluster_config.populate_oauthclients()
# Load per-app scim config if present.
cluster_config.populate_scim_config(app_slugs)
# We could call `reload` here manually, but the watch already emits events for
# existing secrets when it starts, so we don't need to.
with app.app_context():
# Load the list of apps from a configmap and store any missing ones in the
# database.
cluster_config.populate_apps()
# Same for the list of oauthclients.
cluster_config.populate_oauthclients()
# Set watch for dashboard SCIM config secrets. Any time those change,
# we reload so we can do SCIM for newly installed apps.
try:
helpers.kubernetes.watch_dashboard_config(app, reload)
except Exception as e:
app.logger.error(f"Error watching dashboard config: {e}")
# Set up a generic task scheduler (cron-like).
scheduler = BackgroundScheduler()
scheduler.start()
# Add a job to run the provisioning reconciliation routine regularly.
# We'll also run it when we make changes that should be propagated
# immediately.
scheduler.add_job(helpers.threads.request_provision, 'interval', id='provision', hours=24)
# We'll run this in a separate thread so it can be done in the background.
# We have this single "provisioning worker" so there will be only one
# provisioning operation at a time. We use an Event to signal a
# provisioning request.
def provisioning_loop():
while True:
app.logger.info("Waiting for next provisioning run.")
# helpers.threads.provision_event.wait()
# helpers.threads.provision_event.clear()
helpers.threads.wait_provision()
app.logger.info("Starting provisioning.")
with app.app_context():
try:
provisioner.reconcile()
except Exception as e:
app.logger.warning("Exception during user provisioning:")
app.logger.warning(traceback.format_exc())
threading.Thread(target=provisioning_loop).start()
# `init_routines` should only run once per dashboard instance. To enforce this
# we have different behaviour for production and development mode:
......@@ -128,6 +179,13 @@ else:
# cli.
flask_migrate.Migrate(app, db)
# Now that we've done all database interactions in the initialisation routines,
# we need to drop all connections to the database to prevent those from being
# shared among different worker processes.
logging.info("Disposing of database connections.")
with app.app_context():
db.engine.dispose()
app.register_blueprint(api_v1)
app.register_blueprint(web)
app.register_blueprint(cli)
......@@ -144,10 +202,19 @@ jwt = JWTManager(app)
# When token is not valid or missing handler
@jwt.invalid_token_loader
def invalid_token_callback(reason):
logging.info(f"Invalid token: {reason}.")
return jsonify({"errorMessage": "Unauthorized (invalid token)"}), 401
@jwt.unauthorized_loader
def unauthorized_callback(reason):
logging.info(f"No token: {reason}.")
return jsonify({"errorMessage": "Unauthorized (no token)"}), 401
@jwt.expired_token_loader
def expired_token_callback(*args):
return jsonify({"errorMessage": "Unauthorized"}), 401
def expired_token_callback(reason):
logging.info(f"Expired token: {reason}.")
return jsonify({"errorMessage": "Unauthorized (expired token)"}), 401
@app.route("/")
def index():
......
from .apps import *
from .apps_service import *
from .models import *
import threading
from flask import current_app
from flask_jwt_extended import get_jwt
import ory_kratos_client
from ory_kratos_client.api import identity_api
from .models import App, AppRole
from areas.roles.models import Role
from areas.users.models import User
from config import *
from database import db
from helpers.access_control import user_has_access
from helpers.kratos_user import KratosUser
import helpers.kubernetes as k8s
from helpers.threads import request_provision
class AppsService:
@staticmethod
@@ -18,7 +25,7 @@ class AppsService:
def get_accessible_apps():
apps = App.query.all()
kratos_admin_api_configuration = ory_kratos_client.Configuration(host=KRATOS_ADMIN_URL, discard_unknown_keys=True)
kratos_admin_api_configuration = ory_kratos_client.Configuration(host=KRATOS_ADMIN_URL)
with ory_kratos_client.ApiClient(kratos_admin_api_configuration) as kratos_admin_client:
kratos_identity_api = identity_api.IdentityApi(kratos_admin_client)
@@ -42,3 +49,39 @@ class AppsService:
def get_app_roles():
app_roles = AppRole.query.all()
return [{"user_id": app_role.user_id, "app_id": app_role.app_id, "role_id": app_role.role_id} for app_role in app_roles]
@classmethod
def install_app(cls, app):
app.install()
# Create app roles for the new app for all admins, and reprovision. We
# do this asynchronously, because we first need to wait for the app
# installation to be finished -- otherwise the SCIM config for user
# provisioning is not ready yet.
current_app.logger.info("Starting thread for creating app roles.")
# We need to pass the app context to the thread, because it needs that
# for database operations.
ca = current_app._get_current_object()
threading.Thread(target=cls.create_admin_app_roles, args=(ca, app,)).start()
@staticmethod
def create_admin_app_roles(ca, app):
"""Create AppRole objects for the given app for all admins."""
with ca.app_context():
ca.logger.info("Waiting for kustomizations to be ready.")
k8s.wait_kustomization_ready(app)
for user in User.get_all():
if not user['stackspin_data']['stackspin_admin']:
# We are only dealing with admin users here.
continue
existing_app_role = AppRole.query.filter_by(app_id=app.id, user_id=user['id']).first()
if existing_app_role is None:
ca.logger.info(f"Creating app role for app {app.slug} for admin user {user['id']}")
app_role = AppRole(
user_id=user['id'],
app_id=app.id,
role_id=Role.ADMIN_ROLE_ID
)
db.session.add(app_role)
db.session.commit()
ca.logger.info("Requesting user provisioning.")
request_provision()
"""Everything to do with Apps"""
import os
import base64
import enum
import os
from sqlalchemy import ForeignKey, Integer, String, Boolean
from sqlalchemy import Boolean, DateTime, Enum, ForeignKey, Integer, String, Unicode
from sqlalchemy.orm import relationship
from database import db
@@ -29,6 +30,9 @@ class App(db.Model):
# The URL is only stored in the DB for external applications; otherwise the
# URL is stored in a configmap (see get_url)
url = db.Column(String(length=128), unique=False)
scim_url = db.Column(String(length=1024), nullable=True)
scim_token = db.Column(String(length=1024), nullable=True)
scim_group_support = db.Column(Boolean, nullable=False, server_default='0')
oauthclients = relationship("OAuthClientApp", back_populates="app")
def __init__(self, slug, name, external=False, url=None):
@@ -37,7 +41,7 @@ class App(db.Model):
self.external = external
self.url = url
def __repr__(self):
def __str__(self):
return f"{self.id} <{self.name}>"
def get_url(self):
@@ -92,12 +96,12 @@ class App(db.Model):
def uninstall(self):
"""
Delete the app kustomization.
Delete the `add-$app` kustomization.
In our case, this triggers a deletion of the app's PVCs (so deletes all
data), as well as any other Kustomizations and HelmReleases related to
the app. It also triggers a deletion of the OAuth2Client object. It
also does not remove the TLS secret generated by cert-manager.
This triggers a deletion of the app's PVCs (so deletes all data), as
well as any other Kustomizations and HelmReleases related to the app.
It also triggers a deletion of the OAuth2Client object. It does not
remove the TLS secret generated by cert-manager.
"""
self.__delete_kustomization()
@@ -180,6 +184,24 @@ class App(db.Model):
return os.path.join(os.path.dirname(os.path.realpath(__file__)), "templates")
class ProvisionStatus(enum.Enum):
SyncNeeded = "SyncNeeded"
Provisioned = "Provisioned"
# Provisioning is not necessary for this user/role, for
# example because the user has no access to this app.
NotProvisioned = "NotProvisioned"
# SCIM Provisioning is not supported for this particular app.
NotSupported = "NotSupported"
# This user needs to be deleted from this app.
ToDelete = "ToDelete"
# This app role entry corresponds to a Stackspin user that no longer
# exists.
Orphaned = "Orphaned"
# Something went wrong; more information can be found in the
# `last_provision_message`.
Error = "Error"
class AppRole(db.Model): # pylint: disable=too-few-public-methods
"""
The AppRole object, stores the roles Users have on Apps
@@ -188,10 +210,25 @@ class AppRole(db.Model): # pylint: disable=too-few-public-methods
user_id = db.Column(String(length=64), primary_key=True)
app_id = db.Column(Integer, ForeignKey("app.id"), primary_key=True)
role_id = db.Column(Integer, ForeignKey("role.id"))
provision_status = db.Column(
Enum(
ProvisionStatus,
native_enum=False,
length=32,
values_callable=lambda _: [str(member.value) for member in ProvisionStatus]
),
nullable=False,
default=ProvisionStatus.SyncNeeded,
server_default=ProvisionStatus.SyncNeeded.value
)
last_provision_attempt = db.Column(DateTime, nullable=True)
last_provision_message = db.Column(Unicode(length=256), nullable=True)
scim_id = db.Column(Unicode(length=256), nullable=True)
role = relationship("Role")
app = relationship("App")
def __repr__(self):
def __str__(self):
return (f"role_id: {self.role_id}, user_id: {self.user_id},"
f" app_id: {self.app_id}, role: {self.role}")
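The `values_callable` passed to the `Enum` column above controls which strings SQLAlchemy persists: the members' `.value` strings rather than their Python attribute names. A database-free sketch of what ends up in the `VARCHAR(32)` column (enum values copied from the diff above):

```python
import enum

class ProvisionStatus(enum.Enum):
    SyncNeeded = "SyncNeeded"
    Provisioned = "Provisioned"
    NotProvisioned = "NotProvisioned"
    NotSupported = "NotSupported"
    ToDelete = "ToDelete"
    Orphaned = "Orphaned"
    Error = "Error"

# Mirrors the values_callable in the column definition: these strings,
# not the member names, are what get stored and validated.
stored = [str(member.value) for member in ProvisionStatus]
print(stored[0])  # → SyncNeeded
```

Here names and values coincide, so the callable mainly pins the storage format down explicitly and keeps it stable if a member is ever renamed.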
@@ -221,7 +258,7 @@ class AppStatus(): # pylint: disable=too-few-public-methods
kustomization = app.kustomization
if kustomization is not None and "status" in kustomization:
ks_ready, ks_message = AppStatus.check_condition(kustomization['status'])
ks_ready, ks_message = k8s.check_condition(kustomization['status'])
self.installed = True
if ks_ready:
self.ready = ks_ready
@@ -238,7 +275,7 @@ class AppStatus(): # pylint: disable=too-few-public-methods
helmreleases = app.helmreleases
for helmrelease in helmreleases:
hr_status = helmrelease['status']
hr_ready, hr_message = AppStatus.check_condition(hr_status)
hr_ready, hr_message = k8s.check_condition(hr_status)
# For now, only show the message of the first HR that isn't ready
if not hr_ready:
@@ -250,29 +287,9 @@ class AppStatus(): # pylint: disable=too-few-public-methods
self.ready = ks_ready
self.message = f"App Kustomization status: {ks_message}"
def __repr__(self):
def __str__(self):
return f"Installed: {self.installed}\tReady: {self.ready}\tMessage: {self.message}"
@staticmethod
def check_condition(status):
"""
Returns a tuple that has true/false for readiness and a message
Ready, in this case means that the condition's type == "Ready" and its
status == "True". If the condition type "Ready" does not occur, the
status is interpreted as not ready.
The message that is returned is the message that comes with the
condition with type "Ready"
:param status: Kubernetes resource's "status" object.
:type status: dict
"""
for condition in status["conditions"]:
if condition["type"] == "Ready":
return condition["status"] == "True", condition["message"]
return False, "Condition with type 'Ready' not found"
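The diff moves this helper to `helpers.kubernetes` (called as `k8s.check_condition` above), but its contract is unchanged. A small self-contained check of the behaviour the docstring describes, using made-up status dicts for illustration:

```python
def check_condition(status):
    """Return (ready, message) from the condition with type "Ready".

    Body copied from the removed AppStatus.check_condition above.
    """
    for condition in status["conditions"]:
        if condition["type"] == "Ready":
            return condition["status"] == "True", condition["message"]
    return False, "Condition with type 'Ready' not found"

# Illustrative status objects in the shape Flux puts on Kustomizations.
ready_status = {"conditions": [
    {"type": "Ready", "status": "True", "message": "Applied revision: main"},
]}
assert check_condition(ready_status) == (True, "Applied revision: main")
assert check_condition({"conditions": []})[0] is False
```

Note that the Kubernetes condition `status` field is the string `"True"`, not a boolean, which is why the comparison is against `"True"`.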
def to_dict(self):
"""Represents this app status as a dict"""
return {
@@ -296,6 +313,20 @@ class OAuthClientApp(db.Model): # pylint: disable=too-few-public-methods
app = relationship("App", back_populates="oauthclients")
def __repr__(self):
def __str__(self):
return (f"oauthclient_id: {self.oauthclient_id}, app_id: {self.app_id},"
f" app: {self.app}")
class ScimAttribute(db.Model): # pylint: disable=too-few-public-methods
"""
The ScimAttribute object records that a certain user attribute needs to be
set in a certain app via SCIM.
"""
user_id = db.Column(String(length=64), primary_key=True)
app_id = db.Column(Integer, ForeignKey("app.id"), primary_key=True)
attribute = db.Column(String(length=64), primary_key=True)
def __str__(self):
return (f"attribute: {self.attribute}, user_id: {self.user_id},"
f" app_id: {self.app_id}")
@@ -4,7 +4,7 @@ from flask_cors import cross_origin
from datetime import timedelta
from areas import api_v1
from areas.apps import App, AppRole
from areas.apps.models import App, AppRole
from config import *
from helpers import HydraOauth, BadRequest, KratosApi
@@ -28,6 +28,7 @@ def hydra_callback():
raise BadRequest("Missing code query param")
token = HydraOauth.get_token(state, code)
token_id = token["access_token"]
user_info = HydraOauth.get_user_info()
kratos_id = user_info["sub"]
@@ -35,7 +36,7 @@
try:
access_token = create_access_token(
identity=token, expires_delta=timedelta(hours=1), additional_claims={"user_id": kratos_id}
identity=token_id, expires_delta=timedelta(hours=1), additional_claims={"user_id": kratos_id}
)
except Exception as e:
raise BadRequest("Error creating auth token between backend and frontend")
......
from .resources import *
from .resources_service import *
from flask import jsonify, request
from flask_cors import cross_origin
from flask_expects_json import expects_json
from flask_jwt_extended import get_jwt, jwt_required
from areas import api_v1
from helpers.auth_guard import admin_required
from .resources_service import ResourcesService
@api_v1.route("/resources", methods=["GET"])
@jwt_required()
@cross_origin()
@admin_required()
def get_resources():
res = ResourcesService.get_resources()
return jsonify(res)
# from config import KRATOS_ADMIN_URL
# from database import db
# from kubernetes.client import CustomObjectsApi
# import re
import requests
from requests.exceptions import ConnectionError
from flask import current_app
# import helpers.kubernetes
class ResourcesService:
@classmethod
def get_resources(cls):
# custom = CustomObjectsApi()
# raw_nodes = custom.list_cluster_custom_object('metrics.k8s.io', 'v1beta1', 'nodes')
# raw_pods = custom.list_cluster_custom_object('metrics.k8s.io', 'v1beta1', 'pods')
# nodes = []
# for node in raw_nodes["items"]:
# nodes.append({
# "name": node["metadata"]["name"],
# "cpu_raw": node["usage"]["cpu"],
# "cpu": cls.parse_cpu(node["usage"]["cpu"]),
# "memory_raw": node["usage"]["memory"],
# "memory_used": cls.parse_memory(node["usage"]["memory"]),
# })
try:
cores = cls.get_prometheus('machine_cpu_cores')
current_app.logger.info(f"Number of cores: {cores}")
result = {
"success": True,
# Number of cores in use. So average usage times number of cores.
"cpu": cores * cls.get_prometheus('1 - (avg by(instance) (rate(node_cpu_seconds_total{mode="idle"}[8m])))', 'float'),
"cpu_total": cores,
# "memory_raw": node["usage"]["memory"],
# "memory_used": cls.parse_memory(node["usage"]["memory"]),
"memory_total": cls.get_prometheus('machine_memory_bytes'),
"memory_available": cls.get_prometheus('node_memory_MemAvailable_bytes'),
"disk_free": cls.get_prometheus('node_filesystem_free_bytes{mountpoint="/"}'),
"disk_total": cls.get_prometheus('node_filesystem_size_bytes{mountpoint="/"}'),
}
except ConnectionError:
return {
"success": False,
"message": "Could not contact prometheus; perhaps monitoring is not enabled.",
}
return result
# @staticmethod
# def parse_cpu(s):
# result = re.match(r"^(\d+)([mun]?)$", s)
# if result is None:
# raise Exception("cpu data does not match known patterns")
# number = result.group(1)
# suffix = result.group(2)
# multipliers = {"": 1, "m": 1e-3, "u": 1e-6, "n": 1e-9}
# return (int(number) * multipliers[suffix])
# @staticmethod
# def parse_memory(s):
# result = re.match(r"^(\d+)(|Ki|Mi|Gi)$", s)
# if result is None:
# raise Exception("memory data does not match known patterns")
# number = result.group(1)
# suffix = result.group(2)
# multipliers = {"": 1, "Ki": 1024, "Mi": 1024*1024, "Gi": 1024*1024*1024}
# return (int(number) * multipliers[suffix])
@staticmethod
def get_prometheus(query, cast='int'):
try:
params = {
"query": query,
}
result = requests.get("http://kube-prometheus-stack-prometheus:9090/api/v1/query", params=params)
current_app.logger.info(query)
current_app.logger.info(result.json())
value = result.json()["data"]["result"][0]["value"][1]
except AttributeError:
return None
if cast == 'float':
converted = float(value)
else:
converted = int(value)
return converted
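`get_prometheus` reads the standard Prometheus instant-query response, where each result carries a `[timestamp, "value"]` pair with the value encoded as a string. A canned-response sketch of the parsing and casting done above (no live Prometheus assumed; the payload is illustrative):

```python
# Canned payload in the shape returned by GET /api/v1/query.
canned = {
    "status": "success",
    "data": {"result": [{"metric": {}, "value": [1700000000.0, "0.42"]}]},
}

# Same path expression as in get_prometheus; the value is a string and
# must be cast to float or int by the caller.
value = canned["data"]["result"][0]["value"][1]
assert float(value) == 0.42
```

This is why the method takes a `cast` argument: queries like CPU utilisation need `float`, while byte counts are fine as `int`.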
# "pods": {
# "apiVersion": "metrics.k8s.io/v1beta1",
# "items": [
# {
# "containers": [
# {
# "name": "traffic-manager",
# "usage": {
# "cpu": "2839360n",
# "memory": "24696Ki"
# }
# }
# ],
# "metadata": {
# "creationTimestamp": "2023-11-30T15:10:10Z",
# "labels": {
# "app": "traffic-manager",
# "pod-template-hash": "5cd7cc7fd6",
# "telepresence": "manager"
# },
# "name": "traffic-manager-5cd7cc7fd6-mp7td",
# "namespace": "ambassador"
# },
# "timestamp": "2023-11-30T15:10:00Z",
# "window": "12.942s"
# },
@@ -9,5 +9,5 @@ class Role(db.Model):
id = db.Column(Integer, primary_key=True)
name = db.Column(String(length=64))
def __repr__(self):
def __str__(self):
return f"Role {self.name}"
@@ -6,7 +6,7 @@ class Tag(db.Model):
name = db.Column(String(length=256))
colour = db.Column(String(length=64))
def __repr__(self):
def __str__(self):
return f"Tag {self.slug}"
class TagUser(db.Model):
@@ -14,5 +14,5 @@ class TagUser(db.Model):
user_id = db.Column(String(length=64), primary_key=True)
tag_id = db.Column(Integer, primary_key=True)
def __repr__(self):
def __str__(self):
return f"TagUser, with tag_id {self.tag_id}, user_id {self.user_id}"
@@ -13,6 +13,7 @@ class TagService:
tag = Tag(name=data["name"], colour=data.get("colour"))
db.session.add(tag)
db.session.commit()
return {"id": tag.id, "name": tag.name, "colour": tag.colour}
@staticmethod
def update_tag(id, data):
......
@@ -25,8 +25,8 @@ def get_tags():
@admin_required()
def post_tag():
data = request.get_json()
TagService.create_tag(data)
return jsonify(message="Tag created successfully.")
tag = TagService.create_tag(data)
return jsonify(tag)
@api_v1.route("/tags/<int:id>", methods=["PUT"])
@jwt_required()
......
from areas.apps.models import App, AppRole
from areas.roles.models import Role
from areas.tags.models import TagUser
from database import db
from helpers import KratosApi
class User():
@staticmethod
def get_all():
page = 0
userList = []
# Get all associated user data (Stackspin roles, tags).
stackspinData = UserStackspinData()
while page >= 0:
if page == 0:
res = KratosApi.get("/admin/identities?per_page=1000").json()
else:
res = KratosApi.get("/admin/identities?per_page=1000&page={}".format(page)).json()
for r in res:
# Inject information from the `stackspin` database that's associated to this user.
r["stackspin_data"] = stackspinData.getData(r["id"])
userList.append(r)
if len(res) == 0:
page = -1
else:
page = page + 1
return userList
class UserStackspinData():
# TODO: we currently ignore the userID parameter, so we always get all
# associated information even if we only need it for a single user.
# That should be changed.
def __init__(self, userID=None):
self.dashboardRoles = self.__getDashboardRoles()
self.userTags = self.__getUserTags()
def getData(self, userID):
stackspinData = {}
dashboardRole = self.dashboardRoles.get(userID)
if dashboardRole is not None:
stackspinData["stackspin_admin"] = dashboardRole == Role.ADMIN_ROLE_ID
# Also, user tags.
stackspinData["tags"] = self.userTags.get(userID, [])
return stackspinData
@staticmethod
def setTags(userID, tags):
# Delete all existing tags, because the new set of tags is interpreted
# to overwrite the previous set.
db.session.query(TagUser).filter(TagUser.user_id == userID).delete()
# Now create an entry for every tag in the new list.
for tagID in tags:
tagUser = TagUser(user_id=userID, tag_id=tagID)
db.session.add(tagUser)
@staticmethod
def __getDashboardRoles():
dashboardRoles = {}
for appRole, app in (
db.session.query(AppRole, App)
.filter(AppRole.app_id == App.id)
.filter(App.slug == "dashboard")
.all()
):
dashboardRoles[appRole.user_id] = appRole.role_id
return dashboardRoles
@staticmethod
def __getUserTags():
userTags = {}
for tagUser in db.session.query(TagUser).all():
if tagUser.user_id in userTags:
userTags[tagUser.user_id].append(tagUser.tag_id)
else:
userTags[tagUser.user_id] = [tagUser.tag_id]
return userTags
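`__getUserTags` builds a user-to-tags mapping by appending to one list per user id. The same grouping can be written with `dict.setdefault`; a small sketch using hypothetical `(user_id, tag_id)` rows in place of the `TagUser` query:

```python
# Hypothetical rows, standing in for db.session.query(TagUser).all()
rows = [("u1", 1), ("u2", 2), ("u1", 3)]

user_tags = {}
for user_id, tag_id in rows:
    # setdefault inserts an empty list the first time a user id is seen
    user_tags.setdefault(user_id, []).append(tag_id)

print(user_tags)  # → {'u1': [1, 3], 'u2': [2]}
```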
import ory_kratos_client
from ory_kratos_client.model.json_patch \
from ory_kratos_client.models.json_patch \
import JsonPatch
from ory_kratos_client.model.json_patch_document \
import JsonPatchDocument
from ory_kratos_client.model.update_recovery_flow_body \
from ory_kratos_client.models.update_recovery_flow_body \
import UpdateRecoveryFlowBody
from ory_kratos_client.models.update_recovery_flow_with_link_method \
import UpdateRecoveryFlowWithLinkMethod
from ory_kratos_client.api import frontend_api, identity_api
from datetime import datetime
@@ -12,17 +12,19 @@ import time
from flask import current_app
from .models import User, UserStackspinData
from config import KRATOS_ADMIN_URL
from database import db
from areas.apps import App, AppRole, AppsService
from areas.roles import Role, RoleService
from areas.tags import TagUser
from areas.apps.models import App, AppRole, ProvisionStatus
from areas.apps.apps_service import AppsService
from areas.roles import Role
from helpers import KratosApi
from helpers.error_handler import KratosError
from helpers.provision import Provision
from helpers.threads import request_provision
kratos_admin_api_configuration = \
ory_kratos_client.Configuration(host=KRATOS_ADMIN_URL, discard_unknown_keys=True)
kratos_admin_api_configuration = ory_kratos_client.Configuration(host=KRATOS_ADMIN_URL)
kratos_client = ory_kratos_client.ApiClient(kratos_admin_api_configuration)
kratos_frontend_api = frontend_api.FrontendApi(kratos_client)
kratos_identity_api = identity_api.IdentityApi(kratos_client)
@@ -30,25 +32,7 @@ kratos_identity_api = identity_api.IdentityApi(kratos_client)
class UserService:
@classmethod
def get_users(cls):
page = 0
userList = []
# Get all associated user data (Stackspin roles, tags).
stackspinData = UserStackspinData()
while page >= 0:
if page == 0:
res = KratosApi.get("/admin/identities?per_page=1000").json()
else:
res = KratosApi.get("/admin/identities?per_page=1000&page={}".format(page)).json()
for r in res:
# Inject information from the `stackspin` database that's associated to this user.
r["stackspin_data"] = stackspinData.getData(r["id"])
userList.append(r)
if len(res) == 0:
page = -1
else:
page = page + 1
return userList
return User.get_all()
@classmethod
def get_user(cls, id):
@@ -99,11 +83,12 @@ class UserService:
db.session.add(app_role)
if data.get("tags"):
if data.get("tags") is not None:
UserStackspinData.setTags(res["id"], data["tags"])
# Commit all changes to the stackspin database.
db.session.commit()
request_provision()
# We start a recovery flow immediately after creating the
# user, so the user can set their initial password.
@@ -112,9 +97,13 @@ class UserService:
return UserService.get_user(res["id"])
@staticmethod
def reset_2fa(id):
def reset_totp(id):
KratosApi.delete("/admin/identities/{}/credentials/totp".format(id))
@staticmethod
def reset_webauthn(id):
KratosApi.delete("/admin/identities/{}/credentials/webauthn".format(id))
@staticmethod
def __start_recovery_flow(email):
@@ -129,90 +118,79 @@ class UserService:
:type email: str
"""
api_response = kratos_frontend_api.create_native_recovery_flow()
flow = api_response['id']
flow = api_response.id
# Submit the recovery flow to send an email to the new user.
update_recovery_flow_body = \
UpdateRecoveryFlowBody(method="link", email=email)
UpdateRecoveryFlowBody(UpdateRecoveryFlowWithLinkMethod(method="link", email=email))
api_response = kratos_frontend_api.update_recovery_flow(flow,
update_recovery_flow_body=update_recovery_flow_body)
@classmethod
def put_user(cls, id, data):
kratos_data = {
"schema_id": "default",
"traits": {"email": data["email"], "name": data.get("name", "")},
}
KratosApi.put("/admin/identities/{}".format(id), kratos_data)
# Get the old version of the identity. We need that for comparison to
# see if some attributes are changed by our update.
old_user = KratosApi.get("/admin/identities/{}".format(id)).json()
old_name = old_user["traits"].get("name", "")
new_name = data.get("name", "")
# Create list of patches with our changes.
patches = []
patches.append(JsonPatch(op="replace", path="/traits/email", value=data['email']))
patches.append(JsonPatch(op="replace", path="/traits/name", value=new_name))
# Determine whether we're really changing the name, and if so record
# that fact in the database in the form of a ScimAttribute. We'll use
# that information later during provisioning via SCIM.
if old_name != new_name:
current_app.logger.info(f"Name changed for: {data['email']}")
current_app.logger.info(f" old name: {old_name}")
current_app.logger.info(f" new name: {new_name}")
Provision.store_attribute(attribute='name', user_id=id)
# We used a PUT before, but that deletes any attributes that we don't
# specify, which is not so convenient. So we PATCH just the attributes
# we're changing instead.
kratos_identity_api.patch_identity(id, json_patch=patches)
if data["app_roles"]:
app_roles = data["app_roles"]
for ar in app_roles:
app = App.query.filter_by(slug=ar["name"]).first()
app_role = AppRole.query.filter_by(
user_id=id, app_id=app.id).first()
if app_role:
# There is already a role set for this user and app, so we
# edit it.
app_role.role_id = ar["role_id"] if "role_id" in ar else None
else:
# There is no role set yet for this user and app, so we
# create a new one.
appRole = AppRole(
user_id=id,
role_id=ar["role_id"] if "role_id" in ar else None,
app_id=app.id,
)
db.session.add(appRole)
cls.set_user_role(id, app.id, ar["role_id"] if "role_id" in ar else None)
if data["tags"]:
if data.get("tags") is not None:
UserStackspinData.setTags(id, data["tags"])
db.session.commit()
request_provision()
return cls.get_user(id)
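The PUT-to-PATCH switch in `put_user` matters because a PUT replaces the whole identity, silently dropping any traits that are not resent, while a JSON Patch (RFC 6902) `replace` only touches the listed paths. A dependency-free sketch of that semantics (the document shape is illustrative, not the real Kratos schema):

```python
import copy

def apply_replace_patches(doc, patches):
    """Apply JSON Patch 'replace' operations to a nested dict."""
    doc = copy.deepcopy(doc)
    for patch in patches:
        assert patch["op"] == "replace"
        # Walk down to the parent of the leaf named by the path
        *parents, leaf = patch["path"].lstrip("/").split("/")
        target = doc
        for key in parents:
            target = target[key]
        target[leaf] = patch["value"]
    return doc

identity = {"traits": {"email": "old@example.org", "name": "Old", "extra": "kept"}}
patched = apply_replace_patches(identity, [
    {"op": "replace", "path": "/traits/email", "value": "new@example.org"},
    {"op": "replace", "path": "/traits/name", "value": "New"},
])
# The untouched 'extra' trait survives, unlike with a full PUT
print(patched["traits"])
```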
@classmethod
def put_multiple_users(cls, user_editing_id, data):
for user_data in data["users"]:
kratos_data = {
# "schema_id": "default",
"traits": {"email": user_data["email"]},
}
KratosApi.put("/admin/identities/{}".format(user_data["id"]), kratos_data)
is_admin = RoleService.is_user_admin(user_editing_id)
if is_admin and user_data["app_roles"]:
app_roles = user_data["app_roles"]
for ar in app_roles:
app = App.query.filter_by(slug=ar["name"]).first()
app_role = AppRole.query.filter_by(
user_id=user_data["id"], app_id=app.id).first()
if app_role:
app_role.role_id = ar["role_id"] if "role_id" in ar else None
db.session.commit()
else:
appRole = AppRole(
user_id=user_data["id"],
role_id=ar["role_id"] if "role_id" in ar else None,
app_id=app.id,
)
db.session.add(appRole)
db.session.commit()
if user_data["tags"]:
UserStackspinData.setTags(user_data["id"], user_data["tags"])
return cls.get_user(user_data["id"])
def set_user_role(cls, user_id, app_id, role_id):
app_role = AppRole.query.filter_by(user_id=user_id, app_id=app_id).first()
if app_role:
# There is already a role set for this user and app, so we
# edit it.
app_role.role_id = role_id
# Mark the app role so the SCIM routine will pick it up at
# its next run.
app_role.provision_status = ProvisionStatus.SyncNeeded
else:
# There is no role set yet for this user and app, so we
# create a new one.
appRole = AppRole(
user_id=user_id,
role_id=role_id,
app_id=app_id,
)
db.session.add(appRole)
request_provision()
@staticmethod
def delete_user(id):
app_role = AppRole.query.filter_by(user_id=id).all()
for ar in app_role:
db.session.delete(ar)
ar.provision_status = ProvisionStatus.ToDelete
db.session.commit()
request_provision()
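`delete_user` no longer removes `AppRole` rows directly; it flags them `ToDelete` so the next provisioning run can deprovision remotely before the rows disappear. A toy sketch of that mark-then-reap pattern (the classes here are stand-ins, not the real models):

```python
from enum import Enum

class ProvisionStatus(Enum):
    Provisioned = "provisioned"
    SyncNeeded = "sync_needed"
    ToDelete = "to_delete"

class FakeAppRole:
    def __init__(self, user_id):
        self.user_id = user_id
        self.provision_status = ProvisionStatus.Provisioned

roles = [FakeAppRole("u1"), FakeAppRole("u2"), FakeAppRole("u1")]

# "Delete" a user: only mark the rows, so the SCIM run sees them first
for role in roles:
    if role.user_id == "u1":
        role.provision_status = ProvisionStatus.ToDelete

# Later, the provisioning run reaps the marked rows
roles = [r for r in roles if r.provision_status is not ProvisionStatus.ToDelete]
print([r.user_id for r in roles])  # → ['u2']
```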
@classmethod
def post_multiple_users(cls, data):
@@ -243,6 +221,7 @@ class UserService:
f"Exception: {error} on creating user: {user_email}")
creation_failed_users.append(user_email)
request_provision()
success_response = {}
existing_response = {}
failed_response = {}
@@ -262,12 +241,9 @@ class UserService:
def recovery_complete(userID):
# Current unix time.
now = int(datetime.today().timestamp())
current_app.logger.info("Waiting a little while before setting last_recovery...")
time.sleep(0.5)
current_app.logger.info(f"Set last_recovery for {userID} to {now}")
patch = JsonPatch(op="replace", path="/metadata_admin/last_recovery", value=now)
patch_doc = JsonPatchDocument(value=[patch])
kratos_identity_api.patch_identity(userID, json_patch_document=patch_doc)
kratos_identity_api.patch_identity(userID, json_patch=[patch])
@staticmethod
def login_complete(userID):
@@ -275,8 +251,7 @@ class UserService:
now = int(datetime.today().timestamp())
current_app.logger.info(f"Set last_login for {userID} to {now}")
patch = JsonPatch(op="replace", path="/metadata_admin/last_login", value=now)
patch_doc = JsonPatchDocument(value=[patch])
kratos_identity_api.patch_identity(userID, json_patch_document=patch_doc)
kratos_identity_api.patch_identity(userID, json_patch=[patch])
@staticmethod
def __insertAppRoleToUser(userId, userRes):
@@ -299,52 +274,3 @@ class UserService:
userRes["traits"]["app_roles"] = app_roles
return userRes
class UserStackspinData():
# TODO: we currently ignore the userID parameter, so we always get all
# associated information even if we only need it for a single user.
# That should be changed.
def __init__(self, userID=None):
self.dashboardRoles = self.__getDashboardRoles()
self.userTags = self.__getUserTags()
def getData(self, userID):
stackspinData = {}
dashboardRole = self.dashboardRoles.get(userID)
if dashboardRole is not None:
stackspinData["stackspin_admin"] = dashboardRole == Role.ADMIN_ROLE_ID
# Also, user tags.
stackspinData["tags"] = self.userTags.get(userID, [])
return stackspinData
@staticmethod
def setTags(userID, tags):
# Delete all existing tags, because the new set of tags is interpreted
# to overwrite the previous set.
db.session.query(TagUser).filter(TagUser.user_id == userID).delete()
# Now create an entry for every tag in the new list.
for tagID in tags:
tagUser = TagUser(user_id=userID, tag_id=tagID)
db.session.add(tagUser)
@staticmethod
def __getDashboardRoles():
dashboardRoles = {}
for appRole, app in (
db.session.query(AppRole, App)
.filter(AppRole.app_id == App.id)
.filter(App.slug == "dashboard")
.all()
):
dashboardRoles[appRole.user_id] = appRole.role_id
return dashboardRoles
@staticmethod
def __getUserTags():
userTags = {}
for tagUser in db.session.query(TagUser).all():
if tagUser.user_id in userTags:
userTags[tagUser.user_id].append(tagUser.tag_id)
else:
userTags[tagUser.user_id] = [tagUser.tag_id]
return userTags
@@ -36,12 +36,20 @@ def get_user_recovery(id):
res = UserService.create_recovery_link(id)
return jsonify(res)
@api_v1.route("/users/<string:id>/reset_2fa", methods=["POST"])
@api_v1.route("/users/<string:id>/reset_totp", methods=["POST"])
@jwt_required()
@cross_origin()
@admin_required()
def reset_2fa(id):
res = UserService.reset_2fa(id)
def reset_totp(id):
res = UserService.reset_totp(id)
return jsonify(res)
@api_v1.route("/users/<string:id>/reset_webauthn", methods=["POST"])
@jwt_required()
@cross_origin()
@admin_required()
def reset_webauthn(id):
res = UserService.reset_webauthn(id)
return jsonify(res)
# This is supposed to be called by Kratos as a webhook after a user has
@@ -118,18 +126,6 @@ def post_multiple_users():
return jsonify(res)
# multi-user editing of app roles
@api_v1.route("/users-multi-edit", methods=["PUT"])
@jwt_required()
@cross_origin()
@expects_json(schema_multi_edit)
@admin_required()
def put_multiple_users():
data = request.get_json()
user_id = __get_user_id_from_jwt()
res = UserService.put_multiple_users(user_id, data)
return jsonify(res)
@api_v1.route("/me", methods=["GET"])
@jwt_required()
@cross_origin()
@@ -146,7 +142,7 @@ def get_personal_info():
def update_personal_info():
data = request.get_json()
user_id = __get_user_id_from_jwt()
res = UserService.put_user(user_id, user_id, data)
res = UserService.put_user(user_id, data)
return jsonify(res)
......
@@ -7,7 +7,7 @@ the user entries in the database(s)"""
import sys
import click
import ory_hydra_client
import datetime
import ory_kratos_client
from flask import current_app
from flask.cli import AppGroup
@@ -17,7 +17,8 @@ from sqlalchemy import func
from config import HYDRA_ADMIN_URL, KRATOS_ADMIN_URL, KRATOS_PUBLIC_URL
from helpers import KratosUser
from cliapp import cli
from areas.apps import AppRole, App
from areas.apps.apps_service import AppsService
from areas.apps.models import AppRole, App
from areas.roles import Role
from areas.users import UserService
from database import db
@@ -25,10 +26,7 @@ from database import db
# APIs
# Kratos has an admin and public end-point. We create an API for the admin one.
# The kratos implementation has bugs, which forces us to set the
# discard_unknown_keys to True.
kratos_admin_api_configuration = \
ory_kratos_client.Configuration(host=KRATOS_ADMIN_URL, discard_unknown_keys=True)
kratos_admin_api_configuration = ory_kratos_client.Configuration(host=KRATOS_ADMIN_URL)
kratos_admin_client = ory_kratos_client.ApiClient(kratos_admin_api_configuration)
kratos_identity_api = identity_api.IdentityApi(kratos_admin_client)
@@ -53,7 +51,7 @@ def create_app(slug, name, external_url = None):
:param external_url: if set, it marks this as an external app and
configures the url
"""
current_app.logger.info(f"Creating app definition: {name} ({slug}")
current_app.logger.info(f"Creating app definition: {name} ({slug})")
obj = App(name=name, slug=slug)
@@ -82,6 +80,43 @@ def list_app():
print(f"App name: {obj.name}\tSlug: {obj.slug},\tURL: {obj.get_url()}\tStatus: {obj.get_status()}")
@user_cli.command("cleanup")
@click.option("--dry-run", is_flag=True, default=False)
def cleanup_users(dry_run):
"""
Remove users that have never been active and are at least six weeks old.
"""
current_app.logger.info("Listing inactive users")
if dry_run:
print("Dry run, so not deleting anything.")
users = KratosUser.find_all(kratos_identity_api)
number_users = 0
number_inactive_users = 0
for user in users:
number_users = number_users + 1
try:
last_recovery = user.metadata_admin.get('last_recovery')
except (KeyError, AttributeError):
last_recovery = None
if last_recovery is not None:
continue
print(user)
print(f" Created at: {user.created_at}")
# For this long period we ignore any timezone difference.
age = datetime.datetime.now(datetime.timezone.utc) - user.created_at
if age > datetime.timedelta(weeks=6):
print("That's more than 6 weeks ago.")
number_inactive_users = number_inactive_users + 1
if not dry_run:
print("Deleting.")
user.delete()
UserService.delete_user(user.uuid)
if dry_run:
print(f"Would delete {number_inactive_users} users out of {number_users} total.")
else:
print(f"Deleted {number_inactive_users} users out of {number_users} total.")
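The cleanup command compares a timezone-aware "now" against the Kratos `created_at` timestamp. A standalone sketch of that age check, using the same six-week threshold as the command above (dates are illustrative):

```python
import datetime

def is_stale(created_at, now=None, max_age=datetime.timedelta(weeks=6)):
    """True when the account is older than max_age; datetimes must be timezone-aware."""
    now = now or datetime.datetime.now(datetime.timezone.utc)
    return now - created_at > max_age

now = datetime.datetime(2024, 3, 1, tzinfo=datetime.timezone.utc)
old = datetime.datetime(2024, 1, 1, tzinfo=datetime.timezone.utc)   # 60 days old
new = datetime.datetime(2024, 2, 20, tzinfo=datetime.timezone.utc)  # 10 days old
print(is_stale(old, now), is_stale(new, now))  # → True False
```

Subtracting a naive datetime from an aware one raises `TypeError`, which is why the command builds "now" with `datetime.timezone.utc` attached.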
@app_cli.command(
"delete",
)
@@ -157,14 +192,14 @@ def install_app(slug):
if app.external:
current_app.logger.info(
f"App {slug} is an external app and can not be provisioned automatically")
f"App {slug} is an external app and cannot be provisioned automatically")
sys.exit(1)
current_status = app.get_status()
if not current_status.installed:
app.install()
AppsService.install_app(app)
current_app.logger.info(
f"App {slug} installing... use `status` to see status")
f"App {slug} installing...")
else:
current_app.logger.error(f"App {slug} is already installed")
@@ -199,7 +234,8 @@ def setrole(email, app_slug, role):
"""Set role for a user
:param email: Email address of user to assign role
:param app_slug: Slug name of the app, for example 'nextcloud'
:param role: Role to assign. currently only 'admin', 'user'
:param role: Role to assign. Currently only 'admin', 'user', 'none'/'no access'.
"""
current_app.logger.info(f"Assigning role {role} to {email} for app {app_slug}")
@@ -207,37 +243,24 @@ def setrole(email, app_slug, role):
# Find user
user = KratosUser.find_by_email(kratos_identity_api, email)
if role not in ("admin", "user"):
print("At this point only the roles 'admin' and 'user' are accepted")
sys.exit(1)
if not user:
print("User not found. Abort")
sys.exit(1)
app_obj = db.session.query(App).filter(App.slug == app_slug).first()
if not app_obj:
app = db.session.query(App).filter(App.slug == app_slug).first()
if not app:
print("App not found. Abort.")
sys.exit(1)
role_obj = (
db.session.query(AppRole)
.filter(AppRole.app_id == app_obj.id)
.filter(AppRole.user_id == user.uuid)
.first()
)
if role_obj:
db.session.delete(role_obj)
if role == "none":
role = "no access"
role = Role.query.filter(func.lower(Role.name) == func.lower(role)).first()
if not role:
print("Role not found. Abort.")
sys.exit(1)
obj = AppRole()
obj.user_id = user.uuid
obj.app_id = app_obj.id
obj.role_id = role.id if role else None
UserService.set_user_role(user.uuid, app.id, role.id)
db.session.add(obj)
db.session.commit()
@@ -249,24 +272,25 @@ def show_user(email):
:param email: Email address of the user to show
"""
user = KratosUser.find_by_email(kratos_identity_api, email)
if user is not None:
print(user)
print("")
print(f"UUID: {user.uuid}")
print(f"Username: {user.username}")
print(f"Updated: {user.updated_at}")
print(f"Created: {user.created_at}")
print(f"State: {user.state}")
print(f"Roles:")
results = db.session.query(AppRole, Role).join(App, Role)\
.add_entity(App).add_entity(Role)\
.filter(AppRole.user_id == user.uuid)
for entry in results:
app = entry[-2]
role = entry[-1]
print(f" {role.name: >9} on {app.name}")
else:
if user is None:
print(f"User with email address '{email}' was not found")
return
print(user)
print("")
print(f"UUID: {user.uuid}")
print(f"Username: {user.username}")
print(f"Updated: {user.updated_at}")
print(f"Created: {user.created_at}")
print(f"State: {user.state}")
print("Roles:")
results = db.session.query(AppRole)\
.filter_by(user_id=user.uuid)\
.join(App).join(Role)\
.add_entity(App).add_entity(Role)
for entry in results:
app = entry[-2]
role = entry[-1]
print(f" {role.name: >9} on {app.name}")
@user_cli.command("update")
@@ -292,6 +316,10 @@ def update_user(email, field, value):
else:
current_app.logger.error(f"Field not found: {field}")
# TODO: this currently deletes the last_recovery and last_login because
# `save` uses a simple PUT and is not aware of those fields. We should
# switch to PATCH instead, or refactor so `save` uses the same code as
# `put_user`.
user.save()
@@ -306,7 +334,7 @@ def delete_user(email):
if not user:
current_app.logger.error(f"User with email {email} not found.")
sys.exit(1)
user.delete()
UserService.delete_user(user.uuid)
@user_cli.command("create")
@@ -328,6 +356,20 @@ def create_user(email):
user.email = email
user.save()
dashboard_app = db.session.query(App).filter(App.slug == 'dashboard').first()
if not dashboard_app:
print("Dashboard app not found. Aborting.")
sys.exit(1)
user_role = Role.query.filter(func.lower(Role.name) == 'user').first()
if not user_role:
print("User role not found. Aborting.")
sys.exit(1)
UserService.set_user_role(user.uuid, dashboard_app.id, user_role.id)
db.session.commit()
@user_cli.command("setpassword")
@click.argument("email")
@@ -386,9 +428,7 @@ def recover_user(email):
"""Get a recovery link for a user, to manually update or use the account.
:param email: Email address of the user
"""
current_app.logger.info(f"Trying to send recover email for user: {email}")
try:
# Get the ID of the user
kratos_user = KratosUser.find_by_email(kratos_identity_api, email)
@@ -401,9 +441,25 @@ def recover_user(email):
current_app.logger.error(f"Error while getting reset link: {error}")
@user_cli.command("reset_2fa")
@user_cli.command("reset_totp")
@click.argument("email")
def reset_totp(email):
"""Remove configured totp second factor for a user.
:param email: Email address of the user
"""
current_app.logger.info(f"Removing totp second factor for user: {email}")
try:
# Get the ID of the user
kratos_user = KratosUser.find_by_email(kratos_identity_api, email)
# Get a recovery URL
UserService.reset_totp(kratos_user.uuid)
except Exception as error: # pylint: disable=broad-except
current_app.logger.error(f"Error while removing totp second factor: {error}")
@user_cli.command("reset_webauthn")
@click.argument("email")
def reset_2fa(email):
def reset_webauthn(email):
"""Remove configured webauthn second factor for a user.
:param email: Email address of the user
"""
@@ -414,9 +470,9 @@ def reset_2fa(email):
# Get the ID of the user
kratos_user = KratosUser.find_by_email(kratos_identity_api, email)
# Get a recovery URL
UserService.reset_2fa(kratos_user.uuid)
UserService.reset_webauthn(kratos_user.uuid)
except Exception as error: # pylint: disable=broad-except
current_app.logger.error(f"Error while removing second factor: {error}")
current_app.logger.error(f"Error while removing webauthn second factor: {error}")
cli.cli.add_command(user_cli)