TL;DR: This error message was caused by an empty (i.e. 0-byte) key file in the Notary client's "trust_dir" folder.
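A quick way to check for this condition is to scan the trust directory for zero-byte key files. This is a minimal sketch; the Notary client's default trust_dir is ~/.notary, but adjust the path if you have configured a different one.

```shell
# List zero-byte .key files under a given trust directory.
find_empty_keys() {
  find "$1" -type f -name '*.key' -size 0 2>/dev/null
}

# Demo against a throwaway directory with one empty and one valid key
# (illustrative file names, not real Notary key IDs).
demo=$(mktemp -d)
touch "$demo/corrupt.key"
printf 'PEM data' > "$demo/ok.key"
find_empty_keys "$demo"   # prints only .../corrupt.key
```

To scan your real trust_dir, run something like `find_empty_keys "$HOME/.notary/private"` and delete or restore any files it reports.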
The aim of Prometheus Operator is to provide Kubernetes-native deployment and management of Prometheus and related monitoring components. The kube-prometheus-stack helm chart (formerly named prometheus-operator) sets everything up with just one helm command. However, it leaves out specific details about the underlying implementation. In this post, I'll take a deeper look at what happens under the hood when the kube-prometheus-stack helm chart is installed in a Kubernetes cluster.
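For reference, that one-command install looks roughly like the following (assuming Helm 3 and a reachable cluster context; the release and namespace names are illustrative):

```shell
# Add the chart repository and install kube-prometheus-stack.
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace
```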
Kubernetes has several methods to authorize requests to the API server, namely Node, Attribute-based access control (ABAC), Role-based access control (RBAC), and Webhook. While reading the RBAC documentation on default ClusterRoles, I found the descriptions vague - probably generalized by the author(s) so as to remain relevant across the various Kubernetes versions. However, I wanted a quick reference guide on the exact resources and permissions each of them grants (e.g. for the "pod" resource, the "edit" ClusterRole has X, Y and Z permissions). Hopefully the following list helps others who are looking for something similar.
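If you have cluster access, you can dump the exact rules yourself rather than rely on the docs; the role and subject names below are examples:

```shell
# Print the full rule set of a default ClusterRole.
kubectl get clusterrole edit -o yaml

# Or spot-check a single resource/verb combination for a subject.
kubectl auth can-i create pods --as=system:serviceaccount:default:default
```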
I was experimenting with how I could expose applications in AWS Elastic Kubernetes Service (EKS) via Kubernetes Service resources and AWS load balancers. Out of curiosity, I also wanted to know if I could SSH into containers in EKS without using "kubectl exec" or any container runtime commands (e.g. "docker attach"). One scenario would be when I need to access the container's filesystem to extract a log/config file, but 1) I do not have the EKS cluster admin role for more permissive actions, and 2) the kubectl environment is exposed via a structured CI/CD pipeline and is non-interactive. As I could not find any concrete examples or tutorials, here are my setup and implementation steps.
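The core idea can be sketched in two commands (assuming a Deployment named ssh-demo running an SSH daemon on port 22; all names here are illustrative, not the exact ones from the post):

```shell
# Expose the SSH port through an AWS load balancer via a Service.
kubectl expose deployment ssh-demo --type=LoadBalancer --port=22 --name=ssh-demo-lb

# Once AWS provisions the load balancer, connect with plain ssh.
ssh user@"$(kubectl get svc ssh-demo-lb \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')"
```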
Several guides covering various permutations of this task already exist, so to avoid reinventing the wheel, I'll just provide the commands and terse explanations of why certain flags are set.
JFrog Artifactory and its complementary suite of tools are well known across the industry. As part of preparing for a certification, I wanted to find out more about how it is administered. This post covers how to install JFrog Artifactory 7 and Xray 3 using Helm charts on an AWS EC2 instance.
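At a high level, the Helm side of the install looks like this (a sketch assuming Helm 3 and a running cluster on the EC2 instance; release and namespace names are illustrative):

```shell
# Add JFrog's chart repository and install Artifactory.
helm repo add jfrog https://charts.jfrog.io
helm repo update
helm install artifactory jfrog/artifactory \
  --namespace artifactory --create-namespace
```

Xray is installed the same way from its own chart (jfrog/xray), with values wired up to point at the Artifactory instance.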
In a previous post, I explained how to set up a PostgreSQL database on Alpine Linux. To make it repeatable, I condensed the entire setup into a Vagrantfile using the Linux Containers (LXC) provider.
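The shape of such a Vagrantfile is roughly the following (a sketch, not the exact file from the post; the box name, container name, and provision script are illustrative):

```ruby
# Vagrantfile: spin up an LXC container and run the PostgreSQL setup script.
Vagrant.configure("2") do |config|
  config.vm.box = "generic/alpine38"          # illustrative LXC-compatible box

  config.vm.provider :lxc do |lxc|
    lxc.container_name = "alpine-postgres"    # illustrative container name
  end

  # Re-runs the condensed setup from the earlier post on every provision.
  config.vm.provision "shell", path: "setup-postgres.sh"
end
```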
The vagrant-lxc plugin lets LXC act as a (custom) provider for Vagrant, allowing Vagrant boxes to be spun up as LXC containers on the host machine.
Depending on your distro and what came with your host OS installation, installing the plugin alone may not be enough, as it relies on additional packages to work. In this post, I will go through how to set up the vagrant-lxc plugin on a clean installation of Ubuntu Server.
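The setup boils down to installing the system packages first and the plugin second. This is a sketch; the exact package list is an assumption based on the plugin's stated prerequisites (LXC itself plus redir for port forwarding) and may vary by Ubuntu release:

```shell
# Install the LXC userspace tools the plugin depends on.
sudo apt-get update
sudo apt-get install -y lxc lxc-templates redir

# Then install the plugin into Vagrant.
vagrant plugin install vagrant-lxc
```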
This post explains how I installed the latest version of CKAN (version 2.8.2 at the time of writing) on Linux Containers (LXC). I see this method of using LXC as a middle ground between installing from source and using Docker Compose: while it is not as effortless as Docker Compose and its orchestration, this setup has more flexibility and, once completed, can easily be repeated (using snapshots and images) and scaled.
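The starting point is a fresh container to install CKAN into. The commands below assume the LXD front end ("lxc launch"); classic LXC tooling would use lxc-create instead, and the Ubuntu release shown is illustrative:

```shell
# Create and enter a container for the CKAN install.
lxc launch ubuntu:16.04 ckan
lxc exec ckan -- bash
```

Snapshots (`lxc snapshot ckan`) are what make the finished setup easy to repeat or roll back.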
I found installing and starting PostgreSQL on Alpine Linux to be a not-so-straightforward task, so here's a short post on how I did it.
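The gist of what trips people up is that, unlike on most distros, the socket and data directories are not prepared for you. The following is a minimal sketch run as root; the paths are illustrative, and Alpine's init script may expect a versioned data directory (check /etc/conf.d/postgresql on your system):

```shell
# Install PostgreSQL from the Alpine repositories.
apk add postgresql

# Create the socket and data directories, which do not exist by default.
mkdir -p /run/postgresql /var/lib/postgresql/data
chown -R postgres:postgres /run/postgresql /var/lib/postgresql

# Initialise the cluster as the postgres user; its login shell is
# disabled, hence the explicit -s /bin/sh.
su - postgres -s /bin/sh -c 'initdb -D /var/lib/postgresql/data'

# Start the service via OpenRC.
rc-service postgresql start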