[Helm Chart] Document how to install with decomposed mode (external database)

Related to this #383534 (closed)

We need to do the same, but for configuring the GitLab Helm chart with a decomposed database setup where the database is hosted outside of the Helm chart, which is the recommended approach for production environments.

We should add a section to https://docs.gitlab.com/charts/advanced/external-db/ covering how to configure PostgreSQL with external `main` and `ci` databases, and possibly update https://docs.gitlab.com/charts/charts/globals.html#configure-postgresql-settings as well.

Related MR that supported this: gitlab-org/charts/gitlab!2122 (merged)

How to spin up a VM to test GitLab charts with decomposed external database setup

1. Create the VM instance on GCP

  1. Create a GCP project with https://about.gitlab.com/handbook/infrastructure-standards/realms/sandbox/#how-to-get-started
  2. Go to your GCP project in console.cloud.google.com. Wait a few minutes if your project doesn't show up yet.
  3. Go to the Compute Engine section, enable the API
  4. Create a new VM instance. I chose machine type n2-standard-4. Consider choosing at least 20 GB for the attached volume size.
  5. Download and install gcloud locally: https://cloud.google.com/sdk/docs/install. Then select the SSH dropdown for the VM instance you want to connect to, and choose the gcloud SSH option to get the command line for connecting via SSH using the gcloud tool.
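The connection command copied from the console typically looks like the following (the instance name, zone, and project here are placeholders; use the values shown in the SSH dropdown for your own instance):

```shell
# Connect to the VM over SSH using the gcloud CLI.
# my-test-vm, europe-west3-a, and my-sandbox-project are placeholders.
gcloud compute ssh my-test-vm \
    --zone europe-west3-a \
    --project my-sandbox-project
```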

2. Install the prerequisites on the VM

Install Docker

  1. Install Docker: https://docs.docker.com/engine/install/debian/
  2. Make sure you add user to the Docker group: https://docs.docker.com/engine/install/linux-postinstall/

Minikube

Kubectl

  • sudo apt-get install kubectl (this requires the Kubernetes apt repository to be configured first; see https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/)

Wget/Helm
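The Minikube and Helm subsections above don't list commands; on a Debian VM the installs typically look like this (commands taken from the respective official install docs, current at the time of writing):

```shell
# Minikube: download the latest release binary and install it
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube

# Helm: the official installer script detects the platform and installs helm
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
```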

3. Create the external database and setup the main/ci databases and database users (postgres/gitlab)

For the sake of testing, we will create the external database via Docker. It will reside outside of the Kubernetes cluster.

For our testing purposes, we will assume we have two database users along with their passwords:

  • postgres / test123
  • gitlab / bogus
docker run --rm -d --name=pg12 -p 5432:5432 -e POSTGRES_PASSWORD=test123 -e POSTGRES_HOST_AUTH_METHOD=md5 postgres:12-bullseye
docker exec -ti pg12 bash
psql -h localhost -U postgres -d postgres
CREATE USER gitlab SUPERUSER;
CREATE DATABASE gitlabhq_production;
CREATE DATABASE gitlabhq_production_ci;
-- set a password for the gitlab user when prompted (for example: bogus)
\password gitlab
-- exit the psql client with \q, then run `exit` to leave the container
\q
exit
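To confirm the databases and the gitlab user were created correctly, you can check back from the VM (passwords as assumed above; the second command needs the psql client installed on the host, otherwise run it via docker exec as well):

```shell
# List the databases; the output should include gitlabhq_production
# and gitlabhq_production_ci
docker exec pg12 psql -U postgres -c '\l'

# Verify the gitlab user can log in over TCP with the password set above
PGPASSWORD=bogus psql -h localhost -U gitlab -d gitlabhq_production -c 'SELECT 1;'
```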

4. Create the Minikube kubernetes cluster

Feel free to increase the CPU and memory values if you have a bigger instance:

minikube start --driver docker --cpus 4 --memory 12000 

You can test that it's working by issuing a kubectl command such as:

kubectl get nodes

You should get something like

NAME       STATUS   ROLES           AGE   VERSION
minikube   Ready    control-plane   17m   v1.25.3

5. Create a Kubernetes Secret that contains the passwords for the database

kubectl create secret generic gitlab-postgresql-password \
    --from-literal=main-postgres-password=test123 \
    --from-literal=main-gitlab-password=bogus \
    --from-literal=ci-postgres-password=test123 \
    --from-literal=ci-gitlab-password=bogus

The kubernetes pods are going to reference this kubernetes secret for the database passwords for both databases.
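You can verify the secret was created with the expected keys before deploying:

```shell
# Confirm the secret exists and contains the four expected keys
kubectl get secret gitlab-postgresql-password -o jsonpath='{.data}'

# Decode one key to double-check its value
kubectl get secret gitlab-postgresql-password \
    -o jsonpath='{.data.main-gitlab-password}' | base64 -d
```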

6. Get the Internal IP Address of the virtual server and save it into an Environment Variable

export INTERNAL_IP=10.156.0.2

You can get the IPv4 address using the command ip -f inet addr show ens4 | awk '/inet / {print $2}', or from the GCP console.
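Note that the awk command prints the address in CIDR form, so strip the prefix length before exporting it. A small sketch (the sample ip output below is illustrative):

```shell
# Sample output in the shape `ip -f inet addr show ens4` produces
# (the address and interface details here are illustrative):
sample='2: ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460
    inet 10.156.0.2/32 scope global dynamic ens4
       valid_lft 85958sec preferred_lft 85958sec'

# The awk filter prints the address in CIDR form (e.g. 10.156.0.2/32) ...
cidr=$(printf '%s\n' "$sample" | awk '/inet / {print $2}')

# ... so strip the prefix length to get the bare IPv4 address
ip_addr=${cidr%/*}
echo "$ip_addr"   # → 10.156.0.2
export INTERNAL_IP="$ip_addr"
```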

7. Clone the Gitlab Charts Repository

git clone https://gitlab.com/gitlab-org/charts/gitlab.git
cd gitlab

8. Deploy the Helm Release

helm upgrade --install gitlab . --timeout=900s -f examples/values-minikube.yaml \
  --set prometheus.install=false --set gitlab-runner.install=false \
  --set global.grafana.enabled=false --set postgresql.install=false \
  --set global.psql.main.host=$INTERNAL_IP --set global.psql.main.database=gitlabhq_production \
  --set global.psql.main.password.secret=gitlab-postgresql-password --set global.psql.main.password.key=main-gitlab-password \
  --set global.psql.ci.host=$INTERNAL_IP --set global.psql.ci.database=gitlabhq_production_ci \
  --set global.psql.ci.password.secret=gitlab-postgresql-password --set global.psql.ci.password.key=ci-gitlab-password

Note that the password key is set per database (global.psql.main.password.key and global.psql.ci.password.key); setting global.psql.password.key twice would make the second value override the first.
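The release takes several minutes to converge. Assuming the release name gitlab as above, you can watch progress with standard commands:

```shell
# Watch pods until webservice, sidekiq, etc. become Running/Ready
kubectl get pods --watch

# Summary of the Helm release state
helm status gitlab
```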

9. Verifying the setup

Let's enter one of the rails pods to verify our setup

export POD_NAME=$(kubectl get pod -l "app=webservice" --no-headers -o custom-columns=":metadata.name"|head -n 1)
kubectl exec -ti $POD_NAME -- bash
cd /srv/gitlab
cat config/database.yml # To verify the database configuration
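The generated database.yml should contain both a main: and a ci: entry under production, each pointing at the external host. A quick spot check from outside the pod (reusing $POD_NAME from above; the host values should match $INTERNAL_IP):

```shell
# Show the main/ci sections of the generated database config
kubectl exec $POD_NAME -- \
    grep -E 'main:|ci:|host:|database:' /srv/gitlab/config/database.yml
```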

Lock the writes on the legacy tables. This step should later be automated as part of the migration tool that's intended to be used by self-managed customers to migrate from a single database to a decomposed database setup:

./bin/rake gitlab:db:lock_writes

Fill the database with seed data

./bin/rake db:seed_fu
Edited by Omar Qunsul