proposal: post-deploy tests
End-to-end tests can already be done in the test stage, but this usually needs some setup to provide an environment with a database etc. Also, this usually tests against a somewhat artificial environment that might not reflect "real" deployments on kubernetes.
So my proposal is to introduce a post-deploy stage that runs after the deployment, against the actual deployment on kubernetes.
Idea
- you can define a post-deploy job containing an image and command in values-file or gitlab-ci file
- that job will use the image and invoke the command. It will default to the app image, to make it easy to use the same repository for the test definitions
- The job will run after the deployment and will show the results in the pipeline
- it will run first and foremost in every review-app pipeline. We have to make sure that the target url is already visible in the merge request, so that you don't have to wait for the post-deploy job to finish before checking the review app yourself
- we can additionally run it after every dev deploy but before the "create-release" step. This way we make sure that the app has usually been tested by that new job before a release tag is created
- we can also allow it to run after stage and prod builds
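As a sketch, the job definition in the values file could look like this (the key names `postDeploy`, `image` and `command` are illustrative, not a final schema):

```yaml
# values.yaml (sketch -- key names are made up for illustration)
postDeploy:
  # optional; defaults to the app image when omitted
  image: registry.example.com/my-app/tests:latest
  # command invoked by the post-deploy job
  command: ["npm", "run", "test:e2e"]
```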
some thoughts
- running it on prod builds might sound controversial, but in fact many customers "test" the app on prod anyway, so it's in general a good idea to make the app stable towards people (or bots) that test the app. But in the end that's up to the project to decide
- running it against stage or dev is also interesting, because these databases are usually not fresh. Having stable tests against a filled database is probably a good idea because it reflects real-world use cases
implementation details
There are multiple ways to run it:
- run it as "post-deploy" in helm: that will run it as a job inside kubernetes. It will know which app it has to test by its public url (ROOT_URL or a special env var that we can provide). Running it as a helm hook has the advantage of marking the release as failed when the job fails, but has some major disadvantages:
  - it will do the test as part of `helm deploy`, which is bad as it is very intransparent what's happening there. Also helm usually has a timeout, and as end-to-end tests tend to be slow, that timeout will become a problem
  - it's not easy to see the test results directly in the merge request
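For reference, the helm variant would be a Job template annotated as a post-install/post-upgrade hook (a sketch; the `postDeploy` values and the `ROOT_URL` env var name are assumptions, the `helm.sh/hook` annotations are standard helm):

```yaml
# templates/post-deploy-test-job.yaml (sketch)
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-post-deploy-test
  annotations:
    # run after install/upgrade; a failing job marks the release as failed
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: e2e
          image: {{ .Values.postDeploy.image }}
          command: {{ .Values.postDeploy.command | toJson }}
          env:
            # public url of the deployed app (env var name is an assumption)
            - name: ROOT_URL
              value: {{ .Values.rootUrl | quote }}
```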
- run it as a separate job in the gitlab pipeline, but execute it as a kubernetes job (using a `kubectl apply`)
  - this decouples the helm deploy from the test run, and it runs directly in the cluster
  - unsure how we can display the results of the job directly in the pipeline
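One way to at least surface the results: the pipeline job applies the Job manifest, waits for it, and streams its logs into the gitlab job output (a sketch; manifest, job and stage names are illustrative):

```yaml
# .gitlab-ci.yml fragment (sketch)
post-deploy-test:
  stage: post-deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl apply -f e2e-test-job.yaml
    # block until the job completes; a failed or hanging job hits the timeout
    - kubectl wait --for=condition=complete job/e2e-test --timeout=15m
    # pull the test output into the gitlab job log
    - kubectl logs job/e2e-test
```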
- run it inside the gitlab pipeline (best if it works)
  - we could define a gitlab job that uses the newly created app image (or the image that is defined for the post-deploy job) and runs the command. Actually, we can just run the command directly, no need to use that image as we are already in the repository
  - that would make it easy to display the progress and the result in gitlab itself
  - it will consume resources on the gitlab clusters, not sure if that is a problem
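A minimal sketch of that pipeline-native variant (stage name, command and url pattern are assumptions; `CI_ENVIRONMENT_URL` is a predefined gitlab variable when the job declares an environment):

```yaml
# .gitlab-ci.yml fragment (sketch -- names are illustrative)
post-deploy-test:
  stage: post-deploy
  environment:
    name: review/$CI_COMMIT_REF_SLUG
  script:
    # the test command runs in the repo checkout, against the public url
    - ROOT_URL="$CI_ENVIRONMENT_URL" npm run test:e2e
```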
- The intended type of test runs against the public url, so it can run anywhere and has no dependencies
- if the test needs to know secrets (login data), we cannot use the values from bitwarden as gitlab does not have access to it. We could either just define env vars in gitlab-ci (as used for the pipeline itself) or we could fetch secrets from kubernetes using kubectl.
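The kubectl option could look roughly like this (a sketch; the secret name `app-login` and key `password` are made up, `kubectl get secret -o jsonpath` is the standard way to read a secret value):

```yaml
# .gitlab-ci.yml fragment (sketch -- secret and key names are made up)
post-deploy-test:
  stage: post-deploy
  script:
    # read login data from an existing kubernetes secret at runtime
    - export TEST_PASSWORD=$(kubectl get secret app-login -o jsonpath='{.data.password}' | base64 -d)
    - npm run test:e2e
```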
Edited by Marco Wettstein