As an organizer for the DevSlop Game Day, I couldn’t participate in the CTF itself (bummer!), so I chose to walk through the challenges prior to the event to ensure that they were solvable and easy to comprehend. I personally had no experience with Kubernetes prior to organizing this CTF, so it was a perfect chance to learn by doing.
The CTF is designed with beginners in mind. The aim is to provide enough information to the learner that they can arrive at the solution on their own. What I personally love about this kind of CTF is that it gives learners the bigger picture. One is not simply deploying a pod; one is tasked with building a realistic application made up of several interconnected components. A learner who completes all the tasks leaves the CTF with an overall idea of how microservices are built to interact with other components as part of a whole.
The walkthrough below was created a couple of days before the CTF was launched, as part of my attempt to solve the challenges myself. Anyway, enough rambling. Let’s dive right in.

Table of Contents
- Getting Started
- Building Docker Images
- Pushing Docker Images to Amazon ECR Repositories
- Configuring Environment Variables with ConfigMaps and Secrets
- Deploying Services
- Deploying Microservices to Kubernetes
- Deploying Redis
- Configuring Ingress For The Front-End
- Securing the Cluster with Network Policies
Getting Started
The DevSlop Game Day Announcement advised us to install the following tools prior to Game Day:
- The AWS CLI version 2
- The Kubernetes command-line tool
- Docker
- An IDE or text editor of choice, e.g. Visual Studio Code, Sublime Text, Atom, etc.
The instructions provided detail how to set those tools up per OS. I already had all of these installed on my Ubuntu desktop.
Introduction (1 Point)
The CTF walks us through the deployment of a TODO application that comprises five microservices. The Kubernetes cluster itself runs in Amazon Elastic Kubernetes Service (EKS).
The architecture that we are working on is shown below and consists of:
- A front-end, the only public microservice, which sits behind a Load Balancer
- Auth API
- TODOs API
- Users API
- Log Message Processor
- Redis

Capture The Flag
The first flag for this step is simply a confirmation that we understand what a CTF flag is.
DevSlopCTF{ok}
Accessing The Cluster (2 Points)
This section offers two ways of accessing the infrastructure. One option is to use AWS Cloud9, a cloud-based IDE for writing, running, and debugging code with just a browser. The other is to use your own local machine to access the environment. I chose my local machine.
Configuring The AWS CLI
You are advised to use AWS CLI version 1.18 or later. Let’s double-check that we have the recommended version installed.
$ aws --version
aws-cli/1.18.216 Python/3.8.5 Linux/5.4.0-65-generic botocore/1.19.56
The next step is to configure an AWS profile matching my team name, team4, using the command aws configure --profile <TEAM_NAME>. The Access Key ID and Secret Access Key were provided out of band. I chose to paste them here since they have already been invalidated.
$ aws configure --profile team4
AWS Access Key ID [None]: AKIAQXZMQZU3FWERBCN5
AWS Secret Access Key [None]: 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
Default region name [None]: us-east-1
Default output format [None]: json
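As an optional sanity check, aws configure list can show which values the CLI picked up for the profile (credentials are displayed partially masked):
$ aws configure list --profile team4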
We are then required to export an environment variable called AWS_PROFILE and set it to the team name provided, team4. A link is provided to help us.
$ export AWS_PROFILE=team4
The next step is to validate that the CLI is properly configured using the command aws sts get-caller-identity --profile <TEAM_NAME>.
$ aws sts get-caller-identity --profile team4
{
"UserId": "AIDAQXZMQZU3GGS2XHEJ2",
"Account": "051096112438",
"Arn": "arn:aws:iam::051096112438:user/team4"
}
As expected, we see that the UserId differs from the AWS Access Key ID, and that the Amazon Resource Name (a.k.a. ARN) contains our IAM user.
Configuring kubectl
The next task is to generate the kubeconfig file, which should live in the $HOME/.kube directory.
We are provided with the command aws eks --region us-east-1 update-kubeconfig --name Kubernetes-ctf-cluster to generate the kubeconfig file.
$ aws eks --region us-east-1 update-kubeconfig --name Kubernetes-ctf-cluster
Added new context arn:aws:eks:us-east-1:051096112438:cluster/Kubernetes-ctf-cluster to /home/userxx/.kube/config
The command generates the kubeconfig file at /home/userxx/.kube/config. The contents of the file are shown below.
$ cat /home/userxx/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1ERXlOVEUwTkRjMU1Gb1hEVE14TURFeU16RTBORGMxTUZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTXNECmdiSnhnN0kvNE9OanNVbWsvT1pYa0hjRlBPK05XOTRCb0lXbUM4ZDhJNm1FZ2ZDMDlmVmVhR21ORHFGTXkzRUQKejFzb1M0NWQzd096NU00UGNuM0syK29yYXhURmlvakJaOCtWdXp0VWl2akxGNENhZlNUb2R0Vyttc2lCN0s2Vgo3V1Vsd20yNDU0aWFiSkFFOWQ0d3dkZ294RE9BWGk1VEJMNkQ2QWxoMCtTeFhoUGxKT0NDNTh3VjRPMzBzTXh1CnU0UE5nR0xpVVF3N2RNM09YSEtKVEt3TG94M3FzMEppYnpRL3J4V0tnR0dzL3ZKQUNsK0pDWFYrd1lBZE13MysKN3ZteHhaMVVUQ1lScXZUNXU5b1NFZklwUmp4SU05cllSVW9sZWtkNEpCYlFacjNNd2o1Yk42MmpyTlZnUVBKZgpMamZpUWF2MXRHYzZSaERIcmdrQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFNamYzRUlYYTEreWhrN1cweUZZKzhISHdKVmUKdCszbjkxeHU2Sm53WjlPalBEcFR3bzBkZGhXQU16b24rZXFRdmwrR2FtT2JsMGwwL0FnRFdqMytoRHpjaEo3TApBbjY3clkxaHB1OGJzUWdLUDdTNDZ4L0RsTHdtaDhwVnc1NTAxVjN5MnUwbVlOZjFWVVAxNDlTVzBmOENZTThOCktlMk0rL3hBbktXN0dHaWZFQTFrZE9ZbGQyekZMUXlkdWdwR0FCVmpid01ZZzJqYk5hTU5WV3Jacld2QktQdlgKVlJpOVM4bE9WUEJWc0FuOXhadGttNmFkQ0kzcVNiS0FsTDFFYzRpeWswZHM4RW1iMmhvazRtOWRBRkRDNHhQegpVVU1sbUViVmt0UmlaaDNkbkV6cDJQcmNHU00vcXh1Rm1tVmR4VEM5eTI5YXY4MkhKYVhnYU91bzl2Yz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://E856E5152BB1C1CC76291D085F3EE7EE.gr7.us-east-1.eks.amazonaws.com
  name: arn:aws:eks:us-east-1:051096112438:cluster/Kubernetes-ctf-cluster
contexts:
- context:
    cluster: arn:aws:eks:us-east-1:051096112438:cluster/Kubernetes-ctf-cluster
    user: arn:aws:eks:us-east-1:051096112438:cluster/Kubernetes-ctf-cluster
  name: arn:aws:eks:us-east-1:051096112438:cluster/Kubernetes-ctf-cluster
current-context: arn:aws:eks:us-east-1:051096112438:cluster/Kubernetes-ctf-cluster
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-east-1:051096112438:cluster/Kubernetes-ctf-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-1
      - eks
      - get-token
      - --cluster-name
      - Kubernetes-ctf-cluster
      command: aws
      env:
      - name: AWS_PROFILE
        value: team4
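Before moving on, we can confirm that kubectl picked up the new context; the name below comes straight from the current-context field of the file above.
$ kubectl config current-context
arn:aws:eks:us-east-1:051096112438:cluster/Kubernetes-ctf-cluster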
To test that we have gained access to the cluster, we need to run kubectl get pods -n <TEAM_NAME>.
$ kubectl get pods -n team4
No resources found in team4 namespace.
So far so good. The expected output for this command is indeed No resources found in <TEAM_NAME> namespace.
Cloning The Application Repository
The application repository is hosted on GitHub. We need to clone the repository to the local computer.
$ git clone git@github.com:thedojoseries/Kubernetes-ctf.git
Cloning into 'Kubernetes-ctf'...
The authenticity of host 'github.com (140.82.121.3)' can't be established.
RSA key fingerprint is SHA256:XXXXXXXXXXXX.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'github.com,140.82.121.3' (RSA) to the list of known hosts.
remote: Enumerating objects: 448, done.
remote: Counting objects: 100% (448/448), done.
remote: Compressing objects: 100% (255/255), done.
remote: Total 448 (delta 240), reused 359 (delta 151), pack-reused 0
Receiving objects: 100% (448/448), 812.27 KiB | 2.03 MiB/s, done.
Resolving deltas: 100% (240/240), done.
The files that we cloned are shown below:
$ cd Kubernetes-ctf/
@lsv-u01:~/Downloads/Kubernetes-ctf$ ls
auth-api frontend log-message-processor todos-api users-api
Capture The Flag
This is also a free flag to confirm that everything is set up as expected and that we have access to the cluster.
DevSlopCTF{I am ready}
Building Docker Images
We first need to build the Docker images for each microservice before we can deploy containers to the Kubernetes cluster. There is already a Redis image in the official Docker registry, so that one does not need to be built.
Introduction (1 Point)
The Dockerfile in each folder that we downloaded contains the instructions used by Docker to build an image.
Let’s verify the contents.
Dockerfile contents for auth-api:
userxx@lsv-u01:~/Downloads/Kubernetes-ctf$ cat auth-api/Dockerfile
FROM golang:1.9-alpine
EXPOSE 8081
WORKDIR /go/src/app
RUN apk --no-cache add curl git && \
curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh
COPY . .
RUN dep ensure
RUN go build -o auth-api
CMD /go/src/app/auth-api
Dockerfile contents for frontend:
userxx@lsv-u01:~/Downloads/Kubernetes-ctf$ cat frontend/Dockerfile
FROM node:8-alpine
EXPOSE 8080
WORKDIR /usr/src/app
COPY package.json ./
RUN npm install
COPY . .
CMD ["sh", "-c", "npm start" ]
Dockerfile contents for log-message-processor:
$ cat log-message-processor/Dockerfile
FROM python:3.6-alpine
WORKDIR /usr/src/app
RUN apk add --no-cache build-base
COPY requirements.txt .
RUN pip3 install -r requirements.txt
COPY main.py .
CMD ["python3","-u","main.py"]
Dockerfile contents for todos-api:
userxx@lsv-u01:~/Downloads/Kubernetes-ctf$ cat todos-api/Dockerfile
FROM node:8-alpine
EXPOSE 8082
WORKDIR /usr/src/app
COPY package.json ./
RUN npm install
COPY . .
CMD ["sh", "-c", "npm start" ]
Dockerfile contents for users-api:
userxx@lsv-u01:~/Downloads/Kubernetes-ctf$ cat users-api/Dockerfile
FROM openjdk:8-alpine
EXPOSE 8083
WORKDIR /usr/src/app
COPY pom.xml mvnw ./
COPY .mvn/ ./.mvn
RUN ./mvnw dependency:resolve
COPY . .
RUN ./mvnw install
CMD ["java", "-jar", "./target/users-api-0.0.1-SNAPSHOT.jar"]
Capture The Flag
This flag is simply to validate that we have the required files.
DevSlopCTF{ok}
Front-End (99 Points)
We are instructed to use the docker build command to build the frontend image. A helpful link to the Docker docs is provided, where we see that the usage for this command is docker build [OPTIONS] PATH | URL | -. The help page notes that we can also pipe a single Dockerfile in via STDIN with docker build - < Dockerfile, but that would not work here: the build needs the surrounding files (package.json and the source tree) as its context, so we point docker build at the frontend directory instead. Let’s try that.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend$ docker build .
Sending build context to Docker daemon 451.6kB
Step 1/7 : FROM node:8-alpine
---> 2b8fcdc6230a
Step 2/7 : EXPOSE 8080
---> Using cache
---> 8ec4ed1e32b7
Step 3/7 : WORKDIR /usr/src/app
---> Using cache
---> 22277587cc80
Step 4/7 : COPY package.json ./
---> e4698944daed
Step 5/7 : RUN npm install
---> Running in 39cde6fe1423
npm WARN deprecated babel-eslint@7.2.3: babel-eslint is now @babel/eslint-parser. This package will no longer receive updates.
npm WARN deprecated eslint-loader@1.9.0: This loader has been deprecated. Please use eslint-webpack-plugin
npm WARN deprecated extract-text-webpack-plugin@2.1.2: Deprecated. Please use https://github.com/webpack-contrib/mini-css-extract-plugin
npm WARN deprecated core-js@2.6.12: core-js@<3 is no longer maintained and not recommended for usage due to the number of issues. Please, upgrade your dependencies to the actual version of core-js@3.
npm WARN deprecated browserslist@2.11.3: Browserslist 2 could fail on reading Browserslist >3.0 config used in other tools.
npm WARN deprecated request@2.88.2: request has been deprecated, see https://github.com/request/request/issues/3142
npm WARN deprecated bfj-node4@5.3.1: Switch to the `bfj` package for fixes and new features!
npm WARN deprecated har-validator@5.1.5: this library is no longer supported
npm WARN deprecated browserslist@1.7.7: Browserslist 2 could fail on reading Browserslist >3.0 config used in other tools.
npm WARN deprecated circular-json@0.3.3: CircularJSON is in maintenance only, flatted is its successor.
npm WARN deprecated chokidar@2.1.8: Chokidar 2 will break on node v14+. Upgrade to chokidar 3 with 15x less dependencies.
npm WARN deprecated fsevents@1.2.13: fsevents 1 will break on node v14+ and could be using insecure binaries. Upgrade to fsevents 2.
npm WARN deprecated urix@0.1.0: Please see https://github.com/lydell/urix#deprecated
npm WARN deprecated resolve-url@0.2.1: https://github.com/lydell/resolve-url#deprecated
> node-sass@4.14.1 install /usr/src/app/node_modules/node-sass
> node scripts/install.js
Downloading binary from https://github.com/sass/node-sass/releases/download/v4.14.1/linux_musl-x64-57_binding.node
Download complete
Binary saved to /usr/src/app/node_modules/node-sass/vendor/linux_musl-x64-57/binding.node
Caching binary to /root/.npm/node-sass/4.14.1/linux_musl-x64-57_binding.node
> core-js@2.6.12 postinstall /usr/src/app/node_modules/core-js
> node -e "try{require('./postinstall')}catch(e){}"
Thank you for using core-js ( https://github.com/zloirock/core-js ) for polyfilling JavaScript standard library!
The project needs your help! Please consider supporting of core-js on Open Collective or Patreon:
> https://opencollective.com/core-js
> https://www.patreon.com/zloirock
Also, the author of core-js ( https://github.com/zloirock ) is looking for a good job -)
> ejs@2.7.4 postinstall /usr/src/app/node_modules/ejs
> node ./postinstall.js
Thank you for installing EJS: built with the Jake JavaScript build tool (https://jakejs.com/)
> node-sass@4.14.1 postinstall /usr/src/app/node_modules/node-sass
> node scripts/build.js
Binary found at /usr/src/app/node_modules/node-sass/vendor/linux_musl-x64-57/binding.node
Testing binary
Binary is fine
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN notsup Unsupported engine for js-beautify@1.13.5: wanted: {"node":">=10"} (current: {"node":"8.17.0","npm":"6.13.4"})
npm WARN notsup Not compatible with your version of node/npm: js-beautify@1.13.5
npm WARN notsup Unsupported engine for mkdirp@1.0.4: wanted: {"node":">=10"} (current: {"node":"8.17.0","npm":"6.13.4"})
npm WARN notsup Not compatible with your version of node/npm: mkdirp@1.0.4
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@~2.3.1 (node_modules/chokidar/node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@2.3.2: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@^1.2.7 (node_modules/watchpack-chokidar2/node_modules/chokidar/node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.2.13: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
added 1290 packages from 743 contributors and audited 1300 packages in 41.668s
11 packages are looking for funding
run `npm fund` for details
found 21 vulnerabilities (4 low, 9 moderate, 8 high)
run `npm audit fix` to fix them, or `npm audit` for details
Removing intermediate container 39cde6fe1423
---> deaa51f7a47b
Step 6/7 : COPY . .
---> 61aa7ed8eb7b
Step 7/7 : CMD ["sh", "-c", "npm start" ]
---> Running in 74f2881dbea3
Removing intermediate container 74f2881dbea3
---> e019f9b4bc2d
Successfully built e019f9b4bc2d
Capture The Flag
We learn that the front-end uses a Package Manager to install dependencies, and the flag is the name of that package manager. From the helpful link, we know that a package manager is a collection of software tools that automates the process of installing, upgrading, configuring, and removing programs for a system in a consistent manner. From the output above, we see that the package manager being used is npm, based on the command RUN npm install.
DevSlopCTF{npm}
TODOs API (99 Points)
Next, we are advised to run docker build to build an image for the TODOs API.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend$ cd ../todos-api/
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/todos-api$ docker build .
Sending build context to Docker daemon 96.26kB
Step 1/7 : FROM node:8-alpine
---> 2b8fcdc6230a
Step 2/7 : EXPOSE 8082
---> Running in bad4d09b9d98
Removing intermediate container bad4d09b9d98
---> e0cef1fb5fbc
Step 3/7 : WORKDIR /usr/src/app
---> Running in d6748de2aa04
Removing intermediate container d6748de2aa04
---> 5f27504f78a0
Step 4/7 : COPY package.json ./
---> 09eb14a41424
Step 5/7 : RUN npm install
---> Running in 339b58d32b3b
npm WARN deprecated chokidar@2.1.8: Chokidar 2 will break on node v14+. Upgrade to chokidar 3 with 15x less dependencies.
npm WARN deprecated fsevents@1.2.13: fsevents 1 will break on node v14+ and could be using insecure binaries. Upgrade to fsevents 2.
npm WARN deprecated resolve-url@0.2.1: https://github.com/lydell/resolve-url#deprecated
npm WARN deprecated urix@0.1.0: Please see https://github.com/lydell/urix#deprecated
> nodemon@1.19.4 postinstall /usr/src/app/node_modules/nodemon
> node bin/postinstall || exit 0
Love nodemon? You can now support the project via the open collective:
> https://opencollective.com/nodemon/donate
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@^1.2.7 (node_modules/chokidar/node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.2.13: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
npm WARN zipkin-instrumentation-express@0.11.2 requires a peer of @types/express@^4.0.39 but none is installed. You must install peer dependencies yourself.
npm WARN todos-api@1.0.0 No repository field.
added 304 packages from 194 contributors and audited 305 packages in 9.814s
found 1 low severity vulnerability
run `npm audit fix` to fix them, or `npm audit` for details
Removing intermediate container 339b58d32b3b
---> dc9b43ab2a1d
Step 6/7 : COPY . .
---> e595beed5e74
Step 7/7 : CMD ["sh", "-c", "npm start" ]
---> Running in d25ef931c0f1
Removing intermediate container d25ef931c0f1
---> 19213d5f426d
Successfully built 19213d5f426d
Capture The Flag
The TODOs API also uses a Package Manager to install dependencies, and the flag is the name of the package manager. We see that it uses the same package manager as the front-end, from the Step 5/7 : RUN npm install command.
DevSlopCTF{npm}
Users API (99 Points)
The same docker build command is used to build an image for the Users API. I have truncated the output as it was really noisy.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/todos-api$ cd ../users-api/
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/users-api$ docker build .
Sending build context to Docker daemon 94.72kB
Step 1/9 : FROM openjdk:8-alpine
8-alpine: Pulling from library/openjdk
e7c96db7181b: Pull complete
f910a506b6cb: Pull complete
c2274a1a0e27: Pull complete
Digest: sha256:94792824df2df33402f201713f932b58cb9de94a0cd524164a0f2283343547b3
Status: Downloaded newer image for openjdk:8-alpine
---> a3562aa0b991
Step 2/9 : EXPOSE 8083
---> Running in 3442a739a636
Removing intermediate container 3442a739a636
---> 7b932e811074
Step 3/9 : WORKDIR /usr/src/app
---> Running in eb208f26841b
Removing intermediate container eb208f26841b
---> 26b97c1c461c
Step 4/9 : COPY pom.xml mvnw ./
---> e019cc1d31aa
Step 5/9 : COPY .mvn/ ./.mvn
---> 971689fe5512
Step 6/9 : RUN ./mvnw dependency:resolve
---> Running in ad8a0b42772f
/usr/src/app
Downloading https://repo1.maven.org/maven2/org/apache/maven/apache-maven/3.5.0/apache-maven-3.5.0-bin.zip
..................................................................................................................................................................................................................................................................................................................................................................................................................................................
Unzipping /root/.m2/wrapper/dists/apache-maven-3.5.0-bin/6ps54u5pnnbbpr6ds9rppcc7iv/apache-maven-3.5.0-bin.zip to /root/.m2/wrapper/dists/apache-maven-3.5.0-bin/6ps54u5pnnbbpr6ds9rppcc7iv
Set executable permissions for: /root/.m2/wrapper/dists/apache-maven-3.5.0-bin/6ps54u5pnnbbpr6ds9rppcc7iv/apache-maven-3.5.0/bin/mvn
// TRUNCATED
Downloaded: https://repo.maven.apache.org/maven2/commons-collections/commons-collections/3.2.1/commons-collections-3.2.1.jar (575 kB at 608 kB/s)
[INFO]
[INFO] The following files have been resolved:
[INFO] com.fasterxml.jackson.core:jackson-annotations:jar:2.8.0:compile
// TRUNCATED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 28.407 s
[INFO] Finished at: 2021-02-14T21:02:51Z
[INFO] Final Memory: 27M/236M
[INFO] ------------------------------------------------------------------------
Removing intermediate container ad8a0b42772f
---> d7cf468bcca3
Step 7/9 : COPY . .
---> c71fe75c9922
Step 8/9 : RUN ./mvnw install
---> Running in 7b72ff956697
/usr/src/app
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building users-api 0.0.1-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ users-api ---
Downloading: https://repo.maven.apache.org/maven2/org/apache/maven/maven-project/2.0.6/maven-project-2.0.6.pom
// TRUNCATED
Downloaded: https://repo.maven.apache.org/maven2/com/google/collections/google-collections/1.0/google-collections-1.0.jar (640 kB at 1.6 MB/s)
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 8 source files to /usr/src/app/target/classes
[INFO]
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ users-api ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /usr/src/app/src/test/resources
[INFO]
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ users-api ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 1 source file to /usr/src/app/target/test-classes
[INFO]
[INFO] --- maven-surefire-plugin:2.18.1:test (default-test) @ users-api ---
// TRUNCATED
[INFO] Installing /usr/src/app/target/users-api-0.0.1-SNAPSHOT.jar to /root/.m2/repository/com/elgris/users-api/0.0.1-SNAPSHOT/users-api-0.0.1-SNAPSHOT.jar
[INFO] Installing /usr/src/app/pom.xml to /root/.m2/repository/com/elgris/users-api/0.0.1-SNAPSHOT/users-api-0.0.1-SNAPSHOT.pom
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 22.458 s
[INFO] Finished at: 2021-02-14T21:03:16Z
[INFO] Final Memory: 37M/348M
[INFO] ------------------------------------------------------------------------
Removing intermediate container 7b72ff956697
---> 8d26fff0f4da
Step 9/9 : CMD ["java", "-jar", "./target/users-api-0.0.1-SNAPSHOT.jar"]
---> Running in 1f29cef84675
Removing intermediate container 1f29cef84675
---> 650b405ecff6
Successfully built 650b405ecff6
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/users-api$
Capture The Flag
The flag is the name of the package manager. From the command Step 8/9 : RUN ./mvnw install and from the noisy output, we seem to be installing something to do with Maven. I personally had never used this package manager, so Google to the rescue: Apache Maven can manage a project’s build, reporting and documentation. We got our flag.
DevSlopCTF{maven}
Log Message Processor (99 Points)
Next, we build the image for the Log Message Processor.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/users-api$ cd ../log-message-processor/
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/log-message-processor$ docker build .
Sending build context to Docker daemon 7.168kB
Step 1/7 : FROM python:3.6-alpine
3.6-alpine: Pulling from library/python
4c0d98bf9879: Pull complete
5e807dbff582: Pull complete
1cf32de05765: Pull complete
5818ae83b301: Pull complete
0d4c65e1344c: Pull complete
Digest: sha256:4aae963dcacd3086dfa8d82a5f123691b893d05064280877c83f3dbf609efd61
Status: Downloaded newer image for python:3.6-alpine
---> d39b82549c6d
Step 2/7 : WORKDIR /usr/src/app
---> Running in 235f0c1d8727
Removing intermediate container 235f0c1d8727
---> 1af3701dfc8f
Step 3/7 : RUN apk add --no-cache build-base
---> Running in 3ce63ebb530a
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/community/x86_64/APKINDEX.tar.gz
(1/20) Installing libgcc (10.2.1_pre1-r3)
(2/20) Installing libstdc++ (10.2.1_pre1-r3)
//TRUNCATED
(20/20) Installing build-base (0.5-r2)
Executing busybox-1.32.1-r2.trigger
OK: 201 MiB in 54 packages
Removing intermediate container 3ce63ebb530a
---> f9ffb9ea858f
Step 4/7 : COPY requirements.txt .
---> baf254a2ecae
Step 5/7 : RUN pip3 install -r requirements.txt
---> Running in 17dff05555ba
Collecting redis==2.10.6
Downloading redis-2.10.6-py2.py3-none-any.whl (64 kB)
//TRUNCATED
Building wheels for collected packages: thriftpy
Building wheel for thriftpy (setup.py): started
Building wheel for thriftpy (setup.py): finished with status 'done'
Created wheel for thriftpy: filename=thriftpy-0.3.9-cp36-cp36m-linux_x86_64.whl size=168220 sha256=9b7bab4aba79f01386282b816f575d239c348163faa69c00acb914e5a86baf3d
Stored in directory: /root/.cache/pip/wheels/5b/5c/97/4f89b14ea7db3aa07d6c3b6d13459671f7a2dcdf57ccfcf00c
Successfully built thriftpy
Installing collected packages: ply, urllib3, thriftpy, six, idna, chardet, certifi, requests, redis, py-zipkin
Successfully installed certifi-2020.12.5 chardet-4.0.0 idna-2.10 ply-3.11 py-zipkin-0.11.0 redis-2.10.6 requests-2.25.1 six-1.15.0 thriftpy-0.3.9 urllib3-1.26.3
Removing intermediate container 17dff05555ba
---> c495d4c464a8
Step 6/7 : COPY main.py .
---> abd87d5c6e6f
Step 7/7 : CMD ["python3","-u","main.py"]
---> Running in 6601df6950f1
Removing intermediate container 6601df6950f1
---> 4144a6eeb4fa
Successfully built 4144a6eeb4fa
Capture The Flag
The flag is the name of the package manager. The command Step 5/7 : RUN pip3 install -r requirements.txt points to us using pip3, the package installer for Python.
DevSlopCTF{pip3}
Auth API (103 Points)
The last Docker image that we need to build is the one for the Auth API.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/log-message-processor$ cd ../auth-api/
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/auth-api$ docker build .
Sending build context to Docker daemon 18.43kB
Step 1/8 : FROM golang:1.9-alpine
1.9-alpine: Pulling from library/golang
8e3ba11ec2a2: Pull complete
8e6b2bc60854: Pull complete
3d20cafe6dc8: Pull complete
533d243d9519: Pull complete
61a3cf7df0db: Pull complete
ec4d1222aabd: Pull complete
a2fb1cdee015: Pull complete
Digest: sha256:220aaadccc956ab874ff9744209e5a756d7a32bcffede14d08589c2c54801ce0
Status: Downloaded newer image for golang:1.9-alpine
---> b0260be938c6
Step 2/8 : EXPOSE 8081
---> Running in efe61b4658d8
Removing intermediate container efe61b4658d8
---> 5498be9c150e
Step 3/8 : WORKDIR /go/src/app
---> Running in a363f5547db7
Removing intermediate container a363f5547db7
---> f228eb4af2ec
Step 4/8 : RUN apk --no-cache add curl git && curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh
---> Running in a129d6d58995
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/community/x86_64/APKINDEX.tar.gz
(1/7) Installing nghttp2-libs (1.39.2-r0)
(2/7) Installing libssh2 (1.9.0-r1)
(3/7) Installing libcurl (7.61.1-r3)
(4/7) Installing curl (7.61.1-r3)
(5/7) Installing expat (2.2.8-r0)
(6/7) Installing pcre2 (10.31-r0)
(7/7) Installing git (2.18.4-r0)
Executing busybox-1.28.4-r0.trigger
OK: 19 MiB in 21 packages
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 5230 100 5230 0 0 21004 0 --:--:-- --:--:-- --:--:-- 20920
ARCH = amd64
OS = linux
Will install into /go/bin
Fetching https://github.com/golang/dep/releases/latest..
Release Tag = v0.5.4
Fetching https://github.com/golang/dep/releases/tag/v0.5.4..
Fetching https://github.com/golang/dep/releases/download/v0.5.4/dep-linux-amd64..
Setting executable permissions.
Moving executable to /go/bin/dep
Removing intermediate container a129d6d58995
---> 469a6587de37
Step 5/8 : COPY . .
---> a74b96319369
Step 6/8 : RUN dep ensure
---> Running in 01e855f4dde5
Removing intermediate container 01e855f4dde5
---> a39658c30f4e
Step 7/8 : RUN go build -o auth-api
---> Running in 2eb831f70dd2
Removing intermediate container 2eb831f70dd2
---> 7fec3c87687f
Step 8/8 : CMD /go/src/app/auth-api
---> Running in 80450c71e552
Removing intermediate container 80450c71e552
---> 9b51cc1cddd1
Successfully built 9b51cc1cddd1
Capture The Flag
The flag is the name of the package manager. From the command Step 6/8 : RUN dep ensure, we see that dep is used. dep is the “official experiment” dependency management tool for the Go programming language. We got our flag.
DevSlopCTF{dep}
Pushing Docker Images to Amazon ECR Repositories
The next step is to push the Docker images to Amazon Elastic Container Registry (ECR), a remote Docker registry.
Introduction (5 Points)
The repositories for each microservice have already been created for us. My repos are:
team4-auth-api
team4-frontend
team4-log-message-processor
team4-todos-api
team4-users-api
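If you would rather confirm the repositories from the CLI than from the console, something like the following should list them (the --query JMESPath filter is just a convenience to print only the names):
$ aws ecr describe-repositories --region us-east-1 --query 'repositories[].repositoryName' --output text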
In order to push the images to the repositories, we need to log in using the Docker CLI.
Capture The Flag
We need to submit an ok flag confirming that we are all set.
DevSlopCTF{ok}
Front-End (199 Points)
We are advised to google how to push Docker images to AWS ECR. From the Amazon link I found, this happens in a couple of steps:
- Authenticate your Docker client to the Amazon ECR registry to which you intend to push your image. I followed the instructions linked here, using get-login.
The output that we saw previously comes in handy, as we need our AWS account ID.
$ aws sts get-caller-identity --profile team4
{
"UserId": "AIDAQXZMQZU3GGS2XHEJ2",
"Account": "051096112438",
"Arn": "arn:aws:iam::051096112438:user/team4"
}
We will use the get-login command to authenticate to our Amazon ECR registry.
$ aws ecr get-login --region us-east-1 --no-include-email
docker login -u AWS -p eyJwYXlsb2FkIjoia0ZkK096YXBEL2pOK1BERkhkalBDbWRSeHFvVHZTUXgyOVdBdUp0czF0Z1RsN1NTMEVsWTlWQXA1d0gxajBta0pJRmxDNit3UGNBTTcvS3Uxd1BGdXVPWlF6emZ4c3hMdDVxRzBKWjFLN3VOTDFuV01SN0xmYXpPL1d1THczSlo1Wm1CMWJ4MUducU0xRDJnZ1FJWEtWbmRoTEtTSWFnOU1Tb0tpQk9Md3VOSWkyZUFoV3BOWUg3cUVHZzd1UEE2NGlHUzI0ZXJTWHNZM2dNY1J3ZlZNNnFrREloRE9ySVRQUTlkclBQRkxHYm9KUUx0bE1kTjlFVlUwd1NrY0dsK3FBa2dIZmJhYU1QN3dQaWZUeUdOdVlzamxhdkQ3U2JVbEVaVGZiVGdlNVZqcmpYKzRreG9aOHVYM0NhN1h3WVhFMUN2YnlkdGJjaE9iTmFZSVdHdnozZ0ZoMm9GRlYyQTVCazFFSTQ1b2t5aVNFU0ora3FNWHMwWFZtODU3Y0U0TkZzMFhhVVhnbS9LRnlRTlp2TzlUOFdoK2dvcHQ5YmJSWkRQTlRHN2ZGbTV0UFRtOGJDRHZFejdLRzBGMmh3ZE9HSmhwL2cyQzRKb3lZYXFyKzZDa2dqNVhmZEtHMnptd2pxU3lmTmRURzJQVFNJdTdGSVcreXBQMWNYeHA4Um1KWXZMN2tmazZLQ2pCRkxXbTlpZzBJallWVkswM1gxMXR0ZWRQclFCdHo3WFdINER3RzlPNDVrRngrVFpFREFUM2QyNEsxMVh2a2VmU2VwS2xFbVN1TUUwNHR2THJMVDlzcngwUS84aWU0bFdqSzQ0Ni8rMWRXTHFFVERHRURDU090c05nVjBUUVMrWUxkWWpEbUtjSGRyVE05R0h2dmNzRElsanJ3aTRkdS9YU3FyMHk1b2hZNm1SN01qbHRINWNFY1lWaldQeEtSeWJXSWtxMkx6WVVjNnZyakhnOUFkcllhVHZLblRzOFRqQkxNQ3AzZFdDdStsSE42bU1LK1hyMG9hRUUrRzVMajhrOWpSaGtiYjEwZ3hRWWl1WHhZK3FMd0wvenNGckc0UHVtbTRNQUJwSmd5b0NLYUxpTyttblN0bXNqZ0R1QjhhdGQ2KytZUWwxdE9USkJlcXVtTjl1WnBVVHRzSEpXVEN5Q0ZHbEhoNGRxVDQzWWlNK0RWSmxWdGFwYVZyMTRjblJkRmdOT1VkNzlkbTlESlhtYlhPRzdib0tJZ2IxU1ZJSHg5dWpWTnllbWt4d3JsMDA5YkVmRFNzdEVObjc5SlR0aEdacGFrWWN0QjNmMUJWM0ZCbHVDL2I3TDFVbnRMYUhPaW9QTlplYVFWTWZxNFZkUTlWNGhnbUJjaHJIOUZyYURIaExSWUtvZ1VRay9zQVVaUjVaZ3VxMjB5dUFDTWxJcjlzZ1JYenY4ajZrWTZUZVh1aWtKMFNzaVFDd3IzRGQiLCJkYXRha2V5IjoiQVFFQkFIaHdtMFlhSVNKZVJ0Sm01bjFHNnVxZWVrWHVvWFhQZTVVRmNlOVJxOC8xNHdBQUFINHdmQVlKS29aSWh2Y05BUWNHb0c4d2JRSUJBREJvQmdrcWhraUc5dzBCQndFd0hnWUpZSVpJQVdVREJBRXVNQkVFRENPb2M0QmRvbzNDdFVYd05BSUJFSUE3T1VBaTlSYkY5UHI0ZWFFTW9PQW5kQ1Zvd21aTVJJUmNUaTJLWEpCeG82ZVYyWEZiY0l3WG93Z2hLUmdhRGxSazN6Z0pSSmYzbHFwSm9iMD0iLCJ2ZXJzaW9uIjoiMiIsInR5cGUiOiJEQVRBX0tFWSIsImV4cGlyYXRpb24iOjE2MTMzODkyNjd9 https://051096112438.dkr.ecr.us-east-1.amazonaws.com
The resulting output is a docker login command that you use to authenticate your Docker client to your Amazon ECR registry. Copy and paste the docker login command into a terminal to authenticate your Docker CLI to the registry. This command provides an authorization token that is valid for the specified registry for 12 hours.
$ docker login -u AWS -p eyJwYXlsb2FkIjoia0ZkK096YXBEL2pOK1BERkhkalBDbWRSeHFvVHZTUXgyOVdBdUp0czF0Z1RsN1NTMEVsWTlWQXA1d0gxajBta0pJRmxDNit3UGNBTTcvS3Uxd1BGdXVPWlF6emZ4c3hMdDVxRzBKWjFLN3VOTDFuV01SN0xmYXpPL1d1THczSlo1Wm1CMWJ4MUducU0xRDJnZ1FJWEtWbmRoTEtTSWFnOU1Tb0tpQk9Md3VOSWkyZUFoV3BOWUg3cUVHZzd1UEE2NGlHUzI0ZXJTWHNZM2dNY1J3ZlZNNnFrREloRE9ySVRQUTlkclBQRkxHYm9KUUx0bE1kTjlFVlUwd1NrY0dsK3FBa2dIZmJhYU1QN3dQaWZUeUdOdVlzamxhdkQ3U2JVbEVaVGZiVGdlNVZqcmpYKzRreG9aOHVYM0NhN1h3WVhFMUN2YnlkdGJjaE9iTmFZSVdHdnozZ0ZoMm9GRlYyQTVCazFFSTQ1b2t5aVNFU0ora3FNWHMwWFZtODU3Y0U0TkZzMFhhVVhnbS9LRnlRTlp2TzlUOFdoK2dvcHQ5YmJSWkRQTlRHN2ZGbTV0UFRtOGJDRHZFejdLRzBGMmh3ZE9HSmhwL2cyQzRKb3lZYXFyKzZDa2dqNVhmZEtHMnptd2pxU3lmTmRURzJQVFNJdTdGSVcreXBQMWNYeHA4Um1KWXZMN2tmazZLQ2pCRkxXbTlpZzBJallWVkswM1gxMXR0ZWRQclFCdHo3WFdINER3RzlPNDVrRngrVFpFREFUM2QyNEsxMVh2a2VmU2VwS2xFbVN1TUUwNHR2THJMVDlzcngwUS84aWU0bFdqSzQ0Ni8rMWRXTHFFVERHRURDU090c05nVjBUUVMrWUxkWWpEbUtjSGRyVE05R0h2dmNzRElsanJ3aTRkdS9YU3FyMHk1b2hZNm1SN01qbHRINWNFY1lWaldQeEtSeWJXSWtxMkx6WVVjNnZyakhnOUFkcllhVHZLblRzOFRqQkxNQ3AzZFdDdStsSE42bU1LK1hyMG9hRUUrRzVMajhrOWpSaGtiYjEwZ3hRWWl1WHhZK3FMd0wvenNGckc0UHVtbTRNQUJwSmd5b0NLYUxpTyttblN0bXNqZ0R1QjhhdGQ2KytZUWwxdE9USkJlcXVtTjl1WnBVVHRzSEpXVEN5Q0ZHbEhoNGRxVDQzWWlNK0RWSmxWdGFwYVZyMTRjblJkRmdOT1VkNzlkbTlESlhtYlhPRzdib0tJZ2IxU1ZJSHg5dWpWTnllbWt4d3JsMDA5YkVmRFNzdEVObjc5SlR0aEdacGFrWWN0QjNmMUJWM0ZCbHVDL2I3TDFVbnRMYUhPaW9QTlplYVFWTWZxNFZkUTlWNGhnbUJjaHJIOUZyYURIaExSWUtvZ1VRay9zQVVaUjVaZ3VxMjB5dUFDTWxJcjlzZ1JYenY4ajZrWTZUZVh1aWtKMFNzaVFDd3IzRGQiLCJkYXRha2V5IjoiQVFFQkFIaHdtMFlhSVNKZVJ0Sm01bjFHNnVxZWVrWHVvWFhQZTVVRmNlOVJxOC8xNHdBQUFINHdmQVlKS29aSWh2Y05BUWNHb0c4d2JRSUJBREJvQmdrcWhraUc5dzBCQndFd0hnWUpZSVpJQVdVREJBRXVNQkVFRENPb2M0QmRvbzNDdFVYd05BSUJFSUE3T1VBaTlSYkY5UHI0ZWFFTW9PQW5kQ1Zvd21aTVJJUmNUaTJLWEpCeG82ZVYyWEZiY0l3WG93Z2hLUmdhRGxSazN6Z0pSSmYzbHFwSm9iMD0iLCJ2ZXJzaW9uIjoiMiIsInR5cGUiOiJEQVRBX0tFWSIsImV4cGlyYXRpb24iOjE2MTMzODkyNjd9 https://051096112438.dkr.ecr.us-east-1.amazonaws.com
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/userxx/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
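As the warnings above point out, passing -p on the command line is insecure. On newer AWS CLI releases (v1.17.10+ or v2), the same login can be done without echoing the token; for this CTF, though, the get-login flow works fine.
$ aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 051096112438.dkr.ecr.us-east-1.amazonaws.com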
- Identify the image to push. Run the docker images command to list the images on your system.
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> 9b51cc1cddd1 2 hours ago 368MB
<none> <none> 4144a6eeb4fa 2 hours ago 247MB
<none> <none> 650b405ecff6 3 hours ago 267MB
<none> <none> 19213d5f426d 3 hours ago 87.9MB
<none> <none> e019f9b4bc2d 3 hours ago 255MB
python 3.6-alpine d39b82549c6d 10 days ago 40.7MB
node 8-alpine 2b8fcdc6230a 13 months ago 73.5MB
openjdk 8-alpine a3562aa0b991 21 months ago 105MB
golang 1.9-alpine b0260be938c6 2 years ago 240MB
From the output, we only see the image IDs. From building the frontend, we had the following output, so I know that the image I need is e019f9b4bc2d.
Step 7/7 : CMD ["sh", "-c", "npm start" ]
---> Running in 74f2881dbea3
Removing intermediate container 74f2881dbea3
---> e019f9b4bc2d
Successfully built e019f9b4bc2d
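Side note: the <none>/<none> entries in docker images exist because we ran docker build without a tag. Tagging at build time with -t would have skipped the image-ID hunt entirely; a small variation, not what I actually did here.
$ docker build -t 051096112438.dkr.ecr.us-east-1.amazonaws.com/team4-frontend:frontend .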
- Tag your image with the Amazon ECR registry, repository, and optional image tag name combination to use.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf$ docker tag e019f9b4bc2d 051096112438.dkr.ecr.us-east-1.amazonaws.com/team4-frontend:frontend
- Push the image using the docker push command.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf$ docker push 051096112438.dkr.ecr.us-east-1.amazonaws.com/team4-frontend:frontend
The push refers to repository [051096112438.dkr.ecr.us-east-1.amazonaws.com/team4-frontend]
9435ddc22098: Pushed
da0867928195: Pushed
8d9458bbe754: Pushed
1eca433c501f: Pushed
6b73202914ee: Pushed
9851b6482f16: Pushed
540b4f5b2c41: Pushed
6b27de954cca: Pushed
frontend: digest: sha256:3e06e5b1f65af67f911f26ecb9bc5b86f094b0eda6e414d9e58c5d4271a40fc6 size: 1994
The output above contains information about the image that was just pushed, including the digest of the image (a SHA-256 Hash).
Capture The Flag
We are expected to count the number of characters in the hash 3e06e5b1f65af67f911f26ecb9bc5b86f094b0eda6e414d9e58c5d4271a40fc6. There are 64 characters, and the flag is that number in binary.
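Counting by hand is error-prone, so here is a quick shell check: wc -c counts the bytes of the digest string, and bc converts the count to base 2.
$ echo -n 3e06e5b1f65af67f911f26ecb9bc5b86f094b0eda6e414d9e58c5d4271a40fc6 | wc -c
64
$ echo 'obase=2; 64' | bc
1000000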
DevSlopCTF{1000000}
Auth API (199 Points)
Let’s follow the same procedure to push the Auth API Image to AWS ECR.
- Our first step is to log in.
userxx@lsv-u01:~$ aws ecr get-login --region us-east-1 --no-include-email
docker login -u AWS -p eyJwYXlsb2FkIjoiYU16QnErazlVeC80RFdCNDNtSFFYMGduQThxV2d0QTB6c093cDF6T1JLY3V3dVVoZ2xVcmFpU2xranNOWHZMb21QUDExVG1zMTdVbmVzWWRDTTZJUng4MGlyM2hQRHIxcFdlZURBOEkzd3ZTR3ZORGRlOTBuWUx3SFBPMzVvL2cweU9KWC9jRzl2b3k3VkhrOWdkKzErVklGVHFwTHE4RXBRMnBteHZYbnVDMnBkemFlK1hUeGJSWkdWNnZBSGtmQW9xNm1vMnNLREFPc2NGakRZVzVGMWRZOFdybjVmcGRwekVTd0U3RnlUM2U5YlpqOEpUcW5UWndkZHFXeUY4Zm1KK0NKQkpMMmdVVzdwYTZLUDNkTXdrdXpuNEg5dUhkaDhoSGs1OCt5eVpqcHduaEQvRHF3TzJveFMwN2xtS1dPbTkxc0I5U0RlckswQ2NrTlpDZURlSHlXVnZDOE5tS3MxbnhhRlJDejFjSXRiTTZKMHU3OUp1WUorSURXUVVyejVoTzlYcHhQTmxka1FNYlZiWHBVSy9waE1wT0JvQXFZeFlUbmFTMWRud1hCVGoxb3hYSWdRTzFMWFIzR1hyVTlHcFZ0N09wYWw2WDcvOEZ2bjk1TDFzSFZwV1NMWGdna24xMzFCWU5sSkpaQ2hKTkUrOGxCcks5MUZ0Mjg2dUNYbExuQkZLOVRRM0tOM1k4OTdHeC9qREo0dUpOWTF3d3pUY3pGRkJtaGZrbnNoQU9pTXgzRUFud1kvMWhucHZRWmo2VzVOODlHdzNFem4xZWNncUp4ZmdFakxDdmdRdkllZVB2L3NuRUpXR1VzVE1LY05lWkdTbWtXY3U5RTdCaEFzUjFZQ1VGclNTUTh6c04xbzloQ3REN1hvdXgrVWhXaDZtbDZKaXNWL2dSOE5uZTl0S0lUcGVEMkdaZjJOSHoyeXRHQk9Sbm85NDUrWHdkVTkyVzU3SUV0dEY5UkhKQlBnNWtUQXlRQmFLRkk5SlVCWlFJdDFYTGgzSEQwWE10QkVJemVzeTdPMEVqRU9iS09JdjVQRHRMbzBXdG5vM0lzb0Fxb2MyN2VPMG5oYmt0M1pwZUQ0RDhZaDViUkt2WGZmSHpXRXk5V0lvZE1lSW5xSWZCWW16eFNaYUc5V2hwQTd1dVRmb3pXcnY2WVpkemVkUmRNNzBsemxzaHA1blBDYTdiMEtBQTJEVENqZWV0UkYxWE5vdUlvMHBFM0VPTDN1MlkwV01uSDQzbGVxeHZoRUVMS293eGhtd2FKeWtKSkFoTXZHLzlRRU9PQ0NaRTVCZDZMenB5MDF2YnlXM21wMi9HQkpwMGNwMDAzZFo3Wk9nUEQ5eCtiUHZmUTVRUzBMYlFrOGE2T2c1YUhSYitnMkZBU2FzcThEbUNETWlTNHJBRG5NNkJXVVJhbVF4UXNGeWNKZU5KWDhnSFJNc3NjcUxRcS92Mnc2U1EiLCJkYXRha2V5IjoiQVFFQkFIaHdtMFlhSVNKZVJ0Sm01bjFHNnVxZWVrWHVvWFhQZTVVRmNlOVJxOC8xNHdBQUFINHdmQVlKS29aSWh2Y05BUWNHb0c4d2JRSUJBREJvQmdrcWhraUc5dzBCQndFd0hnWUpZSVpJQVdVREJBRXVNQkVFRE5yUnpEWWxuYU84SmIrakd3SUJFSUE3UVYxMExXMm56OWs5U0d4bkRrSkxGejgvTGgwQ0MrUytmcDIwNFhtMUQyM2l0Y3dKbGNJM1doeDdMTTUvVW1tUnFrMk4xV25sQkM0QmRDMD0iLCJ2ZXJzaW9uIjoiMiIsInR5cGUiOiJEQVRBX0tFWSIsImV4cGlyYXRpb24iOjE2MTM1NTY3ODB9 https://051096112438.dkr.ecr.us-east-1.amazonaws.com
The resulting output is a docker login command that we use to authenticate our Docker client to the Amazon ECR registry.
userxx@lsv-u01:~$ docker login -u AWS -p eyJwYXlsb2FkIjoiYU16QnErazlVeC80RFdCNDNtSFFYMGduQThxV2d0QTB6c093cDF6T1JLY3V3dVVoZ2xVcmFpU2xranNOWHZMb21QUDExVG1zMTdVbmVzWWRDTTZJUng4MGlyM2hQRHIxcFdlZURBOEkzd3ZTR3ZORGRlOTBuWUx3SFBPMzVvL2cweU9KWC9jRzl2b3k3VkhrOWdkKzErVklGVHFwTHE4RXBRMnBteHZYbnVDMnBkemFlK1hUeGJSWkdWNnZBSGtmQW9xNm1vMnNLREFPc2NGakRZVzVGMWRZOFdybjVmcGRwekVTd0U3RnlUM2U5YlpqOEpUcW5UWndkZHFXeUY4Zm1KK0NKQkpMMmdVVzdwYTZLUDNkTXdrdXpuNEg5dUhkaDhoSGs1OCt5eVpqcHduaEQvRHF3TzJveFMwN2xtS1dPbTkxc0I5U0RlckswQ2NrTlpDZURlSHlXVnZDOE5tS3MxbnhhRlJDejFjSXRiTTZKMHU3OUp1WUorSURXUVVyejVoTzlYcHhQTmxka1FNYlZiWHBVSy9waE1wT0JvQXFZeFlUbmFTMWRud1hCVGoxb3hYSWdRTzFMWFIzR1hyVTlHcFZ0N09wYWw2WDcvOEZ2bjk1TDFzSFZwV1NMWGdna24xMzFCWU5sSkpaQ2hKTkUrOGxCcks5MUZ0Mjg2dUNYbExuQkZLOVRRM0tOM1k4OTdHeC9qREo0dUpOWTF3d3pUY3pGRkJtaGZrbnNoQU9pTXgzRUFud1kvMWhucHZRWmo2VzVOODlHdzNFem4xZWNncUp4ZmdFakxDdmdRdkllZVB2L3NuRUpXR1VzVE1LY05lWkdTbWtXY3U5RTdCaEFzUjFZQ1VGclNTUTh6c04xbzloQ3REN1hvdXgrVWhXaDZtbDZKaXNWL2dSOE5uZTl0S0lUcGVEMkdaZjJOSHoyeXRHQk9Sbm85NDUrWHdkVTkyVzU3SUV0dEY5UkhKQlBnNWtUQXlRQmFLRkk5SlVCWlFJdDFYTGgzSEQwWE10QkVJemVzeTdPMEVqRU9iS09JdjVQRHRMbzBXdG5vM0lzb0Fxb2MyN2VPMG5oYmt0M1pwZUQ0RDhZaDViUkt2WGZmSHpXRXk5V0lvZE1lSW5xSWZCWW16eFNaYUc5V2hwQTd1dVRmb3pXcnY2WVpkemVkUmRNNzBsemxzaHA1blBDYTdiMEtBQTJEVENqZWV0UkYxWE5vdUlvMHBFM0VPTDN1MlkwV01uSDQzbGVxeHZoRUVMS293eGhtd2FKeWtKSkFoTXZHLzlRRU9PQ0NaRTVCZDZMenB5MDF2YnlXM21wMi9HQkpwMGNwMDAzZFo3Wk9nUEQ5eCtiUHZmUTVRUzBMYlFrOGE2T2c1YUhSYitnMkZBU2FzcThEbUNETWlTNHJBRG5NNkJXVVJhbVF4UXNGeWNKZU5KWDhnSFJNc3NjcUxRcS92Mnc2U1EiLCJkYXRha2V5IjoiQVFFQkFIaHdtMFlhSVNKZVJ0Sm01bjFHNnVxZWVrWHVvWFhQZTVVRmNlOVJxOC8xNHdBQUFINHdmQVlKS29aSWh2Y05BUWNHb0c4d2JRSUJBREJvQmdrcWhraUc5dzBCQndFd0hnWUpZSVpJQVdVREJBRXVNQkVFRE5yUnpEWWxuYU84SmIrakd3SUJFSUE3UVYxMExXMm56OWs5U0d4bkRrSkxGejgvTGgwQ0MrUytmcDIwNFhtMUQyM2l0Y3dKbGNJM1doeDdMTTUvVW1tUnFrMk4xV25sQkM0QmRDMD0iLCJ2ZXJzaW9uIjoiMiIsInR5cGUiOiJEQVRBX0tFWSIsImV4cGlyYXRpb24iOjE2MTM1NTY3ODB9 https://051096112438.dkr.ecr.us-east-1.amazonaws.com
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/userxx/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
- We need to identify the image to push from the list of images on our system. In our case, the image is 9b51cc1cddd1, based on the output of our docker build command.
Step 8/8 : CMD /go/src/app/auth-api
---> Running in 80450c71e552
Removing intermediate container 80450c71e552
---> 9b51cc1cddd1
Successfully built 9b51cc1cddd1
- Tag the image with the Amazon ECR registry, repository, and optional image tag name combination to use.
userxx@lsv-u01:~$ docker tag 9b51cc1cddd1 051096112438.dkr.ecr.us-east-1.amazonaws.com/team4-auth-api:authapi
- Next, push the image using the docker push command. The repository we are pushing to is team4-auth-api.
userxx@lsv-u01:~$ docker push 051096112438.dkr.ecr.us-east-1.amazonaws.com/team4-auth-api:authapi
The push refers to repository [051096112438.dkr.ecr.us-east-1.amazonaws.com/team4-auth-api]
ad45f0dd141c: Pushed
7573babcaed4: Pushed
8bd6a8085cd6: Pushed
8e5bc3865c32: Pushed
14aff4165e19: Pushed
9ae8db0a5cb6: Pushed
c68b9f081446: Pushed
98600d12b3d1: Pushed
3d5153698765: Pushed
5222eeb73419: Pushed
8b34f02ac284: Pushed
73046094a9b8: Pushed
authapi: digest: sha256:2630d7f662c5e29de4e3013990488d9e927ecb57b83b999cc28e372b3df0d548 size: 2829
Capture The Flag
The flag will be the number of bits of the resulting digest 2630d7f662c5e29de4e3013990488d9e927ecb57b83b999cc28e372b3df0d548, but in hexadecimal format. The digest is made up of 64 hex characters and is labelled sha256. SHA-256 produces a 256-bit hash, as its name indicates, and the value 256 in hexadecimal format is 0x100.
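The same arithmetic from the shell, in case you distrust mental math: each hex character encodes 4 bits, and printf prints the result in hexadecimal.
$ printf '%x\n' $(( 64 * 4 ))
100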
DevSlopCTF{100}
TODOs API (199 Points)
- The output from the docker build command shows that the image created was 19213d5f426d.
Step 7/7 : CMD ["sh", "-c", "npm start" ]
---> Running in d25ef931c0f1
Removing intermediate container d25ef931c0f1
---> 19213d5f426d
Successfully built 19213d5f426d
- Tag the image with the Amazon ECR registry, repository, and optional image tag name combination to use.
userxx@lsv-u01:~$ docker tag 19213d5f426d 051096112438.dkr.ecr.us-east-1.amazonaws.com/team4-todos-api
- Push the image to the team4-todos-api repo in AWS ECR.
userxx@lsv-u01:~$ docker push 051096112438.dkr.ecr.us-east-1.amazonaws.com/team4-todos-api
The push refers to repository [051096112438.dkr.ecr.us-east-1.amazonaws.com/team4-todos-api]
7ab8cccca71a: Layer already exists
8fa78e380e47: Layer already exists
dab0d31f68cf: Layer already exists
048bb3a54113: Layer already exists
6b73202914ee: Layer already exists
9851b6482f16: Layer already exists
540b4f5b2c41: Layer already exists
6b27de954cca: Layer already exists
latest: digest: sha256:802c12ab8c26a87bbb8e96040373caa1b03e16477653924197704656116b1e8d size: 1992
Capture The Flag
The output above contains information about the image that was just pushed, including the tag of the image, which defaulted to latest since we did not specify one. We are to run the tag through an MD5 hash function; we can use CyberChef for that. The flag is the resulting 128-bit hash value 71ccb7a35a452ea8153b6d920f9f190e.
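CyberChef works, but the same digest can be computed locally; note the -n flag, since a trailing newline would change the hash.
$ echo -n latest | md5sum
71ccb7a35a452ea8153b6d920f9f190e  -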
DevSlopCTF{71ccb7a35a452ea8153b6d920f9f190e}
Users API (199 Points)
From the output of docker build, we see our image is 650b405ecff6.
Step 9/9 : CMD ["java", "-jar", "./target/users-api-0.0.1-SNAPSHOT.jar"]
---> Running in 1f29cef84675
Removing intermediate container 1f29cef84675
---> 650b405ecff6
Successfully built 650b405ecff6
Let’s tag the image and push it to the team4-users-api repo.
userxx@lsv-u01:~$ docker tag 650b405ecff6 051096112438.dkr.ecr.us-east-1.amazonaws.com/team4-users-api
userxx@lsv-u01:~$ docker push 051096112438.dkr.ecr.us-east-1.amazonaws.com/team4-users-api
The push refers to repository [051096112438.dkr.ecr.us-east-1.amazonaws.com/team4-users-api]
3671acbc6b59: Pushed
89d67a3983f8: Pushed
4458a83d565d: Pushed
cba0f06d0372: Pushed
5719b8535421: Pushed
42183e7c207e: Pushed
ceaf9e1ebef5: Pushed
9b9b7f3d56a0: Pushed
f1b5933fe4b5: Pushed
latest: digest: sha256:b98683bc94b3d2bf322106864dba184013f39b823a27262fe5e8616cf4662e09 size: 2204
Capture The Flag
The output above contains information about the image that was just pushed, including the tag of the image, which again defaulted to latest. We are to run the tag through a RIPEMD-320 hash function; we can use CyberChef for that. The flag is the resulting 320-bit hash value a3dabeca8436d7dc296f76aac3e573b6337801e57dcc99a3d1c0ce9b26f3df166bbb2a58c0d8d2b3.
DevSlopCTF{a3dabeca8436d7dc296f76aac3e573b6337801e57dcc99a3d1c0ce9b26f3df166bbb2a58c0d8d2b3}
Log Message Processor (199 Points)
The last image we need to push is that of the Log Message Processor. From the output of docker build, we see that the image ID is 4144a6eeb4fa.
Step 7/7 : CMD ["python3","-u","main.py"]
---> Running in 6601df6950f1
Removing intermediate container 6601df6950f1
---> 4144a6eeb4fa
Successfully built 4144a6eeb4fa
Let’s tag the image and push it to the team4-log-message-processor repo.
userxx@lsv-u01:~$ docker tag 4144a6eeb4fa 051096112438.dkr.ecr.us-east-1.amazonaws.com/team4-log-message-processor
userxx@lsv-u01:~$ docker push 051096112438.dkr.ecr.us-east-1.amazonaws.com/team4-log-message-processor
The push refers to repository [051096112438.dkr.ecr.us-east-1.amazonaws.com/team4-log-message-processor]
f8947607aef2: Pushed
5c40e8abb567: Pushed
d9d85f3b96f1: Pushed
064ef86dc94e: Pushed
21bf1f3e46da: Pushed
4ba796985418: Pushed
56a0fed9c27d: Pushed
565113e78b75: Pushed
27448fb13b80: Pushed
1119ff37d4a9: Pushed
latest: digest: sha256:7f5d147e598a8f3ac5cda36f26f77f64c641129192580abbd1131ce9a9f8776b size: 2412
Capture The Flag
The output above contains information about the image that was just pushed, including the tag of the image, which again defaulted to latest. We are to run the tag through a Whirlpool hash function; we can use CyberChef for that. The flag is the resulting 512-bit hash value e8fc4fa250e0974faef1212664c143cf1ee2ca052e7155a0ec246a0f2bf90376760f3cb64023af05d7b09ee0760a62bbec0666f7a24c93ed3bcf7ededf325bf4.
DevSlopCTF{e8fc4fa250e0974faef1212664c143cf1ee2ca052e7155a0ec246a0f2bf90376760f3cb64023af05d7b09ee0760a62bbec0666f7a24c93ed3bcf7ededf325bf4}
Deploying Microservices to Kubernetes
Introduction (3 Points)
In this section, we will learn how to deploy each microservice to the Kubernetes cluster. We are introduced to the definitions of a pod, deployment, label and namespace. We see an example of a configuration file for a Pod.
apiVersion: v1
kind: Pod
metadata:
  name: static-web
  labels:
    role: myrole
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
          protocol: TCP
We also learn that in the cluster being used for this challenge, there is one namespace per team; ours is team4. Every time we deploy a resource during this challenge, we will have to tell Kubernetes to deploy it to our own team’s namespace.
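As an aside, rather than appending -n team4 to every command, the namespace can be set as the default for the current context. I kept typing -n throughout this walkthrough, so this is purely optional.
$ kubectl config set-context --current --namespace=team4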
Capture The Flag
This flag is to confirm that we have understood the primer.
DevSlopCTF{ok}
Front-End (398 Points)
We are to create another folder in frontend called k8s to store the configuration files related to the front-end.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend$ mkdir k8s
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend$ cd k8s/
Create a file called deployment.yaml.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend/k8s$ touch deployment.yaml
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend/k8s$ ls
deployment.yaml
The requirements for the contents of the configuration file are as follows:
- The configuration file should define a Deployment. That translates to kind: Deployment.
- The name of the Deployment should be frontend (name: frontend).
- The container should run in your team’s namespace (team4). This will be specified when we run the deployment command.
- There should be one single frontend container running at a time (replicas: 1).
- Assign a label to the Pod: app: frontend. The selector field defines how the Deployment finds which Pods to manage; in this case, you select a label that is defined in the Pod template.
- When defining a container:
  - The name of the container should be frontend (containers: -> name: frontend).
  - The image should be downloaded from ECR (the cluster already has permission to do that). From our previous task, the image location is 051096112438.dkr.ecr.us-east-1.amazonaws.com/team4-frontend.
  - The container should listen on port 8080 (containerPort: 8080). We are to give this container port a name; we will name it webport.
- Finally, set the Pod’s restart policy to Always. From the official documentation of a Pod, we see the option to set the restartPolicy of a Pod, so we will specify restartPolicy: Always.
This is our finalized config file.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend/k8s$ gedit deployment.yaml
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend/k8s$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: 051096112438.dkr.ecr.us-east-1.amazonaws.com/team4-frontend:frontend
          ports:
            - name: webport
              containerPort: 8080
      restartPolicy: Always
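Optionally, the manifest can be validated without creating anything on the cluster (a client-side dry run, available on recent kubectl versions):
$ kubectl apply -f deployment.yaml --dry-run=client -n team4
deployment.apps/frontend created (dry run)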
Since the image is being pulled from ECR, ensure you are logged in via the CLI in case you run into issues.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend/k8s$ aws ecr get-login --region us-east-1 --no-include-email
docker login -u AWS -p eyJwYXlsb2FkIjoib1JaUTBycG9zYlRTcityQ1h0TVo5UldiN3N5WEFiMzZQZ3BVUFVFcUI4RkVzMmZ6VDd2ZTRWOTB0T2I3L0lQQUttT1VENW5sc29tSHA4NDdadXpPcEZYTVBSdlRGUzBEcGsvaFVVRjJLSkgyWE1kcmxOc1g1bHh6MG1Ea0lSTEd3YXhZVGUyK2RRWXpyZ0FpeG81dVBUYzIxb2dhRVhYZFoxSWhTKzdWb0pRd3dUdzdyZzJuK0VvODJ2b1N1VjNRbVVIbExHNzV3MGN3U3d2ZVNrdzNJMElTNnlWcTRKMVlVRC9NemduUVp1VXd4d0tlanMvUXRGQ0QzYTN2YndadHdVeDNOYzNaaFcxbGRCMXJOV1FzNk0vRXBoMVdMbHdLMjhEWjJQcTZXTkluUGU3Qm1tbStUVlZ1bVJSRDNxd0twdXV0ZHI4N3A3ZVdsZ2V2bms0QlUzSzRBNTBqeEZGWHZwTWc3aVNCbmI1bEt6NmVGN3Fla3BqRVpyMU1WMFp0VWYvOWUyUXl1VndCZzRvSW5saW5JYUlPbFhYbHJaR2JlKzd3cHg5WFpxdFZld25oWjRzekpsOFpITmJ2ZUs0Uk9LK2lpZ3dkS0UyaDk5VGlHYmVVZHhNam9RYWE4VWVIK0tMNktJQmxBV25YNVlhdHBRbGxTQzh4QVF3N1ltNkpJSnlldVoyaGdSWU5ydEpCY0hIMEJSSmxrZ0NhWm56d1BQZy8yTU9Pc2REK1lkb3p6TmNxMVZjT2VnRE9yQzc3OVRTSGV2V2NqZC9zaTRHWGt3SkRqOWdjc1o0aFBQeWZxYUtwdk9jUENKWklGc3BBQXpLbHByMWQ5ZDhXcHVvS2FiUGhpYTJIZUI1VHRqSkVpWVlhMmhySEEwU01oRGtPNGRhL3M3bEFSZEFrMVJmRVc1cmZGTGFEOTVLOVZBc1ZYN1hpSC9RWGtJM1k1RW53Ym0zNnliY0pKTm42K2dneFhnNXBwMXhBZ09TYUp0UDFRY1NQTVpCK1NwYXYxdCt2K0hNMmdUSXcxQTZIZUxDMk56c3ZXYnlqTENSSW5uVnMvZGNlMnRWeWNLZ3RNdldvcEtWUkZ2V3FiWEllY3JWUXkyNWRaL3MwVHEyZnVGcUNUYllVbkRaenZlV0pILzgxNGhUQ0FyVUZ2c1RtYjhKZkNsUldjT05QV2ZPelpWVzNjT2w3ZFk5cHRoSU10WE9YMWMrLzc5bkpTY0ZFbVZaMjRLTzgyb1dnYmlxSktLc21LYzdsOEZEdjdzRTU0Z1VmRDUxQnF5YkpPN2R2ZytGS0o5L1U1VlI0dFIwcGpVU3k3NVVEZEMwNXJKQXY2TEtTWTRCdUlMWE1kaWNuMTZYaVlBSFVGaGdsa2d4clpqNWw5QW5ocmIzWEo4enhRSGE4R0RYcy9hR2FBajBpbkgzYzVSbTkzbGw5REhvaEN1anRrYnFmRXBpUWNZVTEiLCJkYXRha2V5IjoiQVFFQkFIaHdtMFlhSVNKZVJ0Sm01bjFHNnVxZWVrWHVvWFhQZTVVRmNlOVJxOC8xNHdBQUFINHdmQVlKS29aSWh2Y05BUWNHb0c4d2JRSUJBREJvQmdrcWhraUc5dzBCQndFd0hnWUpZSVpJQVdVREJBRXVNQkVFREs0dkMvRUlwclpJam1nRWhRSUJFSUE3N0J0d3BrMk42eWNyY3pPYTFOa0ZnbXByK3FTaEZvOTBPSXNvYVRtSDZLZ0tJWlBMTjlMb1I1eEc1OElONDZrc3lQdGVpZGR5WVRtUHFSTT0iLCJ2ZXJzaW9uIjoiMiIsInR5cGUiOiJEQVRBX0tFWSIsImV4cGlyYXRpb24iOjE2MTM2NzcyNjd9 https://051096112438.dkr.ecr.us-east-1.amazonaws.com
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend/k8s$ docker login -u AWS -p eyJwYXlsb2FkIjoib1JaUTBycG9zYlRTcityQ1h0TVo5UldiN3N5WEFiMzZQZ3BVUFVFcUI4RkVzMmZ6VDd2ZTRWOTB0T2I3L0lQQUttT1VENW5sc29tSHA4NDdadXpPcEZYTVBSdlRGUzBEcGsvaFVVRjJLSkgyWE1kcmxOc1g1bHh6MG1Ea0lSTEd3YXhZVGUyK2RRWXpyZ0FpeG81dVBUYzIxb2dhRVhYZFoxSWhTKzdWb0pRd3dUdzdyZzJuK0VvODJ2b1N1VjNRbVVIbExHNzV3MGN3U3d2ZVNrdzNJMElTNnlWcTRKMVlVRC9NemduUVp1VXd4d0tlanMvUXRGQ0QzYTN2YndadHdVeDNOYzNaaFcxbGRCMXJOV1FzNk0vRXBoMVdMbHdLMjhEWjJQcTZXTkluUGU3Qm1tbStUVlZ1bVJSRDNxd0twdXV0ZHI4N3A3ZVdsZ2V2bms0QlUzSzRBNTBqeEZGWHZwTWc3aVNCbmI1bEt6NmVGN3Fla3BqRVpyMU1WMFp0VWYvOWUyUXl1VndCZzRvSW5saW5JYUlPbFhYbHJaR2JlKzd3cHg5WFpxdFZld25oWjRzekpsOFpITmJ2ZUs0Uk9LK2lpZ3dkS0UyaDk5VGlHYmVVZHhNam9RYWE4VWVIK0tMNktJQmxBV25YNVlhdHBRbGxTQzh4QVF3N1ltNkpJSnlldVoyaGdSWU5ydEpCY0hIMEJSSmxrZ0NhWm56d1BQZy8yTU9Pc2REK1lkb3p6TmNxMVZjT2VnRE9yQzc3OVRTSGV2V2NqZC9zaTRHWGt3SkRqOWdjc1o0aFBQeWZxYUtwdk9jUENKWklGc3BBQXpLbHByMWQ5ZDhXcHVvS2FiUGhpYTJIZUI1VHRqSkVpWVlhMmhySEEwU01oRGtPNGRhL3M3bEFSZEFrMVJmRVc1cmZGTGFEOTVLOVZBc1ZYN1hpSC9RWGtJM1k1RW53Ym0zNnliY0pKTm42K2dneFhnNXBwMXhBZ09TYUp0UDFRY1NQTVpCK1NwYXYxdCt2K0hNMmdUSXcxQTZIZUxDMk56c3ZXYnlqTENSSW5uVnMvZGNlMnRWeWNLZ3RNdldvcEtWUkZ2V3FiWEllY3JWUXkyNWRaL3MwVHEyZnVGcUNUYllVbkRaenZlV0pILzgxNGhUQ0FyVUZ2c1RtYjhKZkNsUldjT05QV2ZPelpWVzNjT2w3ZFk5cHRoSU10WE9YMWMrLzc5bkpTY0ZFbVZaMjRLTzgyb1dnYmlxSktLc21LYzdsOEZEdjdzRTU0Z1VmRDUxQnF5YkpPN2R2ZytGS0o5L1U1VlI0dFIwcGpVU3k3NVVEZEMwNXJKQXY2TEtTWTRCdUlMWE1kaWNuMTZYaVlBSFVGaGdsa2d4clpqNWw5QW5ocmIzWEo4enhRSGE4R0RYcy9hR2FBajBpbkgzYzVSbTkzbGw5REhvaEN1anRrYnFmRXBpUWNZVTEiLCJkYXRha2V5IjoiQVFFQkFIaHdtMFlhSVNKZVJ0Sm01bjFHNnVxZWVrWHVvWFhQZTVVRmNlOVJxOC8xNHdBQUFINHdmQVlKS29aSWh2Y05BUWNHb0c4d2JRSUJBREJvQmdrcWhraUc5dzBCQndFd0hnWUpZSVpJQVdVREJBRXVNQkVFREs0dkMvRUlwclpJam1nRWhRSUJFSUE3N0J0d3BrMk42eWNyY3pPYTFOa0ZnbXByK3FTaEZvOTBPSXNvYVRtSDZLZ0tJWlBMTjlMb1I1eEc1OElONDZrc3lQdGVpZGR5WVRtUHFSTT0iLCJ2ZXJzaW9uIjoiMiIsInR5cGUiOiJEQVRBX0tFWSIsImV4cGlyYXRpb24iOjE2MTM2NzcyNjd9 https://051096112438.dkr.ecr.us-east-1.amazonaws.com
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/userxx/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
We now need to apply the configuration file using the command kubectl apply -f frontend/k8s/deployment.yaml. Since we are already inside frontend/k8s/, we can reference deployment.yaml directly.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend/k8s$ kubectl apply -f deployment.yaml --namespace=team4
deployment.apps/frontend created
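If you would rather wait for the rollout to finish than poll the pod list, kubectl can block until the Deployment is ready; again, optional.
$ kubectl rollout status deployment/frontend -n team4
deployment "frontend" successfully rolled out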
Capture The Flag
Let’s verify the container is up and running using the command kubectl get pods -n <TEAM_NAME>.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend/k8s$ kubectl get pods -n team4
NAME READY STATUS RESTARTS AGE
frontend-65c8d8874c-z842x 1/1 Running 0 95s
Let’s dump the logs from the container. We see from the guide provided that we can use the command kubectl logs my-pod.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend/k8s$ kubectl logs frontend-65c8d8874c-z842x -n team4
> frontend@1.0.0 start /usr/src/app
> node build/dev-server.js
[HPM] Proxy created: /login -> http://127.0.0.1:8081
[HPM] Proxy created: /todos -> http://127.0.0.1:8082
[HPM] Proxy created: /zipkin -> http://127.0.0.1:9411/api/v2/spans
[HPM] Proxy rewrite rule created: "^/zipkin" ~> ""
> Starting dev server...
WARNING Compiled with 4 warnings 07:58:47
warning in ./src/components/Todos.vue
(Emitted value instead of an instance of Error) Do not use v-for index as key on <transition-group> children, this is the same as not using keys.
@ ./src/components/Todos.vue 6:2-301
@ ./src/router/index.js
@ ./src/main.js
@ multi ./build/dev-client ./src/main.js
warning in ./~/zipkin-transport-http/lib/HttpLogger.js
11:27-34 Critical dependency: require function is used in a way in which dependencies cannot be statically extracted
warning in ./~/zipkin/lib/InetAddress.js
62:23-30 Critical dependency: require function is used in a way in which dependencies cannot be statically extracted
warning in ./~/zipkin-instrumentation-vue-resource/~/zipkin/lib/InetAddress.js
62:23-30 Critical dependency: require function is used in a way in which dependencies cannot be statically extracted
> Listening at http://127.0.0.1:8080
As per the instructions, we do see the line [HPM] Proxy rewrite rule created: "^/zipkin" ~> "". The flag is the string within the first pair of quotes, ^/zipkin.
DevSlopCTF{^/zipkin}
Auth API (398 Points)
Let’s create a folder to store our files, including the deployment.yaml file.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/auth-api$ mkdir k8s
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/auth-api$ cd k8s/
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/auth-api/k8s$ touch deployment.yaml
The requirements this time are as follows:
- This configuration file should define a Deployment, and NOT a Pod (kind: Deployment)
- The name of the Deployment should be auth-api (name: auth-api)
- The container should run in your team’s namespace (team4 – This will be specified when we run the deployment command.)
- There should be one single auth-api container running at a time (replicas: 1)
- Assign a label to the Pod: app: auth-api (this is specified under template)
- When defining a container:
- The name of the container should be auth-api
- The image should be downloaded from ECR (the cluster already has permission to do that)
- The container should listen on port 8081 (tip: give a name to this container port)
- Finally, set the Pod’s restart policy to Always
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/auth-api/k8s$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: auth-api
spec:
replicas: 1
selector:
matchLabels:
app: auth-api
template:
metadata:
labels:
app: auth-api
spec:
containers:
- name: auth-api
image: 051096112438.dkr.ecr.us-east-1.amazonaws.com/team4-auth-api:authapi
ports:
- name: authport
containerPort: 8081
restartPolicy: Always
Next, we need to deploy the Auth API.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/auth-api/k8s$ kubectl apply -f deployment.yaml --namespace=team4
deployment.apps/auth-api created
Capture The Flag
We can verify that the pod is running.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/auth-api/k8s$ kubectl get pods -n team4
NAME READY STATUS RESTARTS AGE
auth-api-77b95f88c-kmg7v 1/1 Running 0 4m59s
We are instructed to dump the logs of the auth-api container and pay attention to the json payload and specifically this part "echo","file":"[REDACTED]","line"
.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/auth-api/k8s$ kubectl logs auth-api-77b95f88c-kmg7v -n team4
{"time":"2021-02-18T08:29:48.507700114Z","level":"INFO","prefix":"echo","file":"proc.go","line":"195","message":"Zipkin URL was not provided, tracing is not initialised"}
____ __
/ __/___/ / ___
/ _// __/ _ \/ _ \
/___/\__/_//_/\___/ v3.2.6
High performance, minimalist Go web framework
https://echo.labstack.com
____________________________________O/_______
O\
⇨ http server started on [::]:44279
The flag is the value for the key file in the JSON payload. Sure enough, we see "file":"proc.go" in the message, so the redacted part is proc.go. Note, in passing, that the server came up on an arbitrary port ([::]:44279) rather than 8081, most likely because the AUTH_API_PORT environment variable has not been provided yet; we will take care of that in the ConfigMaps section.
DevSlopCTF{proc.go}
Users API (399 Points)
We are getting the hang of it now. We simply need to repeat the actions above for the remaining services. Instead of creating a new deployment file with the touch command, I will simply copy the previous one we created and edit it based on the specifications.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/users-api$ mkdir k8s
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/users-api$ cd k8s/
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/users-api/k8s$ cp ../../auth-api/k8s/deployment.yaml .
The requirements this time are:
- This configuration file should define a Deployment, and NOT a Pod (kind: Deployment)
- The name of the Deployment should be users-api (name: users-api)
- The container should run in your team’s namespace (team4)
- There should be one single users-api container running at a time (replicas: 1)
- Assign a label to the Pod: app: users-api (app: users-api)
- When defining a container:
- The name of the container should be users-api (name: users-api)
- The image should be downloaded from ECR (051096112438.dkr.ecr.us-east-1.amazonaws.com/team4-users-api)
- The container should listen on port 8083. Give a name to this container port (name: usersport)
- Finally, set the Pod’s restart policy to Always (restartPolicy: Always)
Our updated deployment file looks like this
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/users-api$ gedit deployment.yaml
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/users-api$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: users-api
spec:
replicas: 1
selector:
matchLabels:
app: users-api
template:
metadata:
labels:
app: users-api
spec:
containers:
- name: users-api
image: 051096112438.dkr.ecr.us-east-1.amazonaws.com/team4-users-api
ports:
- name: usersport
containerPort: 8083
restartPolicy: Always
Let’s now deploy the users API.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/users-api$ kubectl apply -f deployment.yaml --namespace=team4
deployment.apps/users-api created
Capture The Flag
Let’s verify that the pod has been created and is running.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/users-api$ kubectl get pods -n team4
NAME READY STATUS RESTARTS AGE
users-api-5558c96597-rjxmq 1/1 Running 0 47s
We need to dump the logs of this API.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/users-api$ kubectl logs users-api-5558c96597-rjxmq -n team4
2021-02-18 08:53:03.724 INFO [bootstrap,,,] 1 --- [ main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@78e03bb5: startup date [Thu Feb 18 08:53:03 GMT 2021]; root of context hierarchy
//TRUNCATED
2021-02-18 08:53:18.702 INFO [users-api,,,] 1 --- [ main] com.elgris.usersapi.UsersApiApplication : Started UsersApiApplication in 15.703 seconds (JVM running for 16.601)
The logs are a bit chatty, but the relevant entry is at the end. The flag is the string that has been replaced with [REDACTED] in the statement Started [REDACTED] in XXXX seconds (JVM running for XXXX). The redacted word is UsersApiApplication.
DevSlopCTF{UsersApiApplication}
TODOs API (400 Points)
Let’s repeat the actions above for the Todo service. Instead of creating a new deployment file with the touch command, I will simply copy the previous one we created and edit it based on the specifications.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/todos-api$ mkdir k8s
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/todos-api$ cd k8s/
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/todos-api/k8s$ ls
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/todos-api/k8s$ cp ../../auth-api/k8s/deployment.yaml .
The requirements this time are:
- This configuration file should define a Deployment, and NOT a Pod (kind: Deployment)
- The name of the Deployment should be todos-api (name: todos-api)
- The container should run in your team’s namespace (team4)
- There should be one single todos-api container running at a time (replicas: 1)
- Assign two labels to the Pod: app: todos-api and redis-access: "true"
- When defining a container:
- The name of the container should be todos-api (name: todos-api)
- The image should be downloaded from ECR (051096112438.dkr.ecr.us-east-1.amazonaws.com/team4-todos-api)
- The container should listen on port 8082. Give a name to this container port (name: todosport)
- Finally, set the Pod’s restart policy to Always (restartPolicy: Always)
This is what we get
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/todos-api/k8s$ gedit deployment.yaml
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/todos-api/k8s$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: todos-api
spec:
replicas: 1
selector:
matchLabels:
app: todos-api
redis-access: "true"
template:
metadata:
labels:
app: todos-api
redis-access: "true"
spec:
containers:
- name: todos-api
image: 051096112438.dkr.ecr.us-east-1.amazonaws.com/team4-todos-api
ports:
- name: todosport
containerPort: 8082
restartPolicy: Always
Let us deploy the Todo API.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/todos-api/k8s$ kubectl apply -f deployment.yaml --namespace=team4
deployment.apps/todos-api created
Capture The Flag
Let us verify that the pod is up and running then dump the logs.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/todos-api/k8s$ kubectl get pods -n team4
NAME READY STATUS RESTARTS AGE
todos-api-5787675bc6-qw4cw 1/1 Running 0 4m38s
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/todos-api/k8s$ kubectl logs todos-api-5787675bc6-qw4cw -n team4
> todos-api@1.0.0 start /usr/src/app
> nodemon server.js
[nodemon] 1.19.4
[nodemon] to restart at any time, enter `rs`
[nodemon] watching dir(s): *.*
[nodemon] watching extensions: js,mjs,json
[nodemon] starting `node server.js`
todo list RESTful API server started on: 8082
We are to pay attention to the string [REDACTED] starting node server.js
. We see that the flag maps to nodemon
.
DevSlopCTF{nodemon}
Log Message Processor (400 Points)
The last service to deploy is the Log Message Processor. The same approach applies here. Let’s create the folder and copy over the deployment file so that we can edit it.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/todos-api/k8s$ cd ../../log-message-processor/
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/log-message-processor$ mkdir k8s
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/log-message-processor$ cd k8s/
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/log-message-processor/k8s$ cp ../../auth-api/k8s/deployment.yaml .
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/log-message-processor/k8s$ gedit deployment.yaml
The requirements this time are:
- This configuration file should define a Deployment, and NOT a Pod (kind: Deployment)
- The name of the Deployment should be log-message-processor (name: log-message-processor)
- The container should run in your team’s namespace (team4)
- There should be one single log-message-processor container running at a time (replicas: 1)
- Assign two labels to the Pod: app: log-message-processor and redis-access: "true"
- When defining a container:
- The name of the container should be log-message-processor (name: log-message-processor)
- The image should be downloaded from ECR (051096112438.dkr.ecr.us-east-1.amazonaws.com/team4-log-message-processor)
- The container should NOT listen on any port (let’s remove the contents of ports:)
- Finally, set the Pod’s restart policy to Always (restartPolicy: Always)
The resulting deployment file is below
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/log-message-processor/k8s$ gedit deployment.yaml
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/log-message-processor/k8s$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: log-message-processor
spec:
replicas: 1
selector:
matchLabels:
app: log-message-processor
redis-access: "true"
template:
metadata:
labels:
app: log-message-processor
redis-access: "true"
spec:
containers:
- name: log-message-processor
image: 051096112438.dkr.ecr.us-east-1.amazonaws.com/team4-log-message-processor
restartPolicy: Always
Finally, let’s deploy it.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/log-message-processor/k8s$ kubectl apply -f deployment.yaml --namespace=team4
deployment.apps/log-message-processor created
Capture The Flag
Let’s verify the status of the Pod.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/log-message-processor/k8s$ kubectl get pods -n team4
NAME READY STATUS RESTARTS AGE
log-message-processor-78b6668c64-r67cz 0/1 CrashLoopBackOff 1 12s
We see that the pod is not running: it is stuck in the CrashLoopBackOff status. This is expected as per the instructions and will be fixed later.
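As an aside, kubectl logs is not the only window into a crashing Pod: kubectl describe surfaces the restart count, the last container state and the back-off events, which is handy when the logs themselves are empty. A quick sketch, using the Pod name from the listing above:
$ kubectl describe pod log-message-processor-78b6668c64-r67cz -n team4
# look at the container's 'Last State' and the Events section at the bottom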
Let’s analyze the logs for this pod to see the error.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/log-message-processor/k8s$ kubectl logs log-message-processor-78b6668c64-r67cz -n team4
Traceback (most recent call last):
File "main.py", line 16, in <module>
redis_host = os.environ['REDIS_HOST']
File "/usr/local/lib/python3.6/os.py", line 669, in __getitem__
raise KeyError(key) from None
KeyError: 'REDIS_HOST'
The flag is the string that has been replaced with {REDACTED} in the line KeyError: '{REDACTED}'
. In this case, the value is REDIS_HOST
.
DevSlopCTF{REDIS_HOST}
Configuring Environment Variables with ConfigMaps and Secrets
In this section, we will be dealing with ConfigMaps and Secrets which enable us to provide a few environment variables to each microservice so they can function correctly.
Introduction (5 Points)
We see a sample showing how to add variables to a container using the env parameter. We are advised to use ConfigMaps and Secrets to do so, the difference being that ConfigMaps are used to store non-confidential data, while Secrets are used for sensitive information.
apiVersion: v1
kind: Pod
metadata:
name: envar-demo
labels:
purpose: demonstrate-envars
spec:
containers:
- name: envar-demo-container
image: gcr.io/google-samples/node-hello:1.0
env:
- name: DEMO_GREETING
value: "Hello from the environment"
- name: DEMO_FAREWELL
value: "Such a sweet sorrow"
Capture The Flag
Let’s confirm that we do know what ConfigMaps and Secrets are.
DevSlopCTF{ok}
Front-end (250 Points)
The Front-end needs 3 environment variables to function correctly, none of which is confidential. In that case, a ConfigMap is the right tool for supplying them:
- AUTH_API_ADDRESS: The address of the Auth API
- PORT: The port where the Front-end should listen to traffic on
- TODOS_API_ADDRESS: The address of the TODOs API
Let’s create a file called configmap.yaml based on the requirements below. We can get a sample ConfigMap from the official documentation here.
- The name of the ConfigMap should be frontend (name: frontend)
- The ConfigMap should define 3 key-value pairs (data:):
- AUTH_API_ADDRESS: http://auth-api:8081
- PORT: 8080
- TODOS_API_ADDRESS: http://todos-api:8082
Here is the ConfigMap I used
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend/k8s$ gedit configmap.yaml
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend/k8s$ cat configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: frontend
data:
AUTH_API_ADDRESS: http://auth-api:8081
PORT: "8080"
TODOS_API_ADDRESS: http://todos-api:8082
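One YAML detail worth calling out: the data values of a ConfigMap must be strings, which is why PORT is quoted. Left unquoted, YAML would parse 8080 as an integer and the API server would reject the object. A minimal illustration:
data:
  # PORT: 8080   <- rejected: ConfigMap data values must be strings
  PORT: "8080"   # quoted, so YAML treats it as a string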
Let’s deploy the ConfigMap and verify that it has been created successfully and defines 3 key-value pairs.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend/k8s$ kubectl apply -f configmap.yaml --namespace=team4
configmap/frontend created
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend/k8s$ kubectl get configmap frontend -n team4
NAME DATA AGE
frontend 3 60s
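The count of 3 matches. If we want to eyeball the actual key-value pairs rather than just the count, either of these also works:
$ kubectl describe configmap frontend -n team4
$ kubectl get configmap frontend -n team4 -o yaml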
We now need to ensure that the deployment.yaml file references the ConfigMap so that the values are injected when the Pods are deployed. We can use the example from the official documentation here to make the changes in the deployment file.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend/k8s$ gedit deployment.yaml
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend/k8s$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: frontend
spec:
replicas: 1
selector:
matchLabels:
app: frontend
template:
metadata:
labels:
app: frontend
spec:
containers:
- name: frontend
image: 051096112438.dkr.ecr.us-east-1.amazonaws.com/team4-frontend:frontend
ports:
- name: webport
containerPort: 8080
        env: # this is how we specify the environment variables
        - name: AUTH_API_ADDRESS
          valueFrom:
            configMapKeyRef:
              name: frontend # the name of our ConfigMap is frontend
              key: AUTH_API_ADDRESS
        - name: PORT
          valueFrom:
            configMapKeyRef:
              name: frontend
              key: PORT
        - name: TODOS_API_ADDRESS
          valueFrom:
            configMapKeyRef:
              name: frontend
              key: TODOS_API_ADDRESS
restartPolicy: Always
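As an aside, since this container consumes every key in the ConfigMap, envFrom would be a more compact equivalent to listing each key with configMapKeyRef. A sketch of what could replace the whole env block above:
        envFrom:
        - configMapRef:
            name: frontend # imports AUTH_API_ADDRESS, PORT and TODOS_API_ADDRESS as env vars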
Let’s update the deployment and double-check that all is well. Note that the Pod name changes: modifying the Pod template causes the Deployment to roll out a new ReplicaSet, which replaces the old Pod.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend/k8s$ kubectl apply -f deployment.yaml --namespace=team4
deployment.apps/frontend configured
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend/k8s$ kubectl get pods -n team4
NAME READY STATUS RESTARTS AGE
frontend-5f75977b7b-kq6d5 1/1 Running 0 69s
Capture The Flag
We need to dump the logs from the frontend container and review the output.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend/k8s$ kubectl logs frontend-5f75977b7b-kq6d5 -n team4
> frontend@1.0.0 start /usr/src/app
> node build/dev-server.js
[HPM] Proxy created: /login -> http://auth-api:8081
[HPM] Proxy created: /todos -> http://todos-api:8082
[HPM] Proxy created: /zipkin -> http://127.0.0.1:9411/api/v2/spans
[HPM] Proxy rewrite rule created: "^/zipkin" ~> ""
> Starting dev server...
WARNING Compiled with 4 warnings11:09:17
warning in ./src/components/Todos.vue
(Emitted value instead of an instance of Error) Do not use v-for index as key on <transition-group> children, this is the same as not using keys.
@ ./src/components/Todos.vue 6:2-301
@ ./src/router/index.js
@ ./src/main.js
@ multi ./build/dev-client ./src/main.js
warning in ./~/zipkin/lib/InetAddress.js
62:23-30 Critical dependency: require function is used in a way in which dependencies cannot be statically extracted
warning in ./~/zipkin-transport-http/lib/HttpLogger.js
11:27-34 Critical dependency: require function is used in a way in which dependencies cannot be statically extracted
warning in ./~/zipkin-instrumentation-vue-resource/~/zipkin/lib/InetAddress.js
62:23-30 Critical dependency: require function is used in a way in which dependencies cannot be statically extracted
> Listening at http://127.0.0.1:8080
we are to pay attention to the line [HPM] Proxy created: /todos -> http://todos-api:8082
. The flag is the string http://todos-api:8082
.
DevSlopCTF{http://todos-api:8082}
TODOs API (299 Points)
The TODOs API needs 5 environment variables to function. One is confidential, so we will use a Secret for it.
- JWT_SECRET: The JWT token used to authenticate with the TODOs API and that will be sent by the Front-end
The rest are non-confidential, therefore, we will use ConfigMap.
- TODO_API_PORT: The port where the TODOs API should listen to traffic on
- REDIS_HOST: The address of Redis where data will be temporarily stored
- REDIS_PORT: The port where Redis will be listening to traffic on
- REDIS_CHANNEL: The name of the Redis channel
Let’s create both the configmap.yaml and the secret.yaml. We will start by creating the configmap.yaml.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/todos-api/k8s$ touch configmap.yaml
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/todos-api/k8s$ touch secret.yaml
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/todos-api/k8s$ gedit configmap.yaml
The requirements for the ConfigMap are below. This is similar to the one we just created.
- The name of the ConfigMap should be todos-api (name: todos-api)
- The ConfigMap should define 4 key-value pairs:
- TODO_API_PORT: 8082
- REDIS_HOST: redis-queue
- REDIS_PORT: 6379
- REDIS_CHANNEL: log_channel
The result is below:
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/todos-api/k8s$ gedit configmap.yaml
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/todos-api/k8s$ cat configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: todos-api
data:
TODO_API_PORT: "8082"
REDIS_HOST: redis-queue
REDIS_PORT: "6379"
REDIS_CHANNEL: log_channel
For the Secret file, we can see a sample here. The requirements for the file are:
- The name of the Secret should be todos-api (name: todos-api)
- The type should be Opaque (type: Opaque)
- The Secret should define a single key-value pair, where the key is JWT_SECRET and the decoded value is myfancysecret, which base64-encodes to "bXlmYW5jeXNlY3JldA=="
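If you would rather generate the encoded value on the command line than in an online tool, base64 does the job; note the -n flag so that a trailing newline is not encoded along with the secret:
$ echo -n 'myfancysecret' | base64
bXlmYW5jeXNlY3JldA==
$ echo 'bXlmYW5jeXNlY3JldA==' | base64 --decode
myfancysecret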
The resulting file contents are below.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/todos-api/k8s$ gedit secret.yaml
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/todos-api/k8s$ cat secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: todos-api
namespace: team4
type: Opaque
data:
JWT_SECRET: bXlmYW5jeXNlY3JldA==
Let’s now deploy both ConfigMap and Secret.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/todos-api/k8s$ kubectl apply -f configmap.yaml --namespace=team4
configmap/todos-api created
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/todos-api/k8s$ kubectl apply -f secret.yaml --namespace=team4
secret/todos-api created
Next, we perform validation of the two.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/todos-api/k8s$ kubectl get configmap todos-api -n team4
NAME DATA AGE
todos-api 4 68s
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/todos-api/k8s$ kubectl get secret todos-api -n team4
NAME TYPE DATA AGE
todos-api Opaque 1 2m28s
Everything looks fine, so let’s go ahead and update the deployment.yaml file so that environment variables are created from the contents of these 2 resources. The Secret example shows how to reference Secret values in the Deployment file. The resulting file is shown below.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/todos-api/k8s$ gedit deployment.yaml
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/todos-api/k8s$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: todos-api
spec:
replicas: 1
selector:
matchLabels:
app: todos-api
redis-access: "true"
template:
metadata:
labels:
app: todos-api
redis-access: "true"
spec:
containers:
- name: todos-api
image: 051096112438.dkr.ecr.us-east-1.amazonaws.com/team4-todos-api
ports:
- name: todosport
containerPort: 8082
        env: # this is how we specify the environment variables
        - name: JWT_SECRET
          valueFrom:
            secretKeyRef:
              name: todos-api # the name of our Secret is todos-api
              key: JWT_SECRET
        - name: TODO_API_PORT
          valueFrom:
            configMapKeyRef:
              name: todos-api # the name of our ConfigMap is todos-api
              key: TODO_API_PORT
        - name: REDIS_HOST
          valueFrom:
            configMapKeyRef:
              name: todos-api
              key: REDIS_HOST
        - name: REDIS_PORT
          valueFrom:
            configMapKeyRef:
              name: todos-api
              key: REDIS_PORT
        - name: REDIS_CHANNEL
          valueFrom:
            configMapKeyRef:
              name: todos-api
              key: REDIS_CHANNEL
restartPolicy: Always
We need to update the deployment to apply the changes.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/todos-api/k8s$ kubectl apply -f deployment.yaml --namespace=team4
deployment.apps/todos-api configured
Capture The Flag
Let’s dump the logs to get the flag. We are to look out for the reattemtping to [REDACTED], (...) line (the misspelling is in the application’s own log output).
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/todos-api/k8s$ kubectl get pods -n team4
NAME READY STATUS RESTARTS AGE
auth-api-77b95f88c-kmg7v 1/1 Running 0 13h
frontend-5f75977b7b-kq6d5 1/1 Running 0 10h
log-message-processor-78b6668c64-r67cz 0/1 CrashLoopBackOff 141 11h
todos-api-96f98d577-l2sjz 1/1 Running 0 49s
users-api-5558c96597-rjxmq 1/1 Running 0 12h
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/todos-api/k8s$ kubectl logs todos-api-96f98d577-l2sjz -n team4
> todos-api@1.0.0 start /usr/src/app
> nodemon server.js
[nodemon] 1.19.4
[nodemon] to restart at any time, enter `rs`
[nodemon] watching dir(s): *.*
[nodemon] watching extensions: js,mjs,json
[nodemon] starting `node server.js`
todo list RESTful API server started on: 8082
reattemtping to connect to redis, attempt #11
The redacted part is connect to redis
. We got our flag.
DevSlopCTF{connect to redis}
Log Message Processor (299 Points)
Now that we have an understanding of how to create the ConfigMap and the Secrets, let’s duplicate that for the remaining services, starting with the Log Message Processor.
The Log Message Processor needs 3 environment variables to function correctly, none of which are confidential.
- REDIS_HOST: The address of Redis
- REDIS_PORT: The port where Redis is listening to traffic on
- REDIS_CHANNEL: The name of the Redis channel
The requirements for the file are:
- The name of the ConfigMap should be log-message-processor
- The ConfigMap should define 3 key-value pairs:
- REDIS_HOST: redis-queue
- REDIS_PORT: 6379
- REDIS_CHANNEL: log_channel
The resulting file is as follows:
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/log-message-processor/k8s$ gedit configmap.yaml
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/log-message-processor/k8s$ cat configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: log-message-processor
data:
REDIS_HOST: redis-queue
REDIS_PORT: "6379"
REDIS_CHANNEL: log_channel
Let’s deploy the configmap.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/log-message-processor/k8s$ kubectl apply -f configmap.yaml --namespace=team4
configmap/log-message-processor created
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/log-message-processor/k8s$ kubectl get configmap log-message-processor -n team4
NAME DATA AGE
log-message-processor 3 71s
Let’s update and deploy the deployment file.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/log-message-processor/k8s$ gedit deployment.yaml
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/log-message-processor/k8s$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: log-message-processor
spec:
replicas: 1
selector:
matchLabels:
app: log-message-processor
redis-access: "true"
template:
metadata:
labels:
app: log-message-processor
redis-access: "true"
spec:
containers:
- name: log-message-processor
image: 051096112438.dkr.ecr.us-east-1.amazonaws.com/team4-log-message-processor
        env: # this is how we specify the environment variables
        - name: REDIS_HOST
          valueFrom:
            configMapKeyRef:
              name: log-message-processor # the name of our ConfigMap
              key: REDIS_HOST
        - name: REDIS_PORT
          valueFrom:
            configMapKeyRef:
              name: log-message-processor
              key: REDIS_PORT
        - name: REDIS_CHANNEL
          valueFrom:
            configMapKeyRef:
              name: log-message-processor
              key: REDIS_CHANNEL
restartPolicy: Always
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/log-message-processor/k8s$ kubectl apply -f deployment.yaml --namespace=team4
deployment.apps/log-message-processor created
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/log-message-processor/k8s$ kubectl get pods -n team4
NAME READY STATUS RESTARTS AGE
auth-api-77b95f88c-kmg7v 1/1 Running 0 14h
frontend-5f75977b7b-kq6d5 1/1 Running 0 11h
log-message-processor-6ffff88b8b-k6jch 0/1 Error 1 10s
todos-api-96f98d577-l2sjz 1/1 Running 0 44m
users-api-5558c96597-rjxmq 1/1 Running 0 13h
Capture The Flag
Let’s analyze the logs from the container to find the flag.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/log-message-processor/k8s$ kubectl logs log-message-processor-6ffff88b8b-k6jch -n team4
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/redis/connection.py", line 484, in connect
sock = self._connect()
// TRUNCATED
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "main.py", line 28, in <module>
pubsub.subscribe([redis_channel])
File "/usr/local/lib/python3.6/site-packages/redis/client.py", line 2482, in subscribe
ret_val = self.execute_command('SUBSCRIBE', *iterkeys(new_channels))
File "/usr/local/lib/python3.6/site-packages/redis/client.py", line 2404, in execute_command
self._execute(connection, connection.send_command, *args)
File "/usr/local/lib/python3.6/site-packages/redis/client.py", line 2415, in _execute
connection.connect()
File "/usr/local/lib/python3.6/site-packages/redis/connection.py", line 489, in connect
raise ConnectionError(self._error_message(e))
redis.exceptions.ConnectionError: Error -2 connecting to redis-queue:6379. Name does not resolve.
We are supposed to be on the lookout for the redacted field in the line redis.exceptions.ConnectionError: Error -2 connecting to redis-queue:6379. [REDACTED]. In this case, the flag is Name does not resolve.
DevSlopCTF{Name does not resolve}
Users API (308 Points)
The Users API needs 2 environment variables to function correctly:
- JWT_SECRET: The JWT token used to authenticate with the Users API
- SERVER_PORT: The port where the Users API should listen to traffic on
We will need to use both ConfigMap and Secret. The ConfigMap will be for the port and the Secret for the token.
The requirements for the configmap are:
- The name of the ConfigMap should be users-api
- The ConfigMap should define 1 key-value pair:
- SERVER_PORT: 8083
The resulting file is shown below.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/users-api/k8s$ touch configmap.yaml
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/users-api/k8s$ gedit configmap.yaml
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/users-api/k8s$ cat configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: users-api
data:
SERVER_PORT: "8083"
Next, we need to work on the secret.yaml file. The requirements for this file are:
- The name of the Secret should be users-api
- The type should be Opaque
- The Secret should define a single key-value pair, where the key is JWT_SECRET and the decoded value is myfancysecret, which base64-encodes (in CyberChef, or on the command line as shown earlier) to "bXlmYW5jeXNlY3JldA=="
The resulting file is shown below.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/users-api/k8s$ touch secret.yaml
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/users-api/k8s$ gedit secret.yaml
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/users-api/k8s$ cat secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: users-api
namespace: team4
type: Opaque
data:
JWT_SECRET: bXlmYW5jeXNlY3JldA==
The two files are ready to be deployed.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/users-api/k8s$ kubectl apply -f configmap.yaml --namespace=team4
configmap/users-api created
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/users-api/k8s$ kubectl apply -f secret.yaml --namespace=team4
secret/users-api created
Let’s verify that both ConfigMap and Secret have been deployed correctly.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/users-api/k8s$ kubectl get configmap users-api -n team4
NAME DATA AGE
users-api 1 79s
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/users-api/k8s$ kubectl get secret users-api -n team4
NAME TYPE DATA AGE
users-api Opaque 1 82s
We need to reference the contents of the two files in the deployment file and then apply the update so that the changes propagate.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/users-api/k8s$ gedit deployment.yaml
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/users-api/k8s$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: users-api
spec:
replicas: 1
selector:
matchLabels:
app: users-api
template:
metadata:
labels:
app: users-api
spec:
containers:
- name: users-api
image: 051096112438.dkr.ecr.us-east-1.amazonaws.com/team4-users-api
ports:
- name: usersport
containerPort: 8083
env:
- name: JWT_SECRET
valueFrom:
secretKeyRef:
name: users-api
key: JWT_SECRET
- name: SERVER_PORT
valueFrom:
configMapKeyRef:
name: users-api
key: SERVER_PORT
restartPolicy: Always
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/users-api/k8s$ kubectl apply -f deployment.yaml --namespace=team4
deployment.apps/users-api configured
Capture The Flag
In this case, we are interested in JSONPath; this link explains how to use it with kubectl. We need to run a command that retrieves the image name of the running users-api container using JSONPath, with the fewest possible keys in the expression. I used a JSON formatter to beautify the output and narrow down the path.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/users-api/k8s$ kubectl get pods users-api-65bc9f45d9-lk99w -n team4 -o=jsonpath='{..image}'
051096112438.dkr.ecr.us-east-1.amazonaws.com/team4-users-api 051096112438.dkr.ecr.us-east-1.amazonaws.com/team4-users-api:latest
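Two values come back because {..image} is a recursive descent: it matches both .spec.containers[].image and the image reported under .status.containerStatuses[].image. Dumping the object once as JSON makes it easy to spot the shortest unambiguous path, which is what the next command uses:
$ kubectl get pods users-api-65bc9f45d9-lk99w -n team4 -o json | less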
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/users-api/k8s$ kubectl get pods users-api-65bc9f45d9-lk99w -n team4 -o=jsonpath='{.spec.containers[0].image}'
DevSlopCTF{{.spec.containers[0].image}}
Auth API (339 Points)
The Auth API needs 3 environment variables to function correctly, one of which is confidential.
- AUTH_API_PORT: The port where the Auth API should listen to traffic on
- JWT_SECRET: The JWT token used to authenticate with the Auth API and that will be sent by the Front-end
- USERS_API_ADDRESS: The address of the Users API
We will use Secret for the JWT_SECRET
and ConfigMap for the other two.
The requirements of the ConfigMap are:
- The name of the ConfigMap should be auth-api
- The ConfigMap should define 2 key-value pairs:
- AUTH_API_PORT: 8081
- USERS_API_ADDRESS: http://users-api:8083
The contents of the ConfigMap are below.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/auth-api/k8s$ touch configmap.yaml
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/auth-api/k8s$ gedit configmap.yaml
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/auth-api/k8s$ cat configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: auth-api
data:
AUTH_API_PORT: "8081"
USERS_API_ADDRESS: http://users-api:8083
The requirements for Secret are as follows:
- The name of the Secret should be auth-api
- The type should be Opaque
- The Secret should define a single key-value pair, where the key is JWT_SECRET and the decoded value is myfancysecret. This can be base64 encoded in cyberchef to "bXlmYW5jeXNlY3JldA==".
We end up with the contents below.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/auth-api/k8s$ touch secret.yaml
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/auth-api/k8s$ gedit secret.yaml
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/auth-api/k8s$ cat secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: auth-api
namespace: team4
type: Opaque
data:
JWT_SECRET: bXlmYW5jeXNlY3JldA==
Let’s deploy both files and update the deployment.yaml.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/auth-api/k8s$ kubectl apply -f secret.yaml --namespace=team4
secret/auth-api created
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/auth-api/k8s$ kubectl apply -f configmap.yaml --namespace=team4
configmap/auth-api created
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/auth-api/k8s$ gedit deployment.yaml
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/auth-api/k8s$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: auth-api
spec:
replicas: 1
selector:
matchLabels:
app: auth-api
template:
metadata:
labels:
app: auth-api
spec:
containers:
- name: auth-api
image: 051096112438.dkr.ecr.us-east-1.amazonaws.com/team4-auth-api:authapi
ports:
- name: authport
containerPort: 8081
env:
- name: JWT_SECRET
valueFrom:
secretKeyRef:
name: auth-api
key: JWT_SECRET
- name: AUTH_API_PORT
valueFrom:
configMapKeyRef:
name: auth-api
key: AUTH_API_PORT
- name: USERS_API_ADDRESS
valueFrom:
configMapKeyRef:
name: auth-api
key: USERS_API_ADDRESS
restartPolicy: Always
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/auth-api/k8s$ kubectl apply -f deployment.yaml --namespace=team4
deployment.apps/auth-api configured
Let’s verify that both ConfigMap and Secret have been deployed correctly.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/auth-api/k8s$ kubectl get configmap auth-api -n team4
NAME DATA AGE
auth-api 2 9m21s
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/auth-api/k8s$ kubectl get secret auth-api -n team4
NAME TYPE DATA AGE
auth-api Opaque 1 9m52s
Capture The Flag
To capture the flag for this task, we need to take the value of the key JWT_SECRET from secret.yaml and run it through a SHA256 Hash function.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/auth-api/k8s$ cat secret.yaml | grep JWT_SECRET
JWT_SECRET: bXlmYW5jeXNlY3JldA==
We see that the value is bXlmYW5jeXNlY3JldA==. Running it through SHA2-256 in CyberChef gives c5cbf6a748f4d29837043daefafc2531ad1c701410e1aa2bb9baa99858827736. The flag is the first 10 characters of the hash, that is c5cbf6a748.
DevSlopCTF{c5cbf6a748}
Deploying Redis
So far, we have not yet deployed Redis. Redis is an open source, in-memory data structure store, used as a database, cache, and message broker. In our TODO application, Redis will be used to store the items in the TODO list.
We need to create a folder to house the contents of redis. We will then create the deployment file.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf$ mkdir redis-queue
userxx@lsv-u01:~/Downloads/Kubernetes-ctf$ touch deployment.yaml
userxx@lsv-u01:~/Downloads/Kubernetes-ctf$ gedit deployment.yaml
The requirements of the deployment.yaml are as follows:
- This configuration file should define a Deployment, and NOT a Pod
- The name of the Deployment should be redis-queue
- The container should run in your team’s namespace (team1, team2, team3 etc)
- There should be one single redis-queue container running at a time
- Assign a label to the Pod: app: redis-queue
- When defining a container:
- The name of the container should be redis-queue
- The name of the Docker image should be redis (no username required – the image name is literally just redis)
- The container should listen on port 6379 (tip: give a name to this container port)
- Finally, set the Pod’s restart policy to Always
The resulting file is shown below.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: redis-queue
namespace: team4
spec:
replicas: 1
selector:
matchLabels:
app: redis-queue
template:
metadata:
labels:
app: redis-queue
spec:
containers:
- name: redis-queue
image: redis
ports:
- name: redisport
containerPort: 6379
restartPolicy: Always
Let’s now try to deploy redis.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf$ kubectl apply -f deployment.yaml --namespace=team4
deployment.apps/redis-queue created
Capture The Flag
We need to verify whether Redis is running or not.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf$ kubectl get pods -n team4
NAME READY STATUS RESTARTS AGE
redis-queue-769696f9fb-pp6wx 1/1 Running 0 104s
We see that it is running. We now need to review the logs to find information about the mode Redis is running in.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf$ kubectl logs redis-queue-769696f9fb-pp6wx -n team4
1:C 19 Feb 2021 00:29:07.800 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 19 Feb 2021 00:29:07.800 # Redis version=6.0.10, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 19 Feb 2021 00:29:07.800 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
1:M 19 Feb 2021 00:29:07.801 * Running mode=standalone, port=6379.
1:M 19 Feb 2021 00:29:07.801 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 19 Feb 2021 00:29:07.801 # Server initialized
1:M 19 Feb 2021 00:29:07.801 * Ready to accept connections
We see the line Running mode=standalone, port=6379. The mode Redis is running in is standalone. That’s the flag.
DevSlopCTF{standalone}
Deploying Services
In this section, we are introduced to the concept of Services.
Introduction (5 Points)
Services offer a form of load balancing for Pods: a Service is an abstraction that defines how to access a logical set of Pods. It distributes traffic across multiple Pod IP addresses, though it is not as smart as, and does not use the same powerful algorithms as, a dedicated Load Balancer.
Capture The Flag
Let’s confirm that we know what Services are and are ready to start the task.
DevSlopCTF{ok}
Front-end (379 Points)
We need to create a service configuration file based on the following requirements:
- The name of the service should be frontend
- The service should listen to traffic on port 8080
- The service should only be reachable from within the cluster (type: ClusterIP makes the Service only reachable from within the cluster)
For that, let’s copy a sample Service from the official documentation.
userxx@lsv-u01:~$ cd Downloads/Kubernetes-ctf/frontend/
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend$ cd k8s/
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend/k8s$ touch service.yaml
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend/k8s$ gedit service.yaml
Based on the requirements, this is what we have:
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend/k8s$ cat service.yaml
apiVersion: v1
kind: Service
metadata:
name: frontend
spec:
type: ClusterIP
selector:
app: frontend
ports:
- port: 8080
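Note that we did not set targetPort. When it is omitted, Kubernetes defaults it to the same value as port, which works here because the container also listens on 8080. Spelled out explicitly, the ports section would be equivalent to:
  ports:
  - port: 8080        # port the Service exposes inside the cluster
    targetPort: 8080  # port on the backing Pods; defaults to 'port' when omitted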
Let’s deploy the service.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend/k8s$ kubectl apply -f service.yaml --namespace=team4
service/frontend created
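To see which Pod IPs the Service actually routes to, we can also inspect its Endpoints object; the frontend Pod’s IP should show up there once the label selector matches:
$ kubectl get endpoints frontend -n team4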
Capture The Flag
We need to send a request to the frontend on the listening port we set and analyze the output.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend/k8s$ kubectl -n team4 run -i -t curl --image=curlimages/curl --restart=Never --command sh
If you don't see a command prompt, try pressing enter.
/ $ curl frontend:8080
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>frontend</title>
</head>
<body>
<div class="container">
<div id="app"></div>
</div>
/app.js</body>
</html>/ $
The flag is the string that has been redacted in the line [REDACTED]</body>. In this case, it is /app.js.
DevSlopCTF{/app.js}
Auth API (379 Points)
The requirements for this service file are similar to the ones we saw for the frontend.
- The name of the service should be auth-api
- The service should listen to traffic on port 8081
- The service should only be reachable from within the cluster
The resulting contents are:
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/auth-api/k8s$ cp ../../frontend/k8s/service.yaml .
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/auth-api/k8s$ gedit service.yaml
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/auth-api/k8s$ cat service.yaml
apiVersion: v1
kind: Service
metadata:
name: auth-api
spec:
type: ClusterIP
selector:
app: auth-api
ports:
- port: 8081
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/auth-api/k8s$ kubectl apply -f service.yaml --namespace=team4
service/auth-api created
Capture The Flag
We need to send a request to the auth-api on the listening port we set, 8081, and analyze the output.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/auth-api/k8s$ kubectl -n team4 run -i -t curl --image=curlimages/curl --restart=Never --command sh
Error from server (AlreadyExists): pods "curl" already exists
We are getting an error since the curl pod already exists, so we need to use the existing curl pod.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/auth-api/k8s$ kubectl exec -it curl -n team4 -- sh
/ $ curl auth-api:8081
{"message":"Not Found"}/
We are supposed to pay attention to the returned output {"message":"[REDACTED]"}
. We see that the flag is Not Found
.
DevSlopCTF{Not Found}
TODOs API (379 Points)
We have similar service requirements for this API.
- The name of the service should be todos-api
- The service should listen to traffic on port 8082
- The service should only be reachable from within the cluster
The resulting contents are:
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend/k8s$ cd ../../todos-api/k8s/
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/todos-api/k8s$ cp ../../frontend/k8s/service.yaml .
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/todos-api/k8s$ gedit service.yaml
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/todos-api/k8s$ cat service.yaml
apiVersion: v1
kind: Service
metadata:
name: todos-api
spec:
type: ClusterIP
selector:
app: todos-api
ports:
- port: 8082
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/todos-api/k8s$ kubectl apply -f service.yaml --namespace=team4
service/todos-api created
Capture The Flag
Since we already deployed the curl pod, let’s log in to it. We are supposed to issue the command curl todos-api:<PORT> and observe the message {"message":"[REDACTED]"}.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/todos-api/k8s$ kubectl -n team4 run -i -t curl --image=curlimages/curl --restart=Never --command sh
If you don't see a command prompt, try pressing enter.
/ $ curl todos-api:8082
{"message":"invalid token"}/
In this case, the flag is invalid token
.
DevSlopCTF{invalid token}
Users API (379 Points)
We have similar service requirements for this API.
- The name of the service should be users-api
- The service should listen to traffic on port 8083
- The service should only be reachable from within the cluster
The resulting contents are shown below
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/auth-api/k8s$ cd ../../users-api/k8s/
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/users-api/k8s$ cp ../../frontend/k8s/service.yaml .
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/users-api/k8s$ gedit service.yaml
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/users-api/k8s$ cat service.yaml
apiVersion: v1
kind: Service
metadata:
name: users-api
spec:
type: ClusterIP
selector:
app: users-api
ports:
- port: 8083
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/users-api/k8s$ kubectl apply -f service.yaml --namespace=team4
service/users-api created
Capture The Flag
Since we already deployed the curl pod, let’s log in to it.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/users-api/k8s$ kubectl exec -it curl -n team4 -- sh
/ $ curl users-api:8083
{"timestamp":1613753694398,"status":500,"error":"Internal Server Error","exception":"javax.servlet.ServletException","message":"Missing or invalid Authorization header","path":"/"}/ $
We are supposed to keep an eye on this part "exception":"[REDACTED]","message":
. The redacted phrase is javax.servlet.ServletException
. Found the flag.
DevSlopCTF{javax.servlet.ServletException}
Redis (379 Points)
The last Service configuration file to deploy is that of Redis. The requirements are similar to those of the previous tasks.
- The name of the service should be redis-queue
- The service should listen to traffic on port 6379
- The service should only be reachable from within the cluster
The resulting contents are shown below
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/redis-queue$ cp ../frontend/k8s/service.yaml .
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/redis-queue$ gedit service.yaml
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/redis-queue$ cat service.yaml
apiVersion: v1
kind: Service
metadata:
name: redis-queue
spec:
type: ClusterIP
selector:
app: redis-queue
ports:
- port: 6379
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/redis-queue$ kubectl apply -f service.yaml --namespace=team4
service/redis-queue created
Capture The Flag
The same approach applies here: we log in to the curl pod and try to connect to Redis on the specified port.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/redis-queue$ kubectl exec -it curl -n team4 -- sh
/ $ curl redis-queue:6379
curl: (52) Empty reply from server
The flag has been redacted in the statement curl: (52) [REDACTED]. It is Empty reply from server. This makes sense: Redis speaks its own wire protocol (RESP) rather than HTTP, so curl’s HTTP request never gets a valid HTTP response back.
DevSlopCTF{Empty reply from server}
Configuring Ingress For The Front-End
In this section, we will connect the Front-end publicly so that we can load the TODOs application’s interface in our Web Browser.
Introduction (5 Points)
We are introduced to the concepts of Ingress and Ingress Controller. We will need to deploy an Ingress for the Front-end. The Nginx Ingress Controller already sits behind a Load Balancer, so it does not need to be set up.
Capture The Flag
Let’s acknowledge that we have an understanding of Ingress and Ingress Controller by submitting the flag.
DevSlopCTF{ok}
Front-end (5995 Points)
To make the Front-end accessible from outside the cluster, we need to deploy a resource of type Ingress. For the Ingress to be handled by the Nginx Ingress Controller that was already set up, and for it to receive traffic from that Ingress Controller’s Load Balancer, we need to specify the name of the Ingress class, which is nginx.
The requirements for the ingress are as follows:
- The API version should be networking.k8s.io/v1beta1 (apiVersion: networking.k8s.io/v1beta1)
- The name of the resource should be frontend (name: frontend)
- The Ingress class should be set to nginx (kubernetes.io/ingress.class: "nginx")
- There should be a single rule which routes HTTP traffic to the frontend Service on port 8080 when the Host header is set to frontend.<TEAM_NAME>.k8s-ctf.com, where <TEAM_NAME> is the name of your team (host: frontend.team4.k8s-ctf.com)
We can fetch a sample Ingress manifest from here. I also found this helpful. The result is as follows:
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/todos-api/k8s$ cd ../../frontend/k8s/
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend/k8s$ gedit ingress.yaml
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend/k8s$ cat ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: frontend
spec:
ingressClassName: "nginx"
rules:
- host: frontend.team4.k8s-ctf.com
http:
paths:
- backend:
serviceName: frontend
servicePort: 8080
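A small aside: the requirements phrase the class as the kubernetes.io/ingress.class annotation, while the file above uses the newer spec.ingressClassName field; depending on the controller version, either form is honored. The annotation variant would look like:
metadata:
  name: frontend
  annotations:
    kubernetes.io/ingress.class: "nginx"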
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend/k8s$ kubectl apply -f ingress.yaml --namespace=team4
ingress.networking.k8s.io/frontend created
Capture The Flag
To verify whether the Ingress was properly set up, we need to see if the address of the Ingress Controller’s Load Balancer has been associated with the Front-end’s Ingress resource. We can validate that using the command shown.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend/k8s$ kubectl get ingress -n team4
NAME CLASS HOSTS ADDRESS PORTS AGE
frontend nginx frontend.team4.k8s-ctf.com a325b2169c2404f9fa91db7f9d740dab-0e5a75cb385f2a92.elb.us-east-1.amazonaws.com 80 28m
In this case, a325b2169c2404f9fa91db7f9d740dab-0e5a75cb385f2a92.elb.us-east-1.amazonaws.com
is the Nginx Ingress Controller’s Load Balancer address – which now also serves traffic to the Front-end Service.
Trying to navigate to frontend.team4.k8s-ctf.com in the browser does not work. For it to work, we need to add this hostname to the hosts file on the local machine. We can follow this guide to do that. First, let’s look up the IP address of the Load Balancer.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend/k8s$ nslookup a325b2169c2404f9fa91db7f9d740dab-0e5a75cb385f2a92.elb.us-east-1.amazonaws.com
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
Name: a325b2169c2404f9fa91db7f9d740dab-0e5a75cb385f2a92.elb.us-east-1.amazonaws.com
Address: 54.152.30.156
Now that we have the IP address, we can add a mapping to the host file.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend/k8s$ sudo gedit /etc/hosts
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend/k8s$ cat /etc/hosts | grep -i frontend
54.152.30.156 frontend.team4.k8s-ctf.com
We can now try and access the portal http://frontend.team4.k8s-ctf.com
again.

We are able to login using username:johnd
and password:foo
.

Now that we are logged in, we need to find out the size of the payload returned by the TODOs API that contains all the TODO items, using the browser’s developer tools. This can be seen in the Network tab. We refresh the page showing the TODO items and observe the API calls being made; one of them is to /todos. To get the flag, we need to analyze that call and find the number of bytes of the payload that contains all the TODO items.

We see that the size of the response to the GET request is 122 B. This is the number of bytes of the payload that contains all the TODO items, and it is our flag.
DevSlopCTF{122}
Securing the Cluster with Network Policies
This section deals with restricting access to the services within the cluster using NetworkPolicy. Network Policies allow you to control the traffic flow to (ingress) and from (egress) a Pod. We need to write network policies for each of the services.
Frontend
The requirements for the Front-end are as follows:
- Use the API version networking.k8s.io/v1
- The Network Policy should be called frontend
- The Policy should:
- ALLOW ingress from all IPs on the Internet
- ALLOW egress to:
- Auth API on port 8081
- TODOs API on port 8082
Let’s create the YAML and fill it in based on the guidance from the official documentation. Note that the empty ingress rule ({}) in the file below matches traffic from all sources, which is how we allow ingress from all IPs on the Internet.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend/k8s$ touch networkpolicy.yaml
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend/k8s$ gedit networkpolicy.yaml
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend/k8s$ cat networkpolicy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: frontend
namespace: team4
spec:
podSelector:
matchLabels:
app: frontend
policyTypes:
- Ingress
- Egress
ingress:
- {}
egress:
- to:
- podSelector:
matchLabels:
app: auth-api
ports:
- protocol: TCP
port: 8081
- to:
- podSelector:
matchLabels:
app: todos-api
ports:
- protocol: TCP
port: 8082
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend/k8s$ kubectl apply -f networkpolicy.yaml --namespace=team4
networkpolicy.networking.k8s.io/frontend created
Auth API
The requirements for the Auth API NetworkPolicy are as follows:
- Use the API version networking.k8s.io/v1
- The Network Policy should be called auth-api
- The Policy should:
- ALLOW ingress on port 8081 from the Front-end
- ALLOW egress to the Users API on port 8083
We can simply copy the previous NetworkPolicy and edit accordingly.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/frontend/k8s$ cd ../../auth-api/k8s/
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/auth-api/k8s$ cp ../../frontend/k8s/networkpolicy.yaml .
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/auth-api/k8s$ gedit networkpolicy.yaml
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/auth-api/k8s$ cat networkpolicy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: auth-api
namespace: team4
spec:
podSelector:
matchLabels:
app: auth-api
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector:
matchLabels:
app: frontend
ports:
- protocol: TCP
port: 8081
egress:
- to:
- podSelector:
matchLabels:
app: users-api
ports:
- protocol: TCP
port: 8083
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/auth-api/k8s$ kubectl apply -f networkpolicy.yaml --namespace=team4
networkpolicy.networking.k8s.io/auth-api created
TODOs API
In the same way, we need to create a NetworkPolicy for the TODOs API that fulfills the following requirements.
- Use the API version networking.k8s.io/v1
- The Network Policy should be called todos-api
- The Policy should:
- ALLOW ingress on port 8082 from the Front-end
- ALLOW egress to Redis on port 6379
The results are as follows:
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/auth-api/k8s$ cd ../../todos-api/k8s/
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/todos-api/k8s$ cp ../../auth-api/k8s/networkpolicy.yaml .
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/todos-api/k8s$ gedit networkpolicy.yaml
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/todos-api/k8s$ cat networkpolicy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: todos-api
namespace: team4
spec:
podSelector:
matchLabels:
app: todos-api
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector:
matchLabels:
app: frontend
ports:
- protocol: TCP
port: 8082
egress:
- to:
- podSelector:
matchLabels:
app: redis-queue
ports:
- protocol: TCP
port: 6379
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/todos-api/k8s$ kubectl apply -f networkpolicy.yaml --namespace=team4
networkpolicy.networking.k8s.io/todos-api created
Users API
The Network Policy file for the Users API has the following requirements:
- Use the API version networking.k8s.io/v1
- The Network Policy should be called users-api
- The Policy should:
- ALLOW ingress on port 8083 from Auth API
- DENY egress (i.e., the Users API cannot start talking to any other Pod in the cluster)
The results are shown below.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/todos-api/k8s$ cd ../../users-api/k8s/
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/users-api/k8s$ cp ../../auth-api/k8s/networkpolicy.yaml .
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/users-api/k8s$ gedit networkpolicy.yaml
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/users-api/k8s$ cat networkpolicy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: users-api
namespace: team4
spec:
podSelector:
matchLabels:
app: users-api
policyTypes:
- Ingress
ingress:
- from:
- podSelector:
matchLabels:
app: auth-api
ports:
- protocol: TCP
port: 8083
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/users-api/k8s$ kubectl apply -f networkpolicy.yaml --namespace=team4
networkpolicy.networking.k8s.io/users-api created
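One subtlety worth flagging: a NetworkPolicy only isolates the directions listed under policyTypes. As written, this policy declares Ingress only, so egress from the Users API is in fact left unrestricted rather than denied. To truly deny egress, Egress must be listed with no egress rules, as in this sketch (the same consideration applies to the DENY-ingress requirement for the Log Message Processor and the DENY-egress requirement for Redis below):
  policyTypes:
  - Ingress
  - Egress   # declared with no matching egress rules, so all egress is denied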
Log Message Processor
We need to write a Network Policy for the Log Message Processor following the requirements below:
- Use the API version networking.k8s.io/v1
- The Network Policy should be called log-message-processor
- The Policy should:
- DENY ingress (i.e., no Pod in the cluster can communicate with the Log Message Processor)
- ALLOW egress to Redis on port 6379
The result is shown below and then deployed.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/users-api/k8s$ cd ../../log-message-processor/k8s/
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/log-message-processor/k8s$ cp ../../auth-api/k8s/networkpolicy.yaml .
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/log-message-processor/k8s$ gedit networkpolicy.yaml
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/log-message-processor/k8s$ cat networkpolicy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: log-message-processor
  namespace: team4
spec:
  podSelector:
    matchLabels:
      app: log-message-processor
  policyTypes:
  - Ingress
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: redis-queue
    ports:
    - protocol: TCP
      port: 6379
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/log-message-processor/k8s$ kubectl apply -f networkpolicy.yaml --namespace=team4
networkpolicy.networking.k8s.io/log-message-processor created
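To convince ourselves the egress rule behaves as intended, we can probe Redis from inside the pod. A rough check, with two assumptions worth flagging: the pod name below is a placeholder (find the real one with kubectl get pods -n team4), and the image must ship a netcat build that supports -z, which not all minimal images do:
# -z only tests that a TCP connection to redis-queue:6379 can be opened.
kubectl exec -it <log-message-processor-pod> -n team4 -- nc -zv redis-queue 6379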
Redis
Our last Network Policy is the one for Redis, whose requirements are:
- Use the API version networking.k8s.io/v1
- The Network Policy should be called redis-queue
- The Policy should:
- ALLOW ingress on port 6379 from the TODOs API and the Log Message Processor
- DENY egress (i.e., Redis cannot start talking to any other Pod in the cluster)
We shall copy and edit a previous NetworkPolicy based on the above requirements.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/users-api/k8s$ cd ../../redis-queue/
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/redis-queue$ cp ../auth-api/k8s/networkpolicy.yaml .
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/redis-queue$ gedit networkpolicy.yaml
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/redis-queue$ cat networkpolicy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: redis-queue
  namespace: team4
spec:
  podSelector:
    matchLabels:
      app: redis-queue
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: todos-api
    - podSelector:
        matchLabels:
          app: log-message-processor
    ports:
    - protocol: TCP
      port: 6379
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/redis-queue$ kubectl apply -f networkpolicy.yaml --namespace=team4
networkpolicy.networking.k8s.io/redis-queue created
Now that we have finished all services, let’s double-check that we have the correct NetworkPolicies in place.
userxx@lsv-u01:~$ kubectl get networkpolicy -n team4
NAME                    POD-SELECTOR                AGE
auth-api                app=auth-api                12h
frontend                app=frontend                11h
log-message-processor   app=log-message-processor   11h
redis-queue             app=redis-queue             11h
todos-api               app=todos-api               12h
users-api               app=users-api               11h
Capture The Flag
We are supposed to browse to http://frontend.team4.k8s-ctf.com/ and check that everything is OK. The network access seems broken: there is one remaining Egress rule that needs to be deployed before the application fully works. Using the architectural diagram and the NetworkPolicies we created in this task, let's map the traffic paths to visualize where the error might lie.

Everything looks to be in order. Let's double-check connectivity between the Auth API and the Users API.
badmin@lsv-u01:~$ kubectl exec -it auth-api-5997f77ff6-b5zmw -n team4 -- sh
/go/src/app # curl users-api:8083
curl: (6) Could not resolve host: users-api
Hm, that does not work: Could not resolve host, even though we have the policy in place. Perhaps an issue with DNS? When we review the NetworkPolicies we created, they were all port-specific: no policy allows DNS lookups, so the Auth API cannot even resolve the users-api hostname. Makes sense now. The task does hint at creating an egress rule to get things working. We could create a DNS policy for each of the microservices, or simply create a single one covering them all. We are lazy and tired at this point, so let's create one for all. I will allow port 53; DNS normally runs over UDP with TCP as a fallback, so the policy allows both protocols to be safe.
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/general$ touch dns.yaml
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/general$ gedit dns.yaml
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/general$ cat dns.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
userxx@lsv-u01:~/Downloads/Kubernetes-ctf/general$ kubectl apply -f dns.yaml --namespace=team4
networkpolicy.networking.k8s.io/allow-dns created
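Before retrying the application call, we can confirm the DNS fix in isolation by resolving the service name from inside the pod; this assumes the image bundles nslookup (BusyBox- and Alpine-based images usually do):
# Resolve the users-api Service name from inside the auth-api pod.
kubectl exec -it auth-api-5997f77ff6-b5zmw -n team4 -- nslookup users-api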
Now that we have that added, let us test the communication once more.
badmin@lsv-u01:~$ kubectl exec -it auth-api-5997f77ff6-b5zmw -n team4 -- sh
/go/src/app # curl users-api:8083
{"timestamp":1613895020705,"status":500,"error":"Internal Server Error","exception":"javax.servlet.ServletException","message":"Missing or invalid Authorization header","path":"/"}/go/src/app #
Great, that works now! The 500 response is actually expected: Missing or invalid Authorization header means the request reached the Users API and was rejected at the application layer, not the network layer. Let's go back to the browser and confirm that everything works end to end.

Now, to get the flag: the remaining Egress rule has a port number associated with it, in this case 53. We take that port number and multiply it by 20022021, which gives us 1061167113. We got our final flag!
DevSlopCTF{1061167113}
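As a quick sanity check of the arithmetic, any POSIX shell can reproduce the multiplication:
echo $((53 * 20022021))   # prints 1061167113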
Validation
Now that we are done, we need to hand in our work for validation. The challenge congratulates us and notes that, to confirm all the steps were completed as intended, the organizers will validate what we built. All we have to do is submit the flag DevSlopCTF{Gr34tJ0b} to request evaluation.
Woohoo! We made it! Hope you enjoyed this CTF as much as I did. See you in the next :).