:PROPERTIES:
:AUTHOR: Ian S. Pringle
:TYPE: slip
:ID: 975b63b1-d9a0-420f-8494-469c96e6f4b2
:END:
About
Jobs
Resources
Install & Setup
docker-compose test suite
version: '3'

services:
  concourse-db:
    image: postgres
    environment:
      POSTGRES_DB: concourse
      POSTGRES_PASSWORD: concourse_pass
      POSTGRES_USER: concourse_user
      PGDATA: /database

  concourse:
    image: concourse/concourse
    command: quickstart
    privileged: true
    depends_on: [concourse-db]
    ports: ["8080:8080"]
    environment:
      CONCOURSE_POSTGRES_HOST: concourse-db
      CONCOURSE_POSTGRES_USER: concourse_user
      CONCOURSE_POSTGRES_PASSWORD: concourse_pass
      CONCOURSE_POSTGRES_DATABASE: concourse
      CONCOURSE_EXTERNAL_URL: http://localhost:8080
      CONCOURSE_ADD_LOCAL_USER: test:test
      CONCOURSE_MAIN_TEAM_LOCAL_USER: test
      CONCOURSE_WORKER_BAGGAGECLAIM_DRIVER: overlay
      CONCOURSE_CLIENT_SECRET: Y29uY291cnNlLXdlYgo=
      CONCOURSE_TSA_CLIENT_SECRET: Y29uY291cnNlLXdvcmtlcgo=
      CONCOURSE_X_FRAME_OPTIONS: allow
      CONCOURSE_CONTENT_SECURITY_POLICY: "*"
      CONCOURSE_CLUSTER_NAME: test
      CONCOURSE_WORKER_CONTAINERD_DNS_SERVER: "8.8.8.8"
      CONCOURSE_WORKER_RUNTIME: "containerd"
docker-compose up -d
docker-compose down
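Once both containers are up, the Concourse web UI should be reachable at http://localhost:8080, and you can sign in with the local test/test user configured above.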
fly-cli
Install
macOS
brew install --cask fly
Linux
Windows
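On Linux and Windows the usual route is to download the fly binary rather than install a package: either from the download links in the Concourse web UI or from the web node's CLI endpoint. A minimal sketch, assuming the quickstart instance above is listening on localhost:8080 (swap platform=windows and save as fly.exe for a Windows binary):

curl -L 'http://localhost:8080/api/v1/cli?arch=amd64&platform=linux' -o fly
chmod +x fly
sudo mv fly /usr/local/bin/fly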
Login
fly -t test login -c http://localhost:8080 -u test -p test
Usage
Hello world
Create the pipeline
First we start with the top-level key of jobs, which is an unordered list of jobs.
jobs:
We now specify the name of our first job.
- name: hello-world-job
And then we add the plan keyword. This is an ordered list of steps.
plan:
A step is the smallest unit of work in a Concourse pipeline. Each step runs in its own container: if you have four steps, Concourse will spin up four different containers, one to execute each step. A step can be one of a few possible keywords (a small sketch of how they compose follows the list):
- get: retrieves a Resource
- put: stores a Resource
- task: executes a task, which can be thought of as a function (ideally a pure function)
- set_pipeline: configures a pipeline, which is useful for updating a pipeline's Resources
- load_var: sets a variable scoped to the rest of the pipeline, allowing it to be accessed anywhere else within the pipeline
- in_parallel: runs steps in parallel
- do: runs steps in serial, which is useful for handling failures
- try: runs a single step and treats it as a success, regardless of the actual result
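A minimal sketch of how these keywords compose inside a plan; the resource names (source-code, base-image, release-bucket) and the task file path are hypothetical and would need to exist in a real pipeline:

plan:
- in_parallel:      # fetch both resources at the same time
  - get: source-code
  - get: base-image
- task: run-tests   # task config loaded from a file inside the fetched resource
  file: source-code/ci/test.yml
- put: release-bucket   # push the result to another resource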
For our purposes we'll use the task step to run a container which will echo out Hello, world!. Something to note about not only task but most/all steps is that they can either be defined inline or in another file entirely. This might be useful for programmatically determining which step to run based on a Resource.
- task: hello-world-task
In our example we will specify the config inline, though we could just as easily specify a file. Regardless of whether we write the config inline or source it from a file, all task configurations look the same: the platform to run on (windows, linux, or darwin), an image_resource type, the run command, and any number of optional keywords:
config:
  platform: linux
  image_resource:
    type: registry-image
    source:
      repository: busybox
  run:
    path: echo
    args: ["Hello, world!"]
The above is fairly straightforward: run the busybox image on a Linux worker with the command echo "Hello, world!".
<<jobs-keyword>>
<<hello-world-job>>
<<plan-keyword>>
<<hello-world-task>>
<<hello-world-config-1>>
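Tangled together, the noweb blocks above produce the full pipeline file. Assuming it is saved as hello-world.yml (matching the set-pipeline command below), it would look roughly like this:

jobs:
- name: hello-world-job
  plan:
  - task: hello-world-task
    config:
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: busybox
      run:
        path: echo
        args: ["Hello, world!"]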
Run the pipeline
Getting a pipeline into Concourse and running it takes three steps:
Set the pipeline
fly -t test set-pipeline -p hello-world -c hello-world.yml
no changes to apply
Unpause the pipeline
fly -t test unpause-pipeline -p hello-world
unpaused 'hello-world'
Run the pipeline
fly -t test trigger-job --job hello-world/hello-world-job --watch
started hello-world/hello-world-job #8

initializing
selected worker: 666c2e56e7a0
selected worker: 666c2e56e7a0
selected worker: 666c2e56e7a0
running echo Hello, world!
Hello, world!
succeeded
Outputs and Inputs
Let's take the above example and add an output. In Concourse, when a task that defines an output is run, Concourse creates a directory in the task's rootfs with the name specified in the output field. We'll add an outputs keyword to the config and change our run command to echo a message into that output directory:
config:
  platform: linux
  image_resource:
    type: registry-image
    source:
      repository: busybox
  outputs:
  - name: storage
  run:
    path: sh
    args:
    - -cx
    - |
      echo "Hello, from the hello-world-task!" > storage/msg
When we declare the above output, Concourse creates a new "artifact" with the name specified and mounts that artifact into the container. These artifacts can be thought of as mounts.
And then we can add a new task which takes the above task's output as its input:
- task: receiver-task
  config:
    platform: linux
    image_resource:
      type: registry-image
      source:
        repository: busybox
    inputs:
    - name: storage
    run:
      path: cat
      args: ["storage/msg"]
If an input's artifact does not exist, the task will fail.
<<jobs-keyword>>
<<hello-world-job>>
<<plan-keyword>>
<<hello-world-task>>
<<hello-world-config-2>>
<<receiver-task>>
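Assembled, the updated pipeline file (saved here as hello-receiver.yml, to match the set-pipeline command below) would look roughly like this:

jobs:
- name: hello-world-job
  plan:
  - task: hello-world-task
    config:
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: busybox
      outputs:
      - name: storage
      run:
        path: sh
        args:
        - -cx
        - |
          echo "Hello, from the hello-world-task!" > storage/msg
  - task: receiver-task
    config:
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: busybox
      inputs:
      - name: storage
      run:
        path: cat
        args: ["storage/msg"]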
Run the pipeline
Update the pipeline in Concourse and run the new pipeline:
fly -t test set-pipeline -p hello-world -c hello-receiver.yml
fly -t test trigger-job --job hello-world/hello-world-job --watch
Let’s have some fun!
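The pipeline below computes a Fibonacci number. The init task picks a random n between 1 and 10 and seeds four state files in a fib output (n, a counter i, and the current and next Fibonacci values). The run task checks whether i has reached n; if not, it advances the state one step and deliberately exits 1. Because the step's on_failure handler re-runs the same logic with attempts: 100, and because fib is declared as both an input and an output, the state carries over from attempt to attempt until the counter reaches n and the task prints the result.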
jobs:
- name: fib_n
  plan:
  - task: init
    config:
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: busybox
      outputs:
      - name: fib
      run:
        path: sh
        args:
        - -cx
        - |
          echo $((1 + $RANDOM % 10)) >> fib/n
          echo "1" > fib/i
          echo "1" > fib/curr
          echo "1" > fib/next
  - task: run
    config:
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: busybox
      inputs:
      - name: fib
      outputs:
      - name: fib
      run:
        path: sh
        args:
        - -cx
        - |
          if cmp -s "fib/n" "fib/i"
          then
            cat fib/curr
          else
            curr=$(cat fib/curr)
            next=$(cat fib/next)
            i=$(cat fib/i)
            cat fib/next > fib/curr
            echo $(($curr + $next)) > fib/next
            echo $(($i + 1)) > fib/i
            exit 1
          fi
    on_failure:
      task: run
      attempts: 100
      config:
        platform: linux
        image_resource:
          type: registry-image
          source:
            repository: busybox
        inputs:
        - name: fib
        outputs:
        - name: fib
        run:
          path: sh
          args:
          - -cx
          - |
            if cmp -s "fib/n" "fib/i"
            then
              echo "The $(cat fib/n) Fibonacci number is $(cat fib/curr)."
              exit 0
            else
              curr=$(cat fib/curr)
              next=$(cat fib/next)
              i=$(cat fib/i)
              cat fib/next > fib/curr
              echo $(($curr + $next)) > fib/next
              echo $(($i + 1)) > fib/i
              exit 1
            fi
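As with the earlier pipelines, this one is set, unpaused, and triggered with fly. Assuming the file is saved as fib.yml and the pipeline is named fib (both names are just placeholders):

fly -t test set-pipeline -p fib -c fib.yml
fly -t test unpause-pipeline -p fib
fly -t test trigger-job --job fib/fib_n --watch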