Commit fa85853

feat: add nomad template (coder#9786)
1 parent b742661 commit fa85853

File tree: 8 files changed (+359, -3 lines)

cli/testdata/coder_templates_init_--help.golden

+1-1
@@ -6,7 +6,7 @@ USAGE:
   Get started with a templated template.
 
 OPTIONS:
-      --id aws-ecs-container|aws-linux|aws-windows|azure-linux|do-linux|docker|docker-with-dotfiles|gcp-linux|gcp-vm-container|gcp-windows|kubernetes
+      --id aws-ecs-container|aws-linux|aws-windows|azure-linux|do-linux|docker|docker-with-dotfiles|gcp-linux|gcp-vm-container|gcp-windows|kubernetes|nomad-docker
 
       Specify a given example template by ID.
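The updated help text above advertises `nomad-docker` as a valid value for the enum-style `--id` flag. As a rough, hypothetical sketch (not Coder's actual implementation), the validation such a flag performs looks like:

```python
# Hypothetical sketch of enum-flag validation for `coder templates init --id`;
# the id list is copied from the help text above, not from Coder's source.
ALLOWED_IDS = (
    "aws-ecs-container|aws-linux|aws-windows|azure-linux|do-linux|docker|"
    "docker-with-dotfiles|gcp-linux|gcp-vm-container|gcp-windows|"
    "kubernetes|nomad-docker"
).split("|")

def validate_id(value: str) -> str:
    """Reject ids that are not in the advertised enum."""
    if value not in ALLOWED_IDS:
        raise ValueError(f"--id must be one of: {'|'.join(ALLOWED_IDS)}")
    return value
```

With this commit, `validate_id("nomad-docker")` succeeds alongside the eleven pre-existing template ids.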

docs/cli/templates_init.md

+2-2
Some generated files are not rendered by default.

examples/examples.gen.json

+12
@@ -133,5 +133,17 @@
       "kubernetes"
     ],
     "markdown": "\n# Getting started\n\nThis template creates a deployment running the `codercom/enterprise-base:ubuntu` image.\n\n## Prerequisites\n\nThis template uses [`kubernetes_deployment`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/deployment) terraform resource, which requires the `coder` service account to have permission to create deployments. For example if you are using [helm](https://coder.com/docs/v2/latest/install/kubernetes#install-coder-with-helm) to install Coder, you should set `coder.serviceAccount.enableDeployments=true` in your `values.yaml`\n\n```diff\ncoder:\nserviceAccount:\n workspacePerms: true\n- enableDeployments: false\n+ enableDeployments: true\n annotations: {}\n name: coder\n```\n\n\u003e Note: This is only required for Coder versions \u003c 0.28.0, as this will be the default value for Coder versions \u003e= 0.28.0\n\n## Authentication\n\nThis template can authenticate using in-cluster authentication, or using a kubeconfig local to the\nCoder host. For additional authentication options, consult the [Kubernetes provider\ndocumentation](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs).\n\n### kubeconfig on Coder host\n\nIf the Coder host has a local `~/.kube/config`, you can use this to authenticate\nwith Coder. Make sure this is done with same user that's running the `coder` service.\n\nTo use this authentication, set the parameter `use_kubeconfig` to true.\n\n### In-cluster authentication\n\nIf the Coder host runs in a Pod on the same Kubernetes cluster as you are creating workspaces in,\nyou can use in-cluster authentication.\n\nTo use this authentication, set the parameter `use_kubeconfig` to false.\n\nThe Terraform provisioner will automatically use the service account associated with the pod to\nauthenticate to Kubernetes. Be sure to bind a [role with appropriate permission](#rbac) to the\nservice account. For example, assuming the Coder host runs in the same namespace as you intend\nto create workspaces:\n\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: coder\n\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n name: coder\nsubjects:\n - kind: ServiceAccount\n name: coder\nroleRef:\n kind: Role\n name: coder\n apiGroup: rbac.authorization.k8s.io\n```\n\nThen start the Coder host with `serviceAccountName: coder` in the pod spec.\n\n### Authenticate against external clusters\n\nYou may want to deploy workspaces on a cluster outside of the Coder control plane. Refer to the [Coder docs](https://coder.com/docs/v2/latest/platforms/kubernetes/additional-clusters) to learn how to modify your template to authenticate against external clusters.\n\n## Namespace\n\nThe target namespace in which the deployment will be deployed is defined via the `coder_workspace`\nvariable. The namespace must exist prior to creating workspaces.\n\n## Persistence\n\nThe `/home/coder` directory in this example is persisted via the attached PersistentVolumeClaim.\nAny data saved outside of this directory will be wiped when the workspace stops.\n\nSince most binary installations and environment configurations live outside of\nthe `/home` directory, we suggest including these in the `startup_script` argument\nof the `coder_agent` resource block, which will run each time the workspace starts up.\n\nFor example, when installing the `aws` CLI, the install script will place the\n`aws` binary in `/usr/local/bin/aws`. To ensure the `aws` CLI is persisted across\nworkspace starts/stops, include the following code in the `coder_agent` resource\nblock of your workspace template:\n\n```terraform\nresource \"coder_agent\" \"main\" {\n startup_script = \u003c\u003c-EOT\n set -e\n # install AWS CLI\n curl \"https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip\" -o \"awscliv2.zip\"\n unzip awscliv2.zip\n sudo ./aws/install\n EOT\n}\n```\n\n## code-server\n\n`code-server` is installed via the `startup_script` argument in the `coder_agent`\nresource block. The `coder_app` resource is defined to access `code-server` through\nthe dashboard UI over `localhost:13337`.\n\n## Deployment logs\n\nTo stream kubernetes pods events from the deployment, you can use Coder's [`coder-logstream-kube`](https://github.com/coder/coder-logstream-kube) tool. This can stream logs from the deployment to Coder's workspace startup logs. You just need to install the `coder-logstream-kube` helm chart on the cluster where the deployment is running.\n\n```shell\nhelm repo add coder-logstream-kube https://helm.coder.com/logstream-kube\nhelm install coder-logstream-kube coder-logstream-kube/coder-logstream-kube \\\n --namespace coder \\\n --set url=\u003cyour-coder-url-including-http-or-https\u003e\n```\n\nFor detailed instructions, see [Deployment logs](https://coder.com/docs/v2/latest/platforms/kubernetes/deployment-logs)\n"
+  },
+  {
+    "id": "nomad-docker",
+    "url": "",
+    "name": "Develop in a Nomad Docker Container",
+    "description": "Get started with Nomad Workspaces.",
+    "icon": "/icon/nomad.svg",
+    "tags": [
+      "cloud",
+      "nomad"
+    ],
+    "markdown": "\n# Develop in a Nomad Docker Container\n\nThis example shows how to use Nomad service tasks as development environments, using Docker and host CSI volumes.\n\n## Prerequisites\n\n- [Nomad](https://www.nomadproject.io/downloads)\n- [Docker](https://docs.docker.com/get-docker/)\n\n## Setup\n\n### 1. Start the CSI Host Volume Plugin\n\nThe CSI Host Volume plugin is used to mount host volumes into Nomad tasks. This is useful for development environments where you want to mount persistent volumes into your container workspace.\n\n1. Log in to the Nomad server using SSH.\n\n2. Append the following stanza to your Nomad server configuration file and restart the nomad service.\n\n ```hcl\n plugin \"docker\" {\n config {\n allow_privileged = true\n }\n }\n ```\n\n ```shell\n sudo systemctl restart nomad\n ```\n\n3. Create a file `hostpath.nomad` with the following content:\n\n ```hcl\n job \"hostpath-csi-plugin\" {\n datacenters = [\"dc1\"]\n type = \"system\"\n\n group \"csi\" {\n task \"plugin\" {\n driver = \"docker\"\n\n config {\n image = \"registry.k8s.io/sig-storage/hostpathplugin:v1.10.0\"\n\n args = [\n \"--drivername=csi-hostpath\",\n \"--v=5\",\n \"--endpoint=${CSI_ENDPOINT}\",\n \"--nodeid=node-${NOMAD_ALLOC_INDEX}\",\n ]\n\n privileged = true\n }\n\n csi_plugin {\n id = \"hostpath\"\n type = \"monolith\"\n mount_dir = \"/csi\"\n }\n\n resources {\n cpu = 256\n memory = 128\n }\n }\n }\n }\n ```\n\n4. Run the job:\n\n ```shell\n nomad job run hostpath.nomad\n ```\n\n### 2. Set up the Nomad Template\n\n1. Create the template by running the following commands:\n\n ```shell\n coder template init nomad-docker\n cd nomad-docker\n coder template create\n ```\n\n2. Set up the Nomad server address and optional authentication.\n\n3. Create a new workspace and start developing.\n"
   }
 ]
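`examples.gen.json` is the machine-readable index that backs template selection by id. A minimal sketch of that lookup, over a trimmed, hypothetical slice of the file (only the fields used here), could be:

```python
import json

# Trimmed, hypothetical slice of examples.gen.json -- real entries carry
# url/description/icon/markdown fields as well.
EXAMPLES_JSON = """
[
  {"id": "kubernetes", "name": "Develop in Kubernetes", "tags": ["kubernetes"]},
  {"id": "nomad-docker", "name": "Develop in a Nomad Docker Container",
   "tags": ["cloud", "nomad"]}
]
"""

def find_example(examples, example_id):
    """Return the first entry whose id matches, else raise with the valid ids."""
    for entry in examples:
        if entry["id"] == example_id:
            return entry
    valid = "|".join(e["id"] for e in examples)
    raise KeyError(f"unknown template id {example_id!r}; expected one of: {valid}")

examples = json.loads(EXAMPLES_JSON)
nomad = find_example(examples, "nomad-docker")
```

After this commit, a lookup for `"nomad-docker"` resolves to the new entry instead of failing.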

examples/examples.go

+1
@@ -34,6 +34,7 @@ var (
 	//go:embed templates/gcp-vm-container
 	//go:embed templates/gcp-windows
 	//go:embed templates/kubernetes
+	//go:embed templates/nomad-docker
 	files embed.FS
 
 	exampleBasePath = "https://github.com/coder/coder/tree/main/examples/templates/"
examples/templates/nomad-docker/README.md

+96
@@ -0,0 +1,96 @@
---
name: Develop in a Nomad Docker Container
description: Get started with Nomad Workspaces.
tags: [cloud, nomad]
icon: /icon/nomad.svg
---

# Develop in a Nomad Docker Container

This example shows how to use Nomad service tasks as development environments, using Docker and host CSI volumes.

## Prerequisites

- [Nomad](https://www.nomadproject.io/downloads)
- [Docker](https://docs.docker.com/get-docker/)

## Setup

### 1. Start the CSI Host Volume Plugin

The CSI Host Volume plugin is used to mount host volumes into Nomad tasks. This is useful for development environments where you want to mount persistent volumes into your container workspace.

1. Log in to the Nomad server using SSH.

2. Append the following stanza to your Nomad server configuration file and restart the nomad service.

   ```hcl
   plugin "docker" {
     config {
       allow_privileged = true
     }
   }
   ```

   ```shell
   sudo systemctl restart nomad
   ```

3. Create a file `hostpath.nomad` with the following content:

   ```hcl
   job "hostpath-csi-plugin" {
     datacenters = ["dc1"]
     type        = "system"

     group "csi" {
       task "plugin" {
         driver = "docker"

         config {
           image = "registry.k8s.io/sig-storage/hostpathplugin:v1.10.0"

           args = [
             "--drivername=csi-hostpath",
             "--v=5",
             "--endpoint=${CSI_ENDPOINT}",
             "--nodeid=node-${NOMAD_ALLOC_INDEX}",
           ]

           privileged = true
         }

         csi_plugin {
           id        = "hostpath"
           type      = "monolith"
           mount_dir = "/csi"
         }

         resources {
           cpu    = 256
           memory = 128
         }
       }
     }
   }
   ```

4. Run the job:

   ```shell
   nomad job run hostpath.nomad
   ```

### 2. Set up the Nomad Template

1. Create the template by running the following commands:

   ```shell
   coder template init nomad-docker
   cd nomad-docker
   coder template create
   ```

2. Set up the Nomad server address and optional authentication.

3. Create a new workspace and start developing.
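Step 1 above ends with waiting for the hostpath plugin to come up (the template's `nomad_plugin` data source does the same with `wait_for_healthy = true`). As a hedged sketch, a manual probe could be built against Nomad's CSI plugin read endpoint (`GET /v1/plugin/csi/:plugin_id`); the health-counter field names here are assumptions about that response shape:

```python
import json

def plugin_status_url(nomad_addr: str, plugin_id: str) -> str:
    # Nomad's CSI plugin read endpoint: GET /v1/plugin/csi/:plugin_id.
    return f"{nomad_addr.rstrip('/')}/v1/plugin/csi/{plugin_id}"

def is_plugin_ready(plugin: dict) -> bool:
    # For a "monolith" plugin, one healthy node instance is enough to mount
    # volumes; "NodesHealthy" is assumed from Nomad's plugin read response.
    return plugin.get("NodesHealthy", 0) >= 1

url = plugin_status_url("http://localhost:4646", "hostpath")
# Sample payload standing in for a real HTTP response body.
sample = json.loads('{"ID": "hostpath", "NodesHealthy": 1, "ControllersHealthy": 1}')
```

In practice you would fetch `url` (e.g. with `curl`) and poll until the plugin reports healthy instances, which is what Terraform's `wait_for_healthy` automates.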
examples/templates/nomad-docker/main.tf

+192
@@ -0,0 +1,192 @@
terraform {
  required_providers {
    coder = {
      source = "coder/coder"
    }
    nomad = {
      source = "hashicorp/nomad"
    }
  }
}

variable "nomad_provider_address" {
  type        = string
  description = "Nomad provider address, e.g. http://IP:PORT"
  default     = "http://localhost:4646"
}

variable "nomad_provider_http_auth" {
  type        = string
  description = "Nomad provider http_auth in the form of `user:password`"
  sensitive   = true
  default     = ""
}

provider "coder" {}

provider "nomad" {
  address   = var.nomad_provider_address
  http_auth = var.nomad_provider_http_auth == "" ? null : var.nomad_provider_http_auth
}

data "coder_parameter" "cpu" {
  name         = "cpu"
  display_name = "CPU"
  description  = "The number of CPU cores"
  default      = "1"
  icon         = "/icon/memory.svg"
  mutable      = true
  option {
    name  = "1 Core"
    value = "1"
  }
  option {
    name  = "2 Cores"
    value = "2"
  }
  option {
    name  = "3 Cores"
    value = "3"
  }
  option {
    name  = "4 Cores"
    value = "4"
  }
}

data "coder_parameter" "memory" {
  name         = "memory"
  display_name = "Memory"
  description  = "The amount of memory in GB"
  default      = "2"
  icon         = "/icon/memory.svg"
  mutable      = true
  option {
    name  = "2 GB"
    value = "2"
  }
  option {
    name  = "4 GB"
    value = "4"
  }
  option {
    name  = "6 GB"
    value = "6"
  }
  option {
    name  = "8 GB"
    value = "8"
  }
}

data "coder_workspace" "me" {}

resource "coder_agent" "main" {
  os                     = "linux"
  arch                   = "amd64"
  startup_script_timeout = 180
  startup_script         = <<-EOT
    set -e
    # install and start code-server
    curl -fsSL https://code-server.dev/install.sh | sh -s -- --method=standalone --prefix=/tmp/code-server
    /tmp/code-server/bin/code-server --auth none --port 13337 >/tmp/code-server.log 2>&1 &
  EOT

  metadata {
    display_name = "Load Average (Host)"
    key          = "load_host"
    # get load avg scaled by number of cores
    script = <<EOT
      echo "`cat /proc/loadavg | awk '{ print $1 }'` `nproc`" | awk '{ printf "%0.2f", $1/$2 }'
    EOT
    interval = 60
    timeout  = 1
  }
}

# code-server
resource "coder_app" "code-server" {
  agent_id     = coder_agent.main.id
  slug         = "code-server"
  display_name = "code-server"
  icon         = "/icon/code.svg"
  url          = "http://localhost:13337?folder=/home/coder"
  subdomain    = false
  share        = "owner"

  healthcheck {
    url       = "http://localhost:13337/healthz"
    interval  = 3
    threshold = 10
  }
}

locals {
  workspace_tag    = "coder-${data.coder_workspace.me.owner}-${data.coder_workspace.me.name}"
  home_volume_name = "coder_${data.coder_workspace.me.id}_home"
}

resource "nomad_namespace" "coder_workspace" {
  name        = local.workspace_tag
  description = "Coder workspace"
  meta = {
    owner = data.coder_workspace.me.owner
  }
}

data "nomad_plugin" "hostpath" {
  plugin_id        = "hostpath"
  wait_for_healthy = true
}

resource "nomad_csi_volume" "home_volume" {
  depends_on = [data.nomad_plugin.hostpath]

  lifecycle {
    ignore_changes = all
  }
  plugin_id = "hostpath"
  volume_id = local.home_volume_name
  name      = local.home_volume_name
  namespace = nomad_namespace.coder_workspace.name

  capability {
    access_mode     = "single-node-writer"
    attachment_mode = "file-system"
  }

  mount_options {
    fs_type = "ext4"
  }
}

resource "nomad_job" "workspace" {
  count      = data.coder_workspace.me.start_count
  depends_on = [nomad_csi_volume.home_volume]
  jobspec = templatefile("${path.module}/workspace.nomad.tpl", {
    coder_workspace_owner = data.coder_workspace.me.owner
    coder_workspace_name  = data.coder_workspace.me.name
    workspace_tag         = local.workspace_tag
    cores                 = tonumber(data.coder_parameter.cpu.value)
    memory_mb             = tonumber(data.coder_parameter.memory.value) * 1024
    coder_init_script     = coder_agent.main.init_script
    coder_agent_token     = coder_agent.main.token
    workspace_name        = data.coder_workspace.me.name
    home_volume_name      = local.home_volume_name
  })
  deregister_on_destroy = true
  purge_on_destroy      = true
}

resource "coder_metadata" "workspace_info" {
  count       = data.coder_workspace.me.start_count
  resource_id = nomad_job.workspace[0].id
  item {
    key   = "CPU (Cores)"
    value = data.coder_parameter.cpu.value
  }
  item {
    key   = "Memory (GB)"
    value = data.coder_parameter.memory.value
  }
}
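The `locals` block and the `templatefile` arguments do a little string and unit arithmetic before the jobspec is rendered. Mirrored in Python for clarity (function names are ours; each mirrors the Terraform expression named in its comment):

```python
def workspace_tag(owner: str, name: str) -> str:
    # Mirrors local.workspace_tag = "coder-${owner}-${name}"; this string also
    # doubles as the per-workspace Nomad namespace name.
    return f"coder-{owner}-{name}"

def home_volume_name(workspace_id: str) -> str:
    # Mirrors local.home_volume_name = "coder_${id}_home".
    return f"coder_{workspace_id}_home"

def memory_mb(memory_param: str) -> int:
    # Mirrors memory_mb = tonumber(data.coder_parameter.memory.value) * 1024:
    # the parameter is a string number of GB; the Nomad jobspec wants MB.
    return int(memory_param) * 1024
```

So a workspace `dev` owned by `alice` with the default 2 GB parameter lands in namespace `coder-alice-dev` with `memory = 2048` in its task resources.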
examples/templates/nomad-docker/workspace.nomad.tpl

+53
@@ -0,0 +1,53 @@
job "workspace" {
  datacenters = ["dc1"]
  namespace   = "${workspace_tag}"
  type        = "service"
  group "workspace" {
    volume "home_volume" {
      type            = "csi"
      source          = "${home_volume_name}"
      read_only       = false
      attachment_mode = "file-system"
      access_mode     = "single-node-writer"
    }
    network {
      port "http" {}
    }
    task "workspace" {
      driver = "docker"
      config {
        image = "codercom/enterprise-base:ubuntu"
        ports = ["http"]
        labels {
          name       = "${workspace_tag}"
          managed_by = "coder"
        }
        hostname   = "${workspace_name}"
        entrypoint = ["sh", "-c", "sudo chown coder:coder -R /home/coder && echo '${base64encode(coder_init_script)}' | base64 --decode | sh"]
      }
      volume_mount {
        volume      = "home_volume"
        destination = "/home/coder"
      }
      resources {
        cores  = ${cores}
        memory = ${memory_mb}
      }
      env {
        CODER_AGENT_TOKEN = "${coder_agent_token}"
      }
      meta {
        tag        = "${workspace_tag}"
        managed_by = "coder"
      }
    }
    meta {
      tag        = "${workspace_tag}"
      managed_by = "coder"
    }
  }
  meta {
    tag        = "${workspace_tag}"
    managed_by = "coder"
  }
}
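The `entrypoint` above base64-encodes the agent init script (via Terraform's `base64encode`) so the multi-line script survives jobspec quoting, then decodes and pipes it to `sh` inside the container after fixing home-directory ownership. The round trip, sketched in Python (the function name is ours):

```python
import base64

def entrypoint(init_script: str) -> list[str]:
    # Mirrors the template's entrypoint: chown the mounted home volume, then
    # decode and run the embedded init script. The encoding step stands in for
    # HCL's base64encode() in the jobspec template.
    encoded = base64.b64encode(init_script.encode()).decode()
    return [
        "sh", "-c",
        "sudo chown coder:coder -R /home/coder && "
        f"echo '{encoded}' | base64 --decode | sh",
    ]

cmd = entrypoint("#!/bin/sh\necho agent starting\n")
```

Single-quoting the encoded blob works because base64 output never contains quotes, which is exactly why the script is encoded rather than spliced in raw.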

site/static/icon/nomad.svg

+2
0 commit comments
