diff --git a/cli/testdata/coder_templates_init_--help.golden b/cli/testdata/coder_templates_init_--help.golden
index 94a244a85a8e0..c46f383c29f22 100644
--- a/cli/testdata/coder_templates_init_--help.golden
+++ b/cli/testdata/coder_templates_init_--help.golden
@@ -6,7 +6,7 @@ USAGE:
Get started with a templated template.
OPTIONS:
- --id aws-ecs-container|aws-linux|aws-windows|azure-linux|do-linux|docker|docker-with-dotfiles|gcp-linux|gcp-vm-container|gcp-windows|kubernetes
+ --id aws-ecs-container|aws-linux|aws-windows|azure-linux|do-linux|docker|docker-with-dotfiles|gcp-linux|gcp-vm-container|gcp-windows|kubernetes|nomad-docker
Specify a given example template by ID.
diff --git a/docs/cli/templates_init.md b/docs/cli/templates_init.md
index 3d9f0e24fec27..76cea7242cb5b 100644
--- a/docs/cli/templates_init.md
+++ b/docs/cli/templates_init.md
@@ -15,7 +15,7 @@ coder templates init [flags] [directory]
### --id
| | |
-| ---- | ---------------------------- | --------- | ----------- | ----------- | -------- | ------ | -------------------- | --------- | ---------------- | ----------- | ------------------ |
-| Type | enum[aws-ecs-container | aws-linux | aws-windows | azure-linux | do-linux | docker | docker-with-dotfiles | gcp-linux | gcp-vm-container | gcp-windows | kubernetes] |
+| ---- | ---------------------------- | --------- | ----------- | ----------- | -------- | ------ | -------------------- | --------- | ---------------- | ----------- | ---------- | -------------------- |
+| Type | enum[aws-ecs-container | aws-linux | aws-windows | azure-linux | do-linux | docker | docker-with-dotfiles | gcp-linux | gcp-vm-container | gcp-windows | kubernetes | nomad-docker] |
Specify a given example template by ID.
diff --git a/examples/examples.gen.json b/examples/examples.gen.json
index 749f62cb08e69..4dff0ecc53e0b 100644
--- a/examples/examples.gen.json
+++ b/examples/examples.gen.json
@@ -133,5 +133,17 @@
"kubernetes"
],
"markdown": "\n# Getting started\n\nThis template creates a deployment running the `codercom/enterprise-base:ubuntu` image.\n\n## Prerequisites\n\nThis template uses [`kubernetes_deployment`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/deployment) terraform resource, which requires the `coder` service account to have permission to create deployments. For example if you are using [helm](https://coder.com/docs/v2/latest/install/kubernetes#install-coder-with-helm) to install Coder, you should set `coder.serviceAccount.enableDeployments=true` in your `values.yaml`\n\n```diff\ncoder:\nserviceAccount:\n workspacePerms: true\n- enableDeployments: false\n+ enableDeployments: true\n annotations: {}\n name: coder\n```\n\n\u003e Note: This is only required for Coder versions \u003c 0.28.0, as this will be the default value for Coder versions \u003e= 0.28.0\n\n## Authentication\n\nThis template can authenticate using in-cluster authentication, or using a kubeconfig local to the\nCoder host. For additional authentication options, consult the [Kubernetes provider\ndocumentation](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs).\n\n### kubeconfig on Coder host\n\nIf the Coder host has a local `~/.kube/config`, you can use this to authenticate\nwith Coder. Make sure this is done with same user that's running the `coder` service.\n\nTo use this authentication, set the parameter `use_kubeconfig` to true.\n\n### In-cluster authentication\n\nIf the Coder host runs in a Pod on the same Kubernetes cluster as you are creating workspaces in,\nyou can use in-cluster authentication.\n\nTo use this authentication, set the parameter `use_kubeconfig` to false.\n\nThe Terraform provisioner will automatically use the service account associated with the pod to\nauthenticate to Kubernetes. Be sure to bind a [role with appropriate permission](#rbac) to the\nservice account. For example, assuming the Coder host runs in the same namespace as you intend\nto create workspaces:\n\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: coder\n\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n name: coder\nsubjects:\n - kind: ServiceAccount\n name: coder\nroleRef:\n kind: Role\n name: coder\n apiGroup: rbac.authorization.k8s.io\n```\n\nThen start the Coder host with `serviceAccountName: coder` in the pod spec.\n\n### Authenticate against external clusters\n\nYou may want to deploy workspaces on a cluster outside of the Coder control plane. Refer to the [Coder docs](https://coder.com/docs/v2/latest/platforms/kubernetes/additional-clusters) to learn how to modify your template to authenticate against external clusters.\n\n## Namespace\n\nThe target namespace in which the deployment will be deployed is defined via the `coder_workspace`\nvariable. The namespace must exist prior to creating workspaces.\n\n## Persistence\n\nThe `/home/coder` directory in this example is persisted via the attached PersistentVolumeClaim.\nAny data saved outside of this directory will be wiped when the workspace stops.\n\nSince most binary installations and environment configurations live outside of\nthe `/home` directory, we suggest including these in the `startup_script` argument\nof the `coder_agent` resource block, which will run each time the workspace starts up.\n\nFor example, when installing the `aws` CLI, the install script will place the\n`aws` binary in `/usr/local/bin/aws`. To ensure the `aws` CLI is persisted across\nworkspace starts/stops, include the following code in the `coder_agent` resource\nblock of your workspace template:\n\n```terraform\nresource \"coder_agent\" \"main\" {\n startup_script = \u003c\u003c-EOT\n set -e\n # install AWS CLI\n curl \"https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip\" -o \"awscliv2.zip\"\n unzip awscliv2.zip\n sudo ./aws/install\n EOT\n}\n```\n\n## code-server\n\n`code-server` is installed via the `startup_script` argument in the `coder_agent`\nresource block. The `coder_app` resource is defined to access `code-server` through\nthe dashboard UI over `localhost:13337`.\n\n## Deployment logs\n\nTo stream kubernetes pods events from the deployment, you can use Coder's [`coder-logstream-kube`](https://github.com/coder/coder-logstream-kube) tool. This can stream logs from the deployment to Coder's workspace startup logs. You just need to install the `coder-logstream-kube` helm chart on the cluster where the deployment is running.\n\n```shell\nhelm repo add coder-logstream-kube https://helm.coder.com/logstream-kube\nhelm install coder-logstream-kube coder-logstream-kube/coder-logstream-kube \\\n --namespace coder \\\n --set url=\u003cyour-coder-url-including-http-or-https\u003e\n```\n\nFor detailed instructions, see [Deployment logs](https://coder.com/docs/v2/latest/platforms/kubernetes/deployment-logs)\n"
+ },
+ {
+ "id": "nomad-docker",
+ "url": "",
+ "name": "Develop in a Nomad Docker Container",
+ "description": "Get started with Nomad Workspaces.",
+ "icon": "/icon/nomad.svg",
+ "tags": [
+ "cloud",
+ "nomad"
+ ],
+ "markdown": "\n# Develop in a Nomad Docker Container\n\nThis example shows how to use Nomad service tasks as development environments, using Docker and host CSI volumes.\n\n## Prerequisites\n\n- [Nomad](https://www.nomadproject.io/downloads)\n- [Docker](https://docs.docker.com/get-docker/)\n\n## Setup\n\n### 1. Start the CSI Host Volume Plugin\n\nThe CSI Host Volume plugin is used to mount host volumes into Nomad tasks. This is useful for development environments where you want to mount persistent volumes into your container workspace.\n\n1. Log in to the Nomad server using SSH.\n\n2. Append the following stanza to your Nomad server configuration file and restart the Nomad service.\n\n ```hcl\n plugin \"docker\" {\n config {\n allow_privileged = true\n }\n }\n ```\n\n ```shell\n sudo systemctl restart nomad\n ```\n\n3. Create a file `hostpath.nomad` with the following content:\n\n ```hcl\n job \"hostpath-csi-plugin\" {\n datacenters = [\"dc1\"]\n type = \"system\"\n\n group \"csi\" {\n task \"plugin\" {\n driver = \"docker\"\n\n config {\n image = \"registry.k8s.io/sig-storage/hostpathplugin:v1.10.0\"\n\n args = [\n \"--drivername=csi-hostpath\",\n \"--v=5\",\n \"--endpoint=${CSI_ENDPOINT}\",\n \"--nodeid=node-${NOMAD_ALLOC_INDEX}\",\n ]\n\n privileged = true\n }\n\n csi_plugin {\n id = \"hostpath\"\n type = \"monolith\"\n mount_dir = \"/csi\"\n }\n\n resources {\n cpu = 256\n memory = 128\n }\n }\n }\n }\n ```\n\n4. Run the job:\n\n ```shell\n nomad job run hostpath.nomad\n ```\n\n### 2. Set up the Nomad Template\n\n1. Create the template by running the following commands:\n\n ```shell\n coder template init nomad-docker\n cd nomad-docker\n coder template create\n ```\n\n2. Set the Nomad server address and optional authentication when prompted.\n\n3. Create a new workspace and start developing.\n"
}
]
diff --git a/examples/examples.go b/examples/examples.go
index 5a8c00052a856..016804a073ba2 100644
--- a/examples/examples.go
+++ b/examples/examples.go
@@ -34,6 +34,7 @@ var (
//go:embed templates/gcp-vm-container
//go:embed templates/gcp-windows
//go:embed templates/kubernetes
+ //go:embed templates/nomad-docker
files embed.FS
exampleBasePath = "https://github.com/coder/coder/tree/main/examples/templates/"
diff --git a/examples/templates/nomad-docker/README.md b/examples/templates/nomad-docker/README.md
new file mode 100644
index 0000000000000..f676ed3aac14f
--- /dev/null
+++ b/examples/templates/nomad-docker/README.md
@@ -0,0 +1,96 @@
+---
+name: Develop in a Nomad Docker Container
+description: Get started with Nomad Workspaces.
+tags: [cloud, nomad]
+icon: /icon/nomad.svg
+---
+
+# Develop in a Nomad Docker Container
+
+This example shows how to use Nomad service tasks as development environments, using Docker and host CSI volumes.
+
+## Prerequisites
+
+- [Nomad](https://www.nomadproject.io/downloads)
+- [Docker](https://docs.docker.com/get-docker/)
+
+## Setup
+
+### 1. Start the CSI Host Volume Plugin
+
+The CSI Host Volume plugin is used to mount host volumes into Nomad tasks. This is useful for development environments where you want to mount persistent volumes into your container workspace.
+
+1. Log in to the Nomad server using SSH.
+
+2. Append the following stanza to your Nomad server configuration file and restart the Nomad service.
+
+ ```hcl
+ plugin "docker" {
+ config {
+ allow_privileged = true
+ }
+ }
+ ```
+
+ ```shell
+ sudo systemctl restart nomad
+ ```
+
+3. Create a file `hostpath.nomad` with the following content:
+
+ ```hcl
+ job "hostpath-csi-plugin" {
+ datacenters = ["dc1"]
+ type = "system"
+
+ group "csi" {
+ task "plugin" {
+ driver = "docker"
+
+ config {
+ image = "registry.k8s.io/sig-storage/hostpathplugin:v1.10.0"
+
+ args = [
+ "--drivername=csi-hostpath",
+ "--v=5",
+ "--endpoint=${CSI_ENDPOINT}",
+ "--nodeid=node-${NOMAD_ALLOC_INDEX}",
+ ]
+
+ privileged = true
+ }
+
+ csi_plugin {
+ id = "hostpath"
+ type = "monolith"
+ mount_dir = "/csi"
+ }
+
+ resources {
+ cpu = 256
+ memory = 128
+ }
+ }
+ }
+ }
+ ```
+
+4. Run the job:
+
+ ```shell
+ nomad job run hostpath.nomad
+ ```
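+
+ Once the job is running, you can confirm the plugin registered and is healthy; `nomad plugin status` is part of the standard Nomad CLI, and `hostpath` is the plugin ID from the `csi_plugin` block above:
+
+ ```shell
+ nomad plugin status hostpath
+ ```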
+
+### 2. Set up the Nomad Template
+
+1. Create the template by running the following commands:
+
+ ```shell
+ coder template init nomad-docker
+ cd nomad-docker
+ coder template create
+ ```
+
+2. Set the Nomad server address and optional authentication when prompted.
+
+3. Create a new workspace and start developing.
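+
+As a non-interactive alternative to step 2, the Terraform variables can be supplied on the command line. Recent Coder CLI versions accept a `--variable` flag when creating or pushing a template (check `coder template create --help` for your version); the address below is a placeholder:
+
+```shell
+coder template create nomad-docker \
+  --variable nomad_provider_address=http://nomad.example.com:4646
+```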
diff --git a/examples/templates/nomad-docker/main.tf b/examples/templates/nomad-docker/main.tf
new file mode 100644
index 0000000000000..26a9e2f09fe9f
--- /dev/null
+++ b/examples/templates/nomad-docker/main.tf
@@ -0,0 +1,192 @@
+terraform {
+ required_providers {
+ coder = {
+ source = "coder/coder"
+ }
+ nomad = {
+ source = "hashicorp/nomad"
+ }
+ }
+}
+
+variable "nomad_provider_address" {
+ type = string
+ description = "Nomad provider address. e.g., http://IP:PORT"
+ default = "http://localhost:4646"
+}
+
+variable "nomad_provider_http_auth" {
+ type = string
+ description = "Nomad provider http_auth in the form of `user:password`"
+ sensitive = true
+ default = ""
+}
+
+provider "coder" {}
+
+provider "nomad" {
+ address = var.nomad_provider_address
+ http_auth = var.nomad_provider_http_auth == "" ? null : var.nomad_provider_http_auth
+}
+
+data "coder_parameter" "cpu" {
+ name = "cpu"
+ display_name = "CPU"
+ description = "The number of CPU cores"
+ default = "1"
+ icon = "/icon/memory.svg"
+ mutable = true
+ option {
+ name = "1 Core"
+ value = "1"
+ }
+ option {
+ name = "2 Cores"
+ value = "2"
+ }
+ option {
+ name = "3 Cores"
+ value = "3"
+ }
+ option {
+ name = "4 Cores"
+ value = "4"
+ }
+}
+
+data "coder_parameter" "memory" {
+ name = "memory"
+ display_name = "Memory"
+ description = "The amount of memory in GB"
+ default = "2"
+ icon = "/icon/memory.svg"
+ mutable = true
+ option {
+ name = "2 GB"
+ value = "2"
+ }
+ option {
+ name = "4 GB"
+ value = "4"
+ }
+ option {
+ name = "6 GB"
+ value = "6"
+ }
+ option {
+ name = "8 GB"
+ value = "8"
+ }
+}
+
+data "coder_workspace" "me" {}
+
+resource "coder_agent" "main" {
+ os = "linux"
+ arch = "amd64"
+ startup_script_timeout = 180
+ startup_script = <<-EOT
+ set -e
+ # install and start code-server
+ curl -fsSL https://code-server.dev/install.sh | sh -s -- --method=standalone --prefix=/tmp/code-server
+ /tmp/code-server/bin/code-server --auth none --port 13337 >/tmp/code-server.log 2>&1 &
+ EOT
+
+ metadata {
+ display_name = "Load Average (Host)"
+ key = "load_host"
+ # get load avg scaled by number of cores
+ script = <
+