
Commit 6c2a04e: Added OVHCloud support (#12)
1 parent: 5644198

4 files changed: +173 -1 lines changed

Readme.md (1 addition, 0 deletions)

```diff
@@ -13,3 +13,4 @@ Each subfolder in this repo is for a different platform.
 * AWS EKS
 * Digital Ocean K8s
 * IBMCloud K8s
+* OVHCloud K8s
```

ibmcloud-k8s/Readme.md (1 addition, 1 deletion)

```diff
@@ -3,7 +3,7 @@
 1. Fork this repo and set it up with [spacelift.io](https://spacelift.io/) or equivalent
 2. Create an [API Key](https://cloud.ibm.com/docs/account?topic=account-userapikey&interface=ui) and set it as IC_API_KEY
 3. Make sure to set the root directory to ibmcloud-k8s/
-4. Run and apply the Terraform (took me 10 minutes)
+4. Run and apply the Terraform (took me 75 minutes)
 
 ## Coder setup Instructions
 
```

ovhcloud-k8s/Readme.md (new file, 39 additions)
## Getting Coder Installed

1. Create an OVH Cloud account, order a public cloud, and then set up a project (copy the project ID)
2. Fork this repo and set it up with [spacelift.io](https://spacelift.io/) or equivalent
3. Set OVH_CLOUD_PROJECT_SERVICE to the project ID from (1)
4. For US, I went to https://api.us.ovhcloud.com/createToken/?GET=/*&POST=/*&PUT=/*&DELETE=/* to generate API keys
5. Set OVH_APPLICATION_KEY, OVH_APPLICATION_SECRET, and OVH_CONSUMER_KEY to the values from (4) (a provider-block alternative is sketched after this section)
6. Make sure to set the root directory to ovhcloud-k8s/
7. Run and apply the Terraform (took me 10 minutes)

If you run into any auth issues, see the end of the Readme.
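If you'd rather not manage the OVH_* environment variables in Spacelift, the credentials can also be passed as provider arguments. A minimal sketch, assuming the standard ovh/ovh provider argument names and hypothetical var.ovh_* inputs that you would supply securely rather than commit to the repo:

```hcl
# Hypothetical sensitive variables; feed them from Spacelift secrets
# or TF_VAR_* environment variables, never from files in the repo.
variable "ovh_application_key" { sensitive = true }
variable "ovh_application_secret" { sensitive = true }
variable "ovh_consumer_key" { sensitive = true }

provider "ovh" {
  endpoint           = "ovh-us"
  application_key    = var.ovh_application_key
  application_secret = var.ovh_application_secret
  consumer_key       = var.ovh_consumer_key
}
```

If I'm reading the provider docs right, the ovh_cloud_project_* resources also accept a service_name argument as an alternative to the OVH_CLOUD_PROJECT_SERVICE environment variable.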
## Coder setup Instructions

1. In the OVHCloud Console, go to Public Cloud --> Load Balancer and copy the IP address on the far right.
2. Browse to that IP and create the initial username and password.
3. Go to Templates, click Develop in Kubernetes, and click Use template.
4. Click Create template (it will refresh and prompt for 3 more template inputs).
5. Set var.use_kubeconfig to false.
6. Set var.namespace to coder (a sketch of these two variables follows this list).
7. Click Create template again.
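For context on steps 5 and 6: those prompts are Terraform variables declared inside Coder's Kubernetes starter template. Paraphrased (not copied verbatim from the template, so the exact declarations may differ between Coder versions), they look roughly like this:

```hcl
# Paraphrased from Coder's Kubernetes starter template; names match,
# but the exact declarations may differ between Coder versions.
variable "use_kubeconfig" {
  type        = bool
  description = "Authenticate via ~/.kube/config (true) or via the in-cluster service account (false). Use false here, since Coder runs inside the cluster it deploys workspaces to."
}

variable "namespace" {
  type        = string
  description = "Namespace to deploy workspace pods into. Use coder, matching the namespace created in main.tf."
}
```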
With the admin user created and the template imported, we are ready to launch a workspace based on that template.

1. Click Create workspace from the kubernetes template (templates/kubernetes/workspace).
2. Give it a name and click Create.
3. The workspace should launch within three minutes.

From there, you can click the Terminal button to get an interactive session in the k8s container, or you can click code-server to open a VSCode window and start coding!
## kubernetes or helm provider is erroring during authentication

I'm purposefully using an [anti-pattern](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs) for the k8s provider, in which I deploy both the cluster and the Helm charts from one repo, for demo purposes only. If you run into an issue where the k8s or helm providers can't authenticate, you can fix it this way:

1. Go to OVHCloud --> Kubernetes --> Service, and in the Access and security panel download the kubeconfig file.
2. Go to your Spacelift stack --> Environment, and click Edit.
3. Change the type to "Mounted File", and upload the kubeconfig file from (1).
4. Create an environment variable "TF_VAR_kubeconfig_path" whose value is the path of the mounted file.

This will enable you to update or delete the stack safely; the relevant wiring is excerpted after this list. See [this thread](https://github.com/terraform-aws-modules/terraform-aws-eks/issues/1234) for more details as to why this is necessary.
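Concretely, Terraform maps any TF_VAR_-prefixed environment variable onto the input variable of the same name, so the mounted file's path flows straight into the providers. This is the wiring, excerpted from main.tf (shown in full below):

```hcl
# TF_VAR_kubeconfig_path populates this variable automatically.
variable "kubeconfig_path" {
  default = ""
}

# When the variable is set, the provider authenticates against the
# mounted kubeconfig instead of the Terraform-generated local file.
provider "kubernetes" {
  config_path = var.kubeconfig_path != "" ? var.kubeconfig_path : local_file.kubeconfig.filename
}
```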

ovhcloud-k8s/main.tf (new file, 132 additions)
```hcl
terraform {
  required_providers {
    ovh = {
      source = "ovh/ovh"
    }
  }
}

# The first time this TF file is run, the kubeconfig file can be
# generated directly from the ovh_cloud_project_kube_nodepool resource.
# However the second time, it won't load properly. Instead, you need
# to mount a new kubeconfig and set the value here.
variable "kubeconfig_path" {
  default = ""
}

variable "coder_version" {
  default = "0.13.6"
}

# Change this password away from the default if you are doing
# anything more than a testing stack.
variable "db_password" {
  default = "coder"
}

###############################################################
# K8s configuration
###############################################################
# Set OVH_APPLICATION_KEY, OVH_APPLICATION_SECRET, OVH_CONSUMER_KEY
# https://api.us.ovhcloud.com/createToken/?GET=/*&POST=/*&PUT=/*&DELETE=/*
# Set OVH_CLOUD_PROJECT_SERVICE to your Project ID
provider "ovh" {
  endpoint = "ovh-us"
}

resource "ovh_cloud_project_kube" "coder" {
  name   = "coder_cluster"
  region = "US-EAST-VA-1"
}

resource "ovh_cloud_project_kube_nodepool" "coder" {
  kube_id       = ovh_cloud_project_kube.coder.id
  name          = "coder-pool" // Warning: "_" char is not allowed!
  flavor_name   = "d2-8"
  desired_nodes = 2
  max_nodes     = 2
  min_nodes     = 2
}

# There's an obnoxious TF issue while destroying
resource "local_file" "kubeconfig" {
  content  = ovh_cloud_project_kube.coder.kubeconfig
  filename = "tf-generated-config.yml"
}

provider "kubernetes" {
  # If the kubeconfig_path variable is set, use that. Otherwise, fall back to the local file.
  config_path = var.kubeconfig_path != "" ? var.kubeconfig_path : local_file.kubeconfig.filename
}

resource "kubernetes_namespace" "coder_namespace" {
  metadata {
    name = "coder"
  }

  depends_on = [
    ovh_cloud_project_kube_nodepool.coder,
    local_file.kubeconfig
  ]
}

###############################################################
# Coder configuration
###############################################################
provider "helm" {
  kubernetes {
    config_path = var.kubeconfig_path != "" ? var.kubeconfig_path : local_file.kubeconfig.filename
  }
}

# kubectl logs postgresql-0 -n coder
resource "helm_release" "pg_cluster" {
  name      = "postgresql"
  namespace = kubernetes_namespace.coder_namespace.metadata.0.name

  repository = "https://charts.bitnami.com/bitnami"
  chart      = "postgresql"

  set {
    name  = "auth.username"
    value = "coder"
  }

  set {
    name  = "auth.password"
    value = var.db_password
  }

  set {
    name  = "auth.database"
    value = "coder"
  }

  set {
    name  = "persistence.size"
    value = "10Gi"
  }
}

resource "helm_release" "coder" {
  name      = "coder"
  namespace = kubernetes_namespace.coder_namespace.metadata.0.name

  chart = "https://github.com/coder/coder/releases/download/v${var.coder_version}/coder_helm_${var.coder_version}.tgz"

  values = [
    <<EOT
coder:
  env:
    - name: CODER_PG_CONNECTION_URL
      value: "postgres://coder:${var.db_password}@${helm_release.pg_cluster.name}.coder.svc.cluster.local:5432/coder?sslmode=disable"
    - name: CODER_EXPERIMENTAL
      value: "true"
EOT
  ]

  depends_on = [
    helm_release.pg_cluster
  ]
}
```
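One possible convenience on top of this file: instead of copying the load balancer IP out of the OVHCloud console (step 1 of the Coder setup), you could read it back from the cluster and surface it as a Terraform output. A sketch, assuming the Coder chart exposes a LoadBalancer Service named coder (verify with kubectl get svc -n coder) and a kubernetes provider version that populates the status attribute on the service data source:

```hcl
# Hypothetical addition: look up the Service the Coder chart creates
# and output its external IP once the load balancer is provisioned.
data "kubernetes_service" "coder" {
  metadata {
    name      = "coder"
    namespace = kubernetes_namespace.coder_namespace.metadata.0.name
  }

  depends_on = [helm_release.coder]
}

output "coder_access_ip" {
  value = data.kubernetes_service.coder.status.0.load_balancer.0.ingress.0.ip
}
```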
