@@ -5,7 +5,7 @@ description:
---

This deployment guide shows you how to set up an Amazon Elastic Kubernetes
- Engine cluster on which Coder can deploy.
+ Engine (EKS) cluster on which Coder can deploy.

## Prerequisites
@@ -21,195 +21,76 @@ machine:
to fast-track this process
- [eksctl command-line utility](https://docs.aws.amazon.com/eks/latest/userguide/eksctl.html)

- ## Node Considerations
-
- The node type and size that you select impact how you use Coder. When choosing,
- be sure to account for the number of developers you expect to use Coder, as well
- as the resources they need to run their workspaces. See our guide on
- [compute resources](../../guides/admin/resources.md) for additional information.
-
- If you expect to provision GPUs to your Coder workspaces, you **must** use an
- EC2 instance from AWS'
- [accelerated computing instance family](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/accelerated-computing-instances.html).
-
- > GPUs are not supported in workspaces deployed as
- > [container-based virtual machines (CVMs)](../../workspaces/cvms.md) unless
- > you're running Coder in a bare-metal Kubernetes environment.
-
- ## Preliminary steps
-
- Before you can create a cluster, you'll need to perform the following to set up
- and configure your AWS account.
-
- 1. Go to AWS' [EC2 console](https://console.aws.amazon.com/ec2/); this should
-    take you to the EC2 page for the AWS region in which you're working (if not,
-    change to the correct region using the dropdown in the top-right of the page)
- 1. In the **Resources** section in the middle of the page, click **Elastic
-    IPs**.
- 1. Choose either an Elastic IP address you want to use or click **Allocate
-    Elastic IP address**. Choose **Amazon's pool of IPv4 addresses** and click
-    **Allocate**.
- 1. Return to the EC2 Dashboard.
- 1. In the **Resources** section in the middle of the page, click **Key Pairs**.
- 1. Click **Create key pair** (alternatively, if you already have a local SSH key
-    you'd like to use, you can click the Actions dropdown and import your key)
- 1. Provide a **name** for your key pair and select **pem** as your **file
-    format**. Click **Create key pair**.
- 1. You'll automatically download the keypair; save it to a known directory on
-    your local machine (we recommend keeping the default name, which will match
-    the name you provided to AWS).
- 1. Now that you have the `.pem` file, extract the public key portion of the
-    keypair so that you can use it with the eksctl CLI in later steps:
-
-    ```sh
-    ssh-keygen -y -f <PATH/TO/KEY>.pem >> <PATH/TO/KEY>.pub
-    ```
-
-    **Note**: if you run into a bad permissions error, run `sudo` before the
-    command above.
+ ## Step 1: Create an EKS cluster
+
+ While flags can be passed to `eksctl create cluster`, the following example uses
+ an [`eksctl` configuration file](https://eksctl.io/usage/schema/) to define the
+ EKS cluster.
+
+ > The cluster name,
+ > [region](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-regions),
+ > and SSH key path will be specific to your installation.
+
+ ```yaml
+ apiVersion: eksctl.io/v1alpha5
+ kind: ClusterConfig
+
+ metadata:
+   name: coder-trial-cluster
+   region: us-east-1
+
+ managedNodeGroups:
+   - name: managed-ng-1
+     instanceType: t2.medium
+     amiFamily: Ubuntu2004
+     desiredCapacity: 1
+     minSize: 1
+     maxSize: 2
+     volumeSize: 100
+     ssh:
+       allow: true
+       publicKeyPath: ~/.ssh/id_rsa.pub
+ ```
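+
+ Before creating any AWS resources, you can optionally preview what `eksctl`
+ will build from this file. Recent versions of `eksctl` support a `--dry-run`
+ flag that prints the fully expanded cluster config without creating anything
+ (check `eksctl create cluster --help` to confirm your version supports it):
+
+ ```console
+ eksctl create cluster -f cluster.yaml --dry-run
+ ```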

- When done, you should have a .pem and .pub file for the same keypair you
- downloaded from AWS.
+ This example uses a `t2.medium` instance with up to 2 nodes, which is suited
+ to a small trial deployment. Depending on your needs, you can choose a
+ [larger size](https://aws.amazon.com/ec2/instance-types/) instead. See our
+ documentation on [resources](../../guides/admin/resources.md) and
+ [requirements](../requirements.md) for help estimating your cluster size.

- ## Step 1: Spin up a K8s cluster
+ > If your developers require Docker commands like `docker build`, `docker run`,
+ > and `docker-compose` as part of their development flow, then
+ > [container-based virtual machines (CVMs)](../../workspaces/cvms.md) are
+ > required. In this case, we recommend using the `Ubuntu2004` AMI family, as
+ > the `AmazonLinux2` AMI family does not meet the requirements for
+ > [cached CVMs](../../workspace-management/cvms/management#caching).

- To make subsequent steps easier, start by creating environment variables for the
- cluster name,
- [region](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-regions),
- and SSH key path:
+ Once the file is ready, run the following command to create the cluster:

```console
- CLUSTER_NAME="YOUR_CLUSTER_NAME"
- SSH_KEY_PATH="<PATH/TO/KEY>.pub"
- REGION="YOUR_REGION"
+ eksctl create cluster -f cluster.yaml
```

- The following will spin up a Kubernetes cluster using `eksctl` (be sure to
- update the parameters as necessary, especially the version number):
-
- ```console
-
- eksctl create cluster \
- --name "$CLUSTER_NAME" \
- --version <version> \
- --region "$REGION" \
- --nodegroup-name standard-workers \
- --node-type t3.medium \
- --nodes 2 \
- --nodes-min 2 \
- --nodes-max 8 \
- --ssh-access \
- --ssh-public-key "$SSH_KEY_PATH" \
- --managed
- ```
+ This process may take ~15-30 minutes to complete, since it creates EC2
+ instances (the nodes), a node pool, a VPC, a NAT gateway, network interfaces,
+ security groups, an elastic IP, the EKS cluster, and its namespaces and pods.

- Please note that the sample script creates a `t3.medium` instance; depending on
- your needs, you can choose a
- [larger size](https://aws.amazon.com/ec2/instance-types/t3/) instead. See
- [requirements](../requirements.md) for help estimating your cluster size.
+ > By default, EKS creates a `volumeBindingMode` of `WaitForFirstConsumer`. See
+ > the [Kubernetes docs](https://kubernetes.io/docs/concepts/storage/storage-classes/#volume-binding-mode)
+ > for more information on this mode. Coder accepts both `Immediate` and
+ > `WaitForFirstConsumer`.


When your cluster is ready, you should see the following message:

```console
- EKS cluster "YOUR_CLUSTER_NAME" in "YOUR_REGION" region is ready
+ EKS cluster "YOUR CLUSTER NAME" in "YOUR REGION" region is ready
```
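+
+ At this point, you can also verify that `kubectl` is pointed at the new
+ cluster, that the node has joined, and which `volumeBindingMode` the default
+ storage class uses (these are standard `kubectl` commands; the exact output
+ depends on your deployment):
+
+ ```console
+ kubectl config current-context
+ kubectl get nodes
+ kubectl get storageclass
+ ```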

- This process may take ~15-30 minutes to complete.
+ ## Step 2: (Optional) Install Calico onto your cluster

- ## Step 2: Adjust the K8s storage class
-
- Once you've created the cluster, adjust the default Kubernetes storage class to
- support immediate volume binding.
-
- 1. Make sure that you're pointed to the correct context:
-
-    ```console
-    kubectl config current-context
-    ```
-
- 1. If you're pointed to the correct context, delete the gp2 storage class:
-
-    ```console
-    kubectl delete sc gp2
-    ```
-
- 1. Recreate the gp2 storage class with the `volumeBindingMode` set to
-    `Immediate`:
-
-    ```console
-    cat <<EOF | kubectl apply -f -
-    apiVersion: storage.k8s.io/v1
-    kind: StorageClass
-    metadata:
-      annotations:
-        storageclass.kubernetes.io/is-default-class: "true"
-      name: gp2
-    provisioner: kubernetes.io/aws-ebs
-    parameters:
-      type: gp2
-      fsType: ext4
-    volumeBindingMode: Immediate
-    allowVolumeExpansion: true
-    EOF
-    ```
-
- > See the
- > [Kubernetes docs](https://kubernetes.io/docs/concepts/storage/storage-classes/#volume-binding-mode)
- > for information on choosing the right parameter for `volumeBindingMode`; Coder
- > accepts both `Immediate` and `WaitForFirstConsumer`.
-
- ### Modifying your cluster to support CVMs
-
- To create clusters allowing you to
- [enable container-based virtual machines (CVMs)](../../admin/workspace-management/cvms.md)
- as a workspace deployment option, you'll need to
- [create a nodegroup](https://eksctl.io/usage/eks-managed-nodes/#creating-managed-nodegroups).
-
- 1. Define your config file (we've named the file `coder-node.yaml`, but you can
-    call it whatever you'd like):
-
-    ```yaml
-    apiVersion: eksctl.io/v1alpha5
-    kind: ClusterConfig
-
-    metadata:
-      version: "<YOUR_K8s_VERSION>"
-      name: <YOUR_CLUSTER_NAME>
-      region: <YOUR_AWS_REGION>
-
-    managedNodeGroups:
-      - name: coder-node-group
-        amiFamily: Ubuntu2004 # AmazonLinux2 is also a supported option
-        # Custom EKS-compatible AMIs can be used instead of amiFamily
-        # ami: <your Ubuntu 20.04 AMI ID>
-        instanceType: <instance-type>
-        minSize: 1
-        maxSize: 2
-        desiredCapacity: 1
-        # Uncomment "overrideBootstrapCommand" if you are using a custom AMI
-        # overrideBootstrapCommand: |
-        #   #!/bin/bash -xe
-        #   sudo /etc/eks/bootstrap.sh <YOUR_CLUSTER_NAME>
-    ```
-
- > See
- > [the list of EKS-compatible Ubuntu AMIs](https://cloud-images.ubuntu.com/docs/aws/eks/)
- > and info on
- > [Latest & Custom AMI support](https://eksctl.io/usage/custom-ami-support/).
-
- 1. Create your nodegroup (be sure to provide the correct file name):
-
-    ```console
-    eksctl create nodegroup --config-file=coder-node.yaml
-    ```
-
- ## Step 3: (Optional) Install Calico onto your cluster
-
- AWS uses
- [Calico](https://docs.amazonaws.cn/en_us/eks/latest/userguide/calico.html) to
- implement network segmentation and tenant isolation. We strongly recommend
- executing the following steps; please see
- [Network Policies](../requirements.md#network-policies) for more information.
+ AWS uses [Calico](https://docs.amazonaws.cn/en_us/eks/latest/userguide/calico.html)
+ to implement network segmentation and tenant isolation. For production deployments,
+ we recommend Calico to enforce workspace pod isolation; please see
+ [Network Policies](../requirements.md#network-policies) for more information.

1. Apply the Calico manifest to your cluster:
@@ -232,20 +113,15 @@ executing the following steps; please see
   calico-node 3 3 3 3 ...
   ```

- ## Access control
+ ## Cleanup | Delete EKS cluster

- EKS allows you to create and manage user permissions using IAM identity
- providers (IdPs). EKS also supports user authentication via OpenID Connect
- (OIDC) identity providers.
+ To delete the EKS cluster, including any installation of Coder, substitute your
+ cluster name and region in the following `eksctl` command. This will take
+ several minutes and can be monitored in the CloudFormation stack.

- Using IAM with Kubernetes' native Role-Based Access Control (RBAC) allows you to
- grant access to your EKS cluster using existing IdPs and fine-tune permissions
- with RBAC.
-
- For more information, see:
-
- - [AWS identity providers and federation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers.html)
- - [Kubernetes RBAC authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)
+ ```console
+ eksctl delete cluster --region=us-east-1 --name=trial-cluster
+ ```
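+
+ To confirm the teardown finished, you can list the clusters remaining in the
+ region (this reuses the example region from above; adjust it to match your
+ deployment):
+
+ ```console
+ eksctl get cluster --region=us-east-1
+ ```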

## Next steps

@@ -255,4 +131,4 @@ provider](../../admin/workspace-providers/deployment/index.md).
To access Coder through a secure domain, review our guides on configuring and
using [TLS certificates](../../guides/tls-certificates/index.md).

- Once complete, see our page on [installation](../installation.md).
+ Once complete, see our page on [Coder installation](../installation.md).