diff --git a/admin/access-url.md b/admin/access-url.md index 13e77ee07..6f7c00d4c 100644 --- a/admin/access-url.md +++ b/admin/access-url.md @@ -11,40 +11,40 @@ domain name that you can use to access your Coder deployment. The steps to do this vary based on the DNS provider you're using, but the general steps required are as follows: -1. Check the contents of your namespace to obtain your ingress controller's - IP address: +1. Check the contents of your namespace to obtain your ingress controller's IP + address: - ```console - kubectl get all -n -o wide - ``` +```console +kubectl get all -n <namespace> -o wide +``` - Find the **service/ingress-nginx** line and copy the **external IP** value - shown. +Find the **service/ingress-nginx** line and copy the **external IP** value +shown. -1. Get the ingress IP address and point your DNS records from your custom - domain to the external IP address you obtained in the previous step. +1. Get the ingress IP address and point your DNS records from your custom domain + to the external IP address you obtained in the previous step. -> If your custom domain uses the HTTPS protocol, make sure that you have [SSL -certificates](../guides/ssl-certificates/index.md) for use with your Coder -deployment. Otherwise, you can skip this step. +> If your custom domain uses the HTTPS protocol, make sure that you have +> [SSL certificates](../guides/ssl-certificates/index.md) for use with your +> Coder deployment. Otherwise, you can skip this step.
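As a sketch of the lookup above — the service line and IP are illustrative samples, not output from a real cluster — the external IP is the fourth whitespace-separated column of the `service/ingress-nginx` line in `kubectl get all -o wide` output:

```shell
# Illustrative line copied from `kubectl get all -n <namespace> -o wide`;
# substitute the real line from your own cluster.
SERVICE_LINE="service/ingress-nginx  LoadBalancer  10.0.8.1  203.0.113.10  80:30080/TCP"

# The EXTERNAL-IP column is the fourth field of the service line.
EXTERNAL_IP=$(echo "$SERVICE_LINE" | awk '{print $4}')
echo "Point your DNS A record at: $EXTERNAL_IP"
```

This is the value your custom domain's A record should resolve to.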
## Step 2: Update the Helm chart and redeploy Coder -When changing your access URL, you'll need to [update your Helm -chart](../guides/admin/helm-charts.md) and [redeploy -Coder](../setup/updating.md): +When changing your access URL, you'll need to +[update your Helm chart](../guides/admin/helm-charts.md) and +[redeploy Coder](../setup/updating.md): helm upgrade coder coder/coder \ - --set devurls.host="*.example.com" \ - --set ingress.host="coder.example.com" \ + --set devurls.host="*.example.com" \ + --set ingress.host="coder.example.com" \ > See the [enterprise-helm repo](https://github.com/cdr/enterprise-helm) for > more information on Coder's Helm charts. ## Step 3: Provide the access URL in the Coder UI -1. Log into Coder as a site admin/site manager and go to - **Manage** > **Admin** > **Infrastructure**. +1. Log into Coder as a site admin/site manager and go to **Manage** > + **Admin** > **Infrastructure**. 1. Provide your custom domain in the **Access URL** field. The URL you provide must match the value you provided as `ingress.host` in the previous step. diff --git a/admin/devurls.md b/admin/devurls.md index cd706800e..ad9547edc 100644 --- a/admin/devurls.md +++ b/admin/devurls.md @@ -87,9 +87,9 @@ scroll down to **Dev URL Access Permissions**. You can set the maximum access level, but developers may choose to restrict access further. -For example, if you set the maximum access level as -**Authenticated**, then any dev URLs created for workspaces in your Coder -deployment will be accessible to any authenticated Coder user. +For example, if you set the maximum access level as **Authenticated**, then any +dev URLs created for workspaces in your Coder deployment will be accessible to +any authenticated Coder user. The developer, however, can choose to set a stricter permission level (e.g., allowing only those in their organization to use the dev URL).
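One quoting detail worth noting for the `helm upgrade` invocation above: the wildcard must reach Helm literally, so keep it inside quotes and unescaped — a `\*` inside double quotes would be passed through to Helm verbatim as a backslash-asterisk. A minimal shell sketch (hostnames are the same example values used above):

```shell
# Quoting prevents the shell from glob-expanding the wildcard; no
# backslash escape is needed (or wanted) inside the quotes.
DEVURLS_HOST="*.example.com"
INGRESS_HOST="coder.example.com"
echo "devurls.host=$DEVURLS_HOST ingress.host=$INGRESS_HOST"
```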
Developers cannot @@ -107,4 +107,5 @@ To do so, you can either: - Use SSH tunneling to tunnel the web app to individual developers' `localhost` instead of dev URLs (this is also an out-of-the-box feature included with VS Code Remote) -- Use this workaround for [multiple callback sub-URLs](https://stackoverflow.com/questions/35942009/github-oauth-multiple-authorization-callback-url/38194107#38194107) +- Use this workaround for + [multiple callback sub-URLs](https://stackoverflow.com/questions/35942009/github-oauth-multiple-authorization-callback-url/38194107#38194107) diff --git a/admin/workspace-management/cvms.md b/admin/workspace-management/cvms.md index 9201e6ad7..5d11cf93c 100644 --- a/admin/workspace-management/cvms.md +++ b/admin/workspace-management/cvms.md @@ -3,23 +3,24 @@ title: Docker in workspaces description: Learn how to enable support for secure Docker inside workspaces. --- -If you're a site admin or a site manager, you can enable [container-based -virtual machines (CVMs)](../../workspaces/cvms.md) as a workspace deployment -option. CVMs allow users to run system-level programs, such as Docker and -systemd, in their workspaces. +If you're a site admin or a site manager, you can enable +[container-based virtual machines (CVMs)](../../workspaces/cvms.md) as a +workspace deployment option. CVMs allow users to run system-level programs, such +as Docker and systemd, in their workspaces. 
## Infrastructure requirements -- CVMs leverage the [Sysbox container - runtime](https://github.com/nestybox/sysbox), so the Kubernetes Node must run - a supported Linux distro with the minimum kernel version (see [Sysbox distro - compatibility](https://github.com/nestybox/sysbox/blob/master/docs/distro-compat.md) +- CVMs leverage the + [Sysbox container runtime](https://github.com/nestybox/sysbox), so the + Kubernetes Node must run a supported Linux distro with the minimum kernel + version (see + [Sysbox distro compatibility](https://github.com/nestybox/sysbox/blob/master/docs/distro-compat.md) for more information) - The cluster must allow privileged containers and `hostPath` mounts. Read more about why this is still secure [here](#security). > Coder doesn't support legacy versions of cluster-wide proxy services such as -Istio, and CVMs do not currently support NFS as a file system. +> Istio, and CVMs do not currently support NFS as a file system. ### GPUs diff --git a/admin/workspace-management/extensions.md b/admin/workspace-management/extensions.md index 8e64dbb32..9f823c065 100644 --- a/admin/workspace-management/extensions.md +++ b/admin/workspace-management/extensions.md @@ -28,8 +28,8 @@ environment: 1. Set the **Extension Marketplace Type** to **Custom** 1. Set the **Extension Marketplace API URL** to `https://open-vsx.org/vscode/gallery` (this value comes from the `serviceUrl` - path described in [open-vsx's - documentation](https://github.com/eclipse/openvsx/wiki/Using-Open-VSX-in-VS-Code)). + path described in + [open-vsx's documentation](https://github.com/eclipse/openvsx/wiki/Using-Open-VSX-in-VS-Code)). 
## Air-gapped marketplaces diff --git a/guides/ssl-certificates/azureDNS.md b/guides/ssl-certificates/azureDNS.md index 30db41562..fc67479d5 100644 --- a/guides/ssl-certificates/azureDNS.md +++ b/guides/ssl-certificates/azureDNS.md @@ -1,6 +1,8 @@ --- title: Azure DNS -description: Learn how to use cert-manager to set up SSL certificates using Azure DNS for DNS01 challenges. +description: + Learn how to use cert-manager to set up SSL certificates using Azure DNS for + DNS01 challenges. --- [cert-manager](https://cert-manager.io/) allows you to enable HTTPS on your @@ -8,13 +10,13 @@ Coder installation, regardless of whether you're using [Let's Encrypt](https://letsencrypt.org/) or you have your own certificate authority. -This guide will show you how to install cert-manager v1.0.1 and set up your +This guide will show you how to install cert-manager v1.4.0 and set up your cluster to issue Let's Encrypt certificates for your Coder installation so that you can enable HTTPS on your Coder deployment. It will also show you how to configure your Coder hostname and dev URLs. -There are three available methods to configuring the Azure DNS DNS01 Challenge via -cert-manager: +There are three available methods to configure the Azure DNS DNS01 Challenge +via cert-manager: - [Managed Identity Using AAD Pod Identities](#step-1:-set-up-a-managed-identity) - [Managed Identity Using AKS Kubelet Identity](https://cert-manager.io/docs/configuration/acme/dns01/azuredns/#managed-identity-using-aks-kubelet-identity) @@ -31,29 +33,37 @@ are the same regardless of which option you choose.
You must have: -- A Kubernetes cluster with internet connectivity +- A Kubernetes cluster + [of a supported version](../../setup/kubernetes/index.md#supported-kubernetes-versions) + with internet connectivity - Installed [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) -- Installed [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/?view=azure-cli-latest) +- Installed + [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/?view=azure-cli-latest) You should also: - Be a cluster admin - Have access to your DNS provider -- Have a paid Azure account that allows you to access [Azure DNS](https://azure.microsoft.com/en-us/services/dns/) +- Have a paid Azure account that allows you to access + [Azure DNS](https://azure.microsoft.com/en-us/services/dns/) ## Step 1: Create an Azure DNS Zone Log into the [Azure Portal](portal.azure.com). Using the search bar, look for -**DNS Zones** and navigate to this service. Click **New** to create a new zone, -and when prompted: +**DNS Zones** and navigate to this service. + +If Azure DNS is the registrar for your domain, the zone will already exist so +you can skip to Step 3. + +Click **New** to create a new zone, and when prompted: 1. Select your **subscription** and the **resource group** where your Coder deployment is 1. Provide a **name** for your new zone -Click **Review + create**. Review the summary information, and if -it's correct, click **Create** to proceed. +Click **Review + create**. Review the summary information, and if it's correct, +click **Create** to proceed. Once Azure has deployed your resource, click **Go to resource**. Make a note of the name server records (e.g., `ns1-09.azure-dns.com.`) presented to you, since @@ -70,7 +80,7 @@ the domain you're using for your Coder deployment. 
cert-manager: ```console - kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.2.0/cert-manager.yaml + kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.4.0/cert-manager.yaml ``` 1. Check that cert-manager installs correctly by running @@ -86,6 +96,11 @@ the domain you're using for your Coder deployment. ```console kubectl get all -n cert-manager + + NAME READY STATUS RESTARTS AGE + cert-manager-7cd5cdf774-vb2pr 1/1 Running 0 84s + cert-manager-cainjector-6546bf7765-ssxhf 1/1 Running 0 84s + cert-manager-webhook-7f68b65458-zvzn9 1/1 Running 0 84s ``` ## Step 4: Set up a managed identity @@ -116,99 +131,112 @@ az role assignment create --role "DNS Zone Contributor" --assignee $PRINCIPAL_ID ## Step 5: Deploy the managed identity -1. Export the following environment variables: +1. Export the following environment variables with your own values: + + ```console + export SUBSCRIPTION_ID="05e8b285-4ce1-46a3-b4c9-f51ba67d6acc" + export RESOURCE_GROUP="workshop-202103" + export CLUSTER_NAME="coder-workshop-202103" + ``` - ```console - export SUBSCRIPTION_ID="05e8b285-4ce1-46a3-b4c9-f51ba67d6acc" - export RESOURCE_GROUP="workshop-202103" - export CLUSTER_NAME="coder-workshop-202103" - ``` + The **subscription ID** comes from your Azure subscription. The **resource + group** should be set to the resource group that owns the cluster. The + **cluster name** is the name Azure uses to refer to the required Kubernetes + cluster. 1. 
Deploy the AAD Pod Identity components to an RBAC-enabled cluster: - ```console - kubectl apply -f https://raw.githubusercontent.com/Azure/ aad-pod-identity/master/deploy/infra/deployment-rbac.yaml + ```console + kubectl apply -f https://raw.githubusercontent.com/Azure/aad-pod-identity/master/deploy/infra/deployment-rbac.yaml - # For AKS clusters, deploy the MIC and AKS add-on exception by running the following - kubectl apply -f https://raw.githubusercontent.com/Azure/ aad-pod-identity/master/deploy/infra/mic-exception.yaml - ``` + # For AKS clusters, deploy the MIC and AKS add-on exception by running the following + kubectl apply -f https://raw.githubusercontent.com/Azure/aad-pod-identity/master/deploy/infra/mic-exception.yaml + ``` - > If you're using a non-RBAC cluster, remove the `-rbac` flag from the initial - > command + > If you're using a non-RBAC cluster, remove the `-rbac` suffix from the + > manifest filename in the initial command 1. Deploy AzureIdentity and AzureIdentityBinding. To do so, create an - `azureId.yaml` file using the template below to deploy the custom resources - required to assign the identity: - - ```yaml - apiVersion: "aadpodidentity.k8s.io/v1" - kind: AzureIdentity - metadata: - annotations: - # We recommend using namespaced identities https://azure.github.io/ aad-pod-identity/docs/configure/match_pods_in_namespace/ - aadpodidentity.k8s.io/Behavior: namespaced - name: certman-identity - namespace: cert-manager # Change to your preferred namespace - spec: - type: 0 # MSI - resourceID: # Resource ID From Previous step - clientID: # Client ID from previous step - --- - apiVersion: "aadpodidentity.k8s.io/v1" - kind: AzureIdentityBinding - metadata: - name: certman-id-binding - namespace: cert-manager # Change to your preferred namespace - spec: - azureIdentity: certman-identity - selector: certman-label # The label that needs to be set on cert-manager pods - ``` + `azureId.yaml` file using the template below to deploy the custom resources + required to assign 
the identity: + + ```yaml + apiVersion: "aadpodidentity.k8s.io/v1" + kind: AzureIdentity + metadata: + annotations: + # We recommend using namespaced identities https://azure.github.io/aad-pod-identity/docs/configure/match_pods_in_namespace/ + aadpodidentity.k8s.io/Behavior: namespaced + name: certman-identity + namespace: cert-manager # Change to your preferred namespace + spec: + type: 0 # MSI + resourceID: # Resource ID from previous step + clientID: # Client ID from previous step + --- + apiVersion: "aadpodidentity.k8s.io/v1" + kind: AzureIdentityBinding + metadata: + name: certman-id-binding + namespace: cert-manager # Change to your preferred namespace + spec: + azureIdentity: certman-identity + selector: certman-label # The label that needs to be set on cert-manager pods + ``` 1. Apply the `azureId.yaml` file: - ```console - kubectl apply -f azureId.yaml - ``` + ```console + kubectl apply -f azureId.yaml + ``` 1. Set the pod identity label on the cert-manager pod: - ```yaml - spec: - template: - metadata: - labels: - aadpodidbinding: certman-label # must match selector in AzureIdentityBinding - ``` + ```yaml + spec: + template: + metadata: + labels: + aadpodidbinding: certman-label # must match selector in AzureIdentityBinding + ``` + + This label tells the cluster which pods are allowed to use the managed + identity specified earlier. For our purposes, we want the cert-manager pod to + be able to set the DNS records for DNS01 challenges. The side effect is that + any pod with that label will be able to change DNS settings in the authorized + zone. ## Step 6: Create the ACME Issuer 1. 
Create a file called `letsencrypt.yaml` (you can name it whatever you'd like) -to specify the `hostedZoneName`, `resourceGroupName` and `subscriptionID` fields -for the DNS Zone: - - ```yaml - apiVersion: cert-manager.io/v1 - kind: ClusterIssuer - metadata: - name: letsencrypt - spec: - acme: - email: user@example.com - server: https://acme-v02.api.letsencrypt.org/directory - privateKeySecretRef: - name: example-issuer-account-key - solvers: - - selector: - dnsZones: - - # Your Azure DNS Zone - dns01: - azureDNS: - subscriptionID: SUBSCRIPTION_ID - resourceGroupName: RESOURCE_GROUP - hostedZoneName: ZONE_ID - # Azure Cloud Environment, default to AzurePublicCloud - environment: AzurePublicCloud - ``` + to specify the `hostedZoneName`, `resourceGroupName` and `subscriptionID` + fields for the DNS Zone: + + ```yaml + apiVersion: cert-manager.io/v1 + kind: ClusterIssuer + metadata: + name: letsencrypt + spec: + acme: + email: user@example.com + server: https://acme-v02.api.letsencrypt.org/directory + privateKeySecretRef: + name: example-issuer-account-key + solvers: + - selector: + dnsZones: + - # Your Azure DNS Zone + dns01: + azureDNS: + subscriptionID: SUBSCRIPTION_ID + resourceGroupName: RESOURCE_GROUP + hostedZoneName: ZONE_ID + # Azure Cloud Environment, default to AzurePublicCloud + environment: AzurePublicCloud + ``` + + More information on the values in the YAML file above can be found in + [the dns01 solver configuration documentation](https://cert-manager.io/docs/configuration/acme/dns01/). 1. Apply your configuration changes: @@ -240,6 +268,10 @@ helm install coder coder/coder --namespace coder \ --wait ``` +The `hostSecretName` and `devurlsHostSecretName` are arbitrary strings that you +should set to some value that does not conflict with any other secrets in the +Coder namespace. + There are also a few additional steps to make sure that your hostname and dev URLs work. @@ -253,8 +285,8 @@ URLs work. 1. Return to Azure and go to **DNS zones**. -1. 
Create a new record for your hostname; provide `coder` as the record name, and - paste the external IP as the `value`. Save. +1. Create a new record for your hostname; provide `coder` as the record name, + and paste the external IP as the `value`. Save. 1. Create another record for your dev URLs: set it to `*.dev.exampleCo` or similar and use the same external IP as the previous step for `value`. Save. diff --git a/guides/ssl-certificates/cloudDNS.md b/guides/ssl-certificates/cloudDNS.md index 38dec55d3..55e5602c8 100644 --- a/guides/ssl-certificates/cloudDNS.md +++ b/guides/ssl-certificates/cloudDNS.md @@ -1,8 +1,8 @@ --- -title: Cloud DNS +title: Google Cloud DNS description: - Learn how to use cert-manager to set up SSL certificates using Cloud DNS for - DNS01 challenges. + Learn how to use cert-manager to set up SSL certificates using Google Cloud + DNS for DNS01 challenges. --- [cert-manager](https://cert-manager.io/) allows you to enable HTTPS on your @@ -10,7 +10,7 @@ Coder installation, regardless of whether you're using [Let's Encrypt](https://letsencrypt.org/) or you have your own certificate authority. -This guide will show you how to install cert-manager v1.0.1 and set up your +This guide will show you how to install cert-manager v1.4.0 and set up your cluster to issue Let's Encrypt certificates for your Coder installation so that you can enable HTTPS on your Coder deployment. It will also show you how to configure your Coder hostname and dev URLs. @@ -21,8 +21,10 @@ configure your Coder hostname and dev URLs. 
You must have: -- A Kubernetes cluster with internet connectivity -- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) +- A Kubernetes cluster + [of a supported version](../../setup/kubernetes/index.md#supported-kubernetes-versions) + with internet connectivity +- Installed [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) - A [Cloud DNS](https://cloud.google.com/dns) account - A [GCP Service Account](https://cloud.google.com/iam/docs/creating-managing-service-accounts) @@ -30,16 +32,14 @@ You must have: ## Step 1: Add cert-manager to your Kubernetes cluster -To add cert-manager to your cluster (which we assume to be running Kubernetes -1.16+), run: +To add cert-manager to your cluster, run: ```console -kubectl apply --validate=false -f \ -https://github.com/jetstack/cert-manager/releases/download/v1.0.1/cert-manager.yaml +kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.4.0/cert-manager.yaml ``` -> `--validate=false` is required to bypass kubectl's resource validation on the -> client-side that exists in older versions of Kubernetes. +More specifics can be found in the +[cert-manager install documentation](https://cert-manager.io/docs/installation/kubernetes/#installing-with-regular-manifests). 
Once you've started the installation process, verify that all the pods are running: @@ -91,38 +91,41 @@ secret/clouddns-dns01-solver-svc-acct created called `letsencrypt.yaml` (you can name it whatever you'd like) that includes your newly created private key: - ```yaml - apiVersion: cert-manager.io/v1alpha2 - kind: ClusterIssuer - metadata: - name: letsencrypt - spec: - acme: - privateKeySecretRef: - name: gclouddnsissuersecret - server: https://acme-v02.api.letsencrypt.org/directory - solvers: - - dns01: - clouddns: - # The ID of the GCP project - project: - # This is the secret used to access the service account - serviceAccountSecretRef: - name: clouddns-dns01-solver-svc-acct - key: key.json - ``` + ```yaml + apiVersion: cert-manager.io/v1 + kind: ClusterIssuer + metadata: + name: letsencrypt + spec: + acme: + privateKeySecretRef: + name: gclouddnsissuersecret + server: https://acme-v02.api.letsencrypt.org/directory + solvers: + - dns01: + clouddns: + # The ID of the GCP project + project: + # This is the secret used to access the service account + serviceAccountSecretRef: + name: clouddns-dns01-solver-svc-acct + key: key.json + ``` + + More information on the values in the YAML file above can be found in + [the dns01 solver configuration documentation](https://cert-manager.io/docs/configuration/acme/dns01/). 1. Apply your configuration changes: - ```console - kubectl apply -f ./letsencrypt.yaml - ``` + ```console + kubectl apply -f letsencrypt.yaml + ``` -If successful, you'll see a response similar to: + If successful, you'll see a response similar to: -```console -clusterissuer.cert-manager.io/letsencrypt created -``` + ```console + clusterissuer.cert-manager.io/letsencrypt created + ``` ## Step 5: Install Coder @@ -143,44 +146,12 @@ helm install coder coder/coder --namespace coder \ ``` The cluster-issuer will create the certificates you need, using the values -provided in the `helm install` command for the dev URL and host secret. 
The -following is a sample `certificates.yaml` file issued for your Coder instance: - -```yaml -apiVersion: cert-manager.io/v1alpha2 -kind: Certificate -metadata: - name: coder-root - namespace: # Your Coder deployment namespace -spec: - secretName: coder-root-cert # Your Coder base url secret name. Use hyphens in place of spaces. - duration: 2160h # 90d - renewBefore: 360h # 15d - dnsNames: - - domain.com # Your base domain for Coder - issuerRef: - name: letsencrypt - kind: ClusterIssuer - ---- -apiVersion: cert-manager.io/v1alpha2 -kind: Certificate -metadata: - name: coder-devurls - namespace: # Your Coder deployment namespace -spec: - secretName: coder-devurls-cert # Your Coder devurls secret name - duration: 2160h # 90d - renewBefore: 360h # 15d - dnsNames: - - "*.domain.com" # Your dev URLs wildcard subdomain - issuerRef: - name: letsencrypt - kind: ClusterIssuer -``` +provided in the `helm install` command for the dev URL and host secret. There are additional steps to make sure that your hostname and Dev URLs work. +## Step 6: Configure DNS resolution + 1. Check the contents of your namespace ```console diff --git a/guides/ssl-certificates/cloudflare.md b/guides/ssl-certificates/cloudflare.md index 37d0a1100..9f4cda47c 100644 --- a/guides/ssl-certificates/cloudflare.md +++ b/guides/ssl-certificates/cloudflare.md @@ -1,7 +1,7 @@ --- title: Cloudflare description: - Learn how to use cert-manager to set up SSL certificates using Cloudflare for + Learn how to use cert-manager to set up TLS certificates using Cloudflare for DNS01 challenges. --- @@ -10,7 +10,7 @@ Coder installation, regardless of whether you're using [Let's Encrypt](https://letsencrypt.org/) or you have your own certificate authority. 
-This guide will show you how to install cert-manager v1.0.1 and set up your +This guide will show you how to install cert-manager v1.4.0 and set up your cluster to issue Let's Encrypt certificates for your Coder installation so that you can enable HTTPS on your Coder deployment. @@ -22,20 +22,19 @@ you can enable HTTPS on your Coder deployment. You must have: -- A Kubernetes cluster [meeting Coder's - requirements](../../setup/kubernetes/index.md) with internet connectivity -- kubectl with patch version - [greater than v1.18.8, v1.17.11, or v1.16.14](https://cert-manager.io/docs/installation/upgrading/upgrading-0.15-0.16/#issue-with-older-versions-of-kubectl) +- A Kubernetes cluster + [of a supported version](../../setup/kubernetes/index.md#supported-kubernetes-versions) + with internet connectivity +- Installed [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) ## Step 1: Add cert-manager to your Kubernetes cluster ```console -$ kubectl apply --validate=false -f \ -https://github.com/jetstack/cert-manager/releases/download/v1.0.1/cert-manager-legacy.yaml +kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.4.0/cert-manager.yaml ``` -> `--validate=false` is required to bypass kubectl's resource validation on the -> client-side that exists in older versions of Kubernetes. +More specifics can be found in the +[cert-manager install documentation](https://cert-manager.io/docs/installation/kubernetes/#installing-with-regular-manifests). 
Once you've started the installation process, you can verify that all the pods are running: @@ -97,7 +96,7 @@ stringData: api-token: "" # Your Cloudflare API token (from earlier) --- -apiVersion: cert-manager.io/v1alpha2 +apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: letsencrypt @@ -125,6 +124,9 @@ spec: - "example.com" ``` +More information on the values in the YAML file above can be found in +[the dns01 solver configuration documentation](https://cert-manager.io/docs/configuration/acme/dns01/). + ### ClusterIssuers cert-manager has a concept of **Issuer** (which are per-namespace) or @@ -136,8 +138,7 @@ following changes: - Change the namespace of the secret to **cert-manager** - Change the kind of the **Issuer** to **ClusterIssuer** - Remove the namespace of the **ClusterIssuer** -- Change the additional annotations to - `cert-manager.io/cluster-issuer: letsencrypt` +- Change the annotations to `cert-manager.io/cluster-issuer: "letsencrypt"` For further information, see [Setting Up Issuers](https://docs.cert-manager.io/en/release-0.8/tasks/issuers/index.html). @@ -154,7 +155,7 @@ issuer.cert-manager.io/letsencrypt created ## Step 3: Configure Coder to issue and use the certificates -If your installation uses an external egress, you'll need to configure your +If your installation uses an external ingress, you'll need to configure your ingress to use the **coder-root-cert** and **coder-devurls-cert**. However, if you're using the default @@ -170,14 +171,18 @@ ingress: enable: true hostSecretName: coder-root-cert devurlsHostSecretName: coder-devurls-cert - additionalAnnotations: - - "cert-manager.io/issuer: letsencrypt" + annotations: + cert-manager.io/issuer: "letsencrypt" devurls: host: "*.coder.example.com" ``` +The `hostSecretName` and `devurlsHostSecretName` are arbitrary strings that you +should set to some value that does not conflict with any other secrets in the +Coder namespace. + Be sure to redeploy Coder after changing your Helm values. 
If, after -redeploying, you're not getting a valid certificate, see [cert-manager's -troubleshooting guide](https://cert-manager.io/docs/faq/acme/) for additional -assistance. +redeploying, you're not getting a valid certificate, see +[cert-manager's troubleshooting guide](https://cert-manager.io/docs/faq/acme/) +for additional assistance. diff --git a/guides/ssl-certificates/route53.md b/guides/ssl-certificates/route53.md index c4133d4bc..bdd9270c3 100644 --- a/guides/ssl-certificates/route53.md +++ b/guides/ssl-certificates/route53.md @@ -1,7 +1,7 @@ --- title: Route 53 description: - Learn how to use cert-manager to set up SSL certificates using Route 53 for + Learn how to use cert-manager to set up TLS certificates using Route 53 for DNS01 challenges. --- @@ -10,7 +10,7 @@ Coder installation, regardless of whether you're using [Let's Encrypt](https://letsencrypt.org/) or you have your own certificate authority. -This guide will show you how to install cert-manager v1.0.1 and set up your +This guide will show you how to install cert-manager v1.4.0 and set up your cluster to issue Let's Encrypt certificates for your Coder installation so that you can enable HTTPS on your Coder deployment. It will also show you how to configure your Coder hostname and dev URLs. @@ -23,9 +23,10 @@ configure your Coder hostname and dev URLs. 
You must have: -- A Kubernetes cluster [meeting Coder's - requirements](../../setup/kubernetes/index.md) with internet connectivity -- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) +- A Kubernetes cluster + [of a supported version](../../setup/kubernetes/index.md#supported-kubernetes-versions) + with internet connectivity +- Installed [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) You should also: @@ -41,7 +42,7 @@ You should also: cert-manager: ```console - kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.2.0/cert-manager.yaml + kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.4.0/cert-manager.yaml ``` 1. Check that cert-manager installs correctly by running @@ -57,6 +58,12 @@ You should also: ```console kubectl get all -n cert-manager + + NAME READY STATUS RESTARTS AGE + cert-manager-7cd5cdf774-vb2pr 1/1 Running 0 84s + cert-manager-cainjector-6546bf7765-ssxhf 1/1 Running 0 84s + cert-manager-webhook-7f68b65458-zvzn9 1/1 Running 0 84s + ``` ## Step 2: Delegate your domain names and set up DNS01 challenges @@ -65,6 +72,9 @@ Because Coder dynamically generates domains (specifically the dev URLs), your certificates need to be approved and challenged. The following steps will show you how to use Route 53 for DNS01 challenges. +If your domain name is managed by Route 53, the hosted zone will already exist +so skip to step 3. + 1. Log in to AWS Route 53. On the Dashboard, click **Hosted Zone**. 1. Click **Create Hosted Zone**. In the configuration screen, provide the @@ -86,6 +96,10 @@ you how to use Route 53 for DNS01 challenges. 
To make sure that your `clusterIssuer` can change your DNS settings, [create the required IAM role](https://cert-manager.io/docs/configuration/acme/dns01/route53/#set-up-an-iam-role) +When you create the secret for cert-manager, referenced below as +`route53-credentials`, be sure it is in the cert-manager namespace, since it's +used by the cert-manager pod to perform DNS configuration changes. + ## Step 4: Create the ACME Issuer 1. Using the text editor of your choice, create a new @@ -118,6 +132,9 @@ To make sure that your `clusterIssuer` can change your DNS settings, - yourDomain.com ``` + More information on the values in the YAML file above can be found in + [the dns01 solver configuration documentation](https://cert-manager.io/docs/configuration/acme/dns01/). + 1. Apply your configuration changes ```console @@ -127,7 +144,7 @@ To make sure that your `clusterIssuer` can change your DNS settings, If successful, you'll see a response similar to ```console - clusterissuer.cert-manager.io/letsencrypt-alt created + clusterissuer.cert-manager.io/letsencrypt created ``` ## Step 5: Install Coder @@ -148,6 +165,10 @@ helm install coder coder/coder --namespace coder \ --wait ``` +The `hostSecretName` and `devurlsHostSecretName` are arbitrary strings that you +should set to some value that does not conflict with any other secrets in the +Coder namespace. + There are also a few additional steps to make sure that your hostname and dev URLs work. @@ -170,5 +191,6 @@ URLs work. At this point, you can return to **step 6** of the [installation](../../setup/installation.md) guide to obtain the admin credentials you need to log in. If you are not getting a valid certificate after -redeploying, see [cert-manager's troubleshooting -guide](https://cert-manager.io/docs/faq/acme/) for additional assistance. +redeploying, see +[cert-manager's troubleshooting guide](https://cert-manager.io/docs/faq/acme/) +for additional assistance.
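As a sketch of the `route53-credentials` secret discussed in the Route 53 guide above (the key name and placeholder value are illustrative; the key must match whatever your ClusterIssuer's `secretAccessKeySecretRef` references), a manifest in the cert-manager namespace might look like:

```yaml
# Hypothetical manifest: the namespace must be cert-manager so the
# cert-manager pod can read the secret while performing DNS01 challenges.
apiVersion: v1
kind: Secret
metadata:
  name: route53-credentials
  namespace: cert-manager
type: Opaque
stringData:
  # Key name must match the key referenced by secretAccessKeySecretRef
  # in the ClusterIssuer.
  secret-access-key: "<your-aws-secret-access-key>"
```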
diff --git a/setup/kubernetes/aws.md b/setup/kubernetes/aws.md index b2c57c14a..31efaec42 100644 --- a/setup/kubernetes/aws.md +++ b/setup/kubernetes/aws.md @@ -25,16 +25,16 @@ machine: The node type and size that you select impact how you use Coder. When choosing, be sure to account for the number of developers you expect to use Coder, as well -as the resources they need to run their workspaces. See our guide on on [compute -resources](../../guides/admin/resources.md) for additional information. +as the resources they need to run their workspaces. See our guide on +[compute resources](../../guides/admin/resources.md) for additional information. If you expect to provision GPUs to your Coder workspaces, you **must** use an -EC2 instance from AWS' [accelerated computing instance -family](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/accelerated-computing-instances.html). +EC2 instance from AWS' +[accelerated computing instance family](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/accelerated-computing-instances.html). -> GPUs are not supported in workspaces deployed as [container-based virtual -> machines (CVMs)](../../workspaces/cvms.md) unless you're running Coder in a -> bare-metal Kubernetes environment. +> GPUs are not supported in workspaces deployed as +> [container-based virtual machines (CVMs)](../../workspaces/cvms.md) unless +> you're running Coder in a bare-metal Kubernetes environment. ## Preliminary steps diff --git a/setup/kubernetes/azure.md b/setup/kubernetes/azure.md index e87988442..b6eb2393e 100644 --- a/setup/kubernetes/azure.md +++ b/setup/kubernetes/azure.md @@ -19,17 +19,17 @@ the prompts). The node type and size that you select impact how you use Coder. When choosing, be sure to account for the number of developers you expect to use Coder, as well -as the resources they need to run their workspaces. See our guide on on [compute -resources](../../guides/admin/resources.md) for additional information.
+as the resources they need to run their workspaces. See our guide on
+[compute resources](../../guides/admin/resources.md) for additional information.

If you expect to provision GPUs to your Coder workspaces, you **must** use an
-Azure Virtual Machine with support for GPUs. See the [Azure
-documentation](https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-gpu)
+Azure Virtual Machine with support for GPUs. See the
+[Azure documentation](https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-gpu)
for more information.

-> GPUs are not supported in workspaces deployed as [container-based virtual
-> machines (CVMs)](../../workspaces/cvms.md) unless you're running Coder in a
-> bare-metal Kubernetes environment.
+> GPUs are not supported in workspaces deployed as
+> [container-based virtual machines (CVMs)](../../workspaces/cvms.md) unless
+> you're running Coder in a bare-metal Kubernetes environment.

## Step 1: Create the resource group

diff --git a/setup/kubernetes/google.md b/setup/kubernetes/google.md
index f77605207..188f6654c 100644
--- a/setup/kubernetes/google.md
+++ b/setup/kubernetes/google.md
@@ -21,18 +21,18 @@ assistance selecting the correct options for your cluster.

The node type and size that you select impact how you use Coder. When choosing,
be sure to account for the number of developers you expect to use Coder, as well
-as the resources they need to run their workspaces. See our guide on on [compute
-resources](../../guides/admin/resources.md) for additional information.
+as the resources they need to run their workspaces. See our guide on
+[compute resources](../../guides/admin/resources.md) for additional information.

If you expect to provision GPUs to your Coder workspaces, you **must** use a
-general-purpose [N1 machine
-type](https://cloud.google.com/compute/docs/machine-types#gpus) in your GKE
-cluster and add GPUs to the nodes. We recommend doing this in a separate
-GPU-specific node pool.
+general-purpose +[N1 machine type](https://cloud.google.com/compute/docs/machine-types#gpus) in +your GKE cluster and add GPUs to the nodes. We recommend doing this in a +separate GPU-specific node pool. -> GPUs are not supported in workspaces deployed as [container-based virtual -> machines (CVMs)](../../workspaces/cvms.md) unless you're running Coder in a -> bare-metal Kubernetes environment. +> GPUs are not supported in workspaces deployed as +> [container-based virtual machines (CVMs)](../../workspaces/cvms.md) unless +> you're running Coder in a bare-metal Kubernetes environment. ## Set up the GKE cluster @@ -58,8 +58,8 @@ This option uses an Ubuntu node image to enable support of allowing system-level functionalities such as Docker in Docker. > Please note that the sample script creates a `n1-highmem-4` instance; -> depending on your needs, you can choose a [larger -> size](https://cloud.google.com/compute/docs/machine-types#machine_type_comparison) +> depending on your needs, you can choose a +> [larger size](https://cloud.google.com/compute/docs/machine-types#machine_type_comparison) > instead. See [requirements](../requirements.md) for help estimating your > cluster size. @@ -99,8 +99,8 @@ requirements. It does _not_ enable the use of [CVMs](../../admin/workspace-management/cvms.md). > Please note that the sample script creates a `n1-highmem-4` instance; -> depending on your needs, you can choose a [larger -> size](https://cloud.google.com/compute/docs/machine-types#machine_type_comparison) +> depending on your needs, you can choose a +> [larger size](https://cloud.google.com/compute/docs/machine-types#machine_type_comparison) > instead. See [requirements](../requirements.md) for help estimating your > cluster size. 
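The GPU node-pool recommendation above can be sketched with `gcloud`. The pool name, zone, machine type, and accelerator type below are assumptions for illustration, not values prescribed by this guide:

```console
# Add a separate GPU-specific node pool of N1 machines to an existing GKE cluster
gcloud container node-pools create gpu-pool \
  --cluster <your-cluster> \
  --zone <your-zone> \
  --machine-type n1-standard-4 \
  --accelerator type=nvidia-tesla-t4,count=1 \
  --num-nodes 1
```

Note that GKE also requires installing NVIDIA's device drivers on GPU nodes before workloads can use the accelerators; see Google's GPU documentation for the driver-installation step.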
diff --git a/setup/kubernetes/index.md b/setup/kubernetes/index.md index 1daffa008..8935343c9 100644 --- a/setup/kubernetes/index.md +++ b/setup/kubernetes/index.md @@ -15,21 +15,19 @@ You can deploy Coder to any [compatible Kubernetes cluster]. Coder follows the version of Coder supports the previous two minor releases as well as the current release of Kubernetes at time of publication. -Coder may run successfully with -older versions of Kubernetes. However, we strongly recommend running one of the -currently-supported versions so that you receive applicable fixes, including -security updates, from the Kubernetes project maintainers. +Coder may run successfully with older versions of Kubernetes. However, we +strongly recommend running one of the currently-supported versions so that you +receive applicable fixes, including security updates, from the Kubernetes +project maintainers. -Coder continuously removes usage of deprecated Kubernetes API versions once -the minimum baseline version of Kubernetes supports the necessary features in -a stable version. We follow this policy to ensure that Coder stops -using deprecated features before they are removed from new versions of -Kubernetes. +Coder continuously removes usage of deprecated Kubernetes API versions once the +minimum baseline version of Kubernetes supports the necessary features in a +stable version. We follow this policy to ensure that Coder stops using +deprecated features before they are removed from new versions of Kubernetes. [compatible kubernetes cluster]: ../requirements.md [kubernetes upstream version support policy]: https://kubernetes.io/docs/setup/release/version-skew-policy/ - [installation guide]: ../installation.md diff --git a/setup/kubernetes/k3s.md b/setup/kubernetes/k3s.md index a07cf96df..2b8d58b3a 100644 --- a/setup/kubernetes/k3s.md +++ b/setup/kubernetes/k3s.md @@ -9,9 +9,9 @@ machine for use with Coder. 
[K3s](https://k3s.io/) is a lightweight Kubernetes distribution that works well
for single-node or multi-node clusters. This guide covers the installation of
K3s onto a new Ubuntu 20.04 LTS machine. If you want to install Coder on a local
-machine or an existing host, a [kind cluster](./kind.md) or [k3d
-cluster](https://k3d.io/) may be a better choice, as it leverages Docker to set
-up/tear down clusters with little hassle.
+machine or an existing host, a [kind cluster](./kind.md) or
+[k3d cluster](https://k3d.io/) may be a better choice, as it leverages Docker to
+set up/tear down clusters with little hassle.

> This installation method is not officially supported or tested by Coder. If
> you have questions or run into issues, feel free to reach out using our
@@ -24,11 +24,11 @@ up/tear down clusters with little hassle.

Before proceeding, please make sure that:

- You have an **Ubuntu 20.04 machine**: This can be a bare metal or a virtual
- machine.
+ machine.

- Ensure that the machine's specs satisfy Coder's [resource
- requirements](../requirements.md), since your experience with Coder is
- dependent on your system specs.
+ Ensure that the machine's specs satisfy Coder's
+ [resource requirements](../requirements.md), since your experience with Coder
+ is dependent on your system specs.

- You have the following software installed on your machine:

@@ -46,8 +46,8 @@ Before proceeding, please make sure that:

## Step 1: Change the default SSH port

> If you've enabled Networking v2 after installing Coder (you can do so by going
-to **Manage** > **Admin** > **Infrastructure**), this step to SSH into
-workspaces isn't necessary, since TURNS is used instead.
+> to **Manage** > **Admin** > **Infrastructure**), this step to SSH into
+> workspaces isn't necessary, since TURN is used instead.

To allow [SSH into workspaces](../../workspaces/ssh), you must change the host's
default SSH port to free up port `22`.
You may also need to modify your firewall @@ -55,13 +55,12 @@ to accept incoming traffic from the alternative port (e.g., if you rename port `22` to `5522`, then your firewall must accept traffic from `5522`). > If you don't know how to change the SSH port in Linux, please review this -> [guide from -> Linuxize](https://linuxize.com/post/how-to-change-ssh-port-in-linux/) +> [guide from Linuxize](https://linuxize.com/post/how-to-change-ssh-port-in-linux/) ## Step 2: Install K3s with Calico -The following steps are based on [Calico's quickstart -guide](https://docs.projectcalico.org/getting-started/kubernetes/k3s/quickstart) +The following steps are based on +[Calico's quickstart guide](https://docs.projectcalico.org/getting-started/kubernetes/k3s/quickstart) for setting up K3s. However, you will disable K3s' default network policies and Traefik in favor of Calico and nginx-ingress. @@ -71,8 +70,8 @@ Traefik in favor of Calico and nginx-ingress. curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" INSTALL_K3S_EXEC="--flannel-backend=none --cluster-cidr=192.168.0.0/16 --disable-network-policy --disable=traefik" sh - ``` - > Per the [Calico - > docs](https://docs.projectcalico.org/getting-started/kubernetes/k3s/quickstart): + > Per the + > [Calico docs](https://docs.projectcalico.org/getting-started/kubernetes/k3s/quickstart): > > If `192.168.0.0/16` is already in use within your network, you must select > a different pod network CIDR by replacing `192.168.0.0/16` in the above diff --git a/setup/kubernetes/local-preview.md b/setup/kubernetes/local-preview.md index 4d6e3d35d..760efabff 100644 --- a/setup/kubernetes/local-preview.md +++ b/setup/kubernetes/local-preview.md @@ -23,7 +23,8 @@ Before proceeding, please make sure that you have the following installed: ## Limitations -**We do not recommend using local previews for production deployments of Coder.** +**We do not recommend using local previews for production deployments of +Coder.** ### Resource allocation 
and performance @@ -76,7 +77,8 @@ curl -fsSL https://coder.com/try.sh | PORT="80" sh -s -- ``` > Note: you can edit the value of `PORT` to control where the Coder dashboard -> will be available. However, dev URLs will only work when `PORT` is set to `80`. +> will be available. However, dev URLs will only work when `PORT` is set to +> `80`. When the installation process completes, you'll see the URL and login credentials you need to access Coder: @@ -100,8 +102,8 @@ automatically configured for you, so there's no first-time setup to do. ### Dev URLs -Coder allows you to access services you're developing in your workspace via [dev -URLs](../../workspaces/devurls.md). You can enable dev URLs after you've +Coder allows you to access services you're developing in your workspace via +[dev URLs](../../workspaces/devurls.md). You can enable dev URLs after you've installed Coder. > If you do not want to enable dev URLs, you can use SSH port forwarding or @@ -231,8 +233,8 @@ curl -fsSL https://coder.com/try.sh | sh -s -- down Because Coder runs inside Docker, you should have nothing left on your machine after tear down. -If you added a custom DNS to use [dev URLs](#dev-urls), you can -revert these changes by uninstalling dnsmasq and removing the resolver config: +If you added a custom DNS to use [dev URLs](#dev-urls), you can revert these +changes by uninstalling dnsmasq and removing the resolver config: ```console # MacOS