Register a Kubernetes Cluster with a Static kubeconfig
While you can register a Kubernetes cluster with Teleport by running the Teleport Kubernetes Service on that cluster, you can also run the Teleport Kubernetes Service on a Linux host outside the cluster. This is useful if you want to decouple your Teleport deployment from the Kubernetes clusters you want to manage access to.
In this setup, the Teleport Kubernetes Service uses a kubeconfig file to authenticate to the API server of your chosen Kubernetes cluster.
Best practices for production security
When running Teleport in production, you should adhere to the following best practices to avoid security incidents:
- Avoid using sudo in production environments unless it's necessary.
- Create new, non-root users and use test instances for experimenting with Teleport.
- Run Teleport's services as a non-root user unless required. Only the SSH Service requires root access. Note that you will need root permissions (or the CAP_NET_BIND_SERVICE capability) to make Teleport listen on a port numbered lower than 1024 (e.g., 443).
- Follow the principle of least privilege. Don't give users permissive roles when a more restrictive role will do. For example, don't assign users the built-in access and editor roles, which give them permissions to access and edit all cluster resources. Instead, define roles with the minimum required permissions for each user and configure access requests to provide temporary elevated permissions.
- When you enroll Teleport resources (for example, new databases or applications), you should save the invitation token to a file. If you enter the token directly on the command line, a malicious user could view it by running the history command on a compromised system.
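For example, a minimal sketch of the token-to-file practice, using the tctl command that appears later in this guide (the file name is only an illustration):

# Write the invitation token to a file instead of echoing it on the command line
$ tctl tokens add --type=kube --format=text --ttl=1h > token.file
$ chmod 600 token.file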
You should note that these practices aren't necessarily reflected in the examples used in documentation. Examples in the documentation are primarily intended for demonstration and for development environments.
Prerequisites
- A running Teleport cluster version 17.0.0-dev or above. If you want to get started with Teleport, sign up for a free trial or set up a demo environment.
- The tctl admin tool and tsh client tool. Visit Installation for instructions on downloading tctl and tsh.
- A Kubernetes cluster you would like to access.
- A Linux host deployed on your own infrastructure to run the Teleport Kubernetes Service. This can run outside of your Kubernetes cluster.
- The kubectl command line tool installed on your workstation.
- To check that you can connect to your Teleport cluster, sign in with tsh login, then verify that you can run tctl commands using your current credentials. For example:

$ tsh login --proxy=teleport.example.com --user=email@example.com
$ tctl status
# Cluster teleport.example.com
# Version 17.0.0-dev
# CA pin sha256:abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678

If you can connect to the cluster and run the tctl status command, you can use your current credentials to run subsequent tctl commands from your workstation. If you host your own Teleport cluster, you can also run tctl commands on the computer that hosts the Teleport Auth Service for full permissions.
Step 1/4. Generate a kubeconfig file
The Teleport Kubernetes Service uses a kubeconfig file to authenticate to your Kubernetes cluster. In this section, we will generate a kubeconfig file so we can configure the Teleport Kubernetes Service to use it later in this guide.
Ensure your context is correct
First, configure your local kubectl command to point at the Kubernetes cluster you want to register. You can use the following command to verify that the correct cluster is selected:
$ kubectl config get-contexts
Use this command to switch to the cluster assigned to CONTEXT_NAME:
# e.g., my-context
$ CONTEXT_NAME=context-name
$ kubectl config use-context ${CONTEXT_NAME?}
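As an optional check that the switch took effect, you can print the active context:

# Should print the context you just selected
$ kubectl config current-context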
Run the script
On your workstation, download Teleport's get-kubeconfig.sh script, which you will use to generate the kubeconfig file:
$ curl -OL \
https://raw.githubusercontent.com/gravitational/teleport/v17.0.0-dev/examples/k8s-auth/get-kubeconfig.sh
get-kubeconfig.sh creates a service account for the Teleport Kubernetes Service that can get Kubernetes pods as well as impersonate users, groups, and other service accounts. The Teleport Kubernetes Service uses this service account to manage access to resources in your Kubernetes cluster. The script also ensures that there is a Kubernetes Secret in your cluster to store service account credentials.
get-kubeconfig.sh also creates a namespace called teleport for the resources it deploys, though you can choose a different name by assigning the TELEPORT_NAMESPACE environment variable in the shell where you run the script.
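For example, to have the script create its resources in a hypothetical namespace named teleport-agent instead, you could invoke it like this:

# Use a custom namespace for the resources created by the script
$ TELEPORT_NAMESPACE=teleport-agent bash get-kubeconfig.sh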
After creating resources, get-kubeconfig.sh writes a new kubeconfig file called kubeconfig in the directory where you run the script.
Run the get-kubeconfig.sh script:
$ bash get-kubeconfig.sh
The script is successful if you see this message:
Done!
Move the kubeconfig file to the host you are using to run the Teleport Kubernetes Service. We will assume that the kubeconfig file exists at /var/lib/teleport/kubeconfig.
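For example, assuming you have SSH access to the host (the hostname below is hypothetical), one way to copy the file is:

# Copy the generated kubeconfig to the Teleport Kubernetes Service host
$ scp kubeconfig user@kube-service-host:/tmp/kubeconfig
# On the host, move it into the expected location
$ sudo mkdir -p /var/lib/teleport
$ sudo mv /tmp/kubeconfig /var/lib/teleport/kubeconfig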
Connecting multiple Kubernetes clusters?
You can connect multiple Kubernetes clusters to Teleport from one kubeconfig file if it contains multiple entries. Use merge-kubeconfigs.sh to combine multiple kubeconfig files generated by get-kubeconfig.sh.
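If you would rather not use the script, kubectl can also merge kubeconfig files; here is a minimal sketch, assuming two generated files named kubeconfig-a and kubeconfig-b:

# Merge two kubeconfig files into a single file named "kubeconfig"
$ KUBECONFIG=kubeconfig-a:kubeconfig-b kubectl config view --flatten > kubeconfig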
Step 2/4. Set up the Teleport Kubernetes Service
In this step, you will install the Teleport Kubernetes Service and configure it to use the kubeconfig file you generated to access a Kubernetes cluster.
Get a join token
Establish trust between your Teleport cluster and your new Kubernetes Service instance by creating a join token:
$ tctl tokens add --type=kube --format=text --ttl=1h
abcd123-insecure-do-not-use-this
On the host where you are running the Teleport Kubernetes Service, create a file called /tmp/token that consists only of your token (replace join-token below with the token you generated):
$ echo join-token | sudo tee /tmp/token
Install the Teleport Kubernetes Service
Run the following commands on the host where you will install the Teleport Kubernetes Service:
- Assign edition to one of the following, depending on your Teleport edition:

  Edition                             Value
  ----------------------------------  ----------
  Teleport Enterprise Cloud           cloud
  Teleport Enterprise (Self-Hosted)   enterprise
  Teleport Community Edition          oss
- Get the version of Teleport to install. If you have automatic agent updates enabled in your cluster, query the latest Teleport version that is compatible with the updater:

$ TELEPORT_DOMAIN=example.teleport.com
$ TELEPORT_VERSION="$(curl https://$TELEPORT_DOMAIN/v1/webapi/automaticupgrades/channel/default/version | sed 's/v//')"

Otherwise, get the version of your Teleport cluster:

$ TELEPORT_DOMAIN=example.teleport.com
$ TELEPORT_VERSION="$(curl https://$TELEPORT_DOMAIN/v1/webapi/ping | jq -r '.server_version')"

- Install Teleport on your Linux server, replacing edition with the value for your edition from the table above:

$ curl https://cdn.teleport.dev/install-v15.4.11.sh | bash -s ${TELEPORT_VERSION} edition
The installation script detects the package manager on your Linux server and uses it to install Teleport binaries. To customize your installation, learn about the Teleport package repositories in the installation guide.
Configure the Teleport Kubernetes Service
On the host where you will run the Teleport Kubernetes Service, create a file at /etc/teleport.yaml with the following content:
version: v3
teleport:
  join_params:
    token_name: "/tmp/token"
    method: token
  proxy_server: teleport.example.com:443
auth_service:
  enabled: off
proxy_service:
  enabled: off
ssh_service:
  enabled: off
kubernetes_service:
  enabled: "yes"
  kubeconfig_file: "/var/lib/teleport/kubeconfig"
  labels:
    "region": "us-east1"
Edit /etc/teleport.yaml to replace teleport.example.com:443 with the host and port of your Teleport Proxy Service or Teleport Cloud tenant, e.g., mytenant.teleport.sh:443.
When using kubeconfig_file, Amazon EKS users may need to replace illegal characters in the context names. Supported characters are alphanumeric characters, ., _, and -. EKS typically includes : and @ in its kubeconfig files, which are not allowed in Teleport.
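One possible fix (the ARN-style context name below is only an illustration) is to rename the offending context with kubectl against a local copy of the kubeconfig file before placing it on the host:

# Rename a context containing ":" and "@" to a Teleport-compatible name
$ kubectl --kubeconfig ./kubeconfig config rename-context \
    "arn:aws:eks:us-east-1:111111111111:cluster/my-cluster" my-cluster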
Start the Teleport Kubernetes Service
Configure the Teleport Kubernetes Service to start automatically when the host boots up by creating a systemd service for it. The instructions depend on how you installed the Teleport Kubernetes Service.
Package Manager:

On the host where you will run the Teleport Kubernetes Service, enable and start Teleport:
$ sudo systemctl enable teleport
$ sudo systemctl start teleport
TAR Archive:

On the host where you will run the Teleport Kubernetes Service, create a systemd service configuration for Teleport, enable the Teleport service, and start Teleport:
$ sudo teleport install systemd -o /etc/systemd/system/teleport.service
$ sudo systemctl enable teleport
$ sudo systemctl start teleport
You can check the status of the Teleport Kubernetes Service with systemctl status teleport and view its logs with journalctl -fu teleport.
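For example:

# Confirm the service is running
$ systemctl status teleport
# Follow the Teleport Kubernetes Service logs
$ journalctl -fu teleport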
Step 3/4. Grant access to your Teleport user
Enable your Teleport user to access resources in your Kubernetes cluster so you can connect to the cluster later in this guide.
To authenticate to a Kubernetes cluster via Teleport, your Teleport user's roles must allow access as at least one Kubernetes user or group.
- Retrieve a list of your current user's Teleport roles. The example below requires the jq utility for parsing JSON:

$ CURRENT_ROLES=$(tsh status -f json | jq -r '.active.roles | join ("\n")')

- Retrieve the Kubernetes groups your roles allow you to access:

$ echo "$CURRENT_ROLES" | xargs -I{} tctl get roles/{} --format json | \
  jq '.[0].spec.allow.kubernetes_groups[]?'

- Retrieve the Kubernetes users your roles allow you to access:

$ echo "$CURRENT_ROLES" | xargs -I{} tctl get roles/{} --format json | \
  jq '.[0].spec.allow.kubernetes_users[]?'

- If the output of one of the previous two commands is non-empty, your user can access at least one Kubernetes user or group, so you can proceed to the next step.
- If both lists are empty, create a Teleport role for the purpose of this guide that can view Kubernetes resources in your cluster.

Create a file called kube-access.yaml with the following content:

kind: role
metadata:
  name: kube-access
version: v7
spec:
  allow:
    kubernetes_labels:
      '*': '*'
    kubernetes_resources:
      - kind: '*'
        namespace: '*'
        name: '*'
        verbs: ['*']
    kubernetes_groups:
    - viewers
  deny: {}

- Apply your changes:

$ tctl create -f kube-access.yaml
- Assign the kube-access role to your Teleport user by running the appropriate commands for your authentication provider (Local User, GitHub, SAML, or OIDC):
Local User:

- Retrieve your local user's roles as a comma-separated list:

$ ROLES=$(tsh status -f json | jq -r '.active.roles | join(",")')

- Edit your local user to add the new role:

$ tctl users update $(tsh status -f json | jq -r '.active.username') \
  --set-roles "${ROLES?},kube-access"

- Sign out of the Teleport cluster and sign in again to assume the new role.
GitHub:

- Open your github authentication connector in a text editor:

$ tctl edit github/github

- Edit the github connector, adding kube-access to the teams_to_roles section. The team you should map to this role depends on how you have designed your organization's role-based access controls (RBAC). However, the team must include your user account and should be the smallest team possible within your organization.

Here is an example:

teams_to_roles:
  - organization: octocats
    team: admins
    roles:
      - access
+     - kube-access

- Apply your changes by saving and closing the file in your editor.

- Sign out of the Teleport cluster and sign in again to assume the new role.
SAML:

- Retrieve your saml configuration resource:

$ tctl get --with-secrets saml/mysaml > saml.yaml

Note that the --with-secrets flag adds the value of spec.signing_key_pair.private_key to the saml.yaml file. Because this key contains a sensitive value, you should remove the saml.yaml file immediately after updating the resource.

- Edit saml.yaml, adding kube-access to the attributes_to_roles section. The attribute you should map to this role depends on how you have designed your organization's role-based access controls (RBAC). However, the group must include your user account and should be the smallest group possible within your organization.

Here is an example:

attributes_to_roles:
  - name: "groups"
    value: "my-group"
    roles:
      - access
+     - kube-access

- Apply your changes:

$ tctl create -f saml.yaml

- Sign out of the Teleport cluster and sign in again to assume the new role.
OIDC:

- Retrieve your oidc configuration resource:

$ tctl get oidc/myoidc --with-secrets > oidc.yaml

Note that the --with-secrets flag adds the value of spec.signing_key_pair.private_key to the oidc.yaml file. Because this key contains a sensitive value, you should remove the oidc.yaml file immediately after updating the resource.

- Edit oidc.yaml, adding kube-access to the claims_to_roles section. The claim you should map to this role depends on how you have designed your organization's role-based access controls (RBAC). However, the group must include your user account and should be the smallest group possible within your organization.

Here is an example:

claims_to_roles:
  - name: "groups"
    value: "my-group"
    roles:
      - access
+     - kube-access

- Apply your changes:

$ tctl create -f oidc.yaml

- Sign out of the Teleport cluster and sign in again to assume the new role.
- Configure the viewers group in your Kubernetes cluster to have the built-in view ClusterRole. When your Teleport user assumes the kube-access role and sends requests to the Kubernetes API server, the Teleport Kubernetes Service impersonates the viewers group and proxies the requests.

Create a file called viewers-bind.yaml with the following contents, binding the built-in view ClusterRole with the viewers group you enabled your Teleport user to access:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: viewers-crb
subjects:
- kind: Group
  # Bind the group "viewers", corresponding to the kubernetes_groups we assigned our "kube-access" role above
  name: viewers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  # "view" is a default ClusterRole that grants read-only access to resources
  # See: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles
  name: view
  apiGroup: rbac.authorization.k8s.io

- Apply the ClusterRoleBinding with kubectl:

$ kubectl apply -f viewers-bind.yaml
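To optionally confirm that the binding was created, using the name from the example above:

$ kubectl get clusterrolebinding viewers-crb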
Step 4/4. Access your Kubernetes cluster
After Teleport starts with the above configuration, you should be able to see the Kubernetes cluster you registered:
$ tsh kube ls
Kube Cluster Name Labels          Selected
----------------- --------------- --------
my-cluster        region=us-east1
To access your cluster, run the following command, replacing my-cluster with the name of the cluster you would like to access:
$ tsh kube login my-cluster
Logged into kubernetes cluster "my-cluster". Try 'kubectl version' to test the connection.
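If you are using the example kube-access role from this guide, your requests are proxied as the viewers group, which is bound to the read-only view ClusterRole, so a read-only command such as the following should succeed:

# List pods across all namespaces through Teleport's proxied access
$ kubectl get pods --all-namespaces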