Creating a Kubernetes cluster using kops

Before deploying anything to a Kubernetes cluster, you have to create the cluster first. TokenD currently supports deployment only to clusters running in AWS.

Defining variables

# To make sure kops will load your ~/.aws/config
export AWS_SDK_LOAD_CONFIG=1

# Tell tools below which profile they should use to authenticate with AWS
# If you don't know what it is, leave it "default", this should work
export AWS_PROFILE=default

# k8s cluster name, usually it will be the same as your namespace
NAME=example

# AWS region to deploy to
REGION=us-east-2

Choose the region wisely: different regions have different costs, latencies, etc. A table of region codes can be found here
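The variables above get combined into several derived names later in this guide. A quick local sanity check of what they produce, using the default values from the section above:

```shell
# Defaults from the section above
NAME=example
REGION=us-east-2

# Names derived from these variables in the commands below
CLUSTER_NAME="$NAME.k8s.local"        # cluster name passed to kops
STATE_STORE="s3://$NAME-kops-state"   # S3 bucket holding kops state
ZONE="${REGION}a"                     # availability zone for the nodes

echo "$CLUSTER_NAME $STATE_STORE $ZONE"
# → example.k8s.local s3://example-kops-state us-east-2a
```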

Creating bucket for kops state

kops will store the cluster's state in this S3 bucket. If you later want to update your cluster (e.g. change the number or size of worker nodes), you will need this bucket. The Kubernetes config file for connecting to the cluster will also be stored in this bucket.

# create S3 bucket for storing kops state
aws s3 mb s3://$NAME-kops-state --region $REGION
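Optionally, the kops documentation recommends enabling versioning on the state bucket, so that earlier revisions of the cluster state can be recovered if something goes wrong. A sketch, assuming the bucket name from the command above (the `aws` call requires valid AWS credentials, so it is guarded here):

```shell
NAME=example
BUCKET="$NAME-kops-state"

# Versioning lets you roll the kops state back to an earlier revision
if command -v aws >/dev/null 2>&1; then
  aws s3api put-bucket-versioning \
    --bucket "$BUCKET" \
    --versioning-configuration Status=Enabled
fi
```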

Spin up k8s cluster

Just run this command, give it some time, and you are there! If you are interested in the command flags, they are described here.

If copy-pasted, this command will create a cluster consisting of:

  • four c5.large worker nodes;
  • one t3.medium master node;
  • running in private network with ingress gateway.
kops create cluster \
--name $NAME.k8s.local \
--zones=${REGION}a \
--master-zones=${REGION}a \
--networking kube-router \
--dns=private \
--topology=private \
--node-count=4 \
--node-size=c5.large \
--master-count=1 \
--master-size=t3.medium \
--ssh-public-key ~/.ssh/id_rsa.pub \
--state s3://$NAME-kops-state \
--cloud=aws \
--yes

To validate that the cluster is up and running, you can run the command below. Before executing it, please allow the cluster at least half an hour to warm up.

# make sure everything provisioned correctly
kops validate cluster --state s3://$NAME-kops-state
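Once validation passes, kubectl needs a kubeconfig for the new cluster. kops can write one from the same state bucket; a sketch, assuming the names defined earlier (guarded because it needs kops installed and AWS credentials):

```shell
NAME=example
STATE_STORE="s3://$NAME-kops-state"

# Writes cluster credentials into ~/.kube/config (or $KUBECONFIG);
# on newer kops versions you may also need the --admin flag to embed
# admin credentials
if command -v kops >/dev/null 2>&1; then
  kops export kubecfg "$NAME.k8s.local" --state "$STATE_STORE"
fi
```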

Allow cluster instances to access S3

TokenD uses S3 buckets for storing history and KYC documents. AWS allows a client to be authorized via an IAM role attached to the machine running the client. By executing the following command, you'll grant this access to the worker nodes of your cluster.

aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --role-name nodes.$NAME.k8s.local
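To confirm the policy was attached, you can list the policies on the node role. A sketch using the role name from the command above (guarded, since the `aws` call requires valid credentials):

```shell
NAME=example
ROLE="nodes.$NAME.k8s.local"

# Should list AmazonS3FullAccess among the attached policies
if command -v aws >/dev/null 2>&1; then
  aws iam list-attached-role-policies --role-name "$ROLE"
fi
```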
