[Hands-on] EKS + Karpenter Deployment Exercise

July 7, 2025

Overview

Target audience

  • This post is a write-up of a hands-on with Amazon EKS + Karpenter. It is aimed at EKS beginners, readers who want to learn how to introduce Karpenter, and those who want to access the control plane through a private endpoint. It also includes some error cases and troubleshooting notes based on my own trial-and-error verification.
  • This hands-on is based on the official Karpenter getting-started guide (https://karpenter.sh/docs/getting-started/getting-started-with-karpenter/).

About Amazon EKS (Amazon Elastic Kubernetes Service)

  • Kubernetes consists of a control plane, which manages the cluster as a whole, and a data plane (the set of worker nodes) where containers actually run. With EKS, the most complex part—building, operating, and scaling the control plane—is handled by the AWS managed service. In this hands-on, management of the nodes that make up the data plane is automated with Karpenter, described next.
  • There are three ways to reach the Kubernetes API that fronts the control plane: a public endpoint, a combined public-and-private endpoint, and a private endpoint. This hands-on uses the private endpoint, which is closer to real production operation. (A small CLI sketch for checking and switching the endpoint setting follows this list.)
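  • For reference, a minimal CLI sketch (not part of this hands-on) for checking and switching the endpoint configuration of an existing cluster; the cluster name is a placeholder:

# Show the current endpoint access settings.
aws eks describe-cluster --name my-cluster \
  --query "cluster.resourcesVpcConfig.{public:endpointPublicAccess,private:endpointPrivateAccess}"
# Switch to private-only access.
aws eks update-cluster-config --name my-cluster \
  --resources-vpc-config endpointPublicAccess=false,endpointPrivateAccess=true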

About Karpenter

  • Karpenter is a high-performance open-source autoscaler that automatically manages the worker nodes that make up the EKS data plane.
  • Karpenter takes a different approach from conventional autoscalers, as outlined below.
    • Launch triggered by unschedulable Pods: when a Pod cannot be scheduled because no existing node has free capacity, Karpenter detects it immediately and launches, just in time, an EC2 instance whose specs best match the resources (CPU, memory, and so on) that Pod requires.
    • Freedom from node group management: there is no need to prepare multiple node groups with finely tuned EC2 instance types in advance; Karpenter automatically selects the most suitable instance for the situation.
    • Maximizing cost efficiency: by terminating nodes promptly once the workload is gone and actively using cheaper Spot instances, it can significantly reduce operating costs.
  • In this way, combining EKS and Karpenter lets you build a cost-efficient container platform while lowering the operational burden. (A few commands for observing this behavior are sketched below.)
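  • A rough sketch of how to see this trigger in action once Karpenter is installed (namespace and resource names follow the rest of this post):

kubectl get pods --field-selector=status.phase=Pending      # Pods waiting for capacity
kubectl get events --field-selector=reason=FailedScheduling # why they could not be scheduled
kubectl get nodeclaims                                       # EC2 capacity Karpenter launched for them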

 

Hands-on 1: Create the VPC and Related Resources

Creating the VPC and related resources with CloudFormation

  • Create the VPC and related resources with CloudFormation. From the AWS Management Console, open CloudFormation.
  • From "Create stack" at the top right of the screen, choose "With new resources (standard)".
  • Choose "Choose an existing template", then under "Specify template" select "Upload a template file", click "Choose file", and select the CFn template file.

 

  • The template file used this time is the YAML below. It is a customized version of the AWS CloudFormation VPC template described in the AWS documentation.
Description:  This template deploys a VPC, with a pair of public and private subnets spread
  across two Availability Zones. It deploys an internet gateway, with a default
  route on the public subnets. It deploys a pair of NAT gateways (one in each AZ),
  and default routes for them in the private subnets.

Parameters:
  EnvironmentName:
    Description: An environment name that is prefixed to resource names
    Type: String

  VpcCIDR:
    Description: Please enter the IP range (CIDR notation) for this VPC
    Type: String
    Default: 10.192.0.0/16

  PublicSubnet1CIDR:
    Description: Please enter the IP range (CIDR notation) for the public subnet in the first Availability Zone
    Type: String
    Default: 10.192.10.0/24

  PublicSubnet2CIDR:
    Description: Please enter the IP range (CIDR notation) for the public subnet in the second Availability Zone
    Type: String
    Default: 10.192.11.0/24

  PrivateSubnet1CIDR:
    Description: Please enter the IP range (CIDR notation) for the private subnet in the first Availability Zone
    Type: String
    Default: 10.192.20.0/24

  PrivateSubnet2CIDR:
    Description: Please enter the IP range (CIDR notation) for the private subnet in the second Availability Zone
    Type: String
    Default: 10.192.21.0/24

Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: !Ref VpcCIDR
      EnableDnsSupport: true
      EnableDnsHostnames: true
      Tags:
        - Key: Name
          Value: !Sub ${EnvironmentName}-vpc

  InternetGateway:
    Type: AWS::EC2::InternetGateway
    Properties:
      Tags:
        - Key: Name
          Value: !Sub ${EnvironmentName}-igw

  InternetGatewayAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      InternetGatewayId: !Ref InternetGateway
      VpcId: !Ref VPC

  PublicSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      AvailabilityZone: !Select [ 0, !GetAZs '' ]
      CidrBlock: !Ref PublicSubnet1CIDR
      MapPublicIpOnLaunch: true
      Tags:
        - Key: Name
          Value: !Sub ${EnvironmentName}-sntpub1

  PublicSubnet2:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      AvailabilityZone: !Select [ 1, !GetAZs  '' ]
      CidrBlock: !Ref PublicSubnet2CIDR
      MapPublicIpOnLaunch: true
      Tags:
        - Key: Name
          Value: !Sub ${EnvironmentName}-sntpub2

  PrivateSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      AvailabilityZone: !Select [ 0, !GetAZs  '' ]
      CidrBlock: !Ref PrivateSubnet1CIDR
      MapPublicIpOnLaunch: false
      Tags:
        - Key: Name
          Value: !Sub ${EnvironmentName}-sntpri1

  PrivateSubnet2:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      AvailabilityZone: !Select [ 1, !GetAZs  '' ]
      CidrBlock: !Ref PrivateSubnet2CIDR
      MapPublicIpOnLaunch: false
      Tags:
        - Key: Name
          Value: !Sub ${EnvironmentName}-sntpri2

  NatGateway1EIP:
    Type: AWS::EC2::EIP
    DependsOn: InternetGatewayAttachment
    Properties:
      Domain: vpc

  NatGateway2EIP:
    Type: AWS::EC2::EIP
    DependsOn: InternetGatewayAttachment
    Properties:
      Domain: vpc

  NatGateway1:
    Type: AWS::EC2::NatGateway
    Properties:
      AllocationId: !GetAtt NatGateway1EIP.AllocationId
      SubnetId: !Ref PublicSubnet1

  NatGateway2:
    Type: AWS::EC2::NatGateway
    Properties:
      AllocationId: !GetAtt NatGateway2EIP.AllocationId
      SubnetId: !Ref PublicSubnet2

  PublicRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref VPC
      Tags:
        - Key: Name
          Value: !Sub ${EnvironmentName}-rtbpub

  DefaultPublicRoute:
    Type: AWS::EC2::Route
    DependsOn: InternetGatewayAttachment
    Properties:
      RouteTableId: !Ref PublicRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref InternetGateway

  PublicSubnet1RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      RouteTableId: !Ref PublicRouteTable
      SubnetId: !Ref PublicSubnet1

  PublicSubnet2RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      RouteTableId: !Ref PublicRouteTable
      SubnetId: !Ref PublicSubnet2

  PrivateRouteTable1:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref VPC
      Tags:
        - Key: Name
          Value: !Sub ${EnvironmentName}-rtbpri1

  DefaultPrivateRoute1:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref PrivateRouteTable1
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NatGateway1

  PrivateSubnet1RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      RouteTableId: !Ref PrivateRouteTable1
      SubnetId: !Ref PrivateSubnet1

  PrivateRouteTable2:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref VPC
      Tags:
        - Key: Name
          Value: !Sub ${EnvironmentName}-rtbpri2

  DefaultPrivateRoute2:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref PrivateRouteTable2
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NatGateway2

  PrivateSubnet2RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      RouteTableId: !Ref PrivateRouteTable2
      SubnetId: !Ref PrivateSubnet2

Outputs:
  VPC:
    Description: A reference to the created VPC
    Value: !Ref VPC

  PublicSubnets:
    Description: A list of the public subnets
    Value: !Join [ ",", [ !Ref PublicSubnet1, !Ref PublicSubnet2 ]]

  PrivateSubnets:
    Description: A list of the private subnets
    Value: !Join [ ",", [ !Ref PrivateSubnet1, !Ref PrivateSubnet2 ]]

  PublicSubnet1:
    Description: A reference to the public subnet in the 1st Availability Zone
    Value: !Ref PublicSubnet1

  PublicSubnet2:
    Description: A reference to the public subnet in the 2nd Availability Zone
    Value: !Ref PublicSubnet2

  PrivateSubnet1:
    Description: A reference to the private subnet in the 1st Availability Zone
    Value: !Ref PrivateSubnet1

  PrivateSubnet2:
    Description: A reference to the private subnet in the 2nd Availability Zone
    Value: !Ref PrivateSubnet2

 

  • Enter a stack name.
  • Specify the parameters. Give EnvironmentName any name you like; the other parameters (the VPC and subnet CIDRs) can be left unchanged.

 

  • No particular stack options are specified.
  • Review the settings and click "Submit".
  • In the stack list, confirm that the stack's status shows "CREATE_COMPLETE". Stack creation takes a little while to complete. (The same stack can also be deployed from the CLI, as sketched below.)
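  • A hedged CLI alternative to the console steps above, assuming the template is saved as vpc.yaml (the stack name and EnvironmentName are examples):

aws cloudformation deploy \
  --stack-name eks-handson-vpc \
  --template-file vpc.yaml \
  --parameter-overrides EnvironmentName=niikawa-testenv
aws cloudformation describe-stacks --stack-name eks-handson-vpc \
  --query "Stacks[0].StackStatus"   # wait for CREATE_COMPLETE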

 

  • To access the control plane through a private endpoint, VPC endpoints are required. However, because an error occurs at EKS cluster creation time, the VPC endpoints are created in a later step. (The endpoints typically involved are sketched below.)
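  • For reference, a rough sketch of the VPC endpoints a private EKS cluster typically needs so that nodes and clients can reach AWS services without internet access (the VPC, subnet, security group, and route table IDs are placeholders):

for service in ec2 ecr.api ecr.dkr sts; do
  aws ec2 create-vpc-endpoint \
    --vpc-id vpc-xxxxxxxx \
    --vpc-endpoint-type Interface \
    --service-name "com.amazonaws.us-west-2.${service}" \
    --subnet-ids subnet-aaaaaaaa subnet-bbbbbbbb \
    --security-group-ids sg-xxxxxxxx
done
# Plus an S3 gateway endpoint, used when pulling image layers from ECR.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-xxxxxxxx \
  --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.us-west-2.s3 \
  --route-table-ids rtb-xxxxxxxx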

 

 

Hands-on 2: Build a Docker Image Using CloudShell

Creating the ECR repository

  • From the AWS Management Console, open ECR (Elastic Container Registry), then select Repositories under Amazon ECR.
  • Choose "Create repository".

 

  • Select "Private" for the visibility setting, specify a repository name, and click "Create repository". (A CLI alternative is sketched below.)

  • The repository has been created.
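  • The repository can also be created from the CLI; the name below matches the one used later in this post:

aws ecr create-repository --repository-name niikawa-testenv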

 

Building the Docker image and pushing it to the repository

  • Run the following commands from the CloudShell terminal.
mkdir ecs_hello
cd ecs_hello/
mkdir conf
  • Create a Dockerfile under ecs_hello/.
    • A Dockerfile is the definition file from which a container image is built.
    • It defines, among other things, the existing image to use as the base (the FROM instruction), files to copy into the image (the ADD instruction), and commands to run while the image is being built (the RUN instruction).
  • The Dockerfile used this time is shown below.
FROM nginx:latest

ADD conf/nginx.conf /etc/nginx/
RUN echo "Hello EKS!" > /usr/share/nginx/html/index.html
  • Create nginx.conf under ecs_hello/conf/.
  • The nginx.conf used this time is shown below.
    • The changes from the default configuration are worker_processes set to 1 and autoindex on.
    • The document root is /usr/share/nginx/html.
# For more information on configuration, see:
#   * Official English Documentation: http://nginx.org/en/docs/
#   * Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 4096;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen       80;
        listen       [::]:80;
        server_name  _;
        root         /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
            autoindex on;
        }

        error_page 404 /404.html;
        location = /404.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}

 

  • To check the commands for building and pushing the Docker image, go back to the ECR console and select the repository you created.
  • Click the "View push commands" button; the following screen is displayed.

 

  • Return to the CloudShell terminal, build the Docker image, and push it to the ECR repository, using commands 1–4 shown on the ECR repository screen. (In the commands below, 111111111111 is a placeholder for the account ID; adjust the region to match your environment.)
ecs_hello $ aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin 111111111111.dkr.ecr.us-west-2.amazonaws.com
WARNING! Your password will be stored unencrypted in /home/cloudshell-user/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
ecs_hello $ docker build -t niikawa-testenv .
[+] Building 20.5s (8/8) FINISHED
 => [internal] load build definition from Dockerfile
 => => transferring dockerfile: 200B
 => [internal] load metadata for docker.io/library/nginx:latest
 => [internal] load .dockerignore
 => => transferring context: 2B
 => [internal] load build context
 => => transferring context: 1.79kB
 => [1/3] FROM docker.io/library/nginx:latest@sha256:93230cd54060f497430c7a120e2347894846a81b6a5dd2110f7362c5423b4abc
 => => resolve docker.io/library/nginx:latest@sha256:93230cd54060f497430c7a120e2347894846a81b6a5dd2110f7362c5423b4abc
 => => sha256:9592f5595f2b12c2ede5d2ce9ec936b33fc328225a00b3901b96019e3dd83528 8.58kB / 8.58kB
 => => sha256:3da95a905ed546f99c4564407923a681757d89651a388ec3f1f5e9bf5ed0b39d 28.23MB / 28.23MB
 => => sha256:9bbbd7ee45b78c411208ea69e41a52a06a7e3872dfd0235e79bbb637e4789c1d 626B / 626B
 => => sha256:13920fe73b382aa9017f7cf38b1377bc46ffb605fe980eb00f61aad26311ebf7 2.29kB / 2.29kB
 => => sha256:6c8e51cf00871b029c189d3e2145e2307bbba361bb62e20b696c18b2e8cd2f52 43.97MB / 43.97MB
 => => sha256:93230cd54060f497430c7a120e2347894846a81b6a5dd2110f7362c5423b4abc 10.25kB / 10.25kB
 => => sha256:48670a58a68fc689138b916491d7c5aa6ea6fb2e4227a7edef275ec7003c9569 956B / 956B
 => => sha256:ce7132063a5679c245d63b972b414a24de1686b42f8231c8df6f703c50a5ac38 404B / 404B
 => => sha256:ee95256df0301df55618ec5c24f6bf41b6d005d3026e0e67e95fef0b0fbc2691 1.40kB / 1.40kB
 => => sha256:23e05839d684c6d82bd5fd45968bb8997da3a639f1fe8ca502a4edbcffa8655d 1.21kB / 1.21kB
 => => extracting sha256:3da95a905ed546f99c4564407923a681757d89651a388ec3f1f5e9bf5ed0b39d
 => => extracting sha256:6c8e51cf00871b029c189d3e2145e2307bbba361bb62e20b696c18b2e8cd2f52
 => => extracting sha256:9bbbd7ee45b78c411208ea69e41a52a06a7e3872dfd0235e79bbb637e4789c1d
 => => extracting sha256:48670a58a68fc689138b916491d7c5aa6ea6fb2e4227a7edef275ec7003c9569
 => => extracting sha256:ce7132063a5679c245d63b972b414a24de1686b42f8231c8df6f703c50a5ac38
 => => extracting sha256:23e05839d684c6d82bd5fd45968bb8997da3a639f1fe8ca502a4edbcffa8655d
 => => extracting sha256:ee95256df0301df55618ec5c24f6bf41b6d005d3026e0e67e95fef0b0fbc2691
 => [2/3] ADD conf/nginx.conf /etc/nginx/
 => [3/3] RUN echo "Hello EKS!" > /usr/share/nginx/html/index.html
 => exporting to image
 => => exporting layers
 => => writing image sha256:df4b7a47d4bf3e205adecfa92071802d45e3210e8b97e6c6d1c668048253c791
 => => naming to docker.io/library/niikawa-testenv
ecs_hello $ docker tag niikawa-testenv:latest 111111111111.dkr.ecr.us-west-2.amazonaws.com/niikawa-testenv:latest
ecs_hello $ docker push 111111111111.dkr.ecr.us-west-2.amazonaws.com/niikawa-testenv:latest
The push refers to repository [111111111111.dkr.ecr.us-west-2.amazonaws.com/niikawa-testenv]
47b688823c04: Pushed 
44f91dac31b0: Pushed 
07eaefc6ebf2: Pushed 
de2ef8ceb76a: Pushed 
e6c40b7bdc83: Pushed 
f941308035cf: Pushed 
81a9d30670ec: Pushed 
1bf33238ab09: Pushed 
1bb35e8b4de1: Pushed 
latest: digest: sha256:9390ddcc28aec8ee8857b41ef38b22dd387b02e1075d89c2b6dc6d4b2e673e07 size: 2192

 

  • The image has been pushed to the ECR repository; its digest matches the one reported by the docker push command. (An optional CLI check is sketched below.)
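  • Optionally, confirm the pushed image and its digest from the CLI (a hedged sketch using the same repository name):

aws ecr describe-images --repository-name niikawa-testenv \
  --query "imageDetails[].{tags:imageTags,digest:imageDigest}"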

 

Hands-on 3: Install Tools in CloudShell
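  • CloudShell already provides the AWS CLI and kubectl, so here we add eksctl (downloaded from the GitHub releases page) and Helm (installed with the official get-helm-3 script), then check each version. The transcript below shows every step; note that kubectl version fails to reach a server at this point simply because no cluster exists yet.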

~ $ which eksctl
/usr/bin/which: no eksctl in (/home/cloudshell-user/.local/bin:/home/cloudshell-user/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/nodejs20/lib/node_modules/aws-cdk/bin)
~ $ curl -L "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 33.3M  100 33.3M    0     0  14.3M      0  0:00:02  0:00:02 --:--:-- 21.7M
~ $ sudo mv /tmp/eksctl /usr/local/bin
~ $ which eksctl
/usr/local/bin/eksctl
~ $ 
~ $ eksctl version
0.210.0
~ $ 
~ $ which kubectl
/usr/local/bin/kubectl
~ $ 
~ $ kubectl version
Client Version: v1.32.0-eks-aeac579
Kustomize Version: v5.5.0
The connection to the server localhost:8080 was refused - did you specify the right host or port?
~ $ 
~ $ kubectl version --client
Client Version: v1.32.0-eks-aeac579
Kustomize Version: v5.5.0
~ $ which helm
/usr/bin/which: no helm in (/home/cloudshell-user/.local/bin:/home/cloudshell-user/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/nodejs20/lib/node_modules/aws-cdk/bin)
~ $ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
~ $ chmod 700 get_helm.sh
~ $ ls -l get_helm.sh 
-rwx------. 1 cloudshell-user cloudshell-user 11913 Jul  5 22:43 get_helm.sh
~ $ ./get_helm.sh
Downloading https://get.helm.sh/helm-v3.18.3-linux-amd64.tar.gz
Verifying checksum... Done.
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm
~ $ 
~ $ which helm
/usr/local/bin/helm
~ $ 
~ $ helm version
version.BuildInfo{Version:"v3.18.3", GitCommit:"6838ebcf265a3842d1433956e8a622e3290cf324", GitTreeState:"clean", GoVersion:"go1.24.4"}

 

Hands-on 4: Create the EKS Cluster

  • Set environment variables to match the environment used for this hands-on.
  • KARPENTER_VERSION is deliberately set to a version that is not compatible with this Kubernetes version, so it can be used for the troubleshooting described later. To skip the troubleshooting, change it to a Karpenter release that supports your cluster's Kubernetes version.
export KARPENTER_NAMESPACE="karpenter"
export KARPENTER_VERSION="1.1.1"
export K8S_VERSION="1.33"
export AWS_PARTITION=$(aws sts get-caller-identity --query "Arn" | cut -d: -f2)
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
export AWS_DEFAULT_REGION="us-west-2"
export CLUSTER_NAME="niikawa-karpenter-demo"
export TEMPOUT="$(mktemp)"
export ALIAS_VERSION="$(aws ssm get-parameter --name "/aws/service/eks/optimized-ami/${K8S_VERSION}/amazon-linux-2023/x86_64/standard/recommended/image_id" --query Parameter.Value | xargs aws ec2 describe-images --query 'Images[0].Name' --image-ids | sed -r 's/^.*(v[[:digit:]]+).*$/\1/')"
export VPC_ID="vpc-0bc25a2c291abb4fa"
export AWS_AVAILABILITY_ZONE_A="us-west-2a"
export AWS_AVAILABILITY_ZONE_B="us-west-2b"
export PRIVATE_SUBNET_A_ID="subnet-0f5ac1ad37caadba7"
export PRIVATE_SUBNET_B_ID="subnet-03de07e15a1f369cb"
export PUBLIC_SUBNET_A_ID="subnet-0ad846fe5448c1bbd"
export PUBLIC_SUBNET_B_ID="subnet-0ba5b930ebdac3688"

 

  • Following the steps on the official Karpenter site, use CloudFormation to create the following resources:
    • An IAM role for the EC2 nodes that Karpenter launches (resource name: KarpenterNodeRole)
    • An IAM policy for the Karpenter controller (resource name: KarpenterControllerPolicy)
    • An SQS queue for handling instance interruptions
    • EventBridge rules that forward events to the SQS queue
~ $ curl -fsSL https://raw.githubusercontent.com/aws/karpenter-provider-aws/v"${KARPENTER_VERSION}"/website/content/en/preview/getting-started/getting-started-with-karpenter/cloudformation.yaml  > "${TEMPOUT}" \
> && aws cloudformation deploy \
>   --stack-name "Karpenter-${CLUSTER_NAME}" \
>   --template-file "${TEMPOUT}" \
>   --capabilities CAPABILITY_NAMED_IAM \
>   --parameter-overrides "ClusterName=${CLUSTER_NAME}"

Waiting for changeset to be created..
Waiting for stack create/update to complete
Successfully created/updated stack - Karpenter-niikawa-karpenter-demo

 

  • The following resources were created.

 

 

  • Create the cluster-config YAML from the following template. Keep the template at hand; the command used to create the file comes after it.

 

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: ${CLUSTER_NAME}
  region: ${AWS_DEFAULT_REGION}
  version: "${K8S_VERSION}"
  tags:
    karpenter.sh/discovery: ${CLUSTER_NAME}

vpc:
  id: ${VPC_ID}
  subnets:
    private:
      ${AWS_AVAILABILITY_ZONE_A}: { id: ${PRIVATE_SUBNET_A_ID} }
      ${AWS_AVAILABILITY_ZONE_B}: { id: ${PRIVATE_SUBNET_B_ID} }
    public:
      ${AWS_AVAILABILITY_ZONE_A}: { id: ${PUBLIC_SUBNET_A_ID} }
      ${AWS_AVAILABILITY_ZONE_B}: { id: ${PUBLIC_SUBNET_B_ID} }
  clusterEndpoints:
    publicAccess: true
    privateAccess: false

iam:
  withOIDC: true
  podIdentityAssociations:
  - namespace: "${KARPENTER_NAMESPACE}"
    serviceAccountName: karpenter
    roleName: ${CLUSTER_NAME}-karpenter
    permissionPolicyARNs:
    - arn:${AWS_PARTITION}:iam::${AWS_ACCOUNT_ID}:policy/KarpenterControllerPolicy-${CLUSTER_NAME}

iamIdentityMappings:
- arn: "arn:${AWS_PARTITION}:iam::${AWS_ACCOUNT_ID}:role/KarpenterNodeRole-${CLUSTER_NAME}"
  username: system:node:{{EC2PrivateDNSName}}
  groups:
  - system:bootstrappers
  - system:nodes

managedNodeGroups:
- name: ${CLUSTER_NAME}-ng
  instanceType: m5.large
  amiFamily: AmazonLinux2023
  desiredCapacity: 2
  minSize: 1
  maxSize: 5
  privateNetworking: true
  subnets:
  - ${PRIVATE_SUBNET_A_ID}
  - ${PRIVATE_SUBNET_B_ID}

addons:
- name: eks-pod-identity-agent

 

  • Use the cat command, paste the template above, and finish the heredoc with EOF.
  • Use the head command to confirm that the environment variables were expanded correctly.
~ $ cat <<EOF > karpenter-cluster.yaml
> apiVersion: eksctl.io/v1alpha5
> kind: ClusterConfig
> 
> metadata:
> name: ${CLUSTER_NAME}
> region: ${AWS_DEFAULT_REGION}
> version: "${K8S_VERSION}"
> tags:
> karpenter.sh/discovery: ${CLUSTER_NAME}
> 
> vpc:
> id: ${VPC_ID}
> subnets:
> private:
> ${AWS_AVAILABILITY_ZONE_A}: { id: ${PRIVATE_SUBNET_A_ID} }
> ${AWS_AVAILABILITY_ZONE_B}: { id: ${PRIVATE_SUBNET_B_ID} }
> public:
> ${AWS_AVAILABILITY_ZONE_A}: { id: ${PUBLIC_SUBNET_A_ID} }
> ${AWS_AVAILABILITY_ZONE_B}: { id: ${PUBLIC_SUBNET_B_ID} }
> clusterEndpoints:
> publicAccess: true
> privateAccess: false
> 
> iam:
> withOIDC: true
> podIdentityAssociations:
> - namespace: "${KARPENTER_NAMESPACE}"
> serviceAccountName: karpenter
> roleName: ${CLUSTER_NAME}-karpenter
> permissionPolicyARNs:
> - arn:${AWS_PARTITION}:iam::${AWS_ACCOUNT_ID}:policy/KarpenterControllerPolicy-${CLUSTER_NAME}
> 
> iamIdentityMappings:
> - arn: "arn:${AWS_PARTITION}:iam::${AWS_ACCOUNT_ID}:role/KarpenterNodeRole-${CLUSTER_NAME}"
> username: system:node:{{EC2PrivateDNSName}}
> groups:
> - system:bootstrappers
> - system:nodes
> 
> managedNodeGroups:
> - name: ${CLUSTER_NAME}-ng
> instanceType: m5.large
> amiFamily: AmazonLinux2023
> desiredCapacity: 2
> minSize: 1
> maxSize: 5
> privateNetworking: true
> subnets:
> - ${PRIVATE_SUBNET_A_ID}
> - ${PRIVATE_SUBNET_B_ID}
> 
> addons:
> - name: eks-pod-identity-agent
> EOF
~ $ 
~ $ head karpenter-cluster.yaml 
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: niikawa-karpenter-demo
  region: us-west-2
  version: "1.33"
  tags:
    karpenter.sh/discovery: niikawa-karpenter-demo

~ $ 

 

 

  • Create the cluster with eksctl.

 

~ $ eksctl create cluster -f karpenter-cluster.yaml
2025-07-14 23:23:13 [ℹ]  eksctl version 0.210.0
2025-07-14 23:23:13 [ℹ]  using region us-west-2
2025-07-14 23:23:13 [✔]  using existing VPC (vpc-0bc25a2c291abb4fa) and subnets (private:map[us-west-2a:{subnet-0f5ac1ad37caadba7 us-west-2a 10.192.20.0/24 0 } us-west-2b:{subnet-03de07e15a1f369cb us-west-2b 10.192.21.0/24 0 }] public:map[us-west-2a:{subnet-0ad846fe5448c1bbd us-west-2a 10.192.10.0/24 0 } us-west-2b:{subnet-0ba5b930ebdac3688 us-west-2b 10.192.11.0/24 0 }])
2025-07-14 23:23:13 [!]  custom VPC/subnets will be used; if resulting cluster doesn't function as expected, make sure to review the configuration of VPC/subnets
2025-07-14 23:23:13 [ℹ]  nodegroup "niikawa-karpenter-demo-ng" will use "" [AmazonLinux2023/1.33]
2025-07-14 23:23:13 [ℹ]  using Kubernetes version 1.33
2025-07-14 23:23:13 [ℹ]  creating EKS cluster "niikawa-karpenter-demo" in "us-west-2" region with managed nodes
2025-07-14 23:23:13 [ℹ]  1 nodegroup (niikawa-karpenter-demo-ng) was included (based on the include/exclude rules)
2025-07-14 23:23:13 [ℹ]  will create a CloudFormation stack for cluster itself and 1 managed nodegroup stack(s)
2025-07-14 23:23:13 [ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-west-2 --cluster=niikawa-karpenter-demo'
2025-07-14 23:23:13 [ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "niikawa-karpenter-demo" in "us-west-2"
2025-07-14 23:23:13 [ℹ]  CloudWatch logging will not be enabled for cluster "niikawa-karpenter-demo" in "us-west-2"
2025-07-14 23:23:13 [ℹ]  you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-west-2 --cluster=niikawa-karpenter-demo'
2025-07-14 23:23:13 [ℹ]  default addons metrics-server, vpc-cni, kube-proxy, coredns were not specified, will install them as EKS addons
2025-07-14 23:23:13 [ℹ]  
2 sequential tasks: { create cluster control plane "niikawa-karpenter-demo", 
    2 sequential sub-tasks: { 
        6 sequential sub-tasks: { 
            1 task: { create addons },
            wait for control plane to become ready,
            associate IAM OIDC provider,
            no tasks,
            update VPC CNI to use IRSA if required,
            create IAM identity mappings,
        },
        create managed nodegroup "niikawa-karpenter-demo-ng",
    } 
}
2025-07-14 23:23:13 [ℹ]  building cluster stack "eksctl-niikawa-karpenter-demo-cluster"
2025-07-14 23:23:14 [ℹ]  deploying stack "eksctl-niikawa-karpenter-demo-cluster"
2025-07-14 23:23:44 [ℹ]  waiting for CloudFormation stack "eksctl-niikawa-karpenter-demo-cluster"
2025-07-14 23:24:14 [ℹ]  waiting for CloudFormation stack "eksctl-niikawa-karpenter-demo-cluster"
2025-07-14 23:25:14 [ℹ]  waiting for CloudFormation stack "eksctl-niikawa-karpenter-demo-cluster"
2025-07-14 23:26:14 [ℹ]  waiting for CloudFormation stack "eksctl-niikawa-karpenter-demo-cluster"
2025-07-14 23:27:14 [ℹ]  waiting for CloudFormation stack "eksctl-niikawa-karpenter-demo-cluster"
2025-07-14 23:28:14 [ℹ]  waiting for CloudFormation stack "eksctl-niikawa-karpenter-demo-cluster"
2025-07-14 23:29:14 [ℹ]  waiting for CloudFormation stack "eksctl-niikawa-karpenter-demo-cluster"
2025-07-14 23:30:14 [ℹ]  waiting for CloudFormation stack "eksctl-niikawa-karpenter-demo-cluster"
2025-07-14 23:31:14 [ℹ]  waiting for CloudFormation stack "eksctl-niikawa-karpenter-demo-cluster"
2025-07-14 23:31:15 [ℹ]  creating addon: eks-pod-identity-agent
2025-07-14 23:31:16 [ℹ]  successfully created addon: eks-pod-identity-agent
2025-07-14 23:31:16 [ℹ]  creating addon: metrics-server
2025-07-14 23:31:16 [ℹ]  successfully created addon: metrics-server
2025-07-14 23:31:17 [!]  recommended policies were found for "vpc-cni" addon, but since OIDC is disabled on the cluster, eksctl cannot configure the requested permissions; the recommended way to provide IAM permissions for "vpc-cni" addon is via pod identity associations; after addon creation is completed, add all recommended policies to the config file, under `addon.PodIdentityAssociations`, and run `eksctl update addon`
2025-07-14 23:31:17 [ℹ]  creating addon: vpc-cni
2025-07-14 23:31:17 [ℹ]  successfully created addon: vpc-cni
2025-07-14 23:31:17 [ℹ]  creating addon: kube-proxy
2025-07-14 23:31:17 [ℹ]  successfully created addon: kube-proxy
2025-07-14 23:31:18 [ℹ]  creating addon: coredns
2025-07-14 23:31:18 [ℹ]  successfully created addon: coredns
2025-07-14 23:33:19 [ℹ]  addon "vpc-cni" active
2025-07-14 23:33:20 [ℹ]  deploying stack "eksctl-niikawa-karpenter-demo-addon-vpc-cni"
2025-07-14 23:33:20 [ℹ]  waiting for CloudFormation stack "eksctl-niikawa-karpenter-demo-addon-vpc-cni"
2025-07-14 23:33:50 [ℹ]  waiting for CloudFormation stack "eksctl-niikawa-karpenter-demo-addon-vpc-cni"
2025-07-14 23:33:50 [ℹ]  updating addon
2025-07-14 23:34:00 [ℹ]  addon "vpc-cni" active
2025-07-14 23:34:00 [ℹ]  checking arn arn:aws:iam::243708603692:role/KarpenterNodeRole-niikawa-karpenter-demo against entries in the auth ConfigMap
2025-07-14 23:34:00 [ℹ]  adding identity "arn:aws:iam::243708603692:role/KarpenterNodeRole-niikawa-karpenter-demo" to auth ConfigMap
2025-07-14 23:34:01 [ℹ]  building managed nodegroup stack "eksctl-niikawa-karpenter-demo-nodegroup-niikawa-karpenter-demo-ng"
2025-07-14 23:34:01 [ℹ]  deploying stack "eksctl-niikawa-karpenter-demo-nodegroup-niikawa-karpenter-demo-ng"
2025-07-14 23:34:01 [ℹ]  waiting for CloudFormation stack "eksctl-niikawa-karpenter-demo-nodegroup-niikawa-karpenter-demo-ng"
2025-07-14 23:34:31 [ℹ]  waiting for CloudFormation stack "eksctl-niikawa-karpenter-demo-nodegroup-niikawa-karpenter-demo-ng"
2025-07-14 23:35:12 [ℹ]  waiting for CloudFormation stack "eksctl-niikawa-karpenter-demo-nodegroup-niikawa-karpenter-demo-ng"
2025-07-14 23:37:02 [ℹ]  waiting for CloudFormation stack "eksctl-niikawa-karpenter-demo-nodegroup-niikawa-karpenter-demo-ng"
2025-07-14 23:37:02 [ℹ]  waiting for the control plane to become ready
2025-07-14 23:37:03 [✔]  saved kubeconfig as "/home/cloudshell-user/.kube/config"
2025-07-14 23:37:03 [ℹ]  no tasks
2025-07-14 23:37:03 [✔]  all EKS cluster resources for "niikawa-karpenter-demo" have been created
2025-07-14 23:37:03 [ℹ]  nodegroup "niikawa-karpenter-demo-ng" has 2 node(s)
2025-07-14 23:37:03 [ℹ]  node "ip-10-192-20-142.us-west-2.compute.internal" is ready
2025-07-14 23:37:03 [ℹ]  node "ip-10-192-21-220.us-west-2.compute.internal" is ready
2025-07-14 23:37:03 [ℹ]  waiting for at least 1 node(s) to become ready in "niikawa-karpenter-demo-ng"
2025-07-14 23:37:03 [ℹ]  nodegroup "niikawa-karpenter-demo-ng" has 2 node(s)
2025-07-14 23:37:03 [ℹ]  node "ip-10-192-20-142.us-west-2.compute.internal" is ready
2025-07-14 23:37:03 [ℹ]  node "ip-10-192-21-220.us-west-2.compute.internal" is ready
2025-07-14 23:37:03 [✔]  created 1 managed nodegroup(s) in cluster "niikawa-karpenter-demo"
2025-07-14 23:37:03 [ℹ]  1 task: { 
    2 sequential sub-tasks: { 
        create IAM role for pod identity association for service account "karpenter/karpenter",
        create pod identity association for service account "karpenter/karpenter",
    } }2025-07-14 23:37:03 [ℹ]  deploying stack "eksctl-niikawa-karpenter-demo-podidentityrole-karpenter-karpenter"
2025-07-14 23:37:03 [ℹ]  waiting for CloudFormation stack "eksctl-niikawa-karpenter-demo-podidentityrole-karpenter-karpenter"
2025-07-14 23:37:33 [ℹ]  waiting for CloudFormation stack "eksctl-niikawa-karpenter-demo-podidentityrole-karpenter-karpenter"
2025-07-14 23:37:34 [ℹ]  created pod identity association for service account "karpenter" in namespace "karpenter"
2025-07-14 23:37:34 [ℹ]  all tasks were completed successfully
2025-07-14 23:37:35 [ℹ]  kubectl command should work with "/home/cloudshell-user/.kube/config", try 'kubectl get nodes'
2025-07-14 23:37:35 [✔]  EKS cluster "niikawa-karpenter-demo" in "us-west-2" region is ready

 

  • Below is the CloudFormation console.

 

  • Confirm that the cluster and node group have been created.

 

~ $ eksctl get cluster
NAME                    REGION          EKSCTL CREATED
niikawa-karpenter-demo  us-west-2       True
~ $ kubectl cluster-info
Kubernetes control plane is running at https://694AE41B50174BD885DD8EAA01E91626.gr7.us-west-2.eks.amazonaws.com
CoreDNS is running at https://694AE41B50174BD885DD8EAA01E91626.gr7.us-west-2.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
~ $ kubectl get nodes
NAME                                          STATUS   ROLES    AGE   VERSION
ip-10-192-20-142.us-west-2.compute.internal   Ready    <none>   11m   v1.33.0-eks-802817d
ip-10-192-21-220.us-west-2.compute.internal   Ready    <none>   11m   v1.33.0-eks-802817d
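  • Next, set environment variables for the cluster endpoint and the ARN of the Karpenter controller IAM role, then install Karpenter with Helm from the public ECR registry. Because KARPENTER_VERSION was deliberately set to 1.1.1, which does not support Kubernetes 1.33, the installation below fails; the investigation and fix follow the transcript.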

 

export CLUSTER_ENDPOINT="$(aws eks describe-cluster --name "${CLUSTER_NAME}" --query "cluster.endpoint" --output text)"
export KARPENTER_IAM_ROLE_ARN="arn:${AWS_PARTITION}:iam::${AWS_ACCOUNT_ID}:role/${CLUSTER_NAME}-karpenter"

echo "${CLUSTER_ENDPOINT} ${KARPENTER_IAM_ROLE_ARN}"

 

helm upgrade --install karpenter oci://public.ecr.aws/karpenter/karpenter --version "${KARPENTER_VERSION}" --namespace "${KARPENTER_NAMESPACE}" --create-namespace \
  --set "settings.clusterName=${CLUSTER_NAME}" \
  --set "settings.interruptionQueue=${CLUSTER_NAME}" \
  --set controller.resources.requests.cpu=1 \
  --set controller.resources.requests.memory=1Gi \
  --set controller.resources.limits.cpu=1 \
  --set controller.resources.limits.memory=1Gi \
  --wait

 


~ $ helm upgrade --install karpenter oci://public.ecr.aws/karpenter/karpenter --version "${KARPENTER_VERSION}" --namespace "${KARPENTER_NAMESPACE}" --create-namespace \
>   --set "settings.clusterName=${CLUSTER_NAME}" \
>   --set "settings.interruptionQueue=${CLUSTER_NAME}" \
>   --set controller.resources.requests.cpu=1 \
>   --set controller.resources.requests.memory=1Gi \
>   --set controller.resources.limits.cpu=1 \
>   --set controller.resources.limits.memory=1Gi \
>   --wait
Release "karpenter" does not exist. Installing it now.
Pulled: public.ecr.aws/karpenter/karpenter:1.1.1
Digest: sha256:b42c6d224e7b19eafb65e2d440734027a8282145569d4d142baf10ba495e90d0
Error: context deadline exceeded
~ $ 
~ $ kubectl get namespace
NAME              STATUS   AGE
default           Active   59m
karpenter         Active   35m
kube-node-lease   Active   59m
kube-public       Active   59m
kube-system       Active   59m
~ $ kubectl get all -n karpenter
NAME                            READY   STATUS             RESTARTS         AGE
pod/karpenter-9bd684877-2bjnw   0/1     CrashLoopBackOff   11 (3m46s ago)   35m
pod/karpenter-9bd684877-pkzqv   0/1     CrashLoopBackOff   11 (4m1s ago)    35m

NAME                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/karpenter   ClusterIP   172.20.42.85   <none>        8080/TCP   35m

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/karpenter   0/2     2            0           35m

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/karpenter-9bd684877   2         2         0       35m
~ $ kubectl logs -n karpenter karpenter-9bd684877-2bjnw
panic: validating kubernetes version, karpenter version is not compatible with K8s version 1.33

goroutine 1 [running]:
github.com/samber/lo.must({0x38c81e0, 0xc0008c8560}, {0x0, 0x0, 0x0})
        github.com/samber/lo@v1.47.0/errors.go:53 +0x1df
github.com/samber/lo.Must0(...)
        github.com/samber/lo@v1.47.0/errors.go:72
github.com/aws/karpenter-provider-aws/pkg/operator.NewOperator({0x42d15e0, 0xc0006f3aa0}, 0xc000532880)
        github.com/aws/karpenter-provider-aws/pkg/operator/operator.go:155 +0xf7c
main.main()
        github.com/aws/karpenter-provider-aws/cmd/controller/main.go:28 +0x2a
~ $ kubectl logs -n karpenter karpenter-9bd684877-pkzqv
panic: validating kubernetes version, karpenter version is not compatible with K8s version 1.33

goroutine 1 [running]:
github.com/samber/lo.must({0x38c81e0, 0xc000ad6160}, {0x0, 0x0, 0x0})
        github.com/samber/lo@v1.47.0/errors.go:53 +0x1df
github.com/samber/lo.Must0(...)
        github.com/samber/lo@v1.47.0/errors.go:72
github.com/aws/karpenter-provider-aws/pkg/operator.NewOperator({0x42d15e0, 0xc0004a6600}, 0xc00054f540)
        github.com/aws/karpenter-provider-aws/pkg/operator/operator.go:155 +0xf7c
main.main()
        github.com/aws/karpenter-provider-aws/cmd/controller/main.go:28 +0x2a
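
  • Troubleshooting: the controller logs show that Karpenter 1.1.1 cannot run against Kubernetes 1.33 ("karpenter version is not compatible with K8s version 1.33"). The fix is to reinstall the chart with a Karpenter release that supports the cluster's Kubernetes version. The commands below are only a hedged sketch; replace the placeholder with a compatible release (check the Karpenter compatibility matrix), and remove the failed release first if it blocks the upgrade.

# Sketch only: reinstall Karpenter with a compatible release (placeholder version).
helm uninstall karpenter -n "${KARPENTER_NAMESPACE}"   # only if the failed release blocks the upgrade
export KARPENTER_VERSION="<a version compatible with K8s 1.33>"
helm upgrade --install karpenter oci://public.ecr.aws/karpenter/karpenter \
  --version "${KARPENTER_VERSION}" --namespace "${KARPENTER_NAMESPACE}" --create-namespace \
  --set "settings.clusterName=${CLUSTER_NAME}" \
  --set "settings.interruptionQueue=${CLUSTER_NAME}" \
  --wait
kubectl get pods -n "${KARPENTER_NAMESPACE}"           # the controller Pods should now be Running

  • Once Karpenter is running, define a NodePool and an EC2NodeClass. The manifest used is shown below; the cat heredoc that actually writes karpenter-rules.yaml appears in the transcript after it.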

 

 

apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
        - key: kubernetes.io/os
          operator: In
          values: ["linux"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]
        - key: karpenter.k8s.aws/instance-category
          operator: In
          values: ["c", "m", "r"]
        - key: karpenter.k8s.aws/instance-generation
          operator: Gt
          values: ["2"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
      expireAfter: 720h # 30 * 24h = 720h
  limits:
    cpu: 1000
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 1m
---
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: default
spec:
  role: "KarpenterNodeRole-${CLUSTER_NAME}" # replace with your cluster name
  amiSelectorTerms:
    - alias: "al2023@${ALIAS_VERSION}"
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: "${CLUSTER_NAME}" # replace with your cluster name
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: "${CLUSTER_NAME}" # replace with your cluster name

 

~ $ cat <<EOF > karpenter-rules.yaml
> apiVersion: karpenter.sh/v1
> kind: NodePool
> metadata:
>   name: default
> spec:
>   template:
>     spec:
>       requirements:
>         - key: kubernetes.io/arch
>           operator: In
>           values: ["amd64"]
>         - key: kubernetes.io/os
>           operator: In
>           values: ["linux"]
>         - key: karpenter.sh/capacity-type
>           operator: In
>           values: ["on-demand"]
>         - key: karpenter.k8s.aws/instance-category
>           operator: In
>           values: ["c", "m", "r"]
>         - key: karpenter.k8s.aws/instance-generation
>           operator: Gt
>           values: ["2"]
>       nodeClassRef:
>         group: karpenter.k8s.aws
>         kind: EC2NodeClass
>         name: default
>       expireAfter: 720h # 30 * 24h = 720h
>   limits:
>     cpu: 1000
>   disruption:
>     consolidationPolicy: WhenEmptyOrUnderutilized
>     consolidateAfter: 1m
> ---
> apiVersion: karpenter.k8s.aws/v1
> kind: EC2NodeClass
> metadata:
>   name: default
> spec:
>   role: "KarpenterNodeRole-${CLUSTER_NAME}" # replace with your cluster name
>   amiSelectorTerms:
>     - alias: "al2023@${ALIAS_VERSION}"
>   subnetSelectorTerms:
>     - tags:
>         karpenter.sh/discovery: "${CLUSTER_NAME}" # replace with your cluster name
>   securityGroupSelectorTerms:
>     - tags:
>         karpenter.sh/discovery: "${CLUSTER_NAME}" # replace with your cluster name
> EOF
~ $ 
~ $ head karpenter-rules.yaml 
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: kubernetes.io/arch
          operator: In
~ $ 
~ $ kubectl apply -f karpenter-rules.yaml
nodepool.karpenter.sh/default created
ec2nodeclass.karpenter.k8s.aws/default created
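
  • Finally, deploy a test workload that uses the image pushed to ECR in Hands-on 2. The Deployment below starts with replicas: 0 and requests 1 vCPU per Pod, so scaling it out produces unschedulable Pods and lets Karpenter provision new nodes. The manifest is followed by the heredoc that writes karpenter-deployment.yaml, the kubectl apply, and a scale-out to 5 replicas.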

 

 

apiVersion: apps/v1
kind: Deployment
metadata:
  name: niikawa-testenv
spec:
  replicas: 0
  selector:
    matchLabels:
      app: niikawa-testenv
  template:
    metadata:
      labels:
        app: niikawa-testenv
    spec:
      terminationGracePeriodSeconds: 0
#      securityContext:
#        runAsUser: 1000
#        runAsGroup: 3000
#        fsGroup: 2000
      containers:
      - name: niikawa-testenv
        image: 243708603692.dkr.ecr.us-west-2.amazonaws.com/niikawa-testenv:latest
        resources:
          requests:
            cpu: 1
        securityContext:
          allowPrivilegeEscalation: false

 

~ $ cat <<EOF > karpenter-deployment.yaml
> apiVersion: apps/v1
> kind: Deployment
> metadata:
>   name: niikawa-testenv
> spec:
>   replicas: 0
>   selector:
>     matchLabels:
>       app: niikawa-testenv
>   template:
>     metadata:
>       labels:
>         app: niikawa-testenv
>     spec:
>       terminationGracePeriodSeconds: 0
>       securityContext:
>         runAsUser: 1000
>         runAsGroup: 3000
>         fsGroup: 2000
>       containers:
>       - name: niikawa-testenv
>         image: 243708603692.dkr.ecr.us-west-2.amazonaws.com/niikawa-testenv:latest
>         resources:
>           requests:
>             cpu: 1
>         securityContext:
>           allowPrivilegeEscalation: false
> EOF
~ $ head karpenter-deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: niikawa-testenv
spec:
  replicas: 0
  selector:
    matchLabels:
      app: niikawa-testenv
  template:
~ $ 
~ $ kubectl apply -f karpenter-deployment.yaml 
deployment.apps/niikawa-testenv created
~ $ 
~ $ 
~ $ kubectl scale deployment niikawa-testenv --replicas 5
deployment.apps/niikawa-testenv scaled
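
  • After scaling out, you can watch Karpenter react to the unschedulable Pods. The commands below are a rough sketch for observing the result (resource names follow this post); they are not part of the original transcript.

kubectl get pods -l app=niikawa-testenv -o wide    # Pending Pods should move to Running on new nodes
kubectl get nodeclaims                             # capacity provisioned by Karpenter
kubectl get nodes                                  # the new nodes joining the cluster
kubectl logs -n karpenter deploy/karpenter | tail  # provisioning decisions in the controller log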

 

Hands-on 5: Configure the CloudShell VPC Environment

 

 

 


Posted by takaaki