
A First Look at KubeVela


          2020-12-13 01:22

KubeVela is a simple, easy-to-use, and highly extensible application management platform and core engine, built on Kubernetes and the Open Application Model (OAM).
OAM stands for Open Application Model. As the name suggests, it defines a model, one intended to establish a standard for cloud-native applications:
• Open: supports heterogeneous platforms, container runtimes, scheduling systems, cloud providers, hardware configurations, and so on; in short, it is independent of the underlying infrastructure.
• Application: cloud-native applications.
• Model: defines a standard, so that applications stay independent of the underlying platform.
In OAM, an application is built from three core concepts.
• The first is the Components that make up the application, which might include a collection of microservices, a database, and a cloud load balancer.
• The second is the collection of Traits that describe the application's operational characteristics, such as autoscaling and Ingress. These are essential to running the application, but their implementations differ from one environment to another.
• Finally, to turn these descriptions into a concrete application, operators use an Application Configuration to combine the components with their corresponding traits, producing a concrete, deployable instance of the application.
For developers, KubeVela itself is an easy-to-use tool that can describe an application and release it to Kubernetes with minimal effort. They only need to manage a single application-centric workflow, which integrates easily with any CI/CD pipeline: no pile of Kubernetes YAML files to maintain, just one simple docker-compose-style Appfile.
For platform builders, KubeVela is a framework that lets them easily create developer-facing yet highly extensible platforms. Concretely, KubeVela takes the pain out of building such platforms by being:
• Application-centric. Behind the Appfile, KubeVela enforces an application concept as its main API, and all of KubeVela's capabilities serve only the application's needs. This is achieved by adopting the Open Application Model as KubeVela's core API.
• Natively extensible. An application in KubeVela is composed of pluggable workload types and operational features (i.e., traits). Capabilities from the Kubernetes ecosystem can be added to KubeVela as new workload types or traits at any time, via the Kubernetes CRD registration mechanism.
• Built on a simple yet extensible abstraction mechanism. KubeVela's main user interfaces (the Appfile and the CLI) are built with a CUE-based abstraction engine that translates user-facing schemas into the underlying Kubernetes resources. KubeVela ships with a set of built-in abstractions that platform builders are free to modify; abstraction changes take effect at runtime, with no need to recompile or redeploy KubeVela.

Architecture

The overall architecture of KubeVela is shown in the figure below:


          在架構(gòu)上,KubeVela 只有一個(gè) controller 并且以插件的方式運(yùn)行在 Kubernetes 之上,為 Kubernetes 帶來(lái)了面向應(yīng)用層的抽象,以及以此為基礎(chǔ)的面向用戶的使用界面,即Appfile。Appfile 乃至 KubeVela 運(yùn)行機(jī)制背后的核心,則是其能力管理模型 Open Application Model (OAM) 。基于這個(gè)模型,KubeVela 為系統(tǒng)管理員提供了一套基于注冊(cè)與自發(fā)現(xiàn)的能力裝配流程,來(lái)接入 Kubernetes 生態(tài)中的任意能力到 KubeVela 中,從而以“一套核心框架搭配不同能力”的方式,適配各種使用場(chǎng)景

          概念和術(shù)語(yǔ)

          從上面的架構(gòu)圖上我們可以看到kubevela中存在一些專業(yè)術(shù)語(yǔ)applicationservice,workload typetrait,他們之間的關(guān)系如下圖:


workload type: declares the runtime characteristics that the infrastructure should take into account when managing an application. A workload type can be, for example, a "long-running service" or a "one-off task".
trait: defines the operational policies and configuration a component needs, such as environment variables, Ingress, AutoScaler, and Volumes.
Service: not to be confused with the Kubernetes Service object. Here, a service defines the runtime configuration (i.e., the workload type and traits) needed to run an application in Kubernetes. A service is the descriptor of the basic deployable unit in KubeVela.
Application: a collection of services in Kubernetes. It describes what developers need to define, and it is defined by the Appfile in KubeVela (named vela.yaml by default).
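Putting the four terms together, a minimal Appfile might look like the following hypothetical sketch (it reuses the testapp names from the walkthrough below):

```yaml
name: testapp              # application: one Appfile describes one app
services:
  express-server:          # service: the basic deployable unit
    type: webservice       # workload type: a long-running service (the default)
    image: misterli/testapp:v1
    port: 8080
    scaler:                # trait: operational configuration for this service
      replicas: 2
```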

Installation

Requirements:
• Kubernetes cluster >= v1.15.0
• kubectl installed and configured

Download KubeVela

Via the install script:
curl -fsSl https://kubevela.io/install.sh | bash
From GitHub:
• Download the latest vela binary from the releases page.
• Unpack the vela binary and add it to your $PATH to get started.
$ sudo mv ./vela /usr/local/bin/vela

Initialize KubeVela

Run vela install to install the KubeVela server component and its dependencies.
The following dependency components will be installed along with the Vela server component:
• Prometheus Stack
• Cert-manager
• Flagger
• KEDA
Note: if Prometheus-operator is already installed in the monitoring namespace of the cluster, the Prometheus stack installed by KubeVela will conflict with it.
The configuration is saved in the ConfigMap "vela-system/vela-config".
          [root@master-01 kubevela]# vela install
          - Installing Vela Core Chart:
          install chart vela-core, version 0.1.0, desc : A Helm chart for Kube Vela core, contains 35 file
Successfully installed the chart, status: deployed, last deployed time = 2020-12-03 11:06:34.3800069 +0800 CST m=+6.951945903
Automatically discover capabilities successfully Add(8) Update(0) Delete(0)

          TYPE CATEGORY DESCRIPTION
          +task workload One-off task to run a piece of code or script to completion
          +webservice workload Long-running scalable service with stable endpoint to receive external traffic
          +worker workload Long-running scalable backend worker without network endpoint
          +autoscale trait Automatically scale the app following certain triggers or metrics
          +metrics trait Configure metrics targets to be monitored for the app
          +rollout trait Configure canary deployment strategy to release the app
          +route trait Configure route policy to the app
          +scaler trait Manually scale the app

          - Finished successfully.
          [root@master-01 kubevela]# kubectl get pod -n vela-system
          NAME READY STATUS RESTARTS AGE
          flagger-7846864bbf-m6wxt 1/1 Running 0 50s
          kubevela-vela-core-f8b987775-mdjqm 0/1 Running 0 65s
          [root@master-01 kubevela]# kubectl get pod -n cert-manager
          NAME READY STATUS RESTARTS AGE
          cert-manager-79c5f9946-gx9s9 1/1 Running 0 70s
          cert-manager-cainjector-76c9d55b6f-f478g 1/1 Running 0 70s
          cert-manager-webhook-6d4c5c44bb-bw9vh 1/1 Running 0 70s
          [root@master-01 kubevela]# kubectl get pod -n keda
          NAME READY STATUS RESTARTS AGE
          keda-operator-566d494bf-mqpn8 0/1 Running 0 68s
          keda-operator-metrics-apiserver-698865dc8b-fg4gn 1/1 Running 0 68s

Uninstall

Run:
          $ helm uninstall -n vela-system kubevela
          $ rm -r ~/.vela
This uninstalls the KubeVela server component and its dependencies, and also cleans up the local CLI cache.
Then clean up the CRDs (by default, CRDs are not removed by the chart):
          $ kubectl delete crd \
          applicationconfigurations.core.oam.dev \
          applicationdeployments.core.oam.dev \
          autoscalers.standard.oam.dev \
          certificaterequests.cert-manager.io \
          certificates.cert-manager.io \
          challenges.acme.cert-manager.io \
          clusterissuers.cert-manager.io \
          components.core.oam.dev \
          containerizedworkloads.core.oam.dev \
          healthscopes.core.oam.dev \
          issuers.cert-manager.io \
          manualscalertraits.core.oam.dev \
          metricstraits.standard.oam.dev \
          orders.acme.cert-manager.io \
          podspecworkloads.standard.oam.dev \
          routes.standard.oam.dev \
          scopedefinitions.core.oam.dev \
          servicemonitors.monitoring.coreos.com \
          traitdefinitions.core.oam.dev \
          workloaddefinitions.core.oam.dev

Deploying a Service with Vela

Download the official example:
          $ git clone https://github.com/oam-dev/kubevela.git
          $ cd kubevela/docs/examples/testapp
The example contains the NodeJS application code and a Dockerfile for building the application.
Note: replace the user in image: misterli/testapp:v1 with your own user, so that the image can be pushed.
[root@master-01 testapp]# ls
Dockerfile  package.json  server.js  vela.yaml
[root@master-01 testapp]# cat vela.yaml
name: testapp

services:
  express-server:
    # this image will be used in both build and deploy steps
    image: misterli/testapp:v1

    build:
      # Here more runtime specific build templates will be supported, like NodeJS, Go, Python, Ruby.
      docker:
        file: Dockerfile
        context: .

      # Uncomment the following to push to local kind cluster
      # push:
      #   local: kind

    # type: webservice (default) | worker | task

    cmd: ["node", "server.js"]
    port: 8080

    # scaler:
    #   replicas: 1

    # route:
    #   domain: example.com
    #   rules:
    #     - path: /testapp
    #       rewriteTarget: /

    # metrics:
    #   format: "prometheus"
    #   port: 8080
    #   path: "/metrics"
    #   scheme: "http"
    #   enabled: true

    # autoscale:
    #   min: 1
    #   max: 4
    #   cron:
    #     startAt: "14:00"
    #     duration: "2h"
    #     days: "Monday, Thursday"
    #     replicas: 2
    #     timezone: "America/Los_Angeles"

  # pi:
  #   image: perl
  #   cmd: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]

Deploy

          [root@master-01 testapp]# vela  up 
          Parsing vela.yaml ...
          Loading templates ...

          Building service (express-server)...
          Sending build context to Docker daemon 7.68kB
          Step 1/10 : FROM mhart/alpine-node:12
          12: Pulling from mhart/alpine-node
          31603596830f: Pulling fs layer
          a1768851dab2: Pulling fs layer
          31603596830f: Verifying Checksum
          31603596830f: Download complete
          31603596830f: Pull complete
          a1768851dab2: Verifying Checksum
          a1768851dab2: Download complete
          a1768851dab2: Pull complete
          Digest: sha256:31eebb77c7e3878c45419a69e5e7dddd376d685e064279e024e488076d97c7e4
          Status: Downloaded newer image for mhart/alpine-node:12
          ---> b13e0277346d
          Step 2/10 : WORKDIR /app
          ---> Running in ab10b920fb85
          Removing intermediate container ab10b920fb85
          ---> 9f6c8afc0ac4
          Step 3/10 : COPY package.json ./
          ---> a4432016a818
          Step 4/10 : RUN npm install
          ---> Running in c13d25b9a074
          npm notice created a lockfile as package-lock.json. You should commit this file.
          npm WARN [email protected] No repository field.
          npm WARN [email protected] No license field.

          added 50 packages from 37 contributors and audited 50 packages in 6.037s
          found 0 vulnerabilities

          Removing intermediate container c13d25b9a074
          ---> ba5e090aa522
          Step 5/10 : RUN npm ci --prod
          ---> Running in f0fa46706fdc
          npm WARN prepare removing existing node_modules/ before installation
          added 50 packages in 0.3s
          Removing intermediate container f0fa46706fdc
          ---> a9345a48a79a
          Step 6/10 : FROM mhart/alpine-node:slim-12
          slim-12: Pulling from mhart/alpine-node
          31603596830f: Already exists
          de802a068b6a: Pulling fs layer
          de802a068b6a: Verifying Checksum
          de802a068b6a: Download complete
          de802a068b6a: Pull complete
          Digest: sha256:12e59927fda21237348acf1a229ad09cf37fb232d251c3e54e1dac3ddac6feeb
          Status: Downloaded newer image for mhart/alpine-node:slim-12
          ---> 6d25d4327eff
          Step 7/10 : WORKDIR /app
          ---> Running in d541b38c1823
          Removing intermediate container d541b38c1823
          ---> 1e0777fd03d8
          Step 8/10 : COPY --from=0 /app .
          ---> abe26ca579ed
          Step 9/10 : COPY . .
          ---> 6e9f13fd2777
          Step 10/10 : CMD ["node", "server.js"]
          ---> Running in e2a66724e4f1
          Removing intermediate container e2a66724e4f1
          ---> 533e1502cb2c
          Successfully built 533e1502cb2c
          Successfully tagged misterli/testapp:v1
          pushing image (misterli/testapp:v1)...
          The push refers to repository [docker.io/misterli/testapp]
          c84892b4351c: Preparing
          fac1f8a2295d: Preparing
          5d57bb81c0cc: Preparing
          2864da400028: Preparing
          89ae5c4ee501: Preparing
          89ae5c4ee501: Mounted from mhart/alpine-node
          2864da400028: Mounted from mhart/alpine-node
          5d57bb81c0cc: Pushed
          c84892b4351c: Pushed
          fac1f8a2295d: Pushed
          v1: digest: sha256:6ac7865710892ddd57c0604d02560f1dd9bbf007b23fbacfa45fdbf718a41669 size: 1365

          Rendering configs for service (express-server)...
          Writing deploy config to (.vela/deploy.yaml)

          Applying deploy configs ...
          Checking if app has been deployed...
          App has not been deployed, creating a new deployment...
App has been deployed.
          Port forward: vela port-forward testapp
          SSH: vela exec testapp
          Logging: vela logs testapp
          App status: vela status testapp
          Service status: vela status testapp --svc express-server

          [root@master-01 testapp]# vela status testapp
          About:

          Name: testapp
          Namespace: default
          Created at: 2020-12-03 11:17:10.202380171 +0800 CST
          Updated at: 2020-12-03 11:17:10.202380322 +0800 CST

          Services:
          - Name: express-server
          Type: webservice
          HEALTHY Ready:1/1
          Traits:

          Last Deployment:
          Created at: 2020-12-03 11:17:10 +0800 CST
          Updated at: 2020-12-03T11:17:10+08:00
          [root@master-01 rabbitmq]# kubectl get pod
          NAME READY STATUS RESTARTS AGE
          busybox-deployment-7bfd6d554c-nqrln 1/1 Running 831 6d1h
          busybox-deployment-7bfd6d554c-s6lrw 1/1 Running 831 6d1h
          check-ecs-price-7cdc97b997-j9w9q 1/1 Running 0 7d
          express-server-7b5d47c867-hcq99 1/1 Running 0 88s
As we can see, running vela up first builds and pushes the docker image, then renders the .vela/deploy.yaml file from the contents of vela.yaml and applies it. Let's take a look at .vela/deploy.yaml:
[root@master-01 testapp]# cat .vela/deploy.yaml
apiVersion: core.oam.dev/v1alpha2
kind: ApplicationConfiguration
metadata:
  creationTimestamp: null
  name: testapp
  namespace: default
spec:
  components:
  - componentName: express-server
    scopes:
    - scopeRef:
        apiVersion: core.oam.dev/v1alpha2
        kind: HealthScope
        name: testapp-default-health
    traits:
    - trait:
        apiVersion: core.oam.dev/v1alpha2
        kind: ManualScalerTrait
        metadata:
          labels:
            trait.oam.dev/type: scaler
        spec:
          replicaCount: 2
status:
  dependency: {}
  observedGeneration: 0

---
apiVersion: core.oam.dev/v1alpha2
kind: Component
metadata:
  creationTimestamp: null
  name: express-server
  namespace: default
spec:
  workload:
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        workload.oam.dev/type: webservice
    spec:
      selector:
        matchLabels:
          app.oam.dev/component: express-server
      template:
        metadata:
          labels:
            app.oam.dev/component: express-server
        spec:
          containers:
          - command:
            - node
            - server.js
            image: misterli/testapp:v1
            name: express-server
            ports:
            - containerPort: 8080
status:
  observedGeneration: 0

---
apiVersion: core.oam.dev/v1alpha2
kind: HealthScope
metadata:
  creationTimestamp: null
  name: testapp-default-health
  namespace: default
spec:
  workloadRefs: []
status:
  scopeHealthCondition:
    healthStatus: ""

Now edit vela.yaml: uncomment the following lines and set the replica count to 2:

scaler:
  replicas: 2

Run vela up again, and the number of pod replicas becomes 2:

          [root@master-01 rabbitmq]# kubectl get pod 
          NAME READY STATUS RESTARTS AGE
          busybox-deployment-7bfd6d554c-nqrln 1/1 Running 832 6d1h
          busybox-deployment-7bfd6d554c-s6lrw 1/1 Running 832 6d1h
          check-ecs-price-7cdc97b997-j9w9q 1/1 Running 0 7d
          express-server-7b5d47c867-g4jbh 1/1 Running 0 70s
          express-server-7b5d47c867-hcq99 1/1 Running 0 5m33s
Note: deleting the Deployment with kubectl does not remove the service here; vela automatically pulls up a new Deployment. To delete the service, use vela delete APP_NAME.
# The wrong way to delete
          [root@master-01 testapp]# kubectl get deployments.apps
          NAME READY UP-TO-DATE AVAILABLE AGE
          check-ecs-price 1/1 1 1 8d
          express-server 1/1 1 1 3m45s
          [root@master-01 testapp]# kubectl delete deployments.apps express-server
          deployment.apps "express-server" deleted
          [root@master-01 testapp]# kubectl get deployments.apps
          NAME READY UP-TO-DATE AVAILABLE AGE
          check-ecs-price 1/1 1 1 8d
          express-server 0/1 1 0 1s

# The right way to delete
          [root@master-01 testapp]# vela ls
          SERVICE APP TYPE TRAITS STATUS CREATED-TIME
          express-server testapp webservice metrics,scaler Deployed 2020-12-03 11:17:10 +0800 CST
          [root@master-01 testapp]# vela delete testapp
          Deleting Application "testapp"
          delete apps succeed testapp from default
          [root@master-01 testapp]# vela ls
          SERVICE APP TYPE TRAITS STATUS CREATED-TIME
          [root@master-01 testapp]# kubectl get deployments.apps
          NAME READY UP-TO-DATE AVAILABLE AGE
          check-ecs-price 1/1 1 1 8d


