[TOC]
Study notes based on the MageEdu (马哥教育) Kubernetes course.
Kubernetes learning outline
- Fundamentals
- Cluster deployment and imperative command management
- Resource types and configuration manifests
- Pod resources
- Pod controllers
- Service resources
- Storage volumes
- ConfigMap and Secret resources
- StatefulSet controller
- Authentication, authorization, and admission control
- Network model and network policies
- Pod scheduling
- CRDs, custom resources, custom controllers, and custom API servers; CNI = Container Network Interface, CRD = CustomResourceDefinition, CRI = Container Runtime Interface
- Resource metrics and the HPA controller
- Helm package manager
- Highly available Kubernetes
Cloud Native Apps: applications developed to run on a cloud platform rather than on a single machine.
Serverless: combined with cloud-native applications it forms a new trend, FaaS. Knative.
Monolithic application: also called a monolith; change one part and the whole application is affected.
Layered architecture: each team maintains one layer (e.g. users, products, payments).
Microservices (Microservice): the service is split into many small functions, each running independently.
- Service registration and discovery: problems inherent to distributed systems and microservices
- A service backed by three to five programs becomes thirty to fifty micro-services, and the calls between them form a mesh.
- Non-static configuration, dynamic service discovery, service bus
- Service orchestration systems: solve the difficulty of deploying and operating so many services
- Container orchestration systems: solve system heterogeneity, since every service depends on a different environment. Service orchestration --> container orchestration
Container orchestration systems (mainstream: k8s, Borg, Docker Swarm, Apache Mesos + Marathon / DC/OS)
what is container orchestration?
- container orchestration is all about managing the lifecycles of containers, especially in large, dynamic environments.
- software teams use container orchestration to control and automate many tasks:
- provisioning and deployment of containers
- redundancy and availability of containers
- scaling up or removing containers to spread application load evenly across host infrastructure
- movement of containers from one host to another if there is a shortage of resources on a host, or if a host dies
- allocation of resources between containers
- external exposure of services running in a container to the outside world
- load balancing and service discovery between containers
- health monitoring of containers and hosts
- configuration of an application in relation to the containers running it
In short, container orchestration means the automated placement, coordination, and management of containerized applications. It mainly covers:
service discovery
load balancing
secrets/configuration/storage management
health checks
auto-[scaling/restart/healing] of containers and nodes
zero-downtime deploys
- Pod resources
- Pod controllers:
Ways to create Pods:
- standalone Pods: created directly
- controller-managed Pods: created via a Deployment, Service, etc.
ReplicationController:
ReplicaSet/rs:
- replica count
- label selector
- Pod template
- kubectl explain rs
- kubectl explain rs.spec
- replicas
- selector
- template
Deployment: stateless workloads; concerned with group behavior, only the count matters, not individual Pods.
- kubectl explain deploy.spec
- replicas
- selector
- template
- strategy: update strategy
- rollingUpdate
- maxSurge: maximum number of Pods above the desired count (absolute number or percentage, e.g. 1 or 20%)
- maxUnavailable: maximum number of unavailable Pods (e.g. 1 or 80%)
- type
- Recreate
- RollingUpdate
- minReadySeconds
- revisionHistoryLimit: how many old revisions to keep
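The strategy fields above can be sketched in a manifest; a minimal hedged example, with illustrative names and counts:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deploy          # illustrative name
spec:
  replicas: 4
  revisionHistoryLimit: 10   # keep at most 10 old ReplicaSets for rollback
  minReadySeconds: 5         # a new Pod must stay Ready this long to count as available
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1            # at most 1 Pod above the desired count during an update
      maxUnavailable: 0      # never drop below the desired count
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: ikubernetes/myapp:v1
```

With maxSurge: 1 and maxUnavailable: 0 an update proceeds one Pod at a time with no capacity loss.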
DaemonSet/ds: runs exactly one Pod per node, e.g. the log shipper of an ELK stack.
- kubectl explain ds.spec
- selector
- template
- minReadySeconds
- revisionHistoryLimit: how many old revisions to keep
- updateStrategy
Service:
Proxy modes (implemented by kube-proxy): userspace, iptables, ipvs.
userspace: up to v1.1
iptables: default up to v1.10
ipvs: v1.11+
- NodePort/ClusterIP : client --> NodeIP:NodePort --> ClusterIP:ServicePort --> PodIP:containerPort
- LBaaS (Load Balancer as a Service): in public cloud environments.
- LoadBalancer
- ExternalName
- FQDN
- CNAME -> FQDN
- no ClusterIP: headless Service
- ServiceName resolves directly to PodIPs
- kubectl explain svc
- type: ExternalName, ClusterIP, NodePort, LoadBalancer
- port:
- nodePort # usable when type=NodePort. client --> NodeIP:NodePort --> ClusterIP:ServicePort --> PodIP:containerPort
- port
- targetPort
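The type, port, targetPort, and nodePort fields fit together as in this hedged sketch (names and the node port number are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-svc           # illustrative name
spec:
  type: NodePort
  selector:
    app: demo
  ports:
  - name: http
    port: 80               # ClusterIP:ServicePort
    targetPort: 80         # containerPort inside the Pod
    nodePort: 30080        # NodeIP:NodePort (30000-32767 by default)
```

Omitting nodePort lets the cluster pick one from the default range automatically.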
Ingress:
Service: ingress-nginx (NodePort, or DaemonSet with hostNetwork)
IngressController: ingress-nginx
Ingress:
- site1.ikubernetes.io (virtual host)
- site2.ikubernetes.io (virtual host)
- example.com/path1
- example.com/path2
- Service: site1
- pod1
- pod2
- Service: site2
- pod3
- pod4
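A hedged sketch of the virtual-host layout above, written against the current networking.k8s.io/v1 schema (the kubectl explain output later in these notes reflects the older serviceName/servicePort fields):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sites
spec:
  ingressClassName: nginx        # assumes the ingress-nginx controller class
  rules:
  - host: site1.ikubernetes.io   # virtual host 1
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: site1          # Service: site1 --> pod1, pod2
            port:
              number: 80
  - host: site2.ikubernetes.io   # virtual host 2
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: site2          # Service: site2 --> pod3, pod4
            port:
              number: 80
```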
- Storage volumes
- emptyDir
- temporary directory, can be memory-backed, no persistence
- gitRepo
- hostPath
- shared storage
- SAN: iSCSI
- NAS: nfs, cifs, http
- distributed storage:
- glusterfs
- ceph: rbd
- cephfs:
- cloud storage
- EBS
- Azure Disk
- pvc: persistentVolumeClaim
- pv
- pvc
- pod - volumes
- secret
a ConfigMap whose values are base64-encoded
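Since Secret values are only base64-encoded, not encrypted, they can be produced and reversed with plain base64:

```shell
# encode a value the way a Secret stores it under data:
echo -n 'admin' | base64        # YWRtaW4=
# decode it back; the "protection" is trivially reversible
echo -n 'YWRtaW4=' | base64 -d  # admin
```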
- configmap: configuration center
Ways to configure a containerized application:
1. custom command-line arguments;
args: []
2. copy the config file directly into the image
3. environment variables
1. cloud-native applications can usually load configuration directly from environment variables
2. an entrypoint.sh script preprocesses the variables into settings in a config file
4. storage volumes
- inject variables --> the Pod reads the variables
- mount a volume --> the Pod reads the config
- custom command-line arguments
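Methods 3 and 4 can be combined in one Pod; a hedged sketch assuming a ConfigMap named nginx-config that holds a nginx_port key:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cm-demo              # illustrative name
spec:
  containers:
  - name: app
    image: ikubernetes/myapp:v1
    env:
    - name: NGINX_PORT       # method 3: inject a single key as an env var
      valueFrom:
        configMapKeyRef:
          name: nginx-config
          key: nginx_port
    volumeMounts:
    - name: conf             # method 4: mount every key as a file
      mountPath: /etc/nginx/conf.d/
      readOnly: true
  volumes:
  - name: conf
    configMap:
      name: nginx-config
```

Volume-mounted keys are updated in place when the ConfigMap changes; env vars are fixed at container start.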
- StatefulSet:
PetSet -> StatefulSet
For services that require:
1. stable and unique network identifiers
2. stable and persistent storage
3. ordered, graceful deployment and scaling
4. ordered, graceful deletion and termination
5. ordered rolling updates
Three components:
- headless service: clusterIP: None
- StatefulSet
- volumeClaimTemplate
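The three components can be sketched together; a hedged example with illustrative names, image, and sizes:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-headless
spec:
  clusterIP: None            # component 1: headless service
  selector:
    app: myapp-sts
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet            # component 2
metadata:
  name: myapp-sts
spec:
  serviceName: myapp-headless
  replicas: 2
  selector:
    matchLabels:
      app: myapp-sts
  template:
    metadata:
      labels:
        app: myapp-sts
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:      # component 3: one PVC per ordinal Pod
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```

Each replica gets a stable name (myapp-sts-0, myapp-sts-1) resolvable through the headless service, and its own PVC.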
- Authentication, authorization, and admission control
- Authentication
- restful API: token
- tls: mutual (two-way) TLS authentication
- user/password
- Authorization
- rbac
- role
- rolebinding
- ClusterRole
- ClusterRoleBinding
- webhook
- abac
- Admission Control
client --> API Server
pod --> api server
- ServiceAccount?
- secret
- ServiceAccountName
kubectl get secret
kubectl get sa | serviceaccount
kubectl create serviceaccount my-sa -o yaml --dry-run=client
kubectl get pods myapp -o yaml --export
API request path: https://IP:port/apis/apps/v1/namespaces/default/deployments/myapp-deploy/
HTTP request verbs:
get, post, put, delete
HTTP request --> API request verbs:
get, list, create, update, patch, watch, proxy, redirect, delete, deletecollection
Resource:
Subresource:
Namespace:
API group:
- UserAccount (X.509 client certificate):
alex.crt
(umask 077; openssl genrsa -out alex.key 2048)
openssl req -new -key alex.key -out alex.csr -subj "/CN=alex"
openssl x509 -req -in alex.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out alex.crt -days 3650
openssl x509 -in alex.crt -text -noout
kubectl config set-cluster alex-cluster --server="https://192.168.137.131:6443" --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true --kubeconfig=/tmp/alex.conf
kubectl config set-credentials alex \
--client-certificate=alex.crt \
--client-key=alex.key \
--embed-certs=true \
--kubeconfig=/tmp/alex.conf
kubectl config set-context alex@kubernetes --cluster=alex-cluster --user=alex --kubeconfig=/tmp/alex.conf
kubectl config use-context alex@kubernetes --kubeconfig=/tmp/alex.conf
kubectl get pods --kubeconfig=/tmp/alex.conf (Error from server (Forbidden))
- RBAC
- role, clusterrole
object:
resource group
resource
non-resource url
action:
get, list, watch, patch, delete, deletecollection, ...
- rolebinding, clusterrolebinding
subject:
user
group
serviceaccount
role:
- role:
- operations
- objects
- rolebinding:
- user account or service account
- role
- clusterrole
- clusterrolebinding
# kubectl create role --help
# kubectl create rolebinding --help
# kubectl create role pods-reader --verb=get,list,watch --resource=pods --dry-run=client -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
creationTimestamp: null
name: pods-reader
namespace: default
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- list
- watch
# kubectl create rolebinding alex-read-pods --role=pods-reader --user=alex --dry-run=client -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
creationTimestamp: null
name: alex-read-pods
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: pods-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: alex
# continuing the alex user example above
# kubectl config use-context alex@kubernetes --kubeconfig=/tmp/alex.conf
# kubectl get pods --kubeconfig=/tmp/alex.conf
# kubectl get pods -n kube-system --kubeconfig=/tmp/alex.conf (error from server forbidden)
# kubectl create clusterrole --help
# kubectl create clusterrolebinding --help
- Kubernetes Dashboard
- helm install k8s-dashboard kubernetes-dashboard/kubernetes-dashboard --version 2.3.0 [--namespace dashboard]
- helm fetch kubernetes-dashboard/kubernetes-dashboard --version 2.3.0
- tar xf kubernetes-dashboard-2.3.0.tgz
- vim kubernetes-dashboard/values.yaml
- helm install k8s-dashboard kubernetes-dashboard/kubernetes-dashboard --version 2.3.0 -f kubernetes-dashboard/values.yaml [--namespace dashboard]
- kubectl create clusterrolebinding k8s-dashboard-admin --clusterrole=cluster-admin --serviceaccount=default:k8s-dashboard [or --serviceaccount=dashboard:k8s-dashboard]
- kubectl describe sa k8s-dashboard [-n dashboard]
- kubectl describe secret k8s-dashboard-token-xxxx [-n dashboard]
- Network model and network policies
flannel
kubectl get daemonset -n kube-system
kubectl get pods -o wide -n kube-system |grep -i kube-flannel
kubectl get configmap -n kube-system
kubectl get configmap kube-flannel-cfg -o json -n kube-system
from 10.244.1.59 ping 10.244.2.76
tcpdump -i cni0 -nn icmp
tcpdump -i flannel.1 -nn
tcpdump -i ens32 -nn host 192.168.137.131
overlay network (VXLAN backend; compare OTV)
calico
- Pod scheduling
Scheduler:
Predicates (node filtering):
Priority functions (node scoring):
Node selectors: nodeSelector, nodeName
Node affinity: nodeAffinity
A taint's effect defines how it repels Pods:
NoSchedule: affects only the scheduling process; existing Pods are not touched
NoExecute: affects both scheduling and existing Pods; Pods that do not tolerate the taint are evicted
PreferNoSchedule: a soft version of NoSchedule
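Pods opt in to tainted nodes with tolerations; a hedged fragment of a Pod spec, with an illustrative node-type taint key:

```yaml
# fragment of a Pod spec; the node-type key/value is illustrative
spec:
  tolerations:
  - key: node-type
    operator: Equal
    value: production
    effect: NoSchedule           # tolerate a node-type=production:NoSchedule taint
  - key: node.kubernetes.io/unreachable
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 60        # for NoExecute: stay 60s before being evicted
```

operator: Exists matches any value of the key; tolerationSeconds is only meaningful with effect NoExecute.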
- CRDs, custom resources, custom controllers, and custom API servers
- Heapster (data collection)
- cAdvisor (metric collection)
- InfluxDB (historical data: a time-series database)
- Grafana (visualization)
- RBAC
Resource metrics:
metrics-server: the k8s resource metrics aggregator
Custom metrics:
- prometheus
- k8s-prometheus-adapter
- MetricsServer
- PrometheusOperator
- NodeExporter
- kubeStateMetrics
- Prometheus
- Grafana
New-generation architecture:
Core metrics pipeline:
composed of the kubelet, metrics-server, and the APIs exposed by the API server; cumulative CPU usage, real-time memory usage, Pod resource usage, and container disk usage
Monitoring pipeline:
collects all kinds of metrics from the system and serves them to end users, storage systems, and the HPA; includes the core metrics plus many non-core metrics. Non-core metrics cannot be parsed by k8s itself.
metrics-server:
API server
- Helm package manager: chart repository
Tiller:
chart:
- manifests
- template files
helm install mem1 stable/memcached
# kubectl explain ingress.spec
FIELDS:
backend <Object>
resource <Object>
serviceName <string>
servicePort <string>
ingressClassName <string>
rules <[]Object>
host <string>
http <Object>
paths <[]Object> -required-
backend <Object> -required-
path <string>
pathType <string>
tls <[]Object>
Ingress Controller:
- Nginx
- Traefik
- Envoy
namespace: ingress-nginx
Job: one-off tasks
CronJob: periodic tasks
StatefulSet: stateful workloads; cares about individual Pods, not just the count.
CRD: CustomResourceDefinitions, v1.8+
Operator: e.g. the etcd Operator
example:
- # ReplicaSet
- vim rs-demo.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: myapp
namespace: default
spec:
replicas: 2
selector:
matchLabels:
app: myapp
release: canary
template:
metadata:
name: myapp-pod
labels:
app: myapp
release: canary
environments: qa
spec:
containers:
- name: myapp-container
image: ikubernetes/myapp:v1
ports:
- name: http
containerPort: 80
- kubectl create -f rs-demo.yaml
- kubectl edit rs myapp --> replicas: 5
- kubectl get pods
- kubectl edit rs myapp --> image: ikubernetes/myapp:v2
- kubectl get rs -o wide
- curl xx.xx.xxx.xx # only re-created Pods use the new image
- # Deployment
- vim deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp-deploy
namespace: default
spec:
replicas: 2
selector:
matchLabels:
app: myapp
release: canary
template:
metadata:
name: myapp-pod
labels:
app: myapp
release: canary
environments: qa
spec:
containers:
- name: myapp-container
image: ikubernetes/myapp:v1
ports:
- name: http
containerPort: 80
- kubectl apply -f deploy-demo.yaml
- kubectl get deploy
- kubectl get rs -o wide
myapp-deploy-69b47bc96d --> hash of the Pod template
- kubectl get pods
- kubectl get pods -l app=myapp -w # in a new terminal
- vim deploy-demo.yaml --> image: ikubernetes/myapp:v2
- kubectl apply -f deploy-demo.yaml
- kubectl get rs -o wide
- kubectl rollout history deployment myapp-deploy
# add Pods by patching the Deployment
- kubectl patch deployment myapp-deploy -p '{"spec":{"replicas":5}}'
- kubectl get pods
- kubectl patch deployment myapp-deploy -p '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":0}}}}'
- kubectl describe deployment myapp-deploy
- kubectl get pods -l app=myapp -w # in new terminal 1
- kubectl set image deployment myapp-deploy myapp=ikubernetes/myapp:v3 && kubectl rollout pause deployment myapp-deploy # pause the rollout: canary release
- kubectl rollout status deployment myapp-deploy # in new terminal 2
- kubectl rollout resume deployment myapp-deploy --> # watch terminals 1 and 2
- kubectl rollout history deployment myapp-deploy # view revision history
- kubectl get rs -o wide
# roll back to a previous revision
- kubectl rollout undo --help
- kubectl rollout undo deployment myapp-deploy --to-revision=1
- kubectl rollout history deployment myapp-deploy # view revision history
- # DaemonSet
- vim ds-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: redis
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: redis
role: logstor
template:
metadata:
labels:
app: redis
role: logstor
spec:
containers:
- name: redis
image: redis:4.0-alpine
ports:
- name: redis
containerPort: 6379
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: filebeat-ds
namespace: default
spec:
selector:
matchLabels:
app: filebeat
release: stable
template:
metadata:
name: filebeat-pod
labels:
app: filebeat
release: stable
spec:
containers:
- name: filebeat
image: ikubernetes/filebeat:5.6.5-alpine
env:
- name: REDIS_HOST
value: redis.default.svc.cluster.local
- name: REDIS_LOG_LEVEL
value: info
- kubectl apply -f ds-demo.yaml
- kubectl get pods
- kubectl expose deployment redis --port=6379
- kubectl get svc
- kubectl exec -it redis-5bxxxx-xxx -- /bin/sh
# netstat -tnl
# nslookup redis.default.svc.cluster.local
# redis-cli -h redis.default.svc.cluster.local
# KEYS *
# DaemonSet rolling update
- kubectl set image daemonsets filebeat-ds filebeat=ikubernetes/filebeat:5.6.6-alpine
- kubectl get pods -w # Pods are updated one at a time
- # Service
- vim redis-svc.yaml
apiVersion: v1
kind: Service
metadata:
name: redis
namespace: default
spec:
selector:
app: redis
role: logstor
# clusterIP: 10.96.96.96
type: ClusterIP
ports:
- port: 6379
targetPort: 6379
- kubectl apply -f redis-svc.yaml
- kubectl get svc
- kubectl describe svc redis
- Service resources
Helm: like Linux yum/apt/apk ...
k8s Server
Master
API Server
- port: 6443
- auth: mutual TLS, certificates under /etc/kubernetes/pki
- config: ~/.kube/config
- kubectl config view
- restful api
- kubectl api-versions
- json
- kubectl get pod == curl https://master:6443/api/v1/...
- all the other components communicate through the API server
- kubectl
- kube-controller-manager
- kube-scheduler
- etcd
- kubelet
- kube-proxy
- resources in the API are divided into multiple logical groups: apiVersion
- reconciliation loop: drive status toward spec
Scheduler
Controller
- reconciliation loop: drive status toward spec
Node:
- pod
- service -> iptables/ipvs -> kube-proxy
- kube-proxy
Resource
Resources have two scopes:
- cluster-level
- Node
- Namespace
- ClusterRole
- ClusterRoleBinding
- PersistentVolume
- namespace-level
- pod
- service
- deploy
- Role
- RoleBinding
- metadata-type resources
- HPA
- PodTemplate
- LimitRange
Parts of a resource:
- apiVersion: group/version (kubectl api-versions)
- kind: resource type
- metadata
- spec: desired state of the resource
- labels/tags: kubectl label
- annotations: kubectl annotate
- initContainers: kubectl explain pods.spec.initContainers: container initialization; behavior before, after, and during the run (liveness probe, readiness probe)
- lifecycle
- livenessProbe
- readinessProbe
- startupProbe
- containers: kubectl explain pods.spec.containers: behavior before, after, and during the run (liveness probe, readiness probe)
- lifecycle: postStart hook, preStop hook
- livenessProbe
- readinessProbe
- startupProbe
- name:
- command: ["/bin/bash", "-c", "sleep 3600"]
- args:
- image:
- imagePullPolicy:
- Never
- Always
- IfNotPresent
- ports:
- name:
- hostIP:
- hostPort:
- protocol
- containerPort
- status: current state of the resource
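The lifecycle hooks and probes listed above can be sketched on a single container; a hedged fragment with illustrative paths and timings:

```yaml
# fragment of a Pod spec; paths and timings are illustrative
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    lifecycle:
      postStart:                 # runs right after the container starts
        exec:
          command: ["/bin/sh", "-c", "echo started > /tmp/flag"]
      preStop:                   # runs just before the container is stopped
        exec:
          command: ["/bin/sh", "-c", "sleep 5"]
    livenessProbe:               # restart the container when this fails
      httpGet:
        path: /index.html
        port: 80
      initialDelaySeconds: 3
      periodSeconds: 5
    readinessProbe:              # remove from Service endpoints when this fails
      httpGet:
        path: /index.html
        port: 80
      initialDelaySeconds: 3
```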
Object URL for referencing a resource:
/apis/<GROUP>/<VERSION>/namespaces/<NAMESPACE_NAME>/<KIND>[/OBJECT_ID]/
- /api/GROUP/VERSION/namespaces/NAMESPACE/TYPE/NAME
- kubectl get pod/nginx-ds-s4hpn
- selfLink: /api/v1/namespaces/default/pods/nginx-ds-s4hpn
DNS resource records:
SVC_NAME.NS_NAME.DOMAIN.LTD.
redis.default.svc.cluster.local
- Pod
- Pod Controller
- Deployment: controller type --> ngx-deploy --> nginx Pods
- Service
- nginx-svc --> associated with the nginx Pods
kubeadm
kubectl
- kubectl explain pods.spec.initContainers
- kubectl -h
- basic commands beginner
- create
- expose
- run
- set
- basic commands intermediate
- explain
- get
- edit
- delete
- deploy commands
- rollout
- scale
- autoscale
- cluster management commands
- certificate
- cluster-info
- top
- cordon
- uncordon
- drain
- taint
- troubleshooting and debugging commands
- describe
- logs
- attach
- exec: kubectl exec -it PodName -c ContainerName -- /bin/sh
- port-forward
- kubectl config view [-o wide/json/yaml]
# kubectl config view
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://192.168.137.131:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
- kubectl api-resources: supported resource types and their short names
- kubectl get [-o wide/json/yaml] [-n default/kube-system/...] [-L label-name] [-l label-name ==/!= label-value]
- all
- nodes
- pods
- ns/namespace
- default
- kube-public
- kube-system
- deploy
- svc/service
- kubectl create [ -f file]
- job
- namespace
- kubectl create namespace testing
- kubectl create namespace prod
- kubectl create namespace develop
- kubectl get ns
- kubectl delete namespace testing
- kubectl delete ns/prod ns/develop
- deployment
- kubectl create deployment nginx-deploy --image=nginx:1.14-alpine
- kubectl get all -o wide
- pod/nginx-deploy-xxxx (ip: 10.244.1.2)
- deployment.apps/nginx-deploy
- replicaset.apps/nginx-deploy-xxxx
- curl 10.244.1.2 ( welcome nginx!)
- kubectl delete pod/nginx-deploy-xxxx
- kubectl get all -o wide
- pod/nginx-deploy-xxxx (ip: 10.244.3.2)
...
- curl 10.244.3.2 ( welcome nginx!)
- service
- kubectl create service -h
- ClusterIP
- NodePort
- kubectl create service clusterip nginx-deploy --tcp=80:80 (use the same name as the deploy above and the Pod endpoints are matched automatically)
- kubectl get svc/nginx-deploy -o yaml
- clusterip: 10.110.129.64
- endpoints: 10.244.3.2
- kubectl describe svc/nginx-deploy
# test: delete the Pod
- curl 10.110.129.64 ( welcome nginx!)
- kubectl delete pod/nginx-deploy-xxxx
- kubectl get all -o wide
- pod/nginx-deploy-xxxx (ip: 10.244.1.3)
...
- kubectl get svc/nginx-deploy -o yaml
- clusterip: 10.110.129.64
- endpoints: 10.244.1.3 (automatically re-associated with the newest Pod)
# test: delete the Service
- curl nginx-deploy.default.svc.cluster.local. ( welcome nginx!)
- kubectl delete svc/nginx-deploy
- kubectl create service clusterip nginx-deploy --tcp=80:80 (use the same name as the deploy above and the Pod endpoints are matched automatically)
- kubectl get svc/nginx-deploy -o yaml
- clusterip: 10.111.215.249
- endpoints: 10.244.3.2
- curl nginx-deploy.default.svc.cluster.local. ( welcome nginx!)
# test: scale Pods on demand
- kubectl create deploy myapp --image=ikubernetes/myapp:v1
- kubectl get deploy
- kubectl get pods -o wide
- ip : 10.244.3.3
- curl 10.244.3.3
- curl 10.244.3.3/hostname.html (show the pod name: myapp-xxxx-yyyy)
- kubectl create service clusterip myapp --tcp=80:80
- kubectl describe svc/myapp
- IP 10.100.182.218
- Endpoints: 10.244.3.3
- curl myapp.default.svc.cluster.local. ( welcome myapp!)
- curl myapp.default.svc.cluster.local/hostname.html (show the pod name: myapp-xxxx-yyyy)
- kubectl scale --replicas=3 deployment/myapp
- kubectl describe svc/myapp
- IP 10.100.182.218
- Endpoints: 10.244.3.3, 10.244.1.4, 10.244.2.2
- curl myapp.default.svc.cluster.local/hostname.html (randomly shows different Pods' names; repeat a few times to see the effect)
- kubectl scale --replicas=2 deployment/myapp
- kubectl describe svc/myapp
- IP 10.100.182.218
- Endpoints: 10.244.3.3, 10.244.1.4
# NodePort: access from outside the cluster
- kubectl delete svc/myapp
- kubectl create service nodeport -h
- kubectl create service nodeport myapp --tcp=80:80
- kubectl get svc
- ports: 80:31996/TCP
- from outside the cluster, any node serves http://<nodeIP>:31996/hostname.html
- kube-proxy automatically creates the rules in every node's iptables
- ssh nodes "iptables -t nat -vnL"
eg:
- kubectl expose
- kubectl set image deployment myapp myapp=ikubernetes/myapp:v2
- kubectl label [--overwrite] (-f FILENAME | TYPE NAME) KEY_1=VAL_1 ... KEY_N=VAL_N [--resource-version=version]
kubectl label pods POD_NAME -n dev-namespaces apptag=my-app release=stable deltag- # the trailing "-" removes the deltag label
- kubectl api-versions
- kubectl describe
Network
- node network
- service network: Service registration and discovery
- pod network
External access:
- Service: NodePort
- hostport:
- hostNetwork:
ipvs/iptables: layer-4 load balancing; Ingress: layer-7 load balancing