6. Log audit
You must connect to the correct host. Failing to do so may result in a score of zero.
[candidate@base] $ ssh cks001098
Context
You must implement auditing for a cluster configured with kubeadm.
Task
First, reconfigure the cluster's API server so that:
⚫ the basic audit policy located at /etc/kubernetes/logpolicy/sample-policy.yaml is used
⚫ logs are stored at /var/log/kubernetes/audit-logs.txt
⚫ at most 2 log files are retained, with a retention period of 10 days

Note: the basic policy only specifies what is not logged.

Then, edit and extend the basic policy so that it logs:
⚫ persistentvolumes events at the RequestResponse level
⚫ the request body of configmaps events in the front-apps namespace
⚫ ConfigMap and Secret changes in all namespaces at the Metadata level
⚫ all other requests at the Metadata level

Note: make sure the API server uses the extended policy.

Solution approach:
Reference page: kubernetes.io, Tasks > Monitoring, Logging, and Debugging > Troubleshooting Clusters > Auditing (available in both the Chinese and English docs).
1. Write the audit policy as required.
2. Configure the apiserver to enable audit logging:
● specify the location of the policy file
● specify the location of the log file
● specify the log retention time and the number of log files to keep

Solution process:
Following the official documentation: write out the audit policy, edit kube-apiserver.yaml, and restart the kubelet.

From the docs: the log backend writes audit events to a file in JSON-lines format. You can configure the log audit backend using the following kube-apiserver flags:
--audit-log-path specifies the log file path that the log backend uses to write audit events. Not specifying this flag disables the log backend. "-" means standard out.
--audit-log-maxage defines the maximum number of days to retain old audit log files.   # retention time in days
--audit-log-maxbackup defines the maximum number of audit log files to retain.         # maximum number of files kept
--audit-log-maxsize defines the maximum size in megabytes of the audit log file before it gets rotated.
If your cluster's control plane runs the kube-apiserver as a Pod, remember to mount the hostPath to the location of the policy file and log file, so that audit records are persisted. For example:
- --audit-policy-file=/etc/kubernetes/audit-policy.yaml   # policy file location
- --audit-log-path=/var/log/kubernetes/audit/audit.log    # log output location
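The hostPath note above means the kube-apiserver container must be able to read the policy file and write the log file. In this exam environment the required volumes are usually already defined in the manifest (the walkthrough below only adds flags), but for completeness here is a minimal sketch adapted to this task's paths; the volume names and the choice to mount the whole /var/log/kubernetes/ directory are illustrative assumptions, so check your own manifest before adding anything:

# sketch only: under the kube-apiserver container (spec.containers[0]) in kube-apiserver.yaml
    volumeMounts:
    - mountPath: /etc/kubernetes/logpolicy/sample-policy.yaml
      name: audit-policy
      readOnly: true
    - mountPath: /var/log/kubernetes/
      name: audit-log
      readOnly: false
# sketch only: under the Pod spec in the same file
  volumes:
  - hostPath:
      path: /etc/kubernetes/logpolicy/sample-policy.yaml
      type: File
    name: audit-policy
  - hostPath:
      path: /var/log/kubernetes/
      type: DirectoryOrCreate
    name: audit-log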
# Edit the apiserver configuration to enable audit logging as required by the task.
candidate@master01:~$ cd /etc/kubernetes/logpolicy/
candidate@master01:/etc/kubernetes/logpolicy$ ll
total 12
drwxrwxr-x 2 root root 4096 Mar 15  2025 ./
drwxrwxr-x 6 root root 4096 Mar 15  2025 ../
-rw-rw-r-- 1 root root  528 Mar 15  2025 sample-policy.yaml
candidate@master01:/etc/kubernetes/logpolicy$ cp sample-policy.yaml sample-policy.yamlbak
candidate@master01:/etc/kubernetes/logpolicy$ sudo su
root@master01:/etc/kubernetes/logpolicy# cd /root
root@master01:~# ll
total 88
drwx------  8 root root  4096 Oct 22 21:26 ./
drwxr-xr-x 20 root root  4096 Mar 15  2025 ../
-rwxr-xr-x  1 root root  1613 Mar 15  2025 api.sh*
-rw-------  1 root root    16 Apr 22  2025 .bash_history
-rw-r--r--  1 root root  3106 Dec  5  2019 .bashrc
drwx------  2 root root  4096 Jan  1  2023 .cache/
drwxr-xr-x  3 root root  4096 Mar 15  2025 .config/
-rwxr-xr-x  1 root root  1660 Mar 15  2025 imagePolicy.sh*
drwxr-xr-x  3 root root  4096 Mar  2  2025 .kube/
-rwxr-xr-x  1 root root  1696 Mar 15  2025 kubelet-etcd.sh*
drwxr-xr-x  3 root root  4096 Jan  1  2023 .local/
-rwxr-xr-x  1 root root  1645 Mar 15  2025 log-audit.sh*
-rw-r--r--  1 root root   161 Dec  5  2019 .profile
-rw-r--r--  1 root root    75 Mar 15  2025 .selected_editor
drwx------  3 root root  4096 Oct 22 21:26 snap/
drwx------  2 root root  4096 Jan  1  2023 .ssh/
-rw-------  1 root root 12438 Apr 22  2025 .viminfo
-rw-r--r--  1 root root   255 Oct 24 10:54 .wget-hsts
-rw-------  1 root root    54 Apr 22  2025 .Xauthority
root@master01:~# sh log-audit.sh
请稍等1分钟正在初始化这道题的环境配置。
Please wait for 1 minutes, the environment configuration for this question is being initialized.
初始化完成 Initialization completed.
root@master01:~# cd /etc/kubernetes/logpolicy/
root@master01:/etc/kubernetes/logpolicy# ll
total 16
drwxrwxr-x 2 root      root      4096 Oct 24 11:00 ./
drwxrwxr-x 6 root      root      4096 Mar 15  2025 ../
-rw-rw-r-- 1 root      root       528 Mar 15  2025 sample-policy.yaml
-rw-rw-r-- 1 candidate candidate  528 Oct 24 11:00 sample-policy.yamlbak
root@master01:/etc/kubernetes/logpolicy# vim sample-policy.yaml
root@master01:/etc/kubernetes/logpolicy# cat sample-policy.yaml
apiVersion: audit.k8s.io/v1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log persistentvolumes changes at RequestResponse level.
  - level: RequestResponse
    resources:
    - group: ""
      resources: ["persistentvolumes"]

  # Log the request body of configmap changes in the front-apps namespace.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the front-apps namespace.
    # The empty string can be used to select non-namespaced resources.
    namespaces: ["front-apps"]

  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]

  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"

Note: audit rules are evaluated in order and the first matching rule decides the level, so the catch-all Metadata rule must stay at the end of the list.
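A malformed policy file will prevent kube-apiserver from starting after the restart below, so a quick syntax check can save time. A minimal sketch, assuming python3 with PyYAML happens to be available on master01 (not guaranteed in the exam environment, and entirely optional):

python3 -c 'import yaml; yaml.safe_load(open("/etc/kubernetes/logpolicy/sample-policy.yaml")); print("policy parses")'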
root@master01:/etc/kubernetes/logpolicy# cd /etc/kubernetes/manifests/
root@master01:/etc/kubernetes/manifests# ll
total 28
drwxrwxr-x 2 root root 4096 Oct 24 11:01 ./
drwxrwxr-x 6 root root 4096 Mar 15  2025 ../
-rw------- 1 root root 2551 Mar  2  2025 etcd.yaml
-rw------- 1 root root 4278 Oct 24 11:01 kube-apiserver.yaml
-rw------- 1 root root 3417 Mar  2  2025 kube-controller-manager.yaml
-rw-r--r-- 1 root root    0 Jan 15  2025 .kubelet-keep
-rw------- 1 root root 1680 Mar  2  2025 kube-scheduler.yaml
root@master01:/etc/kubernetes/manifests# vim kube-apiserver.yaml
root@master01:/etc/kubernetes/manifests# cd /var/log/kubernetes/
root@master01:/var/log/kubernetes# ll
total 3508
drwxrwxr-x  2 root root      4096 Oct 24 11:24 ./
drwxrwxr-x 14 root syslog    4096 Oct 24 11:01 ../
-rw-------  1 root root   3580616 Oct 24 11:25 audit-logs.txt
root@master01:/var/log/kubernetes# cd /etc/kubernetes/logpolicy/
root@master01:/etc/kubernetes/logpolicy# ll
total 16
drwxrwxr-x 2 root      root      4096 Oct 24 11:14 ./
drwxrwxr-x 6 root      root      4096 Mar 15  2025 ../
-rw-rw-r-- 1 root      root      1114 Oct 24 11:14 sample-policy.yaml
-rw-rw-r-- 1 candidate candidate  528 Oct 24 11:00 sample-policy.yamlbak
root@master01:/etc/kubernetes/logpolicy# cat sample-policy.yaml
(output identical to the extended policy shown above)
root@master01:/etc/kubernetes/logpolicy# cd /etc/kubernetes/manifests/
root@master01:/etc/kubernetes/manifests# ll
total 28
drwxrwxr-x 2 root root 4096 Oct 24 11:24 ./
drwxrwxr-x 6 root root 4096 Mar 15  2025 ../
-rw------- 1 root root 2551 Mar  2  2025 etcd.yaml
-rw------- 1 root root 4466 Oct 24 11:24 kube-apiserver.yaml
-rw------- 1 root root 3417 Mar  2  2025 kube-controller-manager.yaml
-rw-r--r-- 1 root root    0 Jan 15  2025 .kubelet-keep
-rw------- 1 root root 1680 Mar  2  2025 kube-scheduler.yaml
root@master01:/etc/kubernetes/manifests# cat kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 11.0.1.111:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=11.0.1.111
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-issuer=https://kubernetes.default.svc.cluster.local
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    - --audit-policy-file=/etc/kubernetes/logpolicy/sample-policy.yaml
    - --audit-log-path=/var/log/kubernetes/audit-logs.txt
    - --audit-log-maxage=10
    - --audit-log-maxbackup=2
# The four audit-* flags above are the additions for this task (the rest of the manifest is unchanged and omitted here).
# Reload systemd and restart the kubelet so the static Pod picks up the change:
root@master01:/etc/kubernetes/manifests# systemctl daemon-reload
root@master01:/etc/kubernetes/manifests# systemctl restart kubelet
root@master01:/etc/kubernetes/manifests# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
    Drop-In: /usr/lib/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: active (running) since Fri 2025-10-24 11:24:53 CST; 5s ago
       Docs: https://kubernetes.io/docs/
   Main PID: 25237 (kubelet)
      Tasks: 11 (limit: 2510)
     Memory: 66.3M
     CGroup: /system.slice/kubelet.service
             └─25237 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --contain…
(repeated "Nameserver limits exceeded" warnings and container-restart messages from the kubelet journal omitted)
Oct 24 11:24:58 master01 kubelet[25237]: I1024 11:24:58.601930   25237 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-master01" podStart…

root@master01:/etc/kubernetes/manifests# kubectl get pods -n kube-system
NAMESPACE     NAME                               READY   STATUS    RESTARTS       AGE
kube-system   cilium-cxrfj                       1/1     Running   8 (120m ago)   222d
kube-system   cilium-dt6np                       1/1     Running   8 (120m ago)   222d
kube-system   cilium-operator-76556ddb85-sq7sg   1/1     Running   10 (8s ago)    222d
kube-system   cilium-wbnr5                       0/1     Running   8 (120m ago)   222d
kube-system   coredns-6766b7b6bb-cgz2g           1/1     Running   3 (120m ago)   184d
kube-system   coredns-6766b7b6bb-mrlm4           1/1     Running   8 (120m ago)   222d
kube-system   etcd-master01                      1/1     Running   9 (120m ago)   235d
kube-system   kube-apiserver-master01            0/1     Running   0              6s
kube-system   kube-controller-manager-master01   0/1     Running   11 (11s ago)   235d
kube-system   kube-proxy-2cgdg                   1/1     Running   9 (120m ago)   235d
kube-system   kube-proxy-62gnq                   1/1     Running   9 (120m ago)   235d
kube-system   kube-proxy-sfzz8                   1/1     Running   9 (120m ago)   235d
kube-system   kube-scheduler-master01            0/1     Running   11 (11s ago)   235d
kube-system   metrics-server-8467fcc7b7-5dg4m    1/1     Running   8 (120m ago)   222d
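The kube-apiserver Pod shows 0/1 right after the restart and should go to 1/1 once it passes its readiness check. After that, the audit configuration can be spot-checked on master01. A sketch using the log path from this task (the grep count may legitimately be 0 until something actually touches persistentvolumes); the crictl line is only needed if the API server never comes back and assumes a containerd-based kubeadm node:

sudo tail -n 1 /var/log/kubernetes/audit-logs.txt          # audit events are written as JSON lines
sudo grep -c '"resource":"persistentvolumes"' /var/log/kubernetes/audit-logs.txt
# troubleshooting only: inspect the kube-apiserver container directly if it fails to start
sudo crictl ps -a | grep kube-apiserver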
7. NetworkPolicy: Deny and Allow

You must connect to the correct host. Failing to do so may result in a score of zero.
[candidate@base] $ ssh cks000031

Context
You must implement NetworkPolicies to control traffic to existing Deployments across namespaces.

Task
First, to block all ingress traffic, create a NetworkPolicy named deny-policy in the prod namespace.
PS: the prod namespace is labeled env: prod

Then, to allow ingress traffic only from Pods in the prod namespace, create a NetworkPolicy named allow-from-prod in the data namespace. Use the prod namespace's label to allow the traffic.
PS: the data namespace is labeled env: data

Note: do not modify or delete any namespace or Pod. Only create the required NetworkPolicies.

Solution approach:
1. Create a NetworkPolicy named deny-policy in the prod namespace.
○ Purpose: block all ingress traffic.
○ Hint: the prod namespace carries the label env: prod; that label is referenced by the other policy, but this policy does not need it, it only has to live in the prod namespace.
2. Create a NetworkPolicy named allow-from-prod in the data namespace.
○ Purpose: allow ingress traffic only from Pods in the prod namespace.
○ Hint: use the prod namespace's label (env: prod) to allow the traffic.
○ The data namespace's label (env: data) only identifies the namespace; the policy itself applies to the Pods inside the data namespace.

Solution process:
Reference: https://kubernetes.io/docs/concepts/services-networking/network-policies/

1. Create the deny-policy NetworkPolicy
# 1. namespace: prod
# 2. name: deny-policy
# 3. deny all ingress traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-policy
  namespace: prod
spec:
  policyTypes:
  - Ingress
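The spec above omits podSelector; it then defaults to {} (every Pod in the prod namespace), but, as noted in the remarks after the second policy, writing it out explicitly does no harm. An equivalent sketch with the selector spelled out:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-policy
  namespace: prod
spec:
  podSelector: {}   # explicitly select all Pods in the prod namespace
  policyTypes:
  - Ingress         # no ingress rules are listed, so all inbound traffic is denied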
2. Create the allow-from-prod NetworkPolicy
# 1. namespace: data
# 2. name: allow-from-prod
# 3. only allow traffic coming from the namespace labeled env: prod
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-prod
  namespace: data
spec:
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          env: prod

Basic NetworkPolicy behavior
● Default behavior
○ If no NetworkPolicy exists, all inbound and outbound traffic is allowed (outside a sandboxed environment).
○ Once a NetworkPolicy targeting ingress/egress is defined in a namespace:
■ an ingress policy exists → all inbound traffic is denied by default; only explicitly allowed traffic gets through
■ an egress policy exists → all outbound traffic is denied by default
So a NetworkPolicy that contains only policyTypes: [Ingress] and no ingress rules is equivalent to "deny all inbound traffic".
● The podSelector field
○ selects which Pods the NetworkPolicy applies to.
○ {} means "all Pods in this namespace".
○ If omitted it also defaults to {}, but writing it out explicitly is recommended (see the sketch above).

8. Expose an HTTPS service with Ingress

You must connect to the correct host. Failing to do so may result in a score of zero.
[candidate@base] $ ssh cks000032

Context
You must expose a web application over an HTTPS route.

Task
Create an Ingress resource named web in the prod02 namespace and configure it as follows:
⚫ route traffic for the host web.k8sng.local and all paths to the existing web Service
⚫ use the existing web-cert Secret to enable TLS termination
⚫ redirect HTTP requests to HTTPS

PS: you can test the Ingress configuration with the following command:
[candidate@cks000032] $ curl -Lk https://web.k8sng.local

Solution approach:
1. Create an Ingress named web in the prod02 namespace.
2. Route the host web.k8sng.local with all paths (i.e. /) to the Service named web.
3. Use the given TLS secret web-cert and enable HTTPS.
4. Redirect HTTP to HTTPS. This is usually implemented with an Ingress Controller annotation or configuration, for example nginx.ingress.kubernetes.io/ssl-redirect: "true" for the NGINX Ingress Controller. This annotation is worth memorizing.

Solution process:
This one is fairly simple; the hard part is the annotation. With that memorized, the imperative command is enough.

kubectl create ingress --help
Create an ingress with the specified name.

Aliases:
ingress, ing

Examples:
  # Create a single ingress called 'simple' that directs requests to foo.com/bar to svc
  # svc1:8080 with a TLS secret "my-cert"
  kubectl create ingress simple --rule="foo.com/bar=svc1:8080,tls=my-cert"

  # Create a catch all ingress of "/path" pointing to service svc:port and Ingress Class as "otheringress"
  kubectl create ingress catch-all --class=otheringress --rule="/path=svc:port"

  # Create an ingress with two annotations: ingress.annotation1 and ingress.annotations2
  kubectl create ingress annotated --class=default --rule="foo.com/bar=svc:port" \
    --annotation ingress.annotation1=foo \
    --annotation ingress.annotation2=bla

  # Create an ingress with the same host and multiple paths
  kubectl create ingress multipath --class=default \
    --rule="foo.com/=svc:port" \
    --rule="foo.com/admin/=svcadmin:portadmin"

  # Create an ingress with multiple hosts and the pathType as Prefix
  kubectl create ingress ingress1 --class=default \
    --rule="foo.com/path*=svc:8080" \
    --rule="bar.com/admin*=svc2:http"

  # Create an ingress with TLS enabled using the default ingress certificate and different path types
  kubectl create ingress ingtls --class=default \
    --rule="foo.com/=svc:https,tls" \
    --rule="foo.com/path/subpath*=othersvc:8080"

  # Create an ingress with TLS enabled using a specific secret and pathType as Prefix
  kubectl create ingress ingsecret --class=default \
    --rule="foo.com/*=svc:8080,tls=secret1"

  # Create an ingress with a default backend
  kubectl create ingress ingdefault --class=default \
    --default-backend=defaultsvc:http \
    --rule="foo.com/*=svc:8080,tls=secret1"

# The task needs TLS, a host name and a catch-all /* path, so the closest example is:
kubectl create ingress ingsecret --class=default \
  --rule="foo.com/*=svc:8080,tls=secret1"

# The existing Service in prod02:
# prod02   web   ClusterIP   10.104.168.212   <none>   80/TCP

# The task does not ask for an ingress class, so --class can be omitted. Adding the HTTP→HTTPS
# redirect annotation gives the final command:
kubectl create ingress web --namespace=prod02 \
  --rule="web.k8sng.local/*=web:80,tls=web-cert" \
  --annotation=nginx.ingress.kubernetes.io/ssl-redirect=true

# Verify the result:
kubectl get ingress -n prod02 web -o yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
  creationTimestamp: "2025-09-28T15:35:46Z"
  generation: 1
  name: web
  namespace: prod02
  resourceVersion: "193267"
  uid: a4a535bc-d53b-4d34-b392-1562875a1859
spec:
  ingressClassName: nginx
  rules:
  - host: web.k8sng.local
    http:
      paths:
      - backend:
          service:
            name: web
            port:
              number: 80
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - web.k8sng.local
    secretName: web-cert
status:
  loadBalancer:
    ingress:
    - ip: 10.111.189.94

candidate@master01:~$ curl -Lk https://web.k8sng.local
Hello World ^_^
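The -L in the test above silently follows the redirect; to see the redirect itself, send a plain HTTP request without following it. A sketch (with ingress-nginx and ssl-redirect enabled the response is typically a 308 Permanent Redirect to the https:// URL, but the exact status code depends on the controller version and configuration):

curl -I http://web.k8sng.local
# expect a 308 Permanent Redirect with a "Location: https://web.k8sng.local" header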
9. Disable automatic mounting of API credentials

You must connect to the correct host. Failing to do so may result in a score of zero.
[candidate@base] $ ssh cks000033

Context
A security audit found that a Deployment carries a non-compliant ServiceAccount token, which could lead to a security vulnerability.

Task
First, modify the existing stats-monitor-sa ServiceAccount in the monitoring namespace to disable automatic mounting of API credentials.

Then, modify the existing stats-monitor Deployment in the monitoring namespace to inject a ServiceAccount token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token. Use a projected volume named token to inject the ServiceAccount token, and make sure it is mounted read-only.

PS: the Deployment's manifest file can be found at ~/stats-monitor/deployment.yaml

Solution approach:
1. By default, a Pod automatically mounts the API credentials (the token) of its associated ServiceAccount under /var/run/secrets/kubernetes.io/serviceaccount/. The task asks to disable this automatic mounting, which means setting automountServiceAccountToken: false on the ServiceAccount (the task explicitly says to modify the ServiceAccount).
2. Inject the ServiceAccount token into the specified Deployment through a projected volume.

Solution process:
1. Reference: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/

kubectl get serviceaccounts -n monitoring stats-monitor-sa -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"name":"stats-monitor-sa","namespace":"monitoring"},"secrets":[{"name":"stats-monitor-sa-token"}]}
  creationTimestamp: "2025-03-15T05:38:58Z"
  name: stats-monitor-sa
  namespace: monitoring
  resourceVersion: "20156"
  uid: 591d03a7-132e-494b-9187-dea63c582868
secrets:
- name: stats-monitor-sa-token
# newly added line to disable automatic mounting of the token:
automountServiceAccountToken: false

# Check the target Deployment:
kubectl get deployments.apps -n monitoring stats-monitor -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: stats-monitor
  name: stats-monitor
  namespace: monitoring
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: stats-monitor
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: stats-monitor
    spec:
      automountServiceAccountToken: false
      containers:
      - image: vicuu/nginx:hello
        imagePullPolicy: IfNotPresent
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /usr/local/apache2/conf/httpd.conf
          name: httpcf
          subPath: httpd.conf
        # newly added mount:
        - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          name: token
          readOnly: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: stats-monitor-sa
      serviceAccountName: stats-monitor-sa
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: httpd.conf
            path: httpd.conf
          name: httpcf
        name: httpcf
      # newly added projected volume:
      - name: token
        projected:
          defaultMode: 420
          sources:
          - serviceAccountToken:
              expirationSeconds: 3607
              path: token

# Apply and check the Pod status:
kubectl apply -f deployment.yaml
deployment.apps/stats-monitor configured
candidate@master01:~$ kubectl get deployments.apps -n monitoring stats-monitor
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
stats-monitor   1/1     1            1           198d
candidate@master01:~$ kubectl get pods -n monitoring stats-monitor-7f5765dfdf-h5ssz
NAME                             READY   STATUS    RESTARTS   AGE
stats-monitor-7f5765dfdf-h5ssz   1/1     Running   0          17s
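As a final check, both halves of the task can be confirmed directly; a sketch (the Pod is addressed through its Deployment, so the hash-suffixed Pod name is not needed):

# the ServiceAccount should now carry automountServiceAccountToken: false
kubectl -n monitoring get sa stats-monitor-sa -o yaml | grep automountServiceAccountToken

# the projected token should be mounted (read-only) inside the running Pod
kubectl -n monitoring exec deploy/stats-monitor -- ls -l /var/run/secrets/kubernetes.io/serviceaccount/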