Commit: e8b136c (parent: 234458f)
Author: Mengjiao Liu
Message: Use code_sample shortcode instead of code shortcode
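The rename this commit performs (swapping the `code` shortcode for `code_sample` while keeping the `file` argument) can be sketched as a `sed` substitution. This is an illustration only; the tooling actually used for the commit is not recorded here, and only the shortcode names and the example filename come from the diff below.

```shell
# Illustrative sketch: rewrite the `code` shortcode to `code_sample`,
# preserving the file argument, exactly as each changed line in this diff does.
echo '{{% code file="debug/counter-pod.yaml" %}}' \
  | sed 's/{{% code file=/{{% code_sample file=/'
# -> {{% code_sample file="debug/counter-pod.yaml" %}}
```

Applied across a docs tree, the same substitution could hypothetically be run in place, e.g. `grep -rl '{{% code file=' content/en/docs | xargs sed -i 's/{{% code file=/{{% code_sample file=/g'`.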

File tree: 110 files changed (+280 -280 lines changed).
Note: large commits have some content hidden by default, so only a subset of the 110 changed files appears below.

content/en/docs/concepts/cluster-administration/flow-control.md (+2 -2)

@@ -470,7 +470,7 @@ traffic, you can configure rules to block any health check requests
 that originate from outside your cluster.
 {{< /caution >}}

-{{% code file="priority-and-fairness/health-for-strangers.yaml" %}}
+{{% code_sample file="priority-and-fairness/health-for-strangers.yaml" %}}

 ## Diagnostics

@@ -885,7 +885,7 @@ from other requests.

 Example FlowSchema object to isolate list event requests:

-{{% code file="priority-and-fairness/list-events-default-service-account.yaml" %}}
+{{% code_sample file="priority-and-fairness/list-events-default-service-account.yaml" %}}

 - This FlowSchema captures all list event calls made by the default service
 account in the default namespace. The matching precedence 8000 is lower than the

content/en/docs/concepts/cluster-administration/logging.md (+5 -5)

@@ -39,7 +39,7 @@ Kubernetes captures logs from each container in a running Pod.
 This example uses a manifest for a `Pod` with a container
 that writes text to the standard output stream, once per second.

-{{% code file="debug/counter-pod.yaml" %}}
+{{% code_sample file="debug/counter-pod.yaml" %}}

 To run this pod, use the following command:

@@ -255,7 +255,7 @@ For example, a pod runs a single container, and the container
 writes to two different log files using two different formats. Here's a
 manifest for the Pod:

-{{% code file="admin/logging/two-files-counter-pod.yaml" %}}
+{{% code_sample file="admin/logging/two-files-counter-pod.yaml" %}}

 It is not recommended to write log entries with different formats to the same log
 stream, even if you managed to redirect both components to the `stdout` stream of
@@ -265,7 +265,7 @@ the logs to its own `stdout` stream.

 Here's a manifest for a pod that has two sidecar containers:

-{{% code file="admin/logging/two-files-counter-pod-streaming-sidecar.yaml" %}}
+{{% code_sample file="admin/logging/two-files-counter-pod-streaming-sidecar.yaml" %}}

 Now when you run this pod, you can access each log stream separately by
 running the following commands:
@@ -332,7 +332,7 @@ Here are two example manifests that you can use to implement a sidecar container
 The first manifest contains a [`ConfigMap`](/docs/tasks/configure-pod-container/configure-pod-configmap/)
 to configure fluentd.

-{{% code file="admin/logging/fluentd-sidecar-config.yaml" %}}
+{{% code_sample file="admin/logging/fluentd-sidecar-config.yaml" %}}

 {{< note >}}
 In the sample configurations, you can replace fluentd with any logging agent, reading
@@ -342,7 +342,7 @@ from any source inside an application container.
 The second manifest describes a pod that has a sidecar container running fluentd.
 The pod mounts a volume where fluentd can pick up its configuration data.

-{{% code file="admin/logging/two-files-counter-pod-agent-sidecar.yaml" %}}
+{{% code_sample file="admin/logging/two-files-counter-pod-agent-sidecar.yaml" %}}

 ### Exposing logs directly from the application

content/en/docs/concepts/cluster-administration/manage-deployment.md (+1 -1)

@@ -22,7 +22,7 @@ Many applications require multiple resources to be created, such as a Deployment
 Management of multiple resources can be simplified by grouping them together in the same file
 (separated by `---` in YAML). For example:

-{{% code file="application/nginx-app.yaml" %}}
+{{% code_sample file="application/nginx-app.yaml" %}}

 Multiple resources can be created the same way as a single resource:

content/en/docs/concepts/configuration/configmap.md (+1 -1)

@@ -113,7 +113,7 @@ technique also lets you access a ConfigMap in a different namespace.

 Here's an example Pod that uses values from `game-demo` to configure a Pod:

-{{% code file="configmap/configure-pod.yaml" %}}
+{{% code_sample file="configmap/configure-pod.yaml" %}}

 A ConfigMap doesn't differentiate between single line property values and
 multi-line file-like values.

content/en/docs/concepts/overview/working-with-objects/_index.md (+1 -1)

@@ -77,7 +77,7 @@ request.

 Here's an example `.yaml` file that shows the required fields and object spec for a Kubernetes Deployment:

-{{% code file="application/deployment.yaml" %}}
+{{% code_sample file="application/deployment.yaml" %}}

 One way to create a Deployment using a `.yaml` file like the one above is to use the
 [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands#apply) command

content/en/docs/concepts/policy/limit-range.md (+3 -3)

@@ -54,12 +54,12 @@ A `LimitRange` does **not** check the consistency of the default values it appli

 For example, you define a `LimitRange` with this manifest:

-{{% code file="concepts/policy/limit-range/problematic-limit-range.yaml" %}}
+{{% code_sample file="concepts/policy/limit-range/problematic-limit-range.yaml" %}}


 along with a Pod that declares a CPU resource request of `700m`, but not a limit:

-{{% code file="concepts/policy/limit-range/example-conflict-with-limitrange-cpu.yaml" %}}
+{{% code_sample file="concepts/policy/limit-range/example-conflict-with-limitrange-cpu.yaml" %}}


 then that Pod will not be scheduled, failing with an error similar to:
@@ -69,7 +69,7 @@ Pod "example-conflict-with-limitrange-cpu" is invalid: spec.containers[0].resour

 If you set both `request` and `limit`, then that new Pod will be scheduled successfully even with the same `LimitRange` in place:

-{{% code file="concepts/policy/limit-range/example-no-conflict-with-limitrange-cpu.yaml" %}}
+{{% code_sample file="concepts/policy/limit-range/example-no-conflict-with-limitrange-cpu.yaml" %}}

 ## Example resource constraints

content/en/docs/concepts/policy/resource-quotas.md (+1 -1)

@@ -687,7 +687,7 @@ plugins:

 Then, create a resource quota object in the `kube-system` namespace:

-{{% code file="policy/priority-class-resourcequota.yaml" %}}
+{{% code_sample file="policy/priority-class-resourcequota.yaml" %}}

 ```shell
 kubectl apply -f https://k8s.io/examples/policy/priority-class-resourcequota.yaml -n kube-system

content/en/docs/concepts/scheduling-eviction/assign-pod-node.md (+3 -3)

@@ -121,7 +121,7 @@ your Pod spec.

 For example, consider the following Pod spec:

-{{% code file="pods/pod-with-node-affinity.yaml" %}}
+{{% code_sample file="pods/pod-with-node-affinity.yaml" %}}

 In this example, the following rules apply:

@@ -171,7 +171,7 @@ scheduling decision for the Pod.

 For example, consider the following Pod spec:

-{{% code file="pods/pod-with-affinity-anti-affinity.yaml" %}}
+{{% code_sample file="pods/pod-with-affinity-anti-affinity.yaml" %}}

 If there are two possible nodes that match the
 `preferredDuringSchedulingIgnoredDuringExecution` rule, one with the
@@ -287,7 +287,7 @@ spec.

 Consider the following Pod spec:

-{{% code file="pods/pod-with-pod-affinity.yaml" %}}
+{{% code_sample file="pods/pod-with-pod-affinity.yaml" %}}

 This example defines one Pod affinity rule and one Pod anti-affinity rule. The
 Pod affinity rule uses the "hard"

content/en/docs/concepts/scheduling-eviction/pod-scheduling-readiness.md (+2 -2)

@@ -31,7 +31,7 @@ each schedulingGate can be removed in arbitrary order, but addition of a new sch

 To mark a Pod not-ready for scheduling, you can create it with one or more scheduling gates like this:

-{{% code file="pods/pod-with-scheduling-gates.yaml" %}}
+{{% code_sample file="pods/pod-with-scheduling-gates.yaml" %}}

 After the Pod's creation, you can check its state using:

@@ -61,7 +61,7 @@ The output is:
 To inform scheduler this Pod is ready for scheduling, you can remove its `schedulingGates` entirely
 by re-applying a modified manifest:

-{{% code file="pods/pod-without-scheduling-gates.yaml" %}}
+{{% code_sample file="pods/pod-without-scheduling-gates.yaml" %}}

 You can check if the `schedulingGates` is cleared by running:

content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md (+1 -1)

@@ -64,7 +64,7 @@ tolerations:

 Here's an example of a pod that uses tolerations:

-{{% code file="pods/pod-with-toleration.yaml" %}}
+{{% code_sample file="pods/pod-with-toleration.yaml" %}}

 The default value for `operator` is `Equal`.

content/en/docs/concepts/scheduling-eviction/topology-spread-constraints.md (+3 -3)

@@ -284,7 +284,7 @@ graph BT
 If you want an incoming Pod to be evenly spread with existing Pods across zones, you
 can use a manifest similar to:

-{{% code file="pods/topology-spread-constraints/one-constraint.yaml" %}}
+{{% code_sample file="pods/topology-spread-constraints/one-constraint.yaml" %}}

 From that manifest, `topologyKey: zone` implies the even distribution will only be applied
 to nodes that are labelled `zone: <any value>` (nodes that don't have a `zone` label
@@ -377,7 +377,7 @@ graph BT
 You can combine two topology spread constraints to control the spread of Pods both
 by node and by zone:

-{{% code file="pods/topology-spread-constraints/two-constraints.yaml" %}}
+{{% code_sample file="pods/topology-spread-constraints/two-constraints.yaml" %}}

 In this case, to match the first constraint, the incoming Pod can only be placed onto
 nodes in zone `B`; while in terms of the second constraint, the incoming Pod can only be
@@ -466,7 +466,7 @@ and you know that zone `C` must be excluded. In this case, you can compose a man
 as below, so that Pod `mypod` will be placed into zone `B` instead of zone `C`.
 Similarly, Kubernetes also respects `spec.nodeSelector`.

-{{% code file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" %}}
+{{% code_sample file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" %}}

 ## Implicit conventions

content/en/docs/concepts/services-networking/dns-pod-service.md (+1 -1)

@@ -300,7 +300,7 @@ Below are the properties a user can specify in the `dnsConfig` field:

 The following is an example Pod with custom DNS settings:

-{{% code file="service/networking/custom-dns.yaml" %}}
+{{% code_sample file="service/networking/custom-dns.yaml" %}}

 When the Pod above is created, the container `test` gets the following contents
 in its `/etc/resolv.conf` file:

content/en/docs/concepts/services-networking/dual-stack.md (+5 -5)

@@ -135,7 +135,7 @@ These examples demonstrate the behavior of various dual-stack Service configurat
 [headless Services](/docs/concepts/services-networking/service/#headless-services) with selectors
 will behave in this same way.)

-{{% code file="service/networking/dual-stack-default-svc.yaml" %}}
+{{% code_sample file="service/networking/dual-stack-default-svc.yaml" %}}

 1. This Service specification explicitly defines `PreferDualStack` in `.spec.ipFamilyPolicy`. When
 you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6
@@ -151,14 +151,14 @@ These examples demonstrate the behavior of various dual-stack Service configurat
 * On a cluster with dual-stack enabled, specifying `RequireDualStack` in `.spec.ipFamilyPolicy`
 behaves the same as `PreferDualStack`.

-{{% code file="service/networking/dual-stack-preferred-svc.yaml" %}}
+{{% code_sample file="service/networking/dual-stack-preferred-svc.yaml" %}}

 1. This Service specification explicitly defines `IPv6` and `IPv4` in `.spec.ipFamilies` as well
 as defining `PreferDualStack` in `.spec.ipFamilyPolicy`. When Kubernetes assigns an IPv6 and
 IPv4 address in `.spec.ClusterIPs`, `.spec.ClusterIP` is set to the IPv6 address because that is
 the first element in the `.spec.ClusterIPs` array, overriding the default.

-{{% code file="service/networking/dual-stack-preferred-ipfamilies-svc.yaml" %}}
+{{% code_sample file="service/networking/dual-stack-preferred-ipfamilies-svc.yaml" %}}

 #### Dual-stack defaults on existing Services

@@ -171,7 +171,7 @@ dual-stack.)
 `.spec.ipFamilies` to the address family of the existing Service. The existing Service cluster IP
 will be stored in `.spec.ClusterIPs`.

-{{% code file="service/networking/dual-stack-default-svc.yaml" %}}
+{{% code_sample file="service/networking/dual-stack-default-svc.yaml" %}}

 You can validate this behavior by using kubectl to inspect an existing service.

@@ -211,7 +211,7 @@ dual-stack.)
 `--service-cluster-ip-range` flag to the kube-apiserver) even though `.spec.ClusterIP` is set to
 `None`.

-{{% code file="service/networking/dual-stack-default-svc.yaml" %}}
+{{% code_sample file="service/networking/dual-stack-default-svc.yaml" %}}

 You can validate this behavior by using kubectl to inspect an existing headless service with selectors.

content/en/docs/concepts/services-networking/ingress.md (+10 -10)

@@ -73,7 +73,7 @@ Make sure you review your Ingress controller's documentation to understand the c

 A minimal Ingress resource example:

-{{% code file="service/networking/minimal-ingress.yaml" %}}
+{{% code_sample file="service/networking/minimal-ingress.yaml" %}}

 An Ingress needs `apiVersion`, `kind`, `metadata` and `spec` fields.
 The name of an Ingress object must be a valid
@@ -140,7 +140,7 @@ setting with Service, and will fail validation if both are specified. A common
 usage for a `Resource` backend is to ingress data to an object storage backend
 with static assets.

-{{% code file="service/networking/ingress-resource-backend.yaml" %}}
+{{% code_sample file="service/networking/ingress-resource-backend.yaml" %}}

 After creating the Ingress above, you can view it with the following command:

@@ -229,7 +229,7 @@ equal to the suffix of the wildcard rule.
 | `*.foo.com` | `baz.bar.foo.com` | No match, wildcard only covers a single DNS label |
 | `*.foo.com` | `foo.com` | No match, wildcard only covers a single DNS label |

-{{% code file="service/networking/ingress-wildcard-host.yaml" %}}
+{{% code_sample file="service/networking/ingress-wildcard-host.yaml" %}}

 ## Ingress class

@@ -238,7 +238,7 @@ configuration. Each Ingress should specify a class, a reference to an
 IngressClass resource that contains additional configuration including the name
 of the controller that should implement the class.

-{{% code file="service/networking/external-lb.yaml" %}}
+{{% code_sample file="service/networking/external-lb.yaml" %}}

 The `.spec.parameters` field of an IngressClass lets you reference another
 resource that provides configuration related to that IngressClass.
@@ -369,7 +369,7 @@ configured with a [flag](https://kubernetes.github.io/ingress-nginx/#what-is-the
 `--watch-ingress-without-class`. It is [recommended](https://kubernetes.github.io/ingress-nginx/#i-have-only-one-instance-of-the-ingresss-nginx-controller-in-my-cluster-what-should-i-do) though, to specify the
 default `IngressClass`:

-{{% code file="service/networking/default-ingressclass.yaml" %}}
+{{% code_sample file="service/networking/default-ingressclass.yaml" %}}

 ## Types of Ingress

@@ -379,7 +379,7 @@ There are existing Kubernetes concepts that allow you to expose a single Service
 (see [alternatives](#alternatives)). You can also do this with an Ingress by specifying a
 *default backend* with no rules.

-{{% code file="service/networking/test-ingress.yaml" %}}
+{{% code_sample file="service/networking/test-ingress.yaml" %}}

 If you create it using `kubectl apply -f` you should be able to view the state
 of the Ingress you added:
@@ -411,7 +411,7 @@ down to a minimum. For example, a setup like:

 It would require an Ingress such as:

-{{% code file="service/networking/simple-fanout-example.yaml" %}}
+{{% code_sample file="service/networking/simple-fanout-example.yaml" %}}

 When you create the Ingress with `kubectl apply -f`:

@@ -456,7 +456,7 @@ Name-based virtual hosts support routing HTTP traffic to multiple host names at
 The following Ingress tells the backing load balancer to route requests based on
 the [Host header](https://tools.ietf.org/html/rfc7230#section-5.4).

-{{% code file="service/networking/name-virtual-host-ingress.yaml" %}}
+{{% code_sample file="service/networking/name-virtual-host-ingress.yaml" %}}

 If you create an Ingress resource without any hosts defined in the rules, then any
 web traffic to the IP address of your Ingress controller can be matched without a name based
@@ -467,7 +467,7 @@ requested for `first.bar.com` to `service1`, `second.bar.com` to `service2`,
 and any traffic whose request host header doesn't match `first.bar.com`
 and `second.bar.com` to `service3`.

-{{% code file="service/networking/name-virtual-host-ingress-no-third-host.yaml" %}}
+{{% code_sample file="service/networking/name-virtual-host-ingress-no-third-host.yaml" %}}

 ### TLS

@@ -505,7 +505,7 @@ certificates would have to be issued for all the possible sub-domains. Therefore
 section.
 {{< /note >}}

-{{% code file="service/networking/tls-example-ingress.yaml" %}}
+{{% code_sample file="service/networking/tls-example-ingress.yaml" %}}

 {{< note >}}
 There is a gap between TLS features supported by various Ingress
