Commit 698e93b

chrisohaver authored and k8s-ci-robot committed

coredns default (#10200)
1 parent 66eaff0 commit 698e93b

File tree

2 files changed: +28 −25 lines


content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md

+23-20
@@ -36,10 +36,10 @@ The output is similar to this:
 
     NAME                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
     ...
-    kube-dns-autoscaler   1         1         1            1           ...
+    dns-autoscaler        1         1         1            1           ...
     ...
 
-If you see "kube-dns-autoscaler" in the output, DNS horizontal autoscaling is
+If you see "dns-autoscaler" in the output, DNS horizontal autoscaling is
 already enabled, and you can skip to
 [Tuning autoscaling parameters](#tuning-autoscaling-parameters).
 
@@ -53,10 +53,13 @@ The output is similar to this:
 
     NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
     ...
-    kube-dns   1         1         1            1           ...
+    coredns    2         2         2            2           ...
     ...
 
-In Kubernetes versions earlier than 1.5 DNS is implemented using a
+
+In Kubernetes versions earlier than 1.12, the DNS Deployment was called "kube-dns".
+
+In Kubernetes versions earlier than 1.5 DNS was implemented using a
 ReplicationController instead of a Deployment. So if you don't see kube-dns,
 or a similar name, in the preceding output, list the ReplicationControllers in
 your cluster in the kube-system namespace:
@@ -77,7 +80,7 @@ If you have a DNS Deployment, your scale target is:
     Deployment/<your-deployment-name>
 
 where <dns-deployment-name> is the name of your DNS Deployment. For example, if
-your DNS Deployment name is kube-dns, your scale target is Deployment/kube-dns.
+your DNS Deployment name is coredns, your scale target is Deployment/coredns.
 
 If you have a DNS ReplicationController, your scale target is:
 
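For context on where the scale target ends up: the manifest changed later in this commit passes it to the autoscaler as a flag. A minimal sketch of that args block with a concrete value filled in (Deployment/coredns is an illustrative substitution for the manifest's <SCALE_TARGET> placeholder, not part of the diff):

```yaml
# Hedged sketch of the autoscaler container args; Deployment/coredns is an
# example value, not from the original file.
command:
  - /cluster-proportional-autoscaler
  - --namespace=kube-system
  - --configmap=dns-autoscaler
  - --target=Deployment/coredns   # or ReplicationController/<your-rc-name>
```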
@@ -111,20 +114,20 @@ DNS horizontal autoscaling is now enabled.
 
 ## Tuning autoscaling parameters
 
-Verify that the kube-dns-autoscaler ConfigMap exists:
+Verify that the dns-autoscaler ConfigMap exists:
 
     kubectl get configmap --namespace=kube-system
 
 The output is similar to this:
 
     NAME                  DATA   AGE
     ...
-    kube-dns-autoscaler   1      ...
+    dns-autoscaler        1      ...
     ...
 
 Modify the data in the ConfigMap:
 
-    kubectl edit configmap kube-dns-autoscaler --namespace=kube-system
+    kubectl edit configmap dns-autoscaler --namespace=kube-system
 
 Look for this line:
 
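The ConfigMap line referred to above holds cluster-proportional-autoscaler's "linear" parameters, which drive its documented replica formula: replicas = max(ceil(cores / coresPerReplica), ceil(nodes / nodesPerReplica)). A small shell sketch of that arithmetic; the parameter and cluster-size values below are illustrative assumptions, not taken from this commit's ConfigMap:

```shell
#!/bin/sh
# Sketch of the "linear" mode replica math:
#   replicas = max( ceil(cores / coresPerReplica), ceil(nodes / nodesPerReplica) )
# All values below are illustrative, not from this commit.
cores=16            # total schedulable cores in the cluster
nodes=4             # total schedulable nodes
coresPerReplica=256
nodesPerReplica=16

# Integer ceiling division: ceil(a/b) == (a + b - 1) / b
by_cores=$(( (cores + coresPerReplica - 1) / coresPerReplica ))
by_nodes=$(( (nodes + nodesPerReplica - 1) / nodesPerReplica ))

# Take the larger of the two candidates
replicas=$by_cores
if [ "$by_nodes" -gt "$replicas" ]; then
  replicas=$by_nodes
fi
echo "$replicas"
```

With these small example numbers both terms round up to one replica; a larger cluster raises whichever term dominates (cores on big nodes, node count on small ones, as the manifest's comments note).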
@@ -151,15 +154,15 @@ There are other supported scaling patterns. For details, see
 There are a few options for turning DNS horizontal autoscaling. Which option to
 use depends on different conditions.
 
-### Option 1: Scale down the kube-dns-autoscaler deployment to 0 replicas
+### Option 1: Scale down the dns-autoscaler deployment to 0 replicas
 
 This option works for all situations. Enter this command:
 
-    kubectl scale deployment --replicas=0 kube-dns-autoscaler --namespace=kube-system
+    kubectl scale deployment --replicas=0 dns-autoscaler --namespace=kube-system
 
 The output is:
 
-    deployment.extensions/kube-dns-autoscaler scaled
+    deployment.extensions/dns-autoscaler scaled
 
 Verify that the replica count is zero:
 
@@ -169,33 +172,33 @@ The output displays 0 in the DESIRED and CURRENT columns:
 
     NAME                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
     ...
-    kube-dns-autoscaler   0         0         0            0           ...
+    dns-autoscaler        0         0         0            0           ...
     ...
 
-### Option 2: Delete the kube-dns-autoscaler deployment
+### Option 2: Delete the dns-autoscaler deployment
 
-This option works if kube-dns-autoscaler is under your own control, which means
+This option works if dns-autoscaler is under your own control, which means
 no one will re-create it:
 
-    kubectl delete deployment kube-dns-autoscaler --namespace=kube-system
+    kubectl delete deployment dns-autoscaler --namespace=kube-system
 
 The output is:
 
-    deployment.extensions "kube-dns-autoscaler" deleted
+    deployment.extensions "dns-autoscaler" deleted
 
-### Option 3: Delete the kube-dns-autoscaler manifest file from the master node
+### Option 3: Delete the dns-autoscaler manifest file from the master node
 
-This option works if kube-dns-autoscaler is under control of the
+This option works if dns-autoscaler is under control of the
 [Addon Manager](https://git.k8s.io/kubernetes/cluster/addons/README.md)'s
 control, and you have write access to the master node.
 
 Sign in to the master node and delete the corresponding manifest file.
-The common path for this kube-dns-autoscaler is:
+The common path for this dns-autoscaler is:
 
     /etc/kubernetes/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml
 
 After the manifest file is deleted, the Addon Manager will delete the
-kube-dns-autoscaler Deployment.
+dns-autoscaler Deployment.
 
 {{% /capture %}}
 

content/en/examples/admin/dns/dns-horizontal-autoscaler.yaml

+5-5
@@ -1,18 +1,18 @@
 apiVersion: apps/v1
 kind: Deployment
 metadata:
-  name: kube-dns-autoscaler
+  name: dns-autoscaler
   namespace: kube-system
   labels:
-    k8s-app: kube-dns-autoscaler
+    k8s-app: dns-autoscaler
 spec:
   selector:
     matchLabels:
-      k8s-app: kube-dns-autoscaler
+      k8s-app: dns-autoscaler
   template:
     metadata:
       labels:
-        k8s-app: kube-dns-autoscaler
+        k8s-app: dns-autoscaler
     spec:
       containers:
       - name: autoscaler
@@ -24,7 +24,7 @@ spec:
         command:
           - /cluster-proportional-autoscaler
          - --namespace=kube-system
-          - --configmap=kube-dns-autoscaler
+          - --configmap=dns-autoscaler
           - --target=<SCALE_TARGET>
           # When cluster is using large nodes(with more cores), "coresPerReplica" should dominate.
           # If using small nodes, "nodesPerReplica" should dominate.
