---
slug: "2019/07/01/cross-cluster-service-mesh"
title: "Service Mesh"
subtitle: "A Multi Cloud Mesh with Istio"
date: 2019-07-01
cover: ./evoluon.jpg
coverDescription: "Evoluon"
coverLink: "https://goo.gl/maps/WPrtxowKszHqgLNw9"
asciinema: false
imageFb: ./2019-07-01-multi-cloud-mesh-fb.png
imageTw: ./2019-07-01-multi-cloud-mesh-tw.png
type: post
comments: true
tags:
  - kubernetes
  - service mesh
  - cloud
  - aws
  - google
  - docker
authors:
  - niek
---

# A multi cloud service mesh with Istio

*This post guides you through creating a multi cloud service mesh with Istio on Kubernetes clusters in Amazon and Google cloud.*

<p style="text-align: right">
<a href="https://github.com/npalm/cross-cluster-mesh-postcard" target="sourcecode">
<i class="fab fa-github" style="font-size: 200%">&nbsp;</i>Source code for this post</a></p>

## Introduction
In recent years we have seen a huge adoption of microservices architectures. Microservices typically bring a lot of benefits such as flexibility, modularity and autonomy. But deploying and managing a microservices architecture brings its own difficulties. How do you know what is running, and how do you know your services are compliant? Another pattern we see is that microservices are typically heavily loaded with common dependencies for logging, authentication, authorization, tracing and many more cross-cutting concerns.

A service mesh brings transparency to the chaos of microservices. A mesh can help with implementing, enforcing and managing requirements such as authentication, authorization, traceability and data integrity. It also provides features such as orchestration and collection of telemetry. Several service meshes are currently available, for example [App Mesh](https://aws.amazon.com/app-mesh/) from Amazon, [Consul](https://www.consul.io/) from HashiCorp, [Linkerd](https://linkerd.io/) from the CNCF and [Istio](https://istio.io/) launched by IBM, Lyft and Google in 2016. Istio is a fully open source solution based on the high performance proxy [Envoy](https://www.envoyproxy.io/).

Another trend in the industry is multi cloud and hybrid cloud: confusing terms without clear definitions. Common usage suggests that multi cloud refers to combining public clouds, while hybrid cloud mixes public with private cloud. When you start running clusters across clouds it becomes even harder to manage all your microservices and to stay confident that policies are implemented correctly. In such a multi or hybrid topology, moving cross-cutting concerns into a mesh becomes even more important. In this blog we will build a multi cloud [Kubernetes](https://kubernetes.io/) cluster with an Istio service mesh.

## A bit more about a Service Mesh

Before we dive into building a multi cloud service mesh, a few words about how a mesh works. An Istio mesh consists of two parts. The data plane is a set of intelligent proxies (Envoy) deployed as sidecars; these proxies mediate and control the network traffic. The second part is the control plane, which manages and configures the proxies and enforces policies.

![istio-architecture](./istio-arch.svg)

To create a cross cluster mesh with Istio there are two topologies. In the first, a single control plane controls all the clusters. In the second, a control plane is deployed to every cluster and you have to ensure the same configuration is pushed to each of them. In this post we will create an example based on the second option, a control plane in every cluster.

![istio-multi-cluster](./multicluster-with-gateways.svg)

## Building a cross cluster mesh

Let's get started with building a multi cloud service mesh. A quite similar example is available on my [GitHub](https://github.com/npalm/cross-cluster-mesh-postcard). The example on GitHub is scripted with a set of simple shell scripts and creates 3 clusters in 2 clouds. For this post we limit ourselves to just 2 clusters in 2 clouds.

We will create 2 clusters, one on AWS (EKS) and the second on Google (GKE), and install the Istio service mesh in both. For creating the clusters you need to set up your environment with the right tools and credentials. For AWS the [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) is required, for Google you need the [Google Cloud SDK](https://cloud.google.com/sdk/). Furthermore you need to install [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) to interact with Kubernetes, and [helm](https://helm.sh/) as package manager for Kubernetes.
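Before creating any clusters it can save time to verify the tooling is actually installed; a minimal sketch:

```shell
# Check that all required CLIs are available on the PATH
for tool in aws gcloud kubectl helm eksctl; do
  command -v "$tool" >/dev/null || echo "missing: $tool"
done
```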

### Setup and configure EKS cluster

First we create an EKS cluster with `eksctl`. The cluster has a quite minimal configuration; see the [weaveworks eksctl](https://eksctl.io/) page or the [AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/eksctl.html) for more details. Creating a cluster takes roughly 20 minutes. With the command below we create a default EKS cluster with only one node, in its own VPC.

```
export KUBECONFIG=kubeconfig-istio-1.config
export CLUSTER_NAME=mesh-1
eksctl create cluster \
  --name $CLUSTER_NAME \
  --region eu-central-1 \
  --version 1.13 \
  --node-type t3.medium \
  --nodes 1 \
  --nodes-min 1 \
  --nodes-max 4 \
  --node-ami auto \
  --kubeconfig=$KUBECONFIG \
  --tags "environment=multi-cloud-mesh"

kubectl get nodes
```

You should now have a Kubernetes cluster running on AWS. Next we download, install and configure Istio for our service mesh. In the steps below we use, for simplicity, the sample certificates provided by the Istio distribution. It should go without saying that you must replace those certificates in a real life setup...

```
# Download Istio
export ISTIO_VERSION=1.1.9
curl -L https://git.io/getLatestIstio | sh -
export PATH="$PATH:$PWD/istio-$ISTIO_VERSION/bin"

# Create namespace and install demo certificates
kubectl apply -f istio-$ISTIO_VERSION/install/kubernetes/namespace.yaml
kubectl create secret generic -n istio-system cacerts \
  --from-file=istio-$ISTIO_VERSION/samples/certs/ca-cert.pem \
  --from-file=istio-$ISTIO_VERSION/samples/certs/ca-key.pem \
  --from-file=istio-$ISTIO_VERSION/samples/certs/root-cert.pem \
  --from-file=istio-$ISTIO_VERSION/samples/certs/cert-chain.pem

# Create a service account for helm
kubectl create -f \
  istio-$ISTIO_VERSION/install/kubernetes/helm/helm-service-account.yaml
helm init --service-account tiller

# Install the Istio custom resource definitions
helm install istio-$ISTIO_VERSION/install/kubernetes/helm/istio-init \
  --name istio-init --namespace istio-system
```
The last step could take a minute or so; check with `kubectl get crds | grep 'istio.io' | wc -l` whether the Istio custom resources are created. Once finished there should be 53 custom resources. The final installation step is to create the Istio control plane.

```
helm install --name istio --namespace istio-system \
  istio-$ISTIO_VERSION/install/kubernetes/helm/istio \
  --values istio-$ISTIO_VERSION/install/kubernetes/helm/istio/example-values/values-istio-multicluster-gateways.yaml
```
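The control plane pods take a while to come up. A small polling loop, just a sketch, can block until nothing in `istio-system` is still starting:

```shell
# Poll until no pod in istio-system is outside Running/Succeeded
until [ -z "$(kubectl get pods -n istio-system \
  --field-selector=status.phase!=Running,status.phase!=Succeeded -o name)" ]; do
  echo "waiting for istio-system pods..."
  sleep 10
done
kubectl get pods -n istio-system
```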

For this example we simply enable sidecar injection for every pod in the default namespace. The sidecar functions as a proxy and intercepts all network traffic to the containers in the pod.

```
kubectl label namespace default istio-injection=enabled
```
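You can check that the label is set, and later that deployed pods actually received the sidecar; a `2/2` ready count is the telltale sign of an injected Envoy container:

```shell
# The ISTIO-INJECTION column should read "enabled" for default
kubectl get namespace default -L istio-injection

# After a deployment, pods in default should report READY 2/2
kubectl get pods -n default
```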

Services in a local Kubernetes cluster share a common DNS suffix (e.g. `svc.cluster.local`). To be able to route to our remote services we have to stub the Kubernetes DNS for the domain `.global`. Services in a remote cluster can then be addressed with the naming convention `<name>.<namespace>.global`. EKS uses CoreDNS, so we add the config map below.

```
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           upstream
           fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
    global:53 {
        errors
        cache 30
        proxy . $(kubectl get svc -n istio-system istiocoredns -o jsonpath={.spec.clusterIP})
    }
EOF
```
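The `proxy` line in the `global` zone is filled in with the cluster IP of the `istiocoredns` service; if that substitution came out empty, check that the service exists:

```shell
# Should print the cluster IP used in the global:53 stub zone
kubectl get svc -n istio-system istiocoredns -o jsonpath='{.spec.clusterIP}'
```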

The service mesh on EKS is now ready to serve applications.

### The Postcard sample application

In this blog post we use a simple polyglot postcard application. It is always fun to write an application in yet another language. The postcard application consists of two services. The first service is the [greeter](https://github.com/npalm/cross-cluster-mesh-postcard/tree/master/greeter); the greeter is written in NodeJS and generates a webpage with a postcard. The message on the postcard is provided by the second application, the [messenger](https://github.com/npalm/cross-cluster-mesh-postcard/tree/master/messenger). The messenger is a Rust application that will run on the second cluster. It returns a string message based on configuration.

![sequence](./postcard-app.png)

The greeter app will print an error message in case the messenger is not available. We now deploy the greeter to the Kubernetes cluster on AWS. We create a standard deployment for the pod, a service, an Istio Gateway and an Istio VirtualService. The Istio resources are created to make the postcard app publicly available via an ingress. You can check out the configuration files [here](https://github.com/npalm/cross-cluster-mesh-postcard/tree/master/mesh/demo/greeter). During deployment we update the configuration with the name of the cluster.

```
# Deploy greeter pod and service
curl -L \
  https://raw.githubusercontent.com/npalm/cross-cluster-mesh-postcard/master/mesh/demo/greeter/greeter.yaml \
  | sed "s/CLUSTER_NAME/${CLUSTER_NAME}/" | kubectl apply -f -

# Create gateway for greeter service
kubectl apply -f \
  https://raw.githubusercontent.com/npalm/cross-cluster-mesh-postcard/master/mesh/demo/greeter/gateway.yaml
```
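Before opening the browser it helps to wait for the rollout to complete; the deployment name `greeter` here is an assumption based on the manifest in the repository:

```shell
# Block until the greeter deployment has rolled out (assumed name: greeter)
kubectl rollout status deployment/greeter --timeout=120s
```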

Once deployed we should be able to see our postcard in a browser. You can construct the URL with the command below. It can take a few minutes before the load balancer is ready to accept traffic.

```
export ISTIO_INGRESS=$(kubectl -n istio-system \
  get service istio-ingressgateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
open http://${ISTIO_INGRESS}/greeter
```
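If the page does not load right away, a curl loop, sketched below, can poll until the load balancer answers with HTTP 200:

```shell
# Poll the greeter endpoint until the AWS load balancer is serving traffic
until [ "$(curl -s -o /dev/null -w '%{http_code}' "http://${ISTIO_INGRESS}/greeter")" = "200" ]; do
  echo "load balancer not ready yet..."
  sleep 10
done
echo "greeter is reachable"
```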

The postcard shows no message from the second cluster yet, but our greeter app is up and running. The next step is creating the second cluster.

![postcard](./postcard-1.png)

### Setup and configure GKE cluster

For the second cluster we create a GKE cluster on Google Cloud. You could replace it with a second cluster in Amazon or one in another cloud. The first step is similar to creating a cluster in AWS, but now in Google. Open a new terminal to avoid conflicting environment variables.

```
export KUBECONFIG=kubeconfig-istio-2.config
export CLUSTER_NAME=mesh-2
gcloud container clusters create \
  --machine-type n1-standard-4 \
  --num-nodes 1 --enable-autoscaling \
  --min-nodes 1 --max-nodes 5 \
  --addons HttpLoadBalancing,HorizontalPodAutoscaling,KubernetesDashboard \
  --cluster-version 1.13 --zone europe-west3-b \
  $CLUSTER_NAME

kubectl create clusterrolebinding mt-admin \
  --user "$(gcloud config get-value core/account)" \
  --clusterrole cluster-admin

kubectl get nodes
```

The cluster will be created in roughly 5 minutes. Next you have to install Istio. The steps are exactly the same as above; only the last step, where we configure the DNS, is different since GKE uses kube-dns instead of CoreDNS. Execute the command below to stub the DNS.

```
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"global": ["$(kubectl get svc -n istio-system istiocoredns -o jsonpath={.spec.clusterIP})"]}
EOF
```
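To confirm kube-dns picked up the stub, read the config map back; the printed IP should match the `istiocoredns` cluster IP:

```shell
# Show the stub domain configuration kube-dns is using
kubectl get configmap kube-dns -n kube-system -o jsonpath='{.data.stubDomains}'
```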

### The Postcard sample application - part II

On the second cluster we deploy the messenger app, which simply sends a message back. The message can be configured via an environment variable. Install the messenger app with the `kubectl` command below.

```
# Deploy messenger pod and service
curl -L \
  https://raw.githubusercontent.com/npalm/cross-cluster-mesh-postcard/master/mesh/demo/messenger/messenger.yaml \
  | sed "s/MESSAGE_TEXT/All good from Google Cloud/" \
  | kubectl apply -f -
```
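As on the first cluster, it is worth confirming the pod is up; the deployment name `messenger` is an assumption based on the manifest:

```shell
# Block until the messenger deployment has rolled out (assumed name: messenger)
kubectl rollout status deployment/messenger --timeout=120s
```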

Only one step left now. We need to configure the first cluster with a service entry so the mesh knows how to route calls for `messenger.default.global`. Switch back to the terminal where we created the first cluster on Amazon, look up the IP address of the ingress load balancer using the kubeconfig of the second cluster, and create a service entry.

```
# Lookup the ingress ip address
export CLUSTER_GW_ADDR=$(kubectl \
  --kubeconfig=kubeconfig-istio-2.config \
  get svc --selector=app=istio-ingressgateway -n istio-system \
  -o jsonpath='{.items[0].status.loadBalancer.ingress[0].ip}')
echo $CLUSTER_GW_ADDR

# Create a service entry
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: messenger
spec:
  hosts:
  - messenger.default.global
  location: MESH_INTERNAL
  ports:
  - name: http1
    number: 3000
    protocol: http
  resolution: DNS
  addresses:
  - 127.127.42.69
  endpoints:
  - address: $CLUSTER_GW_ADDR
    ports:
      http1: 15443 # Do not change this port value
EOF
```
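With the service entry applied, the stubbed DNS on the first cluster should now resolve the `.global` name; a throwaway busybox pod can verify this (a sketch, run against the first cluster):

```shell
# Resolve the remote service through the stubbed .global zone
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- \
  nslookup messenger.default.global
```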

Now our postcard application should get the message from the second cluster; go back to the browser and refresh the page. You should see a card like the one below.

![postcard](./postcard-2.png)

That is all you need to do to create a cross cluster service mesh.

## Cleanup

The cleanup steps below assume you created fresh clusters and don't use them for hosting other applications. The Amazon cluster was created via `eksctl`, which actually creates CloudFormation stacks in Amazon. Before you delete the stack, you need to delete the load balancer created by Istio.

```
# Find the load balancer
aws elb describe-load-balancers | jq -r ".LoadBalancerDescriptions[].LoadBalancerName"

# Delete the load balancer
aws elb delete-load-balancer --load-balancer-name <lb-name>
```
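For a throwaway account the two steps can be combined; be aware that this removes every classic load balancer in the current region, so only run it when the region contains nothing else (a sketch):

```shell
# DANGER: deletes ALL classic load balancers in the current region
for lb in $(aws elb describe-load-balancers \
  | jq -r '.LoadBalancerDescriptions[].LoadBalancerName'); do
  aws elb delete-load-balancer --load-balancer-name "$lb"
done
```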
Now you can delete the cluster. The deletion is an asynchronous process, so verify that it completes successfully. In case it fails, which happens to me quite often, delete the VPC and retry the deletion via CloudFormation.

```
eksctl delete cluster --name mesh-1 --region=eu-central-1
```

For Google Cloud the deletion is very easy; deleting the cluster will delete all dependent resources.

```
gcloud container clusters delete --zone europe-west3-b $CLUSTER_NAME
```

## Acknowledgements

The example used in this blog is inspired by a great talk by Matt Turner at KubeCon + CloudNativeCon 2019 in Barcelona. His talk and example are available in the blog post [Cross-cluster Calls Made Easy with Istio 1.1](https://mt165.co.uk/speech/cross-cluster-calls-istio-1-1-kubecon-eu-19/).
