A Midnight Istio Production Incident! Solved After a Frantic Burst of Operations
Author: Long Xiaoxia, a K8s enthusiast at a telecom-carrier technology company, mainly responsible for maintaining containerized workloads on a KubeSphere container platform
How the Incident Started
The business was moving to a new cluster. We thought it would be a piece of cake: switch traffic at 11 p.m., be home asleep by midnight. After the traffic was cut over, verification showed that web pages opened normally, but logging in returned a 502, which left us stunned on the spot. The relevant container logs showed a high-frequency error entry, "port 7000 cannot be connected". The business team told us this was one port of a Redis cluster through which the front end and back end communicate; the cluster also uses three other ports, 7001-7003.
We used the nc command to test connectivity to the Redis cluster by sending "keys *" to the server. Port 7000 returned "HTTP/1.1 400 Bad Request", while the other three ports returned Redis's "-NOAUTH Authentication required".
$ nc 10.0.x.6 7000
keys *
HTTP/1.1 400 Bad Request
content-length: 0
connection: close

$ nc 10.0.x.6 7003
keys *
-NOAUTH Authentication required
This suggested that port 7000 was reaching some other application, or at least not Redis. Packet capture on the host showed no traffic to port 7000 at all. We then looked at the container's nf_conntrack table: for port 7000 there was only a session to the local machine, while port 7003 had two sessions, one to the local machine and one to the target server.
$ grep 7000 /proc/net/nf_conntrack
ipv4     2 tcp      6 110 TIME_WAIT src=10.64.192.14 dst=10.0.x.6 sport=50498 dport=7000 src=127.0.0.1 dst=10.64.192.14 sport=15001 dport=50498 [ASSURED] mark=0 zone=0 use=2
$ grep 7003 /proc/net/nf_conntrack
ipv4     2 tcp      6 104 TIME_WAIT src=10.64.192.14 dst=10.0.x.6 sport=38952 dport=7003 src=127.0.0.1 dst=10.64.192.14 sport=15001 dport=38952 [ASSURED] mark=0 zone=0 use=2
ipv4     2 tcp      6 104 TIME_WAIT src=10.64.192.14 dst=10.0.x.6 sport=38954 dport=7003 src=10.0.x.6 dst=10.64.192.14 sport=7003 dport=38954 [ASSURED] mark=0 zone=0 use=2
From this we concluded that istio was not proxying and forwarding the port-7000 traffic out. This hit a blind spot in my knowledge. With a crowd watching over my shoulder and the office air conditioning set at 26°C, I was sweating the whole time. With no other option, after discussing with the business team, we disabled istio injection first to restore service. Afterwards I crammed istio internals and finally got to the bottom of the problem. I'm recording the details here for future reference.
Background Knowledge
istio Sidecar modes
istio's Sidecar has two modes:
ALLOW_ANY: the istio proxy allows calls to unknown services; a blacklist-style mode.
REGISTRY_ONLY: the istio proxy blocks any host without an HTTP service or service entry defined in the mesh; a whitelist-style mode.
Configuration structure of istio-proxy (Envoy)
The proxy configuration of istio-proxy (Envoy) consists roughly of the following parts:
Cluster: in Envoy, a Cluster is a group of backends for a service. A Cluster contains one or more endpoints, each of which can serve requests, and Envoy distributes requests across these endpoints according to the load-balancing algorithm. Clusters are divided into inbound and outbound: the former correspond to services on Envoy's own node; the latter make up the vast majority and correspond to services outside Envoy's node. Inbound and outbound clusters can each be inspected separately.
Listeners: Envoy uses listeners to receive and process requests coming from downstream. A listener can be associated with a Cluster directly, or configure routing rules (Routes) via RDS and then handle requests in a fine-grained way according to their destinations.
Routes: Envoy's routing rules. In the default routing rules istio pushes down, each port (service) gets one route rule; requests are dispatched by host, and the route's destination is another service's cluster.
Endpoint: the backends of a cluster. The inbound and outbound endpoint information can be viewed with "istioctl pc endpoint".
Service discovery types
The main service discovery types for clusters are:
ORIGINAL_DST: for Clusters of this type, Envoy forwards requests directly to the original destination IP address carried in the downstream request.
EDS: EDS obtains all available Endpoints in the Cluster and distributes downstream requests across them according to the load-balancing algorithm (Round Robin by default). istio automatically creates proxy configuration for each service in the cluster: the listener information is derived from the service, and the corresponding cluster is marked as type EDS.
STATIC: the default. All proxyable endpoint hosts are listed statically in the cluster; when the list is empty, no forwarding happens.
LOGICAL_DNS: Envoy adds hosts via DNS, but does not drop a host if DNS stops returning it.
STRICT_DNS: Envoy monitors DNS, and every matching A record is considered valid.
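On a live mesh, the discovery type of every cluster a sidecar knows about can be inspected with `istioctl proxy-config clusters <pod> -o json`. The jq filter below is a minimal sketch of grouping that output by type, run against a hypothetical three-entry excerpt rather than a real cluster:

```shell
# Hypothetical excerpt of `istioctl proxy-config clusters <pod> -o json`;
# the three entries mirror the cluster types described above.
cat > /tmp/clusters.json <<'EOF'
[
  {"name": "BlackHoleCluster", "type": "STATIC"},
  {"name": "PassthroughCluster", "type": "ORIGINAL_DST"},
  {"name": "outbound|8001||merchant-center.open.svc.cluster.local", "type": "EDS"}
]
EOF
# Count clusters per service-discovery type (jq sorts groups by key)
jq -r 'group_by(.type)[] | "\(.[0].type) \(length)"' /tmp/clusters.json
```

On this sample the filter prints one line per type (EDS, ORIGINAL_DST, STATIC), each with a count of 1; against real istioctl output the same one-liner shows at a glance how many clusters are EDS-managed services versus special static/passthrough clusters.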
Two special clusters
BlackHoleCluster: the black-hole cluster; traffic matching this cluster will not be forwarded.
{
    "name": "BlackHoleCluster",
    "type": "STATIC",
    "connectTimeout": "10s"
}
Its type is STATIC, but no proxyable Endpoint is specified, so matched traffic is not forwarded.
PassthroughCluster: the passthrough cluster; packets matching this cluster keep their original destination IP.
{
    "name": "PassthroughCluster",
    "type": "ORIGINAL_DST",
    "connectTimeout": "10s",
    "lbPolicy": "CLUSTER_PROVIDED",
    "circuitBreakers": {
        "thresholds": [
            {
                "maxConnections": 4294967295,
                "maxPendingRequests": 4294967295,
                "maxRequests": 4294967295,
                "maxRetries": 4294967295
            }
        ]
    }
}
Its type is ORIGINAL_DST, so traffic is forwarded with its destination unchanged.
One special Listener
istio has a special Listener called virtualOutbound, defined as follows:
virtualOutbound: every Sidecar has a listener bound to 0.0.0.0:15001, with many virtual listeners associated under it. iptables first steers all outbound traffic into this listener. The listener has a field useOriginalDst set to true, meaning it dispatches each request to the virtual listener that best matches the original destination; if no virtual listener matches, the packet is handed to the PassthroughCluster, which connects straight to the packet's original destination.
Concretely, useOriginalDst means: when connections are redirected with iptables, the destination address on which the proxy receives traffic may differ from the original destination address. When this flag is set to true, the listener hands redirected traffic over to the listener associated with the original destination address. If no listener is associated with the original destination address, the traffic is handled by the listener that received it. It defaults to false.
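The two pieces that drive this dispatch behavior can be read straight out of the listener config. Below is a minimal sketch against a hypothetical, heavily trimmed virtualOutbound listener (on a live sidecar the data would come from `istioctl proxy-config listeners <pod> --port 15001 -o json`): the flag that triggers re-dispatch on the original destination, and the tcp_proxy fallback cluster used when no virtual listener matches.

```shell
# Hypothetical trimmed virtualOutbound listener config
cat > /tmp/virtual_outbound.json <<'EOF'
{
  "name": "virtualOutbound",
  "address": {"socketAddress": {"address": "0.0.0.0", "portValue": 15001}},
  "useOriginalDst": true,
  "filterChains": [
    {"filters": [
      {"name": "envoy.tcp_proxy",
       "typedConfig": {"statPrefix": "PassthroughCluster",
                       "cluster": "PassthroughCluster"}}
    ]}
  ]
}
EOF
# Print the re-dispatch flag and the fallback cluster on one line
jq -r '"useOriginalDst=\(.useOriginalDst) fallback=\(.filterChains[0].filters[0].typedConfig.cluster)"' \
  /tmp/virtual_outbound.json
```

On this sample it prints "useOriginalDst=true fallback=PassthroughCluster", the combination described above: re-dispatch on the original destination, pass through when nothing matches.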
The traffic handling flow of virtualOutbound is shown in the figure:
Here is part of the virtualOutbound configuration:
{
    "name": "envoy.tcp_proxy",
    "typedConfig": {
        "@type": "type.googleapis.com/envoy.config.filter.network.tcp_proxy.v2.TcpProxy",
        "statPrefix": "PassthroughCluster",
        "cluster": "PassthroughCluster"
    }
}
......
"useOriginalDst": true
How istio handles outbound traffic
With traffic management enabled, the forwarding path for a pod accessing external resources is shown in the figure:
After istio injection, istio-proxy listens on port 15001, and all outbound traffic generated by processes other than the istio-proxy user is redirected to 15001 by iptables rules.
# Ports a Sidecar-injected pod listens on
$ ss -tulnp
State   Recv-Q Send-Q Local Address:Port   Peer Address:Port
LISTEN  0      128    *:80                 *:*
LISTEN  0      128    *:15090              *:*
LISTEN  0      128    127.0.0.1:15000      *:*
LISTEN  0      128    *:15001              *:*
LISTEN  0      128    *:15006              *:*
LISTEN  0      128    [::]:15020           [::]:*
# iptables rules inside the pod
$ iptables-save
# Generated by iptables-save v1.4.21 on Fri Sep 17 13:47:09 2021
*nat
:PREROUTING ACCEPT [129886:7793160]
:INPUT ACCEPT [181806:10908360]
:OUTPUT ACCEPT [53409:3257359]
:POSTROUTING ACCEPT [53472:3261139]
:istio_INBOUND - [0:0]
:istio_IN_REDIRECT - [0:0]
:istio_OUTPUT - [0:0]
:istio_REDIRECT - [0:0]
-A PREROUTING -p tcp -j istio_INBOUND
-A OUTPUT -p tcp -j istio_OUTPUT
-A istio_INBOUND -p tcp -m tcp --dport 22 -j RETURN
-A istio_INBOUND -p tcp -m tcp --dport 15020 -j RETURN
-A istio_INBOUND -p tcp -j istio_IN_REDIRECT
-A istio_IN_REDIRECT -p tcp -j REDIRECT --to-ports 15006
-A istio_OUTPUT -s 127.0.0.6/32 -o lo -j RETURN
-A istio_OUTPUT ! -d 127.0.0.1/32 -o lo -j istio_IN_REDIRECT
-A istio_OUTPUT -m owner --uid-owner 1337 -j RETURN
-A istio_OUTPUT -m owner --gid-owner 1337 -j RETURN
-A istio_OUTPUT -d 127.0.0.1/32 -j RETURN
-A istio_OUTPUT -j istio_REDIRECT
-A istio_REDIRECT -p tcp -j REDIRECT --to-ports 15001
COMMIT
# Completed on Fri Sep 17 13:47:09 2021
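A quick way to read a dump like the one above is to pull out just the REDIRECT rules, which show where inbound and outbound TCP ultimately land. A minimal sketch over the two relevant lines (in a live pod you would feed it the full iptables-save output instead):

```shell
# The two REDIRECT rules from the dump above (inbound -> 15006, outbound -> 15001)
cat > /tmp/nat.rules <<'EOF'
-A istio_IN_REDIRECT -p tcp -j REDIRECT --to-ports 15006
-A istio_REDIRECT -p tcp -j REDIRECT --to-ports 15001
EOF
# Print "<chain> <port>" for every REDIRECT rule
awk '/-j REDIRECT/ {print $2, $NF}' /tmp/nat.rules
```

This prints "istio_IN_REDIRECT 15006" and "istio_REDIRECT 15001": inbound connections are handed to Envoy's 15006 inbound listener, outbound ones to the 15001 virtualOutbound listener discussed earlier.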
When istio-proxy receives the traffic, the rough processing steps are:
In ALLOW_ANY mode, traffic that matches no listener is forwarded directly.
If the matched listener is associated with a cluster of type ORIGINAL_DST, the original destination IP in the request is used.
Traffic that matches BlackHoleCluster is not forwarded.
The matching steps for proxied traffic are roughly as follows:
A question arises: the listeners istio creates for services are bound to 0.0.0.0, and ports inside a cluster can be reused across services, so how does istio actually tell the flows apart?
The key is the route. A route is made up of virtual_host entries generated from the services' information. When you access a service inside the cluster, the route can match on the domain name or the service's virtual IP, so there is nothing to worry about.
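This per-host matching can be reproduced with jq. The sketch below runs against a hypothetical two-host excerpt shaped like the route shown further down (on a live sidecar the data would come from `istioctl proxy-config routes <pod> --name 8001 -o json`), and picks out the single virtual_host whose domains list contains a given destination VIP:

```shell
# Hypothetical two-host excerpt of the route named "8001"
cat > /tmp/route_8001.json <<'EOF'
{
  "name": "8001",
  "virtualHosts": [
    {"name": "merchant-center.open.svc.cluster.local:8001",
     "domains": ["merchant-center.open", "10.233.60.59", "10.233.60.59:8001"]},
    {"name": "cashier-busi-svc.pay.svc.cluster.local:8001",
     "domains": ["cashier-busi-svc.pay", "10.233.17.123", "10.233.17.123:8001"]}
  ]
}
EOF
# A request to VIP 10.233.17.123 matches only the virtual_host whose
# "domains" list contains that address
jq -r --arg d "10.233.17.123" \
  '.virtualHosts[] | select(.domains | index($d)) | .name' /tmp/route_8001.json
```

On this sample it prints "cashier-busi-svc.pay.svc.cluster.local:8001": even though every service here shares port 8001 behind one 0.0.0.0 listener, the destination VIP or host name uniquely selects the right virtual_host and thus the right cluster.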
$ kubectl get svc -A | grep 8001
NodePort    10.233.34.158   <none>   8001:30333/TCP   8d
NodePort    10.233.9.105    <none>   8001:31717/TCP   8d
NodePort    10.233.60.59    <none>   8001:31135/TCP   2d16h
NodePort    10.233.18.212   <none>   8001:32407/TCP   8d
NodePort    10.233.15.5     <none>   8001:30079/TCP   8d
NodePort    10.233.59.21    <none>   8001:31103/TCP   8d
NodePort    10.233.17.123   <none>   8001:31786/TCP   8d
NodePort    10.233.9.196    <none>   8001:32662/TCP   8d
NodePort    10.233.62.85    <none>   8001:32104/TCP   8d
ClusterIP   10.233.49.245   <none>   8000/TCP,8001/TCP,8443/TCP,8444/TCP
These are the virtual_host entries under the route:
{
    "name": "8001",
    "virtualHosts": [
        {
            "name": "merchant-center.open.svc.cluster.local:8001",
            "domains": [
                "merchant-center.open.svc.cluster.local",
                "merchant-center.open.svc.cluster.local:8001",
                "merchant-center.open",
                "merchant-center.open:8001",
                "merchant-center.open.svc.cluster",
                "merchant-center.open.svc.cluster:8001",
                "merchant-center.open.svc",
                "merchant-center.open.svc:8001",
                "10.233.60.59",
                "10.233.60.59:8001"
            ],
            "routes": [
                {
                    "name": "default",
                    "match": {
                        "prefix": "/"
                    },
                    "route": {
                        "cluster": "outbound|8001||merchant-center.open.svc.cluster.local",
                        "timeout": "0s",
                        "retryPolicy": {
                            "retryOn": "connect-failure,refused-stream,unavailable,cancelled,resource-exhausted,retriable-status-codes",
                            "numRetries": 2,
                            "retryHostPredicate": [
                                {
                                    "name": "envoy.retry_host_predicates.previous_hosts"
                                }
                            ],
                            "hostSelectionRetryMaxAttempts": "5",
                            "retriableStatusCodes": [
                                503
                            ]
                        },
                        "maxGrpcTimeout": "0s"
                    },
    ......
        {
            "name": "cashier-busi-svc.pay.svc.cluster.local:8001",
            "domains": [
                "cashier-busi-svc.pay.svc.cluster.local",
                "cashier-busi-svc.pay.svc.cluster.local:8001",
                "cashier-busi-svc.pay",
                "cashier-busi-svc.pay:8001",
                "cashier-busi-svc.pay.svc.cluster",
                "cashier-busi-svc.pay.svc.cluster:8001",
                "cashier-busi-svc.pay.svc",
                "cashier-busi-svc.pay.svc:8001",
                "10.233.17.123",
                "10.233.17.123:8001"
            ],
    ......
        {
            "name": "center-job.manager.svc.cluster.local:8001",
            "domains": [
                "center-job.manager.svc.cluster.local",
                "center-job.manager.svc.cluster.local:8001",
                "center-job.manager",
                "center-job.manager:8001",
                "center-job.manager.svc.cluster",
                "center-job.manager.svc.cluster:8001",
                "center-job.manager.svc",
                "center-job.manager.svc:8001",
                "10.233.34.158",
                "10.233.34.158:8001"
            ],
    ......
Problem Analysis
With all this in mind, we filtered the cluster's services by port and finally found a service in the cluster that uses port 7000:
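The filtering itself can be scripted with jq over a `kubectl get svc -A -o json` dump. A minimal sketch against a hypothetical two-item sample (the namespace/name "ops/redis-proxy" and the second service are invented; only the cluster IP 10.233.0.115 comes from the incident):

```shell
# Hypothetical sample shaped like `kubectl get svc -A -o json`
cat > /tmp/all_svc.json <<'EOF'
{"items": [
  {"metadata": {"namespace": "ops", "name": "redis-proxy"},
   "spec": {"clusterIP": "10.233.0.115", "ports": [{"port": 7000}]}},
  {"metadata": {"namespace": "open", "name": "merchant-center"},
   "spec": {"clusterIP": "10.233.60.59", "ports": [{"port": 8001}]}}
]}
EOF
# List every Service exposing port 7000 as "namespace/name clusterIP"
jq -r '.items[]
       | select(any(.spec.ports[]; .port == 7000))
       | "\(.metadata.namespace)/\(.metadata.name) \(.spec.clusterIP)"' /tmp/all_svc.json
```

On the sample this prints only "ops/redis-proxy 10.233.0.115"; pointed at real kubectl output, the same filter surfaces exactly the services whose ports collide with an external dependency.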
istio automatically generates a 0.0.0.0:7000 listener for 10.233.0.115:7000:
ADDRESS  PORT  TYPE
0.0.0.0  7000  TCP
Looking at the detailed configuration, TCP traffic hitting this listener is not forwarded (BlackHoleCluster). So when traffic destined for 10.0.x.x:7000 was matched by listener 0.0.0.0_7000, being TCP traffic (nc defaults to TCP), the proxy simply did not forward it. This matches the symptom from the beginning: no traffic ever left the pod.
{
    "name": "0.0.0.0_7000",
    "address": {
        "socketAddress": {
            "address": "0.0.0.0",
            "portValue": 7000
        }
    },
    "filterChains": [
        {
            "filterChainMatch": {
                "prefixRanges": [
                    {
                        "addressPrefix": "10.64.x.x",
                        "prefixLen": 32
                    }
                ]
            },
            "filters": [
                {
                    "name": "envoy.tcp_proxy",
                    "typedConfig": {
                        "@type": "type.googleapis.com/envoy.config.filter.network.tcp_proxy.v2.TcpProxy",
                        "statPrefix": "BlackHoleCluster",
                        "cluster": "BlackHoleCluster"
                    }
                }
            ]
        }
    ]
}
As for why ports 7001-7003 worked: istio-proxy defaults to ALLOW_ANY mode, so traffic that matches no listener is let through directly. This can be verified from the istio configmap:
$ kubectl get cm istio -n istio-system -o yaml | grep -i -w -a3 "mode"
    # REGISTRY_ONLY - restrict outbound traffic to services defined in the service registry as well
    #   as those defined through ServiceEntries
    outboundTrafficPolicy:
      mode: ALLOW_ANY
    localityLbSetting:
      enabled: true
    # The namespace to treat as the administrative root namespace for istio
--
    drainDuration: 45s
    parentShutdownDuration: 1m0s
    #
    # The mode used to redirect inbound connections to Envoy. This setting
    # has no effect on outbound traffic: iptables REDIRECT is always used for
    # outbound connections.
    # If "REDIRECT", use iptables REDIRECT to NAT and redirect to Envoy.
    # The "REDIRECT" mode loses source addresses during redirection.
    # If "TPROXY", use iptables TPROXY to redirect to Envoy.
    # The "TPROXY" mode preserves both the source and destination IP
    # addresses and ports, so that they can be used for advanced filtering
    # and manipulation.
    # The "TPROXY" mode also configures the Sidecar to run with the
    # CAP_NET_ADMIN capability, which is required to use TPROXY.
    #interceptionMode: REDIRECT
    #
Solutions
Now let's actually fix the problem described at the beginning. There are three solutions in total.
Method 1: Service Entry
A Service Entry is one of istio's important resource objects. Its role is to register external resources into istio's mesh, providing finer-grained control over the mesh's access to external resources. You can think of it simply as a whitelist: istio generates listeners from the contents of Service Entries.
We add the following configuration in the namespace dev-self-pc-ct:
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: rediscluster
  namespace: dev-self
spec:
  hosts:
  - redis
  addresses:
  - 10.0.x.x/32
  ports:
  - number: 7000
    name: redis-7000
    protocol: tcp
  - number: 7001
    name: redis-7001
    protocol: tcp
  - number: 7002
    name: redis-7002
    protocol: tcp
  - number: 7003
    name: redis-7003
    protocol: tcp
  resolution: NONE
  location: MESH_EXTERNAL
EOF
Check the listener:
$ ./istioctl proxy-config listeners test-8c4c9dcb9-kpm8n.dev-self --address 10.0.x.x -o json
[
    {
        "name": "10.0.x.x_7000",
        "address": {
            "socketAddress": {
                "address": "10.0.x.x",
                "portValue": 7000
            }
        },
        "filterChains": [
            {
                "filters": [
                    {
                        "name": "mixer",
                        "typedConfig": {
                            "@type": "type.googleapis.com/istio.mixer.v1.config.client.TcpClientConfig",
                            "transport": {
                                "networkFailPolicy": {
                                    "policy": "FAIL_CLOSE",
                                    "baseRetryWait": "0.080s",
                                    "maxRetryWait": "1s"
                                },
                                "checkCluster": "outbound|9091||istio-policy.istio-system.svc.cluster.local",
                                "reportCluster": "outbound|9091||istio-telemetry.istio-system.svc.cluster.local",
                                "reportBatchMaxEntries": 100,
                                "reportBatchMaxTime": "1s"
                            },
                            "mixerAttributes": {
                                "attributes": {
                                    "context.proxy_version": {
                                        "stringValue": "1.4.8"
                                    },
                                    "context.reporter.kind": {
                                        "stringValue": "outbound"
                                    },
                                    "context.reporter.uid": {
                                        "stringValue": "kubernetes://test-8c4c9dcb9-kpm8n.dev-self"
                                    },
                                    "destination.service.host": {
                                        "stringValue": "redis"
                                    },
                                    "destination.service.name": {
                                        "stringValue": "redis"
                                    },
                                    "destination.service.namespace": {
                                        "stringValue": "dev-self"
                                    },
                                    "source.namespace": {
                                        "stringValue": "dev-self"
                                    },
                                    "source.uid": {
                                        "stringValue": "kubernetes://test-8c4c9dcb9-kpm8n.dev-self"
                                    }
                                }
                            },
                            "disableCheckCalls": true
                        }
                    },
                    {
                        "name": "envoy.tcp_proxy",
                        "typedConfig": {
                            "@type": "type.googleapis.com/envoy.config.filter.network.tcp_proxy.v2.TcpProxy",
                            "statPrefix": "outbound|7000||redis",
                            "cluster": "outbound|7000||redis"
                        }
                    }
                ]
            }
        ],
        "deprecatedV1": {
            "bindToPort": false
        },
        "listenerFiltersTimeout": "0.100s",
        "continueOnListenerFiltersTimeout": true,
        "trafficDirection": "OUTBOUND"
    },
    ......
]
We can see that in listener "10.0.1.6_7000", TCP traffic is associated with the outbound|7000||redis cluster, whose type is ORIGINAL_DST, i.e. the packet's original destination address is preserved, and no service is associated with it.
So traffic to 10.0.x.x:7000 now keeps its destination address unchanged.
{
    "name": "outbound|7000||redis",
    "type": "ORIGINAL_DST",
    "connectTimeout": "10s",
    "lbPolicy": "CLUSTER_PROVIDED",
    "circuitBreakers": {
        "thresholds": [
            {
                "maxConnections": 4294967295,
                "maxPendingRequests": 4294967295,
                "maxRequests": 4294967295,
                "maxRetries": 4294967295
            }
        ]
    }
}
Test access again:
$ nc 10.0.x.6 7000
keys *
-NOAUTH Authentication required.

$ nc 10.0.x.7 7000
keys *
-NOAUTH Authentication required.

$ nc 10.0.x.8 7000
keys *
-NOAUTH Authentication required.
Traffic is now forwarded normally.
Method 2: change the global.proxy.includeIPRanges or global.proxy.excludeIPRanges option
global.proxy.includeIPRanges: the IP ranges that should be proxied.
global.proxy.excludeIPRanges: the IP ranges that should not be proxied.
The net effect is that the pod's iptables rules match or exclude the given addresses, so traffic to those addresses is not redirected to istio-proxy but sent out directly.
We can update the istio-sidecar-injector configuration with kubectl apply, or achieve the same effect by setting traffic.sidecar.istio.io/includeOutboundIPRanges (or excludeOutboundIPRanges) in spec.template.metadata.annotations.
template:
  metadata:
    annotations:
      kubectl.kubernetes.io/restartedAt: '2021-06-09T21:59:10+08:00'
      kubesphere.io/restartedAt: '2021-09-13T17:07:03.082Z'
      logging.kubesphere.io/logsidecar-config: '{}'
      sidecar.istio.io/componentLogLevel: 'ext_authz:trace,filter:debug'
      sidecar.istio.io/inject: 'true'
      traffic.sidecar.istio.io/excludeOutboundIPRanges: 10.0.1.0/24
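The same annotation can also be applied without hand-editing YAML, by patching the workload's pod template. A minimal sketch, assuming a Deployment named "test" in namespace "dev-self" (both hypothetical); the jq step just validates the payload locally before anything touches a cluster:

```shell
# Strategic-merge patch adding the exclude annotation to the pod template
PATCH='{"spec":{"template":{"metadata":{"annotations":{"traffic.sidecar.istio.io/excludeOutboundIPRanges":"10.0.1.0/24"}}}}}'

# Sanity-check the payload: must parse and contain the annotation value
echo "$PATCH" | jq -er '.spec.template.metadata.annotations["traffic.sidecar.istio.io/excludeOutboundIPRanges"]'

# Apply it (hypothetical name/namespace); changing the template restarts the
# pods, whose init container then renders the new iptables exclude rule
# kubectl patch deployment test -n dev-self -p "$PATCH"
```

Because the annotation lives in the pod template, any change rolls the pods; the exclusion only takes effect once the sidecar's init container has regenerated the iptables rules.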
The iptables rules in the pod will now forward traffic destined for 10.0.x.x/24 directly:
# Generated by iptables-save v1.4.21 on Fri Sep 17 14:26:10 2021
*nat
:PREROUTING ACCEPT [131058:7863480]
:INPUT ACCEPT [183446:11006760]
:OUTPUT ACCEPT [53889:3286544]
:POSTROUTING ACCEPT [53953:3290384]
:istio_INBOUND - [0:0]
:istio_IN_REDIRECT - [0:0]
:istio_OUTPUT - [0:0]
:istio_REDIRECT - [0:0]
-A PREROUTING -p tcp -j istio_INBOUND
-A OUTPUT -p tcp -j istio_OUTPUT
-A istio_INBOUND -p tcp -m tcp --dport 22 -j RETURN
-A istio_INBOUND -p tcp -m tcp --dport 15020 -j RETURN
-A istio_INBOUND -p tcp -j istio_IN_REDIRECT
-A istio_IN_REDIRECT -p tcp -j REDIRECT --to-ports 15006
-A istio_OUTPUT -s 127.0.0.6/32 -o lo -j RETURN
-A istio_OUTPUT ! -d 127.0.0.1/32 -o lo -j istio_IN_REDIRECT
-A istio_OUTPUT -m owner --uid-owner 1337 -j RETURN
-A istio_OUTPUT -m owner --gid-owner 1337 -j RETURN
-A istio_OUTPUT -d 127.0.0.1/32 -j RETURN
-A istio_OUTPUT -d 10.0.0.0/24 -j RETURN
-A istio_OUTPUT -j istio_REDIRECT
-A istio_REDIRECT -p tcp -j REDIRECT --to-ports 15001
COMMIT
# Completed on Fri Sep 17 14:26:10 2021
Method 3: fight Service with Service
This method builds on the way istio automatically generates listeners for services in the cluster: we manually create a service and endpoints in the cluster for the external service:
apiVersion: v1
kind: Endpoints
metadata:
  name: rediscluster
  labels:
    name: rediscluster
    app: redis-jf
    user: jf
  namespace: dev-self
subsets:
- addresses:
  - ip: 10.0.x.x
  - ip: 10.0.x.x
  - ip: 10.0.x.x
  ports:
  - name: tcp-7000
    port: 7000
  - name: tcp-7001
    port: 7001
  - name: tcp-7002
    port: 7002
  - name: tcp-7003
    port: 7003
---
apiVersion: v1
kind: Service
metadata:
  name: rediscluster
  namespace: dev-self
spec:
  ports:
  - name: tcp-7000
    protocol: TCP
    port: 7000
    targetPort: 7000
  - name: tcp-7001
    protocol: TCP
    port: 7001
    targetPort: 7001
  - name: tcp-7002
    protocol: TCP
    port: 7002
    targetPort: 7002
  - name: tcp-7003
    protocol: TCP
    port: 7003
    targetPort: 7003
  selector:
    name: rediscluster
    app: redis-jf
    user: jf
After applying the configuration above, istio automatically generates a listener for service_ip+port. The Service information is as follows:
Selector:          app=redis-jf,name=rediscluster,user=jf
Type:              ClusterIP
IP:                10.233.40.115
Port:              tcp-7000  7000/TCP
TargetPort:        7000/TCP
Endpoints:         <none>
Port:              tcp-7001  7001/TCP
TargetPort:        7001/TCP
Endpoints:         <none>
Port:              tcp-7002  7002/TCP
TargetPort:        7002/TCP
Endpoints:         <none>
Port:              tcp-7003  7003/TCP
TargetPort:        7003/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>
Part of the listener information:
{
    "name": "10.233.59.159_7000",
    "address": {
        "socketAddress": {
            "address": "10.233.59.159",
            "portValue": 7000
        }
    },
    "filterChains": [
        {
            "filters": [
                {
                    "name": "mixer",
                    "typedConfig": {
                        "@type": "type.googleapis.com/istio.mixer.v1.config.client.TcpClientConfig",
                        "transport": {
                            "networkFailPolicy": {
                                "policy": "FAIL_CLOSE",
                                "baseRetryWait": "0.080s",
                                "maxRetryWait": "1s"
                            },
                            "checkCluster": "outbound|9091||istio-policy.istio-system.svc.cluster.local",
                            "reportCluster": "outbound|9091||istio-telemetry.istio-system.svc.cluster.local",
                            "reportBatchMaxEntries": 100,
                            "reportBatchMaxTime": "1s"
                        },
                        "mixerAttributes": {
                            "attributes": {
                                "context.proxy_version": {
                                    "stringValue": "1.4.8"
                                },
    ......
This listener points to a cluster:
{
    "name": "envoy.tcp_proxy",
    "typedConfig": {
        "@type": "type.googleapis.com/envoy.config.filter.network.tcp_proxy.v2.TcpProxy",
        "statPrefix": "outbound|7000||redis",
        "cluster": "outbound|7000||redis"
    }
}
The corresponding service information is as follows:
We can see the endpoints are exactly the external server addresses we specified earlier.
Run the access test:
Access now works normally.
Summary
Finally, let's compare the three methods.
Method 1: add a ServiceEntry to allow access to the external service; the officially recommended method.
Method 2: bypass the istio Sidecar proxy so services reach external services directly.
Method 3: manually create a service/endpoints pair inside the cluster for the external service.
Method 1 works like a whitelist: it not only achieves access to the external service, but also lets you treat that service like an in-cluster one (istio's traffic-management features apply). Moreover, even if a service is compromised, the whitelist means an intruder cannot (or can only with difficulty) send traffic back to their own machine, which further improves security.
Method 2 bypasses the istio Sidecar proxy entirely, giving your services direct access to any external service. However, configuring the proxy this way requires knowledge of your cluster provider and its configuration. You lose monitoring of external service access, and istio features cannot be applied to traffic to external services.
Method 3 can also manage external traffic with istio's traffic-management features, but in practice it brings complex configuration and requires modifying applications that access the external service through the new Service.
Therefore, Method 1 is strongly recommended. One final warning: setting includeOutboundIPRanges to empty is problematic. It effectively makes all services bypass the proxy, so the Sidecar no longer does anything, and istio without its Sidecar has lost its soul.