Bug description
After configuring a very simple Lua script, the Istio proxy crashes with a segfault.
Expected behavior
A Lua script should never make the proxy crash; the script may fail and result in an HTTP 500, but it must never crash the proxy.
Steps to reproduce the bug
First create a Service that is available externally via the ingress gateway:
apiVersion: v1
kind: Namespace
metadata:
name: foo
labels:
istio-injection: enabled
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: gateway-testserver-foo
namespace: foo
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- test.server
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: testserver
namespace: foo
spec:
gateways:
- gateway-testserver-foo
hosts:
- test.server
http:
- route:
- destination:
host: testserver.foo.svc.cluster.local
port:
number: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: testserver
namespace: foo
spec:
selector:
matchLabels:
app: testserver-foo
replicas: 1
template:
metadata:
labels:
app: testserver-foo
spec:
containers:
- name: testserver
image: kennethreitz/httpbin
ports:
- containerPort: 80
name: http-testserver
---
apiVersion: v1
kind: Service
metadata:
name: testserver
namespace: foo
spec:
selector:
app: testserver-foo
ports:
- name: http
port: 8080
protocol: TCP
targetPort: http-testserver
---
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
name: default
namespace: foo
spec:
peers:
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: plain
namespace: foo
spec:
host: testserver.foo.svc.cluster.local
trafficPolicy:
tls:
mode: DISABLE
Afterwards configure this EnvoyFilter for the gateway listener of the ingress gateway:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: test-lua
namespace: foo
spec:
workloadLabels:
istio: ingressgateway
filters:
- listenerMatch:
listenerType: GATEWAY
listenerProtocol: HTTP
filterName: envoy.lua
filterType: HTTP
filterConfig:
inlineCode: |
function envoy_on_request(request_handle)
request_handle:logWarn("Execute Envoy Filter")
local copy = {}
for k,v in pairs(request_handle:headers()) do
copy[k] = v
end
request_handle:logWarn("And return")
end
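As a possible control for narrowing the trigger, here is a hypothetical variant of the script that avoids pairs() iteration over the headers userdata and instead reads a fixed set of header names via the documented get() accessor (the header names are the ones sent by the curl loop in the reproduction; I have not verified whether this variant avoids the crash):

```lua
function envoy_on_request(request_handle)
  request_handle:logWarn("Execute Envoy Filter")
  local headers = request_handle:headers()
  local copy = {}
  -- Copy a fixed list of headers instead of iterating the userdata with pairs().
  -- These names match the curl reproduction; get() returns nil for absent headers.
  for _, name in ipairs({"a.bc", "b.ce", "c.de", ":path", ":method"}) do
    copy[name] = headers:get(name)
  end
  request_handle:logWarn("And return")
end
```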
Now fire requests against this service:
while true; do
curl test.server:80/get --resolve test.server:80:127.0.0.1 -v -H 'a.bc: test' -H 'b.ce: abc' -H 'c.de: 424242424'
done
(I use curl --resolve as I am running Istio locally on Docker for Mac.)
After a few requests, the istio-proxy of the ingress gateway crashes with a segfault:
2019-04-30T09:08:26.333592Z info Envoy command: [-c /etc/istio/proxy/envoy-rev0.json --restart-epoch 0 --drain-time-s 45 --parent-shutdown-time-s 60 --service-cluster istio-ingressgateway --service-node router~10.1.12.183~istio-ingressgateway-6894877c7f-9f6hw.istio-system~istio-system.svc.cluster.local --max-obj-name-len 189 --allow-unknown-fields -l warning]
[2019-04-30 09:08:26.357][66][warning][misc] [external/envoy/source/common/protobuf/utility.cc:174] Using deprecated option 'envoy.api.v2.Cluster.hosts' from file cds.proto. This configuration will be removed from Envoy soon. Please see https://www.envoyproxy.io/docs/envoy/latest/intro/deprecated for details.
[2019-04-30 09:08:26.357][66][warning][misc] [external/envoy/source/common/protobuf/utility.cc:174] Using deprecated option 'envoy.api.v2.Cluster.hosts' from file cds.proto. This configuration will be removed from Envoy soon. Please see https://www.envoyproxy.io/docs/envoy/latest/intro/deprecated for details.
[2019-04-30 09:08:26.433][66][warning][misc] [external/envoy/source/common/protobuf/utility.cc:174] Using deprecated option 'envoy.api.v2.Cluster.hosts' from file cds.proto. This configuration will be removed from Envoy soon. Please see https://www.envoyproxy.io/docs/envoy/latest/intro/deprecated for details.
[2019-04-30 09:08:26.439][66][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:86] gRPC config stream closed: 14, no healthy upstream
[2019-04-30 09:08:26.439][66][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:49] Unable to establish new stream
2019-04-30T09:08:26.673252Z info Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
[2019-04-30 09:08:27.370][75][warning][lua] [external/envoy/source/extensions/filters/http/lua/lua_filter.cc:538] script log: Execute Envoy Filter
[2019-04-30 09:08:27.370][75][warning][lua] [external/envoy/source/extensions/filters/http/lua/lua_filter.cc:538] script log: And return
[2019-04-30 09:08:27.395][77][warning][lua] [external/envoy/source/extensions/filters/http/lua/lua_filter.cc:538] script log: Execute Envoy Filter
[2019-04-30 09:08:27.395][77][warning][lua] [external/envoy/source/extensions/filters/http/lua/lua_filter.cc:538] script log: And return
[2019-04-30 09:08:27.417][75][warning][lua] [external/envoy/source/extensions/filters/http/lua/lua_filter.cc:538] script log: Execute Envoy Filter
[2019-04-30 09:08:27.418][75][warning][lua] [external/envoy/source/extensions/filters/http/lua/lua_filter.cc:538] script log: And return
[2019-04-30 09:08:27.447][73][warning][lua] [external/envoy/source/extensions/filters/http/lua/lua_filter.cc:538] script log: Execute Envoy Filter
[2019-04-30 09:08:27.447][73][warning][lua] [external/envoy/source/extensions/filters/http/lua/lua_filter.cc:538] script log: And return
[2019-04-30 09:08:27.473][73][warning][lua] [external/envoy/source/extensions/filters/http/lua/lua_filter.cc:538] script log: Execute Envoy Filter
[2019-04-30 09:08:27.473][73][warning][lua] [external/envoy/source/extensions/filters/http/lua/lua_filter.cc:538] script log: And return
[2019-04-30 09:08:27.498][75][warning][lua] [external/envoy/source/extensions/filters/http/lua/lua_filter.cc:538] script log: Execute Envoy Filter
[2019-04-30 09:08:27.498][75][warning][lua] [external/envoy/source/extensions/filters/http/lua/lua_filter.cc:538] script log: And return
[2019-04-30 09:08:27.522][74][warning][lua] [external/envoy/source/extensions/filters/http/lua/lua_filter.cc:538] script log: Execute Envoy Filter
[2019-04-30 09:08:27.522][74][warning][lua] [external/envoy/source/extensions/filters/http/lua/lua_filter.cc:538] script log: And return
[2019-04-30 09:08:27.548][77][warning][lua] [external/envoy/source/extensions/filters/http/lua/lua_filter.cc:538] script log: Execute Envoy Filter
[2019-04-30 09:08:27.548][77][warning][lua] [external/envoy/source/extensions/filters/http/lua/lua_filter.cc:538] script log: And return
[2019-04-30 09:08:27.573][77][warning][lua] [external/envoy/source/extensions/filters/http/lua/lua_filter.cc:538] script log: Execute Envoy Filter
[2019-04-30 09:08:27.573][77][warning][lua] [external/envoy/source/extensions/filters/http/lua/lua_filter.cc:538] script log: And return
[2019-04-30 09:08:27.601][77][warning][lua] [external/envoy/source/extensions/filters/http/lua/lua_filter.cc:538] script log: Execute Envoy Filter
PANIC: unprotected error in call to Lua API (8)
[2019-04-30 09:08:27.605][77][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:81] Caught Segmentation fault, suspect faulting address 0x6c61667078be
[2019-04-30 09:08:27.605][77][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:69] Backtrace (use tools/stack_decode.py to get line numbers):
[2019-04-30 09:08:27.606][77][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:73] #0: __restore_rt [0x7f84af534390]
[2019-04-30 09:08:27.611][77][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:73] #1: Envoy::Upstream::ClusterManagerImpl::get() [0x906e02]
[2019-04-30 09:08:27.616][77][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:73] #2: Envoy::Grpc::AsyncStreamImpl::initialize() [0xa45ab6]
[2019-04-30 09:08:27.623][77][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:73] #3: Envoy::Grpc::AsyncRequestImpl::initialize() [0xa46b8e]
[2019-04-30 09:08:27.627][77][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:73] #4: Envoy::Grpc::AsyncClientImpl::send() [0xa458aa]
[2019-04-30 09:08:27.631][77][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:73] #5: Envoy::Utils::GrpcTransport<>::GrpcTransport() [0x4c0054]
[2019-04-30 09:08:27.635][77][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:73] #6: Envoy::Utils::GrpcTransport<>::GetFunc()::{lambda()#1}::operator()() [0x4bfee5]
[2019-04-30 09:08:27.638][77][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:73] #7: std::_Function_handler<>::_M_invoke() [0x4bfd94]
[2019-04-30 09:08:27.641][77][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:73] #8: istio::mixerclient::ReportBatch::FlushWithLock() [0x4df513]
[2019-04-30 09:08:27.645][77][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:73] #9: istio::mixerclient::ReportBatch::~ReportBatch() [0x4df11a]
[2019-04-30 09:08:27.648][77][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:73] #10: istio::mixerclient::ReportBatch::~ReportBatch() [0x4df24e]
[2019-04-30 09:08:27.651][77][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:73] #11: istio::mixerclient::MixerClientImpl::~MixerClientImpl() [0x4dc28c]
[2019-04-30 09:08:27.655][77][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:73] #12: istio::mixerclient::MixerClientImpl::~MixerClientImpl() [0x4dc3fe]
[2019-04-30 09:08:27.658][77][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:73] #13: istio::control::ClientContextBase::~ClientContextBase() [0x4a7e34]
[2019-04-30 09:08:27.662][77][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:73] #14: istio::control::http::ControllerImpl::~ControllerImpl() [0x4c4e1f]
[2019-04-30 09:08:27.666][77][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:73] #15: istio::control::http::ControllerImpl::~ControllerImpl() [0x4c4eae]
[2019-04-30 09:08:27.669][77][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:73] #16: Envoy::Http::Mixer::Control::~Control() [0x47c0d2]
[2019-04-30 09:08:27.671][77][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:73] #17: Envoy::ThreadLocal::InstanceImpl::ThreadLocalData::~ThreadLocalData() [0x87f7a9]
[2019-04-30 09:08:27.671][77][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:75] #18: [0x7f84aee8a5ff]
2019-04-30T09:08:27.689676Z warn Epoch 0 terminated with an error: signal: segmentation fault
2019-04-30T09:08:27.689716Z warn Aborted all epochs
2019-04-30T09:08:27.689758Z info Epoch 0: set retry delay to 1.6s, budget to 6
2019-04-30T09:08:28.672921Z info Envoy proxy is NOT ready: failed retrieving Envoy stats: Get http://127.0.0.1:15000/stats?usedonly: dial tcp 127.0.0.1:15000: connect: connection refused
2019-04-30T09:08:29.290223Z info Reconciling retry (budget 6)
2019-04-30T09:08:29.291158Z info Epoch 0 starting
2019-04-30T09:08:29.294586Z info Envoy command: [-c /etc/istio/proxy/envoy-rev0.json --restart-epoch 0 --drain-time-s 45 --parent-shutdown-time-s 60 --service-cluster istio-ingressgateway --service-node router~10.1.12.183~istio-ingressgateway-6894877c7f-9f6hw.istio-system~istio-system.svc.cluster.local --max-obj-name-len 189 --allow-unknown-fields -l warning]
[2019-04-30 09:08:29.352][81][warning][misc] [external/envoy/source/common/protobuf/utility.cc:174] Using deprecated option 'envoy.api.v2.Cluster.hosts' from file cds.proto. This configuration will be removed from Envoy soon. Please see https://www.envoyproxy.io/docs/envoy/latest/intro/deprecated for details.
[2019-04-30 09:08:29.352][81][warning][misc] [external/envoy/source/common/protobuf/utility.cc:174] Using deprecated option 'envoy.api.v2.Cluster.hosts' from file cds.proto. This configuration will be removed from Envoy soon. Please see https://www.envoyproxy.io/docs/envoy/latest/intro/deprecated for details.
[2019-04-30 09:08:29.352][81][warning][misc] [external/envoy/source/common/protobuf/utility.cc:174] Using deprecated option 'envoy.api.v2.Cluster.hosts' from file cds.proto. This configuration will be removed from Envoy soon. Please see https://www.envoyproxy.io/docs/envoy/latest/intro/deprecated for details.
[2019-04-30 09:08:29.357][81][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:86] gRPC config stream closed: 14, no healthy upstream
[2019-04-30 09:08:29.357][81][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:49] Unable to establish new stream
2019-04-30T09:08:30.673322Z info Envoy proxy is ready
Alternatively, the problem can be reproduced directly with istio-proxy, without the control plane, by starting istio-proxy with this configuration:
static_resources:
listeners:
- name: main
address:
socket_address:
address: 0.0.0.0
port_value: 8888
filter_chains:
- filters:
- name: envoy.http_connection_manager
config:
stat_prefix: ingress_http
codec_type: auto
route_config:
name: local_route
virtual_hosts:
- name: local_service
domains:
- "*"
routes:
- match:
prefix: "/"
route:
cluster: web_service
http_filters:
- name: envoy.lua
config:
inline_code: |
function envoy_on_request(request_handle)
request_handle:logWarn("Execute Envoy Filter")
local copy = {}
for k,v in pairs(request_handle:headers()) do
copy[k] = v
end
request_handle:logWarn("And return")
end
- name: envoy.router
config: {}
clusters:
- name: web_service
connect_timeout: 0.25s
type: strict_dns # static
lb_policy: round_robin
hosts:
- socket_address:
address: 127.0.0.1
port_value: 8086
admin:
access_log_path: "/dev/null"
address:
socket_address:
address: 0.0.0.0
port_value: 8001
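For reference, the standalone reproduction can be driven roughly like this (the file name, the dummy backend command, and the Envoy binary location are assumptions on my part):

```shell
# Save the static config above as lua-crash.yaml (file name is arbitrary),
# then serve something on the port the web_service cluster targets (8086):
python3 -m http.server 8086 &

# Start the proxy; inside the official istio-proxy image the Envoy binary
# should be on the PATH as `envoy` (exact path may differ):
envoy -c lua-crash.yaml -l warning &

# Fire requests with dotted header names, as in the gateway reproduction:
while true; do
  curl -s http://127.0.0.1:8888/ -H 'a.bc: test' -H 'b.ce: abc' >/dev/null
done
```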
I could only reproduce this with the istio-proxy from the official Docker image.
A local build on Mac did not crash.
That is plausible, though, as the local build deviates from what is shipped in several ways: different compiler, different OS, and as far as I know istio-proxy 1.1.3 and 1.1.4 are not built from the publicly available code.
There are actually several different stack traces across the crashes; the log above shows a crash in the Mixer plugin.
When running with the minimal config above, i.e. without the Mixer plugin, I also see something like this:
[2019-04-30 09:47:48.706][820][warning][lua] [external/envoy/source/extensions/filters/http/lua/lua_filter.cc:538] script log: Execute Envoy Filter
PANIC: unprotected error in call to Lua API (8)
[2019-04-30 09:47:48.707][820][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:81] Caught Segmentation fault, suspect faulting address 0x2e
[2019-04-30 09:47:48.707][820][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:69] Backtrace (use tools/stack_decode.py to get line numbers):
[2019-04-30 09:47:48.707][820][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:73] #0: __restore_rt [0x7fb5b98fb0c0]
[2019-04-30 09:47:48.710][820][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:73] #1: Envoy::Stats::ThreadLocalStoreImpl::ScopeImpl::tlsHistogram() [0x8f5f2e]
[2019-04-30 09:47:48.713][820][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:73] #2: Envoy::Stats::ParentHistogramImpl::recordValue() [0x8f7205]
[2019-04-30 09:47:48.716][820][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:73] #3: Envoy::Http::Http1::ConnPoolImpl::ActiveClient::~ActiveClient() [0x9ec150]
[2019-04-30 09:47:48.719][820][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:73] #4: Envoy::Http::Http1::ConnPoolImpl::ActiveClient::~ActiveClient() [0x9ec2f5]
[2019-04-30 09:47:48.722][820][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:73] #5: Envoy::Event::DispatcherImpl::clearDeferredDeleteList() [0x8d3dc7]
[2019-04-30 09:47:48.725][820][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:73] #6: Envoy::Http::Http1::ConnPoolImpl::~ConnPoolImpl() [0x9ec662]
[2019-04-30 09:47:48.728][820][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:73] #7: Envoy::Http::Http1::ProdConnPoolImpl::~ProdConnPoolImpl() [0x9ec91e]
[2019-04-30 09:47:48.732][820][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:73] #8: absl::container_internal::raw_hash_set<>::clear() [0x9103fd]
[2019-04-30 09:47:48.739][820][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:73] #9: Envoy::Upstream::ConnPoolMap<>::~ConnPoolMap() [0x9100a8]
[2019-04-30 09:47:48.743][820][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:73] #10: std::_Sp_counted_ptr_inplace<>::_M_dispose() [0x90fe6c]
[2019-04-30 09:47:48.746][820][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:73] #11: std::__detail::_Hashtable_alloc<>::_M_deallocate_node() [0x90fd65]
[2019-04-30 09:47:48.750][820][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:73] #12: Envoy::Upstream::ClusterManagerImpl::ThreadLocalClusterManagerImpl::~ThreadLocalClusterManagerImpl() [0x90c343]
[2019-04-30 09:47:48.755][820][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:73] #13: Envoy::ThreadLocal::InstanceImpl::ThreadLocalData::~ThreadLocalData() [0x87f7a9]
[2019-04-30 09:47:48.755][820][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:75] #14: [0x7fb5b927ceaf]
Version (include the output of istioctl version --remote and kubectl version)
This happens with the istio-proxy of 1.1.3 and 1.1.4.
I could not reproduce the problem with 1.1.2.
How was Istio installed?
helm install ~/istio/istio-1.1.4/install/kubernetes/helm/istio-init --name istio-init --namespace istio-system -f ~/istio/values.yaml
helm install ~/istio/istio-1.1.4/install/kubernetes/helm/istio --name istio --namespace istio-system -f ~/istio/values.yaml
Environment where bug was observed (cloud vendor, OS, etc)
Running locally on Kubernetes 1.13.0 on Docker for Mac.
Affected product area (please put an X in all that apply)
[ ] Configuration Infrastructure
[ ] Docs
[ ] Installation
[ ] Networking
[ ] Performance and Scalability
[ ] Policies and Telemetry
[ ] Security
[X] Test and Release
[ ] User Experience
This is the dump of the cluster config (I had to fix an error in the dump_kubernetes.sh script; it doesn't work on OSX):
istio-dump.tar.gz