Maps pod-to-pod traffic, pod-to-Internet traffic, and even AWS IAM traffic, with zero-config.
- About
- Try the network mapper
- Installation instructions
- How does the network mapper work?
- Exporting a network map
- Learn more
- Contributing
- Slack
(Demo video: mapper.mp4)
The Otterize network mapper is a zero-config, lightweight tool: it gives you insight into the traffic in your cluster without requiring you to change anything in it, unlike other solutions that may require deploying a new CNI, a service mesh, and so on.
You can use the Otterize CLI to list the traffic by client, visualize the traffic, export the results as JSON or YAML, or reset the traffic the mapper remembers.
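For reference, the CLI invocations behind these capabilities look roughly like this (a sketch; the exact flags may differ, so check otterize network-mapper --help):
# print the current map as text, grouped by client
otterize network-mapper list
# render the map as an image (the output flag is an assumption)
otterize network-mapper visualize -o otterize-map.png
# forget all traffic recorded so far
otterize network-mapper reset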
Example output after running otterize network-mapper visualize on the Google Cloud microservices demo:

The same microservices demo in the Otterize Cloud access graph, as it appears when you choose to connect the network mapper to Otterize Cloud:

Example output after running otterize network-mapper list on the Google Cloud microservices demo:
cartservice in namespace otterize-ecom-demo calls:
- redis-cart
checkoutservice in namespace otterize-ecom-demo calls:
- cartservice
- currencyservice
- emailservice
- paymentservice
- productcatalogservice
- shippingservice
frontend in namespace otterize-ecom-demo calls:
- adservice
- cartservice
- checkoutservice
- currencyservice
- productcatalogservice
- recommendationservice
- shippingservice
loadgenerator in namespace otterize-ecom-demo calls:
- frontend
recommendationservice in namespace otterize-ecom-demo calls:
- productcatalogservice
Try the quickstart to get a hands-on experience in 5 minutes.
Looking to map AWS traffic? Check out the AWS visibility tutorial.
helm repo add otterize https://helm.otterize.com
helm repo update
helm install network-mapper otterize/network-mapper -n otterize-system --create-namespace --wait
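To confirm the installation, check that the mapper's pods are up (a quick sanity check; pod names vary by chart version):
kubectl get pods -n otterize-system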
Mac
brew install otterize/otterize/otterize-cli
Linux 64-bit
wget https://get.otterize.com/otterize-cli/v2.0.3/otterize_linux_x86_64.tar.gz
tar xf otterize_linux_x86_64.tar.gz
sudo cp otterize /usr/local/bin
Windows
scoop bucket add otterize-cli https://github.com/otterize/scoop-otterize-cli
scoop update
scoop install otterize-cli
For more platforms, see the installation guide.
- Mapper - the mapper is deployed once per cluster; it receives traffic information from the sniffer and the watchers, and resolves that information into communications between service identities.
- Sniffer - the sniffer is deployed to each node using a DaemonSet, and is responsible for capturing node-local DNS traffic and inspecting open connections.
- Kafka watcher - the Kafka watcher is deployed once per cluster and is responsible for detecting accesses to Kafka topics, which services perform those accesses and which operations they use.
- Istio watcher - the Istio watcher is part of the mapper and queries Istio Envoy sidecars for HTTP traffic statistics, which are used to detect HTTP traffic with paths. Currently, the Istio watcher has a limitation: it reports all HTTP traffic the sidecar has seen since the sidecar started, with no indication of when that traffic occurred.
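Since the sniffer runs as a DaemonSet while the mapper runs as a regular Deployment, you can see how they are spread across the cluster (resource names here are assumptions and depend on the chart version):
kubectl get deployments,daemonsets -n otterize-system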
DNS is a common network protocol used for service discovery. When a pod (checkoutservice) tries to connect to a Kubernetes service (orderservice) or another pod, a DNS query is sent out. The network mapper watches DNS responses and extracts the IP addresses, which are used to resolve service identities.
DNS responses only appear when new connections are opened. To handle long-lived connections, the network mapper also queries open TCP connections, in a manner similar to netstat or ss, and resolves the resulting IP addresses to service identities in the same way.
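To get a feel for these two signals, you can reproduce them manually from inside a pod (illustrative only; orderservice is the example service from above, and the tools must be present in the container image):
# the kind of DNS lookup a client performs when connecting to a service
nslookup orderservice
# open TCP connections, analogous to what the mapper inspects for long-lived connections
ss -tn state established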
The Kafka watcher periodically examines logs of Kafka servers provided by the user through configuration, parses them and deduces topic-level access to Kafka from pods in the cluster.
The watcher is only able to parse Kafka logs when Kafka servers' Authorizer logger is configured to output logs to stdout with DEBUG level.
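With a log4j-based Kafka distribution, enabling such logging might look like this (the file path and the stdout console appender are assumptions; adjust for your distribution):
cat >> /opt/kafka/config/log4j.properties <<'EOF'
log4j.logger.kafka.authorizer.logger=DEBUG, stdout
EOF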
The Istio watcher, part of the network mapper, periodically queries for all pods with the security.istio.io/tlsMode label, queries each pod's Istio sidecar for metrics about its connections, and deduces connections with HTTP paths between pods covered by the Istio service mesh.
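To see which pods the Istio watcher would consider, you can list the pods carrying that label yourself:
kubectl get pods --all-namespaces -l security.istio.io/tlsMode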
Service names are resolved in one of two ways:
- If an otterize/service-name label is present, that name is used.
- If not, a recursive look-up is performed for the Kubernetes resource owner of the pod until the root owner is reached.
For example, if you have a Deployment named client, which creates and owns a ReplicaSet, which in turn creates and owns a Pod, then the service name for that pod is client, the same as the name of the Deployment. The goal is to generate a mapping that speaks the same language that dev teams use.
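You can trace this resolution manually for any pod (a sketch; the pod and namespace names below are placeholders):
# check for an explicit service name override
kubectl get pod client-7d4b9c -n otterize-ecom-demo -o jsonpath="{.metadata.labels['otterize/service-name']}"
# otherwise, walk the owner chain: Pod -> ReplicaSet -> Deployment
kubectl get pod client-7d4b9c -n otterize-ecom-demo -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'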
The network mapper continuously builds a map of pod-to-pod communication in the cluster. The map can be exported at any time in either JSON or YAML format with the Otterize CLI.
The YAML export is formatted as ClientIntents Kubernetes resource files. Client intents files can be consumed by the Otterize intents operator to configure pod-to-pod access with network policies, or Kafka client access with Kafka ACLs and mTLS.
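An export run might look like this (the flag name is an assumption; run otterize network-mapper export --help to confirm), after which the resulting ClientIntents files can be applied to a cluster running the intents operator:
otterize network-mapper export --format yaml > intents.yaml
kubectl apply -f intents.yaml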
Explore our documentation site to learn how to:
- Map pod-to-pod communication.
- Automate network policies.
- And more...
- Feel free to fork and open a pull request! Include tests and document your code in Godoc style.
- In your pull request, please refer to an existing issue or open a new one.
- See our Contributor License Agreement.
To join the conversation, ask questions, and engage with other users, join the Otterize Slack!
The mapper reports anonymous usage information back to the Otterize team, to help the team understand how the software is used in the community and what aspects users find useful. No personal or organizational identifying information is transmitted in these metrics: they only reflect patterns of usage. You may opt out at any time through a single configuration flag.
To disable sending usage information:
- Via the Otterize OSS Helm chart: --set global.telemetry.enabled=false.
- Via an environment variable: OTTERIZE_TELEMETRY_ENABLED=false.
- If running the mapper directly: --telemetry-enabled=false.
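For example, to disable telemetry at install time with the Helm chart from the installation section above:
helm install network-mapper otterize/network-mapper -n otterize-system --create-namespace \
  --set global.telemetry.enabled=false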
If the telemetry flag is omitted or set to true, telemetry will be enabled: usage information will be reported.
Read more about it in the Usage telemetry documentation.

