Cribl Observability Pipelines For Dummies
by Alexandra Gates
Observability Pipelines For Dummies®, 2nd Cribl Special Edition
Published by
John Wiley & Sons, Inc.
111 River St.
Hoboken, NJ 07030-5774
www.wiley.com
Copyright © 2023 by John Wiley & Sons, Inc., Hoboken, New Jersey
No part of this publication may be reproduced, stored in a retrieval system or transmitted in any
form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise,
except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without
the prior written permission of the Publisher. Requests to the Publisher for permission should be
addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ
07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permissions.
Trademarks: Wiley, For Dummies, the Dummies Man logo, The Dummies Way, Dummies.com,
Making Everything Easier, and related trade dress are trademarks or registered trademarks of
John Wiley & Sons, Inc. and/or its affiliates in the United States and other countries, and may not
be used without written permission. Cribl and the Cribl logo are registered trademarks of Cribl.
All other trademarks are the property of their respective owners. John Wiley & Sons, Inc., is not
associated with any product or vendor mentioned in this book.
For general information on our other products and services, or how to create a custom For
Dummies book for your business or organization, please contact our Business Development
Department in the U.S. at 877-409-4177, contact [email protected], or visit www.wiley.com/go/custompub. For information about licensing the For Dummies brand for products or services, contact BrandedRights&[email protected].
ISBN: 978-1-119-98212-8 (pbk); ISBN: 978-1-119-98213-5 (ebk). Some blank pages in the print
version may not be included in the ePDF version.
10 9 8 7 6 5 4 3 2 1
Publisher’s Acknowledgments
Some of the people who helped bring this book to market include the following:
Project Manager and Managing Editor: Camille Graves
Development Editor: Carrie Burchfield-Leighton
Acquisitions Editor: Ashley Coffey
Sr. Managing Editor: Rev Mengle
Business Development Representative: Molly Daugherty
Table of Contents
INTRODUCTION
About This Book
Foolish Assumptions
Icons Used in This Book
Beyond the Book
Tradeoff #2: Flexibility
Tradeoff #3: Comprehensive Data Analysis
Tradeoff #4: Adding an Index for Faster Searches
Introduction
How do you get data visibility from your infrastructure and applications in order to properly observe, monitor, and secure their running states while minimizing overlap, wasted resources, and cost?
Foolish Assumptions
When writing this book, I made a couple of assumptions about
you, the reader:
Icons Used in This Book
This icon highlights information that may save you time, money, and more. Tips can help you do things more quickly or easily.
If you like to know the technical details about a topic, pay attention to this icon. It provides you with all the techie, juicy details.
Beyond the Book
This book can help you discover more about observability pipelines, but if you want resources beyond what this book offers, here's some insight for you:
IN THIS CHAPTER
»» Understanding observability
Chapter 1
Embracing the Practice
of Observability
You may be reading this book to discover how an observability pipeline helps you get control over all the data you use in your observability and security projects. Before I get into the nitty-gritty, in this chapter, I cover the basics: what observability is, how you should collect and structure your data, and how observability pipelines can tie all these topics together.
Defining Observability
Observability is a growing practice in the world of software development and operations. Observability gives you the opportunity to learn about the important aspects of your environment without knowing in advance the questions you need to ask. Put more simply, it seeks to answer how much you can understand about a system by looking at it from the outside.
For example, how much can you tell from outside an aircraft about whether the engine is working? You can probably tell if it's running because it's making noise and vibrating — and maybe that it's running properly if there's not a lot of violent shaking inside the cabin, if the plane flies, and so on. But to learn more intricate details, you need sensors — different ones that measure things such as how much air is being pulled through the engine, how hot the engine is, the consistency of the RPM, and how efficiently the plane is consuming fuel.
Break down your sales targets a bit to see where observability can help. For instance, assume you're a clothing retailer. Sales are impacted by the demand for your clothes, your prices, store locations, and your online presence. The obvious place for observability to play a part in sales is web sales. Ask yourself the following questions:
»» How quickly does our website load?
»» Are there differences in site performance for mobile or
desktop browsers?
»» Can we guarantee the safety of sensitive customer information (do you think twice about buying from a company that just announced hackers compromised its customer data)?
So, what kinds of sensors should you have in place to make the
measurements that will help you understand how well things are
running? You can collect this data in several ways:
»» Agents are another type of software that sits on all the endpoints of your systems and collects metrics (see Chapter 2) that explain what's going on within your environment.
Asking Questions of Your Data
To ask questions of your data, you have to structure it in a way that the analytics tools your organization uses can understand. Unfortunately, many data sources have unique structures that aren't easily readable by all analytics tools, and some data sources aren't structured at all. Some of the tools your organization uses to review and analyze data may expect the data to have already been written to log files in a particular format, known as schema-on-write; other tools ingest and index the data as it arrives and apply structure only when the data is read, known as schema-on-read.
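A small sketch can make the contrast concrete. The log line, field names, and regular expression below are illustrative assumptions, not the format of any particular tool:

    import json
    import re

    raw_line = '203.0.113.7 - - [12/Mar/2023:10:15:32 +0000] "GET /cart HTTP/1.1" 200'

    # Schema-on-write: structure the event into the tool's format before storing it.
    structured = {
        "client_ip": "203.0.113.7",
        "method": "GET",
        "path": "/cart",
        "status": 200,
    }
    stored = json.dumps(structured)  # written to disk already conforming to the schema

    # Schema-on-read: store the raw line and extract fields only when you query.
    def read_with_schema(line):
        match = re.search(r'"(\w+) (\S+) HTTP/[\d.]+" (\d+)', line)
        return {"method": match.group(1), "path": match.group(2), "status": int(match.group(3))}

    print(read_with_schema(raw_line))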
that may arise. If you were taking a test and knew which questions were going to be on it, you'd only have to study a narrow set of topics. Because you don't know what's on the test, it's best to master all aspects of your subject if you want that A grade. Of course, studying everything takes a lot of time, just as collecting more data can be expensive without the right approach. An observability pipeline can help you achieve that mastery over your environment without crippling your budget.
As your goals evolve, you have the freedom to make new choices,
including new tools and destinations as well as new data formats.
The right observability pipeline helps you get the data you want,
in the formats you need, to wherever you want it to go.
Data volumes are growing year over year, and at the same time, companies are trying to analyze new sources of data to get a complete picture of their IT and security environments. They need
flexibility to get data into multiple tools from multiple sources but
don’t want to add a lot of new infrastructure and agents. These
companies need a better strategy for retaining data long term
that’s also cost effective.
As an example, take a look at the observability pipeline in
Figure 1-1. It’s a universal receiver of data that can collect, reduce,
and transform data and route it to a wide variety of observability
and security tools and storage destinations.
Check out Chapter 5 for more reasons why you’d want to use an
observability pipeline as your data solution.
IN THIS CHAPTER
»» Figuring out your observability goals
Chapter 2
Evaluating Where You
Stand in Meeting
Your Goals
You can't get to where you're going if you don't know where you're starting. To be successful on a journey, you need to set a course to your final destination. So how do you do that for your IT environment? Data is how you evaluate how well you're meeting your goals. It can show you where you are on the map, how quickly you're traveling, and which roads get you to your destination most efficiently. In this chapter, I give you information on evaluating your goals in both security and IT operations. You also discover how to get information from your data and deliver it to your analytical tools.
Determining Your Enterprise’s
Observability Goals
If observability is about seeking answers to questions about how
well your IT environment is running, then you need to know
how to measure what’s acceptable for meeting the observability
goals of your business. Some of these goals revolve around the
following:
To start your list of metrics, break down all the ways your systems impact your observability goals. Then take higher-level measurements and break them down into the smaller components that impact each metric. For example, site uptime can be impacted by average response time, CPU utilization percentage, available storage percentage, and the number of concurrent query requests to an inventory database. Many of these items will already be captured by standard log and metrics collection, but you can add new measurements as you understand how they impact larger goals.
After you have your list of metrics, try to understand which data sources can be queried to determine the values of your desired metrics. In many cases, the sources of data reside within the infrastructure itself. In other cases, you may need to add tools to measure other aspects of your environment that you aren't collecting today.
Among the most direct tools for recording metrics about your systems and applications are agents. Agents are software programs deployed on your infrastructure that write out information about what's going on in that server, application, network device, and so on. Together with standard information coming from the infrastructure, data recorded by agents can give you a better picture of how well you're meeting your goals and which finer-grained aspects need attention. Just like secret agents in a spy movie, agent software sneaks in and collects intel without anyone knowing it was there. You may also want to factor in third-party data sources that may impact how your systems are running. For example, how does external air temperature impact the performance of a server farm?
Metrics
Metrics are numeric representations of data measured over intervals of time. Metrics can harness the power of mathematical modeling and prediction to derive knowledge of a system's behavior in the present and future.
For example, every time you go for a medical checkup, the nurse
who takes you back to a room collects a set of metrics, such as
your height, weight, blood pressure, temperature, and pulse. The
nurse logs the time, as well as other “dimensions” such as your
name, patient number, what doctor you're seeing, and the reason for your checkup. Think of this collection of multiple metrics,
with one set of dimensions, as a metric event.
In an IT environment, a metric event typically contains the following:
»» A timestamp
»» Metric values, such as
• Percentage of CPU utilization
• Percentage of memory in use
• Load average
• CPU temperature
»» Dimensions, such as
• Hostname
• Location
• Department
• Business function
Each of these events can be used to analyze and report on what's measured. They consolidate important measurements for quick point-in-time health checks. By aggregating, you get a sense of performance without having to store each unique log file.
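Here's a hedged sketch, in Python, of what such a metric event might look like; all values and dimension names are invented for illustration:

    # One metric event: a timestamp, several values, one set of dimensions.
    metric_event = {
        "timestamp": "2023-03-12T10:15:32Z",
        "values": {
            "cpu_utilization_pct": 42.5,
            "memory_in_use_pct": 67.1,
            "load_average": 1.8,
            "cpu_temp_celsius": 54.0,
        },
        "dimensions": {
            "hostname": "web-01",
            "location": "us-east",
            "department": "ecommerce",
            "business_function": "checkout",
        },
    }

    # Aggregating many events (here, averaging CPU) gives a sense of
    # performance without keeping every individual record.
    events = [metric_event]  # in practice, a stream of events
    avg_cpu = sum(e["values"]["cpu_utilization_pct"] for e in events) / len(events)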
Logs
A log is a system-generated record of data that occurs when an event
(see the preceding section) has happened, and this log describes
what’s going on during the event. A log message contains the log
data. Log data are the details about the event such as a resource that
was accessed, who accessed it, and the time. Each event in a system
is going to have different sets of data in the message.
Think about a ship’s log from back in the old wooden sailing-ship
days. Several times a day, the captain (or someone assigned to the
task) noted standard things:
In the digital age, log entries are called log events. Log events of
a particular type, or those from the same source, are written to a
log file locally — or sent across the network to another system.
Log events can be transmitted in different ways, but generically you can refer to the whole process as "sending log events to a log server." And, just as you had a captain's log (things logged by the captain), the digital equivalents include Windows security logs, web server logs, email logs, and so on.
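As a simple illustration of sending log events to a log server, here's a sketch using Python's standard-library syslog handler; the server address and log contents are placeholder assumptions:

    import logging
    import logging.handlers

    logger = logging.getLogger("webshop")
    logger.setLevel(logging.INFO)

    # Forward log events over the network to a remote syslog server.
    handler = logging.handlers.SysLogHandler(address=("logs.example.com", 514))
    logger.addHandler(handler)

    # One log event: what happened, who did it, and the outcome.
    logger.info("user=alice resource=/checkout action=purchase status=200")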
Traces
A trace marks the path a transaction takes through an application to completion. This may be a query of the database or the execution of a purchase by a customer. Think of those Indiana
Jones movies where they showed the red line traversing the globe
to represent how he got from one adventure to the next. A single
trace can provide visibility into both the route traveled as well as
the structure of a request.
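Here's a rough sketch of that structure in Python: every span in a transaction shares a trace ID, and parent IDs record the route traveled. The field names are illustrative, not those of any particular tracing standard:

    import time
    import uuid

    def make_span(trace_id, name, parent_id=None):
        return {
            "trace_id": trace_id,      # shared by every span in the transaction
            "span_id": uuid.uuid4().hex,
            "parent_id": parent_id,    # links the span to its caller
            "name": name,
            "start": time.time(),
        }

    trace_id = uuid.uuid4().hex
    root = make_span(trace_id, "POST /purchase")
    child = make_span(trace_id, "SELECT inventory", parent_id=root["span_id"])
    # The route traveled: POST /purchase -> SELECT inventory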
Delivering Data to Your Analytical Tools
The data you need to collect can come from some or all the pillars of observability (see the earlier section "Gleaning Information from Your Valuable Data"), and believe me when I tell you
that many strong opinions exist around which pillar is the most
valuable, what kind of data to collect, and how to collect it. But in
reality, it all depends entirely on your business and its goals.
Each tool your organization uses may have widely different formats for how data can be read and interpreted. Think of something as simple as a timestamp. Some tools may be formatted YYYY-MM-DD hh:mm:ss; others may include fractional seconds or the day of the week in abbreviated format such as Thu. Different tools may also have different names for field values or expect log data to be in a specific order. Regardless of the format of the tools you use, you have to deliver that data to the tools in a way that they can use.
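As a small illustration, here's how one timestamp might be normalized into several of those formats using Python's standard library:

    from datetime import datetime

    ts = datetime(2023, 3, 12, 10, 15, 32, 123000)

    print(ts.strftime("%Y-%m-%d %H:%M:%S"))      # 2023-03-12 10:15:32
    print(ts.strftime("%Y-%m-%d %H:%M:%S.%f"))   # with fractional seconds
    print(ts.strftime("%a %Y-%m-%d %H:%M:%S"))   # with abbreviated day, e.g. Sun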
Regardless of why the event enters the stream, you need to decide
how to get that data to the right tool to be analyzed. In simple
environments, you can create unique pipelines for each pair of
data sources and destinations. For most organizations, however,
that approach will quickly become cumbersome because you have
multiple tools analyzing overlapping pieces of the same data, as
illustrated in Figure 2-1.
FIGURE 2-1: Creating unique pipelines for each pair of data sources and
destinations.
IN THIS CHAPTER
»» Reviewing and analyzing data
Chapter 3
Identifying Your Choices
to Structure Data for
Your Analytics Tools
An important consideration at this point is how you structure the data for your analytics tools. The format and methods of this structure impact storage volume, compute requirements, and analytical performance. This chapter covers
several approaches and helps you decide which are best suited to
meet your observability goals.
The format for data in an analytics tool is often called a schema, and each tool has a unique schema. Getting data into these schemas generally falls into two approaches: with schema-on-read, structure is applied to the raw data only when it's read, and with schema-on-write, the data is structured into the required format before or as it's ingested. Schema-on-write also requires decisions to be made up front about which data may be useful in order to get the answers you need.
You may be asking yourself why developers can’t just add the
structure to logs as they’re generated. This doesn’t work for a few
reasons:
»» That isn’t their job. That’s the truth; once they have their
applications working the way they want, they move on to the
next project.
»» A lot of log data is generated based on hardware or network
protocols, and the structure (or lack thereof) is fixed.
»» Developers can’t always go back and re-instrument their
systems to perfectly match the needs of how your tools
analyze data.
You also can’t change the fact that data comes in all shapes and
sizes, so spending a lot of time forcing data into a structure that
fits today’s goals is likely to be upended when business goals
evolve and you’re asking new questions of your infrastructure.
IN THIS CHAPTER
»» Dealing with multiple types of expenses
Chapter 4
Looking at the Tradeoffs
in Making IT Decisions
Most IT decisions involve tradeoffs among four factors:
»» Cost
»» Flexibility
»» Comprehensive data analysis
»» Speed
How much observability data you collect and how long you retain
it can impact all four of these factors.
Being able to add new tools gives you a more flexible approach to
observability but can bust your budget (check out Chapter 3 for
more about analytical tools). An observability pipeline can help
you balance these tradeoffs and eliminate the negative impacts
from these choices. This chapter discusses these tradeoffs in more
detail.
Tradeoff #1: Cost
Many factors influence the cost of analyzing data. Three of the
biggest costs are software licensing fees, data storage costs, and
infrastructure compute expenses.
Where you store data impacts the cost (for example, you can
choose S3 or other low-cost storage), but your choices may be
limited if you need access to this data for later analysis.
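To see how quickly storage choices add up, here's a back-of-the-envelope sketch; every rate in it is an invented placeholder, not real vendor pricing:

    # A back-of-the-envelope cost model. All rates are illustrative assumptions.
    DAILY_INGEST_GB = 500          # assumed daily ingest volume
    RETENTION_DAYS = 90            # assumed retention window

    HOT_STORAGE_PER_GB_MONTH = 0.10    # hypothetical indexed block storage rate
    S3_IA_PER_GB_MONTH = 0.0125        # hypothetical infrequent-access rate

    stored_gb = DAILY_INGEST_GB * RETENTION_DAYS  # steady-state data on disk

    print(f"Hot indexed storage: ${stored_gb * HOT_STORAGE_PER_GB_MONTH:,.0f}/month")
    print(f"S3 infrequent access: ${stored_gb * S3_IA_PER_GB_MONTH:,.0f}/month")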
Tradeoff #2: Flexibility
When companies started analyzing log data, they may have been
satisfied with a single tool for the needs of different departments.
As the types of analysis and the availability of tools have evolved,
companies are increasingly employing multiple tools to get the
answers they need. Different teams need to get different answers,
so it makes sense that they have the flexibility they need to choose
the best tool for the jobs they’re performing. You can attain this
flexibility and mitigate this tradeoff through an observability
pipeline.
Most of these analytics tools have their own agents and collec-
tors that need to be placed on all the endpoints across the enter-
prise. This means you’re collecting a lot of the same data multiple
times, just in different formats. All this duplicative data adds to
the amount you have to store, which, you guessed it, drives more
costs.
Some questions, however, require the analysis of past data.
One example is investigating a security breach. These breaches
often occur long before they’re discovered. If you can’t easily
access data from the time of the breach, your investigation may
be incomplete. Consider adding the ability to route some data
sets for immediate analysis and other data sets to long-term,
cheap storage to meet data retention, compliance, and act as an
insurance policy should you need additional context on a security
breach.
Therein lies the tradeoff — the size of data required for an index.
Indexes require extra storage in addition to the raw text, but
indexes can greatly reduce the amount of time and computing power required to find rare terms in large data sets. The ability to rapidly find rare terms in terabytes or petabytes of raw data
is a massive innovation. Unfortunately, as is often the case, this
core innovation has been stretched to be a one-size-fits-all solu-
tion for all log data problems. For some workloads, indexes are a
wasteful optimization.
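A toy inverted index shows both sides of the tradeoff: lookups touch only matching records instead of scanning everything, but the index itself consumes storage on top of the raw text. This Python sketch is illustrative only:

    from collections import defaultdict

    logs = [
        "user=alice action=login status=200",
        "user=bob action=login status=500",
        "user=alice action=purchase status=200",
    ]

    # Build the index: each term maps to the lines that contain it.
    # This structure is extra storage on top of the raw log lines.
    index = defaultdict(set)
    for line_no, line in enumerate(logs):
        for term in line.split():
            index[term].add(line_no)

    # Finding a rare term touches only matching lines, not the whole data set.
    for line_no in index["status=500"]:
        print(logs[line_no])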
IN THIS CHAPTER
»» Discovering strategies for observability
Chapter 5
Navigating a Successful
Observability Journey
To truly understand your environment and learn how to improve it, you need to study the data it generates about how it operates. In Chapter 3, I mention that observability requires you to collect and analyze all types of data, regardless of format or schema. This means gathering performance, health, and security measurements from all your applications, infrastructure, and other endpoints — that can add up to a lot of data and get expensive (check out Chapter 4 for more about cost).
Luckily, an observability pipeline can help you manage massive
amounts of data without swallowing up your budget.
Looking at Proven Observability
Strategies
Collecting, storing, and analyzing data in your pursuit of observability nirvana can take several paths. In this section, I outline some proven strategies using an observability pipeline.
The tradeoffs that come with an index are a significantly larger data footprint and the need to use more specialized storage technology. These tradeoffs make retaining data long term in these tools much more expensive. For more about this tradeoff, check out Chapter 4.
Separate the systems you use to analyze data from the systems you
use to store data for longer-term retention. S3-compliant storage
has significant advantages as a system of retention, as compared
to indexed analytics tools. If you choose infrequent access — an option that assumes you won't need access to the data often — S3 storage can cost a fraction of the price of the block storage that an indexed tool uses.
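As one hedged example of landing data in infrequent-access storage, here's a sketch using the boto3 library (assuming it's installed and AWS credentials are configured); the bucket, key, and file names are placeholders:

    import boto3

    s3 = boto3.client("s3")
    s3.put_object(
        Bucket="my-retention-bucket",
        Key="logs/2023/03/12/web-01.ndjson.gz",
        Body=open("web-01.ndjson.gz", "rb"),
        StorageClass="STANDARD_IA",  # infrequent access: cheaper per GB stored
    )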
Choose the best tool for each team to answer their observability questions. Tools have become a lot more specialized over the
years, which means that a security team may want to choose the
best security information and event management (SIEM) tool to
meet its goals, while the IT operations team may want a different
tool to analyze data for its projects. Both of these teams are likely
looking at a lot of the same data, from the same sources, but they
may be asking different questions. And depending on your cloud
and digital transformation goals, you may need to send similar
data to premises-based and cloud-hosted tooling. By transforming your universal set of data to meet the needs of individual
teams, or business priorities, you reduce the tradeoffs among
flexibility, cost, and the complexity of adding extra agents (check
out Chapter 4 for more on tradeoffs). An observability pipeline
allows you to collect that data once, enhance and filter it, and then
use it with whichever set of tools best fits your needs.
With Cribl Stream and Cribl Edge, you can process observability data — logs, traces, metrics, and so on (I cover these in Chapter 2) — in real time and deliver it to your analysis platform of choice. This allows you to
»» Complement or update your existing agent structure, using
one agent to feed multiple data streams and analytics tools.
»» Collect and send only the data relevant to your observability,
security, and analytics goals or compliance mandates.
[Figure: How Cribl Stream processes data. Sources (Splunk UF/HF, Syslog, Elastic Beats, Fluentd, HTTP/S, TCP, TCP JSON, Kafka, Amazon S3, Amazon Kinesis, Amazon SQS, Azure Event Hubs, Metrics, SNMP Traps, and more) feed optional pre-processing pipelines that normalize events from a source. Routes then use filters to map events to processing pipelines, which perform all event transformations; optional post-processing pipelines normalize events for a destination. Destinations include Splunk, Syslog, Elastic, Kafka/Confluent, Azure (Blob, Monitor, Event Hubs), AWS (S3, Kinesis, SQS, CWL), Databricks, Snowflake, SNMP Traps, NFS/filesystem, StatsD/Graphite, InfluxDB, MinIO, Honeycomb, and more.]
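To give a feel for the routing stage in the figure, here's a toy Python model in which a filter maps each event to a processing pipeline and a destination. It's a conceptual sketch, not Cribl's actual configuration syntax, and all names are illustrative:

    def drop_debug(event):
        # Pipeline: drop noisy debug-level events entirely.
        return None if event.get("level") == "debug" else event

    def mask_card_numbers(event):
        # Pipeline: redact a sensitive field before it reaches the SIEM.
        card = event.get("card")
        if card:
            event["msg"] = event["msg"].replace(card, "****")
        return event

    ROUTES = [
        # (filter, processing pipeline, destination)
        (lambda e: e["source"] == "syslog", drop_debug, "s3-archive"),
        (lambda e: e["source"] == "app", mask_card_numbers, "siem"),
    ]

    def route(event):
        for matches, pipeline, destination in ROUTES:
            if matches(event):
                processed = pipeline(event)
                return (destination, processed) if processed else None
        return None  # no route matched

    print(route({"source": "syslog", "level": "info", "msg": "link up"}))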
Stream also has a special feature called Stream Replay that lets
you land data in lower-cost storage and send it to an analytics
system later if you need it. Analytics tools typically look at data
that streams in real time — or very close to when it’s actually
being measured. Replay allows you to turn previously collected
data into streaming data so it can be analyzed by these tools.
The data you’ve collected from lower-cost storage or application
programming interfaces (APIs) is processed and then routed to
the right tool (replayed) through Stream whenever you want to
analyze it.
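Conceptually, a replay works something like the following sketch (an illustration of the idea, not Cribl's implementation); the bucket, prefix, and endpoint URL are placeholder assumptions:

    import boto3
    import requests

    s3 = boto3.client("s3")
    pages = s3.get_paginator("list_objects_v2").paginate(
        Bucket="my-retention-bucket", Prefix="logs/2023/03/12/"
    )

    # Read stored objects back from low-cost storage and restream each event.
    for page in pages:
        for obj in page.get("Contents", []):
            body = s3.get_object(Bucket="my-retention-bucket", Key=obj["Key"])["Body"]
            for line in body.iter_lines():
                # Send the stored event as if it were arriving live.
                requests.post("https://analytics.example.com/ingest", data=line)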
IN THIS CHAPTER
»» Routing and reducing your data
Chapter 6
Ten Reasons to Use
a Highly Flexible,
Performant
Observability Pipeline
You have a wide variety of options for the structure, content, routing, and storage of your data, but an observability pipeline allows you to ingest and get value from data in any format and from any source, and then you can direct it to your destination of choice — to keep up with the growth of data without bankrupting your company. The right approach to observability helps you find the balance among cost, performance, complexity, and comprehensiveness.
tooling, but replaying the data is an intense manual effort of
writing and running scripts and workarounds. Even deciding at
ingestion time where to send data is impossible in most pipelines.
Most solution ingestion pipelines are built only for that solution.
An observability pipeline lets you reduce data before you pay to store or analyze it. For example, you can:
»» Remove duplicate data.
»» Drop fields you’ll never analyze.
»» Free up space for emerging data sources that add more
value for the business.
Using an observability pipeline means you keep all the data you
need and only pay to analyze and store what’s important to you.
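A minimal sketch of those reduction steps might look like this; the allowlist of fields and the deduplication key are illustrative assumptions:

    KEEP_FIELDS = {"timestamp", "host", "level", "msg"}  # illustrative allowlist

    def reduce_event(event):
        # Drop fields you'll never analyze.
        return {k: v for k, v in event.items() if k in KEEP_FIELDS}

    def dedupe(events):
        # Remove duplicate events, keyed here on host and message.
        seen = set()
        for event in events:
            key = (event.get("host"), event.get("msg"))
            if key not in seen:
                seen.add(key)
                yield event

    events = [
        {"timestamp": 1, "host": "web-01", "msg": "disk full", "debug_blob": "..."},
        {"timestamp": 2, "host": "web-01", "msg": "disk full", "debug_blob": "..."},
    ]
    slimmed = [reduce_event(e) for e in dedupe(events)]  # one event, no debug_blob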
Replaying your data means collecting data from object storage, third-party data sources, or application programming interfaces (APIs) and restreaming it to an analytics tool. Replay allows you to take data at rest and stream it as if it were flowing in real time.
Manage Who Sees What with
Role-Based Access Control
Role-based access control allows you to assign access policies — implementing restrictions or granting access — for teams and individuals. This security step gives your organization much more control over who can access particular types of data and what level of functionality they can use in your logging and pipeline tools.
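As a bare-bones illustration of the idea, here's a sketch that maps roles to permitted actions and checks requests against the policy; the role and action names are invented:

    # Each role carries the set of actions it's allowed to perform.
    ROLE_POLICIES = {
        "security_admin": {"read:firewall_logs", "read:auth_logs", "edit:pipelines"},
        "it_ops": {"read:app_logs", "read:metrics"},
        "auditor": {"read:auth_logs"},
    }

    def is_allowed(role, action):
        return action in ROLE_POLICIES.get(role, set())

    assert is_allowed("it_ops", "read:metrics")
    assert not is_allowed("auditor", "edit:pipelines")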
WILEY END USER LICENSE AGREEMENT
Go to www.wiley.com/go/eula to access Wiley’s ebook EULA.