Observability is generally understood as using a system’s outputs to understand its inner state. Traditional Application Performance Monitoring uses metrics, traces, and logs to measure the health of the systems that support applications, and over the last decade it has proven crucial to operating software.

At Neblic, our drive is to understand the health of the software itself: whether each service and each operation is executing the actions the developers originally intended. Traces, logs, and metrics can’t answer this; again, they provide insights about the systems, not the software.

Extending the principle of using a system’s outputs to understand its inner workings, we concluded that the answer lies within the data. Each process, each function, and each Kafka topic has data flowing in and out that can provide insights into what it’s doing and how that’s changing over time.

The term Data Observability has been gaining traction among Data Engineers as a way to understand the health of their data. But in the context of software developers and applications, traditional Data Observability only measures aspects of the data’s final output.

Enter Application Data Observability.

For us, it was clear that software developers who want to understand the state of their applications can’t rely solely on static tables of outputs. Application Data Observability, for us, means monitoring an application’s data, at multiple points within the application, continuously as it operates.

The more granular the analysis, the more accurate the insights. These intermediate data points have traditionally been lost: their only purpose was to serve as input to other dependencies within the application or to other applications. We now capture that lost data to monitor an application’s health.

How can we use all this in-between data to extract valuable insights? We don’t simply ingest all the data and display it; that would be valuable in some situations, but it would overwhelm most people and generate unnecessary overhead. Instead, our approach analyzes and aggregates data into three pillars: structure, values, and volume. These are data-oriented metrics that can be measured and tracked over time, making it easier to spot trends and anomalies.

By converting data into meaningful metrics, we significantly reduce the overhead of reading data from all those in-between application points, we can guarantee privacy, and we can still provide the right insights for troubleshooting and debugging applications. 

Monitoring data structure involves tracking the schema: field types and field presence. Monitoring data values means tracking the actual observed values: null counts, averages, minimum and maximum values for numbers, field distributions, TopN occurrences, and more. Analyzing data volume across services or topics helps identify trends and errors at that level.
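To make the three pillars concrete, here is a minimal sketch of how such metrics could be aggregated from a batch of JSON-like events. This is a hypothetical illustration, not Neblic’s actual implementation: the `profile_events` function and its output shape are our own invention for this example.

```python
from collections import Counter

def profile_events(events):
    """Aggregate events into structure, value, and volume metrics.

    Hypothetical sketch of the three pillars described above;
    not Neblic's actual implementation.
    """
    field_presence = Counter()   # structure: how often each field appears
    field_types = {}             # structure: observed type names per field
    null_counts = Counter()      # values: null occurrences per field
    numeric_stats = {}           # values: min/max/sum/count per numeric field
    top_values = {}              # values: TopN occurrences per field
    volume = 0                   # volume: events seen in this window

    for event in events:
        volume += 1
        for field, value in event.items():
            field_presence[field] += 1
            field_types.setdefault(field, set()).add(type(value).__name__)
            if value is None:
                null_counts[field] += 1
            elif isinstance(value, (int, float)):
                s = numeric_stats.setdefault(
                    field, {"min": value, "max": value, "sum": 0.0, "count": 0}
                )
                s["min"] = min(s["min"], value)
                s["max"] = max(s["max"], value)
                s["sum"] += value
                s["count"] += 1
            # Unhashable values (dicts, lists) are stringified for counting.
            key = value if not isinstance(value, (dict, list)) else str(value)
            top_values.setdefault(field, Counter())[key] += 1

    return {
        "volume": volume,
        "presence": dict(field_presence),
        "types": {f: sorted(t) for f, t in field_types.items()},
        "nulls": dict(null_counts),
        "numeric": {
            f: {**s, "avg": s["sum"] / s["count"]}
            for f, s in numeric_stats.items()
        },
        "top": {f: c.most_common(3) for f, c in top_values.items()},
    }
```

Tracked over successive time windows, summaries like this can surface schema drift (a new type appearing for a field), value anomalies (a spike in nulls, a shifted average), or volume drops, without shipping or storing the raw payloads themselves.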

Monitoring these metrics with granularity across an application already provides key insights for debugging, but it also gives Neblic ground truths to build relationships on top of: automatically deriving field lineage so teams can visualize which other services or teams consume their outputs, helping onboard new team members by making the system understandable as a whole, and detecting anomalies as soon as they happen.

Application Data Observability from Neblic sits at the convergence of traditional Application Performance Monitoring tools and traditional Data Observability for developers as a way to empower software developers to debug and troubleshoot production applications.

Register on our closed beta waiting list

We’ll reach out!
