Performance monitoring using Node.js, socket.io and MarkLogic - Historical reporting

In a previous article we built a system that collects performance metrics using socket.io and a Node.js library, then plots those values on a chart.

This time we’re adding historical reporting. Pick a date range, and a chart fills up with the data points for that period.

The updated source code for the app can be found here: https://github.com/tpiros/system-information/tree/historical - note it’s in a branch called ‘historical’

MarkLogic ships with a built-in history monitoring tool for the server. It could be a great addition to this project. If you’re curious, read more about it here: https://docs.marklogic.com/guide/monitoring/history

Requirements revisited

Let’s think about what we want. An interface with a date picker that sends a request to the backend. The backend pulls documents from the database and pushes results back to the client (browser), where the chart gets drawn.

We’ll start with the front-end this time, adding visual elements and JavaScript code.

Updating the template

The template got a visual refresh with Bootstrap elements. One of them is a tab panel with a ‘Current’ and a ‘Historical’ tab. The historical tab holds a date picker and a button for submitting requests:

<div role="tabpanel" class="tab-pane" id="historical">
  <div class="container">
    <div class="col-sm-6">
      <form class="form-inline">
        <div class="form-group">
          <div class="input-group date" id="datetimepicker1">
            <input
              type="text"
              id="from"
              class="form-control"
              placeholder="YYYY-MM-DD HH:mm:ss"
            />
            <span class="input-group-addon">
              <span class="glyphicon glyphicon-calendar"></span>
            </span>
          </div>
        </div>
        <div class="form-group">
          <div class="input-group date" id="datetimepicker2">
            <input
              type="text"
              id="to"
              class="form-control"
              placeholder="YYYY-MM-DD HH:mm:ss"
            />
            <span class="input-group-addon">
              <span class="glyphicon glyphicon-calendar"></span>
            </span>
          </div>
        </div>
        <button type="button" class="btn btn-info" id="apply">Apply</button>
        <div
          id="curve_chart_historical"
          style="width: 900px; height: 500px"
        ></div>
      </form>
    </div>
  </div>
</div>

My choice fell on Eonasdan’s Bootstrap date picker. There are other Bootstrap-compatible date pickers out there; pick the one that fits your needs.

Notice the <div> element under the ‘Apply’ button: it has a different id from the other chart in the template. That’s crucial; otherwise the data would load into the wrong chart.

Request from the client to the server

With the chart in place, there needs to be a method that feeds data to it. A new historical.js file handles the logic.

This script uses jQuery and the Google Charting API. The process is simple:

  • Grab the ‘from’ and ‘to’ dates from the date picker and convert them to epochs
  • Fire a request to the server
  • Process the returned data and produce the chart

The code below does exactly that:

$('#apply').click(function () {
  var from = new Date($('#from').val()).getTime();
  var to = new Date($('#to').val()).getTime();

  $.post('/api/historical', { from: from, to: to }).done(function (data) {
    parseData(data, function (historicalDataArray) {
      var options = {
        title: 'System Utilisation',
        curveType: 'function',
        legend: { position: 'bottom' },
        pointSize: 3,
        width: 900,
        height: 400,
      };

      var historicalData =
        google.visualization.arrayToDataTable(historicalDataArray);

      var chart = new google.visualization.LineChart(
        document.getElementById('curve_chart_historical')
      );
      chart.draw(historicalData, options);
    });
  });
});
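
One caveat with the conversion above: how new Date() parses a non-ISO string like ‘YYYY-MM-DD HH:mm:ss’ is engine-dependent. A defensive alternative is to parse the picker’s format explicitly. The toEpoch helper below is a hypothetical addition, not part of the original app:

```javascript
// Hypothetical helper: convert the picker's 'YYYY-MM-DD HH:mm:ss' format
// to an epoch timestamp without relying on engine-specific Date parsing.
function toEpoch(str) {
  var m = /^(\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2})$/.exec(str);
  if (!m) return NaN;
  // Month is zero-based in the Date constructor
  return new Date(+m[1], +m[2] - 1, +m[3], +m[4], +m[5], +m[6]).getTime();
}
```

With this in place, `var from = toEpoch($('#from').val());` would replace the new Date() conversion.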

A separate parseData() function keeps the .done() callback from getting unwieldy. It just reshapes the data into the format the Google Charting API expects:

function parseData(dataFromServer, cb) {
  var historicalDataArray = [['Time', 'CPU Average (%)', 'Used Memory (GB)']];
  dataFromServer.forEach(function (document) {
    historicalDataArray.push([
      new Date(document.content.recorded),
      parseFloat(document.content.cpu),
      parseFloat(document.content.memory),
    ]);
  });

  cb(historicalDataArray);
}
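
To see the reshaping in isolation, here is the same function fed a single hypothetical document (the payload values are made up for illustration):

```javascript
// Same reshaping logic as parseData() in the article, exercised with a
// hypothetical one-document payload (values are made up).
function parseData(dataFromServer, cb) {
  var historicalDataArray = [['Time', 'CPU Average (%)', 'Used Memory (GB)']];
  dataFromServer.forEach(function (document) {
    historicalDataArray.push([
      new Date(document.content.recorded),
      parseFloat(document.content.cpu),
      parseFloat(document.content.memory),
    ]);
  });
  cb(historicalDataArray);
}

parseData(
  [{ content: { recorded: 1432980000000, cpu: '12.5', memory: '3.2' } }],
  function (rows) {
    console.log(rows[0]); // the header row
    console.log(rows[1]); // [Date, CPU as number, memory as number]
  }
);
```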

The client side is ready. Time to build the backend. The Ajax call above already hints at an endpoint that needs creating.

The backend and database query

The backend needs an endpoint that queries the database and returns data for the front-end. Because the application uses Node.js with Express, we also need a package to handle the request body from the browser. I went with body-parser:

const bodyParser = require('body-parser');
// ...
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: false }));
// ...

A bit on the database and indexes

Before we continue with the endpoint, there’s a data structure change to mention. In the previous article, documents in the database had the format { cpu: 'value', memory: 'value' }. For historical reporting, that needs updating to { cpu: 'value', memory: 'value', recorded: 'epoch' } — the recorded property is what both the client-side code and the query refer to. This makes retrieving the right documents much easier.

Remember that documents in the MarkLogic database are stored with the URI format of /data/EPOCH.json. There are other ways to retrieve documents for historical reporting (a mapping between timestamps from the client and URIs could work) but it’d involve more effort.

MarkLogic supports range indexes, and we need one on the recorded JSON property. This enables range queries and lets us retrieve documents in sorted order based on indexed values.
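
One way to create the index (the Admin UI works too) is the Management REST API. The sketch below is an assumption-laden example: the database name ‘system-information’, host, port and credentials are all placeholders you’d swap for your own setup:

```shell
# Add an element range index on the 'recorded' JSON property.
# Database name, host, port and credentials below are assumptions.
curl -X PUT --digest -u admin:admin \
  -H 'Content-Type: application/json' \
  -d '{
        "range-element-index": [{
          "scalar-type": "unsignedLong",
          "namespace-uri": "",
          "localname": "recorded",
          "range-value-positions": false,
          "invalid-values": "reject"
        }]
      }' \
  'http://localhost:8002/manage/v2/databases/system-information/properties'
```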

To learn more about Range Indexes and their requirements in MarkLogic please take a look at this article.

Retrieving the documents

Time to create the endpoint. Defining endpoints in Express is quick work. Remember that the client side specified an HTTP POST request against /api/historical:

let historicalRoute = (req, res) => {
  // body-parser's urlencoded middleware delivers the values as strings,
  // so coerce them to numbers before handing them to the range query
  let from = parseInt(req.body.from, 10);
  let to = parseInt(req.body.to, 10);
  db.documents
    .query(
      qb
        .where(
          qb.and([
            qb.range('recorded', '>=', from),
            qb.range('recorded', '<=', to),
          ])
        )
        .orderBy(qb.sort('recorded'))
        .slice(0, 500)
    )
    .result()
    .then((response) => {
      res.json(response);
    })
    .catch((error) => {
      console.log(error);
    });
};

router.route('/api/historical').post(historicalRoute);
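
The route above references db and qb without showing their setup; they come from the MarkLogic Node.js Client API, along these lines (host, port and credentials here are placeholder assumptions, not the app’s real values):

```javascript
const marklogic = require('marklogic');

// Database client — swap in your own connection details
const db = marklogic.createDatabaseClient({
  host: 'localhost',
  port: 8000,
  user: 'admin',
  password: 'admin',
});

// Query builder used to compose the range query in the route
const qb = marklogic.queryBuilder;
```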

That code grabs from and to from req.body and runs a query. In plain English: “give me up to 500 documents where the recorded JSON property is greater than or equal to from and less than or equal to to, sorted by recorded.”

The 500 data point limit is a sensible default. You can change it anytime, or pass it as a parameter from the client.

The query returns JSON, and that’s what arrives at the client.

Conclusion

The performance monitoring application is now complete. It’s got real-time reporting on CPU and memory utilisation, plus a reporting interface that lets you look back at past performance metrics.

The application shows how socket.io can power real-time apps. Feel free to modify it and add your own metrics. I’d be interested to see what you build with it.