⏰ Promster - Measure metrics from Hapi, Express, Marble.js, Apollo or Fastify servers with Prometheus 🚦

Promster is a Prometheus Exporter for Node.js servers written for Express, Hapi, Marble.js, Apollo or Fastify.

❤️ Hapi · Express · Marble.js · Fastify · Apollo · TypeScript · Vitest · oxlint · Changesets · Prometheus 🙏



❯ Why another Prometheus exporter for Express and Hapi?

These packages grew out of observations and experiences with other exporters, whose shortcomings I tried to fix.

  1. 🏎 Use process.hrtime.bigint() for high-resolution real time in metrics, reported in seconds (converted from nanoseconds)
    • process.hrtime.bigint() calls libuv's uv_hrtime without a system call, unlike new Date
  2. ⚔️ Allow normalization of all pre-defined label values
  3. 🖥 Expose Garbage Collection among other metrics of the Node.js process by default
  4. 🚨 Provide a built-in server to expose metrics quickly (on a different port) while also allowing users to integrate with existing servers
  5. 📊 Define two metrics: a histogram for buckets and a summary for percentiles, for performant graphs in e.g. Grafana
  6. 👩‍👩‍👧 One library to integrate with Hapi, Express and potentially more (managed as a mono repository)
  7. 🦄 Allow customization of labels while sorting them internally before reporting
  8. 🐼 Expose the Prometheus client on Express locals or the Hapi app to easily allow adding more app metrics
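As a minimal sketch of point 1, here is how a nanosecond delta from process.hrtime.bigint() can be converted into seconds. This is plain Node.js for illustration; the helper names are made up and no promster APIs are used:

```javascript
// Sketch: timing a piece of work with the monotonic clock and converting
// the BigInt nanosecond delta to a floating-point number of seconds,
// the same unit promster reports request durations in.
const NS_PER_SEC = 1e9;

function startTiming() {
  // Backed by libuv's uv_hrtime; reading it does not involve a system
  // call, unlike constructing a Date.
  return process.hrtime.bigint();
}

function endTiming(start) {
  const deltaNs = process.hrtime.bigint() - start;
  return Number(deltaNs) / NS_PER_SEC;
}

const start = startTiming();
for (let i = 0; i < 1e6; i += 1); // simulate some work
const seconds = endTiming(start);
console.log(`took ${seconds} seconds`);
```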

❯ Package Status

Package
@promster/hapi
@promster/express
@promster/marblejs
@promster/fastify
@promster/apollo
@promster/undici
@promster/server
@promster/metrics
@promster/types

❯ Installation

This is a mono repository maintained using changesets. It contains multiple packages including framework integrations (express, hapi, fastify, marblejs, apollo, undici), a core metrics library, a standalone server for exposing metrics, and shared types.

Depending on the preferred integration use:

yarn add @promster/express or npm i @promster/express --save

or

yarn add @promster/hapi or npm i @promster/hapi --save

Please additionally make sure you have prom-client installed. It is a peer dependency of the @promster packages because some projects already have prom-client installed; bundling another copy would otherwise result in different default registries.

yarn add prom-client or npm i prom-client --save

❯ Getting Started

Promster has to be set up with your server -- either as an Express middleware, a Hapi plugin, a Fastify plugin, or similar. You can expose the gathered metrics via a built-in small server or through your own.

Note: Do not be scared by the variety of options. @promster can be set up without any additional configuration and has sensible defaults. However, to suit many needs and different existing setups (e.g. metrics having recording rules over histograms), it comes with all the options listed under Configuration.

Express

import app from './your-express-app';
import { createMiddleware } from '@promster/express';

// Note: This should be done BEFORE other routes
// Pass 'app' as middleware parameter to additionally expose Prometheus under 'app.locals'
app.use(createMiddleware({ app, options }));

Passing the app into the createMiddleware call attaches the internal prom-client to your Express app's locals. This may come in handy as later you can:

// Create an e.g. custom counter
const counter = new app.locals.Prometheus.Counter({
  name: 'metric_name',
  help: 'metric_help',
});

// to later increment it
counter.inc();

Fastify

import fastify from './your-fastify-app';
import { plugin as promsterPlugin } from '@promster/fastify';

fastify.register(promsterPlugin);

The plugin attaches the internal prom-client to your Fastify instance. This may come in handy as later you can:

// Create an e.g. custom counter
const counter = new fastify.Prometheus.Counter({
  name: 'metric_name',
  help: 'metric_help',
});

// to later increment it
counter.inc();

Hapi

import { createPlugin } from '@promster/hapi';
import app from './your-hapi-app';

app.register(createPlugin({ options }));

Here you do not have to pass in the app into the createPlugin call as the internal prom-client will be exposed onto Hapi as in:

// Create an e.g. custom counter
const counter = new app.Prometheus.Counter({
  name: 'metric_name',
  help: 'metric_help',
});

// to later increment it
counter.inc();

Marble.js

import { r } from '@marblejs/core'; // in Marble.js v4 the router moved to @marblejs/http
import { mapTo } from 'rxjs/operators';
import {
  createMiddleware,
  getContentType,
  getSummary,
} from '@promster/marblejs';

const middlewares = [
  createMiddleware(),
  //...
];

const serveMetrics$ = r
  .matchPath('/metrics')
  .matchType('GET')
  .use(async (req$) =>
    req$.pipe(
      mapTo({
        headers: { 'Content-Type': getContentType() },
        body: await getSummary(),
      }),
    ),
  );

Apollo

import { ApolloServer } from 'apollo-server';
import { createPlugin as createPromsterMetricsPlugin } from '@promster/apollo';

const server = new ApolloServer({
  typeDefs,
  resolvers,
  plugins: [createPromsterMetricsPlugin()],
});

await server.listen();

Undici

Depending on the undici version in use, you either have to use the pool metrics exporter or can use the agent metrics exporter.

Agent metrics (undici v7.9.0 or later)

import { createAgentMetricsExporter } from '@promster/undici';

createAgentMetricsExporter([agentA, agentB]);

You can then also always add additional agents:

import { addObservedAgent } from '@promster/undici';

addObservedAgent(agent);

Pool metrics (undici before v7.9.0)

import { createPoolMetricsExporter } from '@promster/undici';

createPoolMetricsExporter({ poolA, poolB });

You can then also always add additional pools:

import { addObservedPool } from '@promster/undici';

addObservedPool(origin, pool);

To integrate this with an undici agent you can use the factory function:

import { Agent, Pool } from 'undici';

const agent = new Agent({
  factory(origin: string, opts: Pool.Options) {
    return observedPoolFactory(origin, opts);
  },
});

Metrics Server

In some cases you might want to expose the gathered metrics through an individual server. This is useful for instance to not have GET /metrics expose internal server and business metrics to the outside world. For this you can use @promster/server:

import { createServer } from '@promster/server';

// NOTE: The port defaults to `7788`.
createServer({ port: 8888 }).then((server) =>
  console.log(`@promster/server started on port 8888.`),
);

Options with their respective defaults are port: 7788, hostname: '0.0.0.0' and detectKubernetes: false. Whenever detectKubernetes is passed as true the server will not start locally.

Exposing metrics via your own server

You can use the express or hapi package to expose the gathered metrics through your existing server. To do so just:

import app from './your-express-app';
import { getSummary, getContentType } from '@promster/express';

app.use('/metrics', async (req, res) => {
res.statusCode = 200;

  res.setHeader('Content-Type', getContentType());
  res.end(await getSummary());
});

This may slightly depend on the server you are using but should be roughly the same for all.

The packages re-export most things from the @promster/metrics package, including two other potentially useful exports: Prometheus (the actual client) and defaultRegister (the client's default register). You should never really have to install @promster/metrics yourself, as it is only a package shared internally between the others.

❯ Configuration

When creating either the Express middleware or Hapi plugin the following options can be passed.

Options

Option Description
labels An Array<String> of custom labels to be configured on all metrics mentioned above
metricPrefix A prefix applied to all metrics. The prom-client's default metrics and the request metrics
metricTypes An Array<String> containing one of histogram, summary or both
metricNames An object containing custom names for one or all metrics with keys of up, countOfGcs, durationOfGc, reclaimedInGc, httpRequestDurationPerPercentileInSeconds, httpRequestDurationInSeconds. Note that each value can be an Array<String> so httpRequestDurationInSeconds: ['deprecated_name', 'next_name'] which helps when migrating metrics without having gaps in their intake. In such a case deprecated_name would be removed after e.g. Recording Rules and dashboards have been adjusted to use next_name. During the transition each metric will be captured/recorded twice.
getLabelValues A function receiving req and res on each request. It has to return an object with keys of the configured labels above and the respective values
normalizePath A function called on each request to normalize the request's path. Invoked with (path: string, { request, response })
normalizeStatusCode A function called on each request to normalize the response's status code (e.g. to get 2xx, 5xx codes instead of detailed ones). Invoked with (statusCode: number, { request, response })
normalizeMethod A function called on each request to normalize the request's method (to e.g. hide it fully). Invoked with (method: string, { request, response })
skip A function called on each response giving the ability to skip a metric. The method receives req, res and labels and returns a boolean: skip(req, res, labels) => Boolean
detectKubernetes A boolean defaulting to false. Whenever true is passed and the process does not run within Kubernetes, any metric intake is skipped (good e.g. during testing).
disableGcMetrics A boolean defaulting to false indicating whether Garbage Collection metrics should be disabled and hence not collected.
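To make the normalization options concrete, here is an illustrative sketch. The option names match the table above, but the grouping logic inside each function is an example choice, not promster's default behavior:

```javascript
// Illustrative implementations for the normalization options above.
// The option keys are real; the logic inside each is just one possible choice.
const options = {
  // Collapse numeric IDs so /users/42 and /users/7 share one `path` label value.
  normalizePath: (path) => path.replace(/\/\d+/g, '/:id'),

  // Group detailed status codes into classes such as '2xx' or '5xx'.
  normalizeStatusCode: (statusCode) => `${Math.floor(statusCode / 100)}xx`,

  // Report all methods uppercased.
  normalizeMethod: (method) => method.toUpperCase(),

  // Skip health-check traffic entirely ('/healthz' is a hypothetical route).
  skip: (req, res, labels) => labels.path === '/healthz',
};

console.log(options.normalizePath('/users/42/orders/7')); // '/users/:id/orders/:id'
console.log(options.normalizeStatusCode(503)); // '5xx'
```

The same object can be spread into the options passed to createMiddleware or createPlugin.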

Customizing buckets and percentiles

Each Prometheus histogram or summary can be customized in regard to its bucket or percentile values. While @promster offers some defaults, these might not always match your needs. To customize the metrics you can pass a metricBuckets or metricPercentiles object whose key is the metric name you intend to customize and the value is the percentile or bucket value passed to the underlying Prometheus metric.

To illustrate this, we can use the @promster/express middleware:

const middleware = createMiddleware({
  app,
  options: {
    metricBuckets: {
      httpRequestContentLengthInBytes: [
        100000, 200000, 500000, 1000000, 1500000, 2000000, 3000000, 5000000,
        10000000,
      ],
      httpRequestDurationInSeconds: [
        0.05, 0.1, 0.3, 0.5, 0.8, 1, 1.5, 2, 3, 10,
      ],
    },
    metricPercentiles: {
      httpRequestDurationPerPercentileInSeconds: [0.5, 0.9, 0.95, 0.98, 0.99],
      httpResponseContentLengthInBytes: [
        100000, 200000, 500000, 1000000, 1500000, 2000000, 3000000, 5000000,
        10000000,
      ],
    },
  },
});

Custom label values

You can import the default normalizers via import { defaultNormalizers } from '@promster/express' and use normalizePath, normalizeStatusCode and normalizeMethod from your getLabelValues. A more involved example with getLabelValues could look like:

app.use(
  createMiddleware({
    app,
    options: {
      labels: ['proxied_to'],
      getLabelValues: (req, res) => {
        if (res.proxyTo === 'someProxyTarget')
          return {
            proxied_to: 'someProxyTarget',
            path: '/',
          };
        if (req.get('x-custom-header'))
          return {
            path: null,
            proxied_to: null,
          };
      },
    },
  }),
);

Note that the same configuration can be passed to @promster/hapi.

Request recorder

Both @promster/hapi and @promster/express expose the request recorder configured with the passed options and used to measure request timings. It allows easy tracking of other requests not handled through Express or Hapi -- for instance calls to an external API -- while using promster's already defined metric types (the httpRequestsHistogram etc).

// Note that a getter is exposed as the request recorder is only available after initialisation.
import { getRequestRecorder, timing } from '@promster/express';

const fetchSomeData = async () => {
  const recordRequest = getRequestRecorder();
  const requestTiming = timing.start();

  const data = await fetch('https://another-api.com').then((res) => res.json());

  recordRequest(requestTiming, {
    other: 'label-values',
  });

  return data;
};

Up/Down signals

@promster/express, @promster/hapi, @promster/fastify and @promster/marblejs automatically set the nodejs_up Prometheus gauge to 1 when the middleware or plugin is created. The @promster/apollo package does the same via the serverWillStart lifecycle hook. All packages also expose signalIsUp() and signalIsNotUp() for manual control. For instance, you can call signalIsNotUp() during graceful shutdown to set the gauge back to 0.

❯ Metrics Reference

Garbage Collection

Metric Description
nodejs_up An indication if the Node.js server is started: either 0 (not up) or 1 (up)
nodejs_gc_runs_total Total garbage collections count
nodejs_gc_pause_seconds_total Time spent in garbage collection
nodejs_gc_reclaimed_bytes_total Number of bytes reclaimed by garbage collection

With all garbage collection metrics a gc_type label with one of: unknown, scavenge, mark_sweep_compact, scavenge_and_mark_sweep_compact, incremental_marking, weak_phantom or all will be recorded.

HTTP Timings

Applies to Hapi, Express, Marble.js and Fastify integrations.

Metric Description
http_requests_total A Prometheus counter for the HTTP request total. The same information is also exposed on the following histogram and summary, which both have a _sum and _count. It is enabled by default for ease of use and can be disabled by configuring metricTypes: Array<String>.
http_request_duration_seconds A Prometheus histogram with request time buckets in seconds (defaults to [0.05, 0.1, 0.3, 0.5, 0.8, 1, 1.5, 2, 3, 5, 10]). A histogram exposes a _sum and _count which are a duplicate to the above counter metric. A histogram can be used to compute percentiles with a PromQL query using the histogram_quantile function. It is advised to create a Prometheus recording rule for performance.
http_request_duration_per_percentile_seconds A Prometheus summary with request time percentiles in seconds (defaults to [0.5, 0.9, 0.99]). This metric is disabled by default and can be enabled by passing metricTypes: ['httpRequestsSummary']. It exists for cases in which the above histogram is not sufficient, slow or recording rules can not be set up.
http_request_content_length_bytes A Prometheus histogram with the request content length in bytes (defaults to [100000, 200000, 500000, 1000000, 1500000, 2000000, 3000000, 5000000, 10000000]). This metric is disabled by default and can be enabled by passing metricTypes: ['httpContentLengthHistogram'].
http_response_content_length_bytes A Prometheus histogram with the response content length in bytes (defaults to [100000, 200000, 500000, 1000000, 1500000, 2000000, 3000000, 5000000, 10000000]). This metric is disabled by default and can be enabled by passing metricTypes: ['httpContentLengthHistogram'].

In addition, with each HTTP request metric the following default labels are measured: method, status_code and path. You can configure more labels (see Options).

You can also opt out of the Prometheus summary, histogram or counter by passing only the types you want, e.g. { metricTypes: ['httpRequestsSummary'] }, { metricTypes: ['httpRequestsHistogram'] } or { metricTypes: ['httpRequestsTotal'] }.

GraphQL Timings

Applies to the Apollo integration.

Metric Description
graphql_parse_duration_seconds A Prometheus histogram with the request parse duration in seconds
graphql_validation_duration_seconds A Prometheus histogram with the request validation duration in seconds
graphql_resolve_field_duration_seconds A Prometheus histogram with the field resolving duration in seconds
graphql_request_duration_seconds A Prometheus histogram with the request duration in seconds
graphql_errors_total A Prometheus counter for errors that occurred during parsing, validation or field resolving

In addition, with each GraphQL request metric the following default labels are measured: operation_name and field_name. For errors, a phase label is present.

❯ Example PromQL Queries

In the past we have struggled and learned a lot getting appropriate operational insights into our various Node.js based services. PromQL is powerful and a great tool but can have a steep learning curve. Here are a few queries per metric type to maybe flatten that curve. Remember that you may need to configure the metricTypes: Array<String> to e.g. metricTypes: ['httpRequestsTotal', 'httpRequestsSummary', 'httpRequestsHistogram'].

http_requests_total

HTTP requests averaged over the last 5 minutes

rate(http_requests_total[5m])

A recording rule for this query could be named http_requests:rate5m

HTTP requests averaged over the last 5 minutes by Kubernetes pod

sum by (kubernetes_pod_name) (rate(http_requests_total[5m]))

A recording rule for this query could be named kubernetes_pod_name:http_requests:rate5m

HTTP requests in the last hour

increase(http_requests_total[1h])

Average HTTP requests by status code over the last 5 minutes

sum by (status_code) (rate(http_requests_total[5m]))

A recording rule for this query could be named status_code:http_requests:rate5m

HTTP error rates as a percentage of the traffic averaged over the last 5 minutes

rate(http_requests_total{status_code=~"5.*"}[5m]) / rate(http_requests_total[5m])

A recording rule for this query could be named http_requests_per_status_code5xx:ratio_rate5m

http_request_duration_seconds

HTTP requests per proxy target

sum by (proxied_to) (increase(http_request_duration_seconds_count{proxied_to!=""}[2m]))

A recording rule for this query should be named something like proxied_to_:http_request_duration_seconds:increase2m.

99th percentile of HTTP request latency per proxy target

histogram_quantile(0.99, sum by (proxied_to,le) (rate(http_request_duration_seconds_bucket{proxied_to!=""}[5m])))

A recording rule for this query could be named proxied_to_le:http_request_duration_seconds_bucket:p99_rate5m

http_request_duration_per_percentile_seconds

Maximum 99th percentile of HTTP request latency by Kubernetes pod

max(http_request_duration_per_percentile_seconds{quantile="0.99"}) by (kubernetes_pod_name)

nodejs_eventloop_lag_seconds

Event loop lag averaged over the last 5 minutes by release

sum by (release) (rate(nodejs_eventloop_lag_seconds[5m]))

network_concurrent_connections_count

Concurrent network connections

sum(rate(network_concurrent_connections_count[5m]))

A recording rule for this query could be named network_concurrent_connections:rate5m

nodejs_gc_reclaimed_bytes_total

Bytes reclaimed in garbage collection by type

sum by (gc_type) (rate(nodejs_gc_reclaimed_bytes_total[5m]))

nodejs_gc_pause_seconds_total

Time spent in garbage collection by type

sum by (gc_type) (rate(nodejs_gc_pause_seconds_total[5m]))
