Once the database is running, you need to monitor the system to ensure reliable uptime and performance. Variations in usage, workload, or the operational environment can affect the dynamics of the data application, which may require corresponding adjustments to the schema, procedures, or hardware configuration. VoltDB provides system procedures (such as @Statistics) and the web-based Volt Management Center to help monitor current performance. But to provide persistent, historical intelligence about application performance, it is best to use a dedicated metrics data store, such as Prometheus.
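For example, you can take a quick sample of current memory usage by calling the @Statistics system procedure through sqlcmd (assuming sqlcmd can reach one of the cluster nodes on the default client port):
$ sqlcmd --query="exec @Statistics MEMORY 0;"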
Prometheus is a metrics monitoring and alerting system that provides ongoing data collection and persistent storage for applications and other resources. By providing an open source industry standard for collecting and storing metrics, Prometheus allows you to:
Offload monitoring from the database platform itself
Combine metrics from VoltDB with other applications within your business ecosystem
Query and visualize historical information about your database activity and performance (through tools such as Grafana)
Section 6.1, “Using Prometheus to Monitor VoltDB” explains how to configure your VoltDB database so the information you need is gathered and made available through Prometheus and compatible graphic consoles such as Grafana.
To monitor VoltDB with Prometheus on Kubernetes, you enable per pod metrics, where each node of the cluster reports its own set of server-specific information. The servers make this data available in Prometheus format through an HTTP endpoint (/metrics) on the metrics port (which defaults to 11781). You can control the port number and other characteristics of the metrics system through Helm properties.
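For example, you can verify that a pod is publishing metrics by forwarding its metrics port and requesting the endpoint directly (the pod name below is illustrative; substitute one of your own cluster's pod names):
$ kubectl port-forward pod/mydb-voltdb-cluster-0 11781:11781 &
$ curl http://localhost:11781/metrics | head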
To enable Prometheus metrics, set the cluster.config.deployment.metrics.enabled property to true. You can also set the cluster.serviceSpec.perpod.metrics.enabled property to true, which creates a Kubernetes metrics service for each pod. Prometheus uses these metrics services to identify the Volt pods as targets for scraping. For example, the following command enables per pod metrics with default settings while initializing the mydb database cluster. It also sets the service type to ClusterIP:
$ helm install mydb voltdb/voltdb \
--set-file cluster.config.licenseXMLFile=license.xml \
--set cluster.clusterSpec.replicas=5 \
--set cluster.config.deployment.metrics.enabled=true \
--set cluster.serviceSpec.perpod.metrics.enabled=true \
--set cluster.serviceSpec.service.metrics.type=ClusterIP
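If you prefer, the same settings can be collected in a YAML properties file and passed to Helm with the -f flag rather than individual --set arguments. For example, a file such as the following (the file name metrics-values.yaml is arbitrary) is equivalent to the metrics-related --set arguments above:
# metrics-values.yaml: equivalent of the --set metrics arguments
cluster:
  config:
    deployment:
      metrics:
        enabled: true
  serviceSpec:
    perpod:
      metrics:
        enabled: true
    service:
      metrics:
        type: ClusterIP
$ helm install mydb voltdb/voltdb \
  --set-file cluster.config.licenseXMLFile=license.xml \
  --set cluster.clusterSpec.replicas=5 \
  -f metrics-values.yaml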
Once metrics are enabled, each Volt server reports its own information through the Prometheus endpoint on the metrics port. If you enable the per pod service, connection to the Prometheus server is handled automatically. If the service is not enabled or Prometheus is not configured to auto-detect targets, you will need to edit the Prometheus configuration to add the cluster nodes to the list of scraping targets.
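If you do need to list the targets yourself, a minimal static scrape configuration for Prometheus might look like the following sketch. The job name and target addresses are placeholders you must replace with the actual DNS names or IP addresses of your cluster's pods; the default metrics path, /metrics, needs no extra configuration:
scrape_configs:
  - job_name: voltdb
    static_configs:
      - targets:
          # replace with your pods' DNS names or IP addresses
          - mydb-voltdb-cluster-0.mydb-voltdb-cluster:11781
          - mydb-voltdb-cluster-1.mydb-voltdb-cluster:11781
          - mydb-voltdb-cluster-2.mydb-voltdb-cluster:11781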
Finally, if the database has security enabled, you will also need to configure Prometheus with the appropriate authentication information based on the truststore and password for the cluster. See the Prometheus documentation for more information.
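The details depend on your cluster's security settings, but as a rough sketch, a scrape job for a cluster using TLS and username/password authentication could add Prometheus's standard scheme, tls_config, and basic_auth settings. The certificate path and credentials shown here are placeholders:
scrape_configs:
  - job_name: voltdb
    scheme: https
    tls_config:
      # placeholder: CA certificate exported from the cluster truststore
      ca_file: /etc/prometheus/voltdb-ca.pem
    basic_auth:
      # placeholder credentials for a database account with monitoring access
      username: monitor
      password: changeme
    static_configs:
      - targets:
          - mydb-voltdb-cluster-0.mydb-voltdb-cluster:11781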
Once Prometheus is scraping the Volt metrics, you can use tools such as Grafana to combine, analyze, and present the information in meaningful ways. There are example Grafana dashboards in the Volt GitHub repository (https://github.com/VoltDB/volt-monitoring) that demonstrate some of the visualizations that are possible.