The two major differences between creating a VoltDB database cluster in Kubernetes and starting a cluster using traditional servers are:
With Helm there is a single command (install) that performs both the initialization and the startup of the database.
You specify the database configuration with properties rather than as an XML file.
In fact, all of the configuration — including the configuration of the virtual servers (or pods), the server processes, and the database — is accomplished using Helm properties. The following sections provide examples of some of the most common configuration settings when using Kubernetes. Appendix A, VoltDB Helm Properties gives a full list of all of the properties that are available for customization.
Many of the configuration options that are performed through hardware configuration, system commands or environment variables on traditional server platforms are now available through Helm properties. Most of these settings are listed in Section A.3, “Kubernetes Cluster Startup Options”.
Hardware settings, such as the number of processors and memory size, are defined as Kubernetes image resources through the Helm cluster.clusterSpec.resources property. Under resources, you can specify any of the YAML properties Kubernetes expects when configuring the containers within a pod. For example:
```yaml
cluster:
  clusterSpec:
    resources:
      requests:
        cpu: 500m
        memory: 1000Mi
      limits:
        cpu: 500m
        memory: 1000Mi
```
System settings that control process limits, which are normally defined through environment variables, can be set with the cluster.clusterSpec.env properties. For example, the following YAML increases the Java maximum heap size and disables the collection of JVM statistics:
```yaml
cluster:
  clusterSpec:
    env:
      VOLTDB_HEAPMAX: 3072
      VOLTDB_OPTS: -XX:+PerfDisableSharedMem
```
One system setting that is not configurable through Kubernetes or Helm is whether the base platform has Transparent Huge Pages (THP) enabled or not. This depends on the memory management settings of the actual hardware on which Kubernetes is hosted. Having THP enabled can cause problems with memory-intensive applications like VoltDB, and it is strongly recommended that THP be disabled before starting your cluster. (See the section on Transparent Huge Pages in the VoltDB Administrator's Guide for an explanation of why this is an issue.)
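Where you do control the underlying hosts, THP is typically disabled through the kernel's sysfs interface. The following is a sketch only; the exact paths and the mechanism for making the change persistent vary by Linux distribution:

```shell
# Disable THP at runtime (run as root on each Kubernetes worker node).
# These settings do not survive a reboot; add them to an init script
# or systemd unit to make them permanent.
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
```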
If you are not managing the Kubernetes environment yourself or cannot get your provider to modify their environment, you will need to override VoltDB's warning about THP on startup by setting the cluster.clusterSpec.additionalStartArgs property to include the VoltDB start argument that disables the check for THP. For example:
```yaml
cluster:
  clusterSpec:
    additionalStartArgs:
    - "--ignore=thp"
```
In addition to configuring the environment VoltDB runs in, there are many different characteristics of the database itself you can control. These include mapping network interfaces and ports, selecting and configuring database features, and identifying the database schema, class files, and security settings.
The network settings are defined through the cluster.serviceSpec properties, where you can choose the individual ports, choose whether to expose them, and select the networking service type (cluster.serviceSpec.type). For example, the following YAML file disables exposure of the admin port and assigns the externalized client port to 31313:
```yaml
cluster:
  serviceSpec:
    type: NodePort
    adminPortEnabled: false
    clientPortEnabled: true
    clientNodePort: 31313
```
The majority of the database configuration options for VoltDB are traditionally defined in an XML configuration file. When using Kubernetes, these options are declared using YAML and Helm properties.
In general, the Helm properties follow the same structure as the XML configuration, beginning with "cluster.config". So, for example, where the number of sites per host is defined in XML as:
```xml
<deployment>
   <cluster sitesperhost="{n}"/>
</deployment>
```
It is defined in Kubernetes as:
```yaml
cluster:
  config:
    deployment:
      cluster:
        sitesperhost: {n}
```
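Putting this together, a minimal database configuration might set the sites per host along with the K-safety factor. The kfactor property shown here is an assumption based on the same XML-to-YAML mapping; verify it against Appendix A before relying on it:

```yaml
cluster:
  config:
    deployment:
      cluster:
        sitesperhost: 8
        kfactor: 1
```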
The following sections give examples of defining common database configuration options using both XML and YAML. See Section A.6, “VoltDB Database Configuration Options” for a complete list of the Helm properties available for configuring the database.
Command logging provides durability of the database content across failures. You can control the level of durability, as well as the length of time required to recover the database, by configuring the type of command logging and the size of the logs themselves. In Kubernetes this is done with the cluster.config.deployment.commandlog properties. The following examples show the equivalent configuration in both XML and YAML:
XML Configuration File:

```xml
<commandlog enabled="true"
            synchronous="true"
            logsize="3072">
   <frequency time="300"
              transactions="1000"/>
</commandlog>
```

YAML Configuration File:

```yaml
cluster:
  config:
    deployment:
      commandlog:
        enabled: true
        synchronous: true
        logsize: 3072
        frequency:
          time: 300
          transactions: 1000
```
Export simplifies the integration of the VoltDB database with external databases and systems. You use the export configuration to define external "targets" the database can write to. In Kubernetes you define export targets using the cluster.config.deployment.export.configurations property. Note that the configurations property can accept multiple configuration definitions. In YAML, you specify a list by prefixing each list element with a hyphen, even if there is only one element. The following examples show the equivalent configuration in both XML and YAML for configuring a file export connector:
XML Configuration File:

```xml
<export>
   <configuration target="eventlog" type="file">
      <property name="type">csv</property>
      <property name="nonce">eventlog</property>
   </configuration>
</export>
```

YAML Configuration File:

```yaml
cluster:
  config:
    deployment:
      export:
        configurations:
        - target: eventlog
          type: file
          properties:
            type: csv
            nonce: eventlog
```
There are a number of options for securing a VoltDB database, including basic usernames and passwords as well as industry network solutions such as Kerberos and SSL. Basic security is enabled in the configuration with the cluster.config.deployment.security.enabled property. You must also use the cluster.config.deployment.users property and its children to define the actual usernames, passwords, and assigned roles. Again, the users property expects a list of sub-elements, so you must prefix each set of properties with a hyphen.
Finally, if you do enable basic security, you must also tell the VoltDB operator which account to use when accessing the database. To do that, you define the cluster.config.auth properties, as shown below, which must specify an account with the built-in administrator role. The following examples show the equivalent configurations in both XML and YAML, including the assignment of an account to the VoltDB Operator:
XML Configuration File:

```xml
<security enabled="true"/>
<users>
   <user name="admin"
         password="superman"
         roles="administrator"/>
   <user name="mitty"
         password="thurber"
         roles="user"/>
</users>
```

YAML Configuration File:

```yaml
cluster:
  config:
    deployment:
      security:
        enabled: true
      users:
      - name: admin
        password: superman
        roles: administrator
      - name: mitty
        password: thurber
        roles: user
    auth:
      username: admin
      password: superman
```
Another important aspect of security is securing and authenticating the ports used to access the database. The most common way to do this is by enabling TLS/SSL to encrypt data and authenticate the servers using user-created certificates. The process for creating the private keystore and truststore in Java is described in the section on "Configuring TLS/SSL on the VoltDB Server" in the Using VoltDB guide. This process is the same whether you are running the cluster directly on servers or in Kubernetes.
The one difference when enabling TLS/SSL for the cluster in Kubernetes is that you must also configure the operator with an appropriate truststore, in PEM format. The easiest way to do this is to configure the operator using the same truststore and password you use for the cluster itself. First, you will need to convert the truststore to PEM format using the Java keytool:
```shell
keytool -export \
    -alias my.key -rfc \
    -file mycert.pem \
    -keystore mykey.jks \
    -storepass topsecret \
    -keypass topsecret
```
Once you have your keystore, your truststore, and the truststore in PEM format, you can configure the cluster and operator with the appropriate SSL properties, using one of two methods:
Configuring TLS/SSL with YAML properties
Using Kubernetes secrets to store and reuse TLS/SSL information
The following sections describe the two methods for configuring encryption. In addition, TLS/SSL certificates have an expiration date. It is important that you replace the certificate before it expires (if cluster.clusterSpec.ssl.insecure is false, which is the default). If not, the operator will lose the ability to communicate with the cluster pods. See Section 5.3, “Updating TLS Security Certificates” for instructions on updating the TLS/SSL certificates in Kubernetes.
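To check how long the current certificate remains valid, you can inspect the keystore with the Java keytool. The file name and password here follow the examples in this chapter; substitute your own:

```shell
# List the keystore contents, including each certificate's
# "Valid from ... until ..." period
keytool -list -v -keystore mykey.jks -storepass topsecret
```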
The following examples show the equivalent configurations for TLS/SSL in both XML and YAML (minus the actual truststore and keystore files).
XML Configuration File:

```xml
<ssl enabled="true"
     external="true"
     internal="true">
   <keystore path="mykey.jks"
             password="topsecret"/>
   <truststore path="mytrust.jks"
               password="topsecret"/>
</ssl>
```

YAML Configuration File:

```yaml
cluster:
  config:
    deployment:
      ssl:
        enabled: true
        external: true
        internal: true
        keystore:
          password: topsecret
        truststore:
          password: topsecret
  clusterSpec:
    ssl:
      insecure: false
```
Using the preceding YAML file (calling it ssl.yaml), we can complete the SSL configuration by specifying the truststore and keystore files on the Helm command line with the --set-file argument:
```shell
helm install mydb voltdb/voltdb \
  --values myconfig.yaml \
  --values ssl.yaml \
  --set-file cluster.config.deployment.ssl.keystore.file=mykey.jks \
  --set-file cluster.config.deployment.ssl.truststore.file=mytrust.jks \
  --set-file cluster.clusterSpec.ssl.certificateFile=mycert.pem
```
Three important notes concerning TLS/SSL configuration:
If you enable SSL for the cluster's external interface and ports and you also want to enable Prometheus metrics, you must provide an appropriate SSL truststore and password for the metrics agent. See Section 6.1, “Using Prometheus to Monitor VoltDB” for more information on configuring the Prometheus agent in Kubernetes.
If you do not require validation of the TLS certificate by the operator, you can avoid setting the truststore PEM for the operator and, instead, set the cluster.clusterSpec.ssl.insecure property to true.
If you enable SSL for the cluster, you must repeat the specification of the truststore and keystore files every time you update the configuration. Using the --reuse-values argument on the helm upgrade command is not sufficient.
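For example, an upgrade that changes an unrelated property on an SSL-enabled cluster must still repeat the file arguments. The replica change here is a hypothetical unrelated update; the file names follow the earlier examples:

```shell
helm upgrade mydb voltdb/voltdb --reuse-values \
  --set cluster.clusterSpec.replicas=7 \
  --set-file cluster.config.deployment.ssl.keystore.file=mykey.jks \
  --set-file cluster.config.deployment.ssl.truststore.file=mytrust.jks \
  --set-file cluster.clusterSpec.ssl.certificateFile=mycert.pem
```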
An alternative method is to store the keystore, truststore, and passwords in a Kubernetes secret. Secrets are a standard feature of Kubernetes that lets you store sensitive information as key-value pairs in a protected space. Three advantages of using a secret are:
You do not have to enter sensitive TLS/SSL information in plain text when configuring or updating your database.
The secret is used automatically for subsequent updates; you do not have to repeatedly specify the TLS/SSL files when updating the database configuration.
You can reuse the same secret for multiple database instances and services.
To use a Kubernetes secret to store the TLS/SSL information for your database, you must first create the necessary files as described in Section 2.2.2.4, “Configuring TLS/SSL”. Next you create your Kubernetes secret using the kubectl create secret command, specifying the key names and corresponding artifacts as arguments. For example:
```shell
$ kubectl create secret generic my-ssl-creds \
    --from-file=keystore_data=mykey.jks \
    --from-file=truststore_data=mytrust.jks \
    --from-file=certificate=mycert.pem \
    --from-literal=keystore_password=topsecret \
    --from-literal=truststore_password=topsecret
```
It is critical that you use the key names keystore_data, truststore_data, keystore_password, truststore_password, and certificate for the keystore, truststore, corresponding passwords, and PEM file, respectively. If not, the Volt Operator will not be able to find them. Also, the secret must be in the same Kubernetes namespace as the Helm release you are configuring.
Once you create the secret, you can use it to configure your database by not setting any of the standard SSL properties, such as the cluster.config.deployment.ssl... properties or cluster.clusterSpec.ssl.certificateFile. Instead, set the property cluster.config.deployment.ssl.sslSecret.certSecretName. Using the secret created in the preceding example, the configuration of your database will look something like this:
```shell
helm install mydb voltdb/voltdb \
  --values myconfig.yaml \
  --set cluster.config.deployment.ssl.sslSecret.certSecretName=my-ssl-creds
```
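Because the operator retrieves the TLS/SSL artifacts from the secret, later upgrades no longer need to respecify the files. For example, a subsequent scaling change (the replica count here is hypothetical) is simply:

```shell
helm upgrade mydb voltdb/voltdb --reuse-values \
  --set cluster.clusterSpec.replicas=7
```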
Another alternative for maintaining the TLS/SSL information that your cluster needs is to use the Kubernetes cert-manager (cert-manager.io). The cert-manager is an add-on for Kubernetes that helps you create and maintain certificates and other private information in Kubernetes. If you wish to use cert-manager for self-signed certificates, you not only use it to store the certificate and truststore, you create them with cert-manager as well. (For more detailed information concerning cert-manager, see the cert-manager documentation.)
The basic steps for storing self-signed TLS/SSL credentials in cert-manager are:
Create a Kubernetes secret with the TLS password you wish to use.
Create an issuer resource in Kubernetes that will generate and authenticate the certificate. You only need to do this once for the namespace and multiple certificate requests can use the same issuer.
Create a request for the issuer to generate the actual TLS/SSL certificate and store it in a Kubernetes secret.
Specify the resulting certificate secret in the VoltDB configuration and start your cluster.
You create the Kubernetes secret containing the password using the kubectl create secret command. For example, the following command creates a secret (my-ssl-password) with the password "topsecret". The password must be assigned to the label password:
```shell
$ kubectl create secret generic my-ssl-password \
    --from-literal=password=topsecret
```
You create the cert-manager issuer and the certificate request using YAML properties. The easiest way to do this is by typing the property declarations into a YAML file. For example, the following two YAML files create a cert-manager issuer service and request a certificate.
create-issuer.yaml
```yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-issuer
  namespace: mydb
spec:
  selfSigned: {}
```
request-cert.yaml
```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-ssl-certificate
  namespace: mydb
spec:
  commonName: voltdb.com
  duration: 8766h
  secretName: my-ssl-creds
  keystores:
    jks:
      create: true
      passwordSecretRef:
        name: my-ssl-password
        key: password
  issuerRef:
    name: selfsigned-issuer
    kind: Issuer
  privateKey:
    algorithm: RSA
    encoding: PKCS1
    size: 2048
  usages:
  - server auth
```
Four key points to note about the certificate request are:
The issuer must be in the same namespace as the database that uses the certificate.
The certificate request references the secret you created containing the password (my-ssl-password in the example).
As mentioned before, the key in the password secret must be "password".
You specify the duration of the certificate in hours. In this example, 8766 hours, or one year.
Once you create the YAML files, you can create the issuer and request the certificate:
```shell
$ kubectl apply -f create-issuer.yaml   # Do only once
$ kubectl apply -f request-cert.yaml
```
Finally, in your database configuration, you point to the two secrets, the one containing the password and the one created by the certificate request (in this case, my-ssl-password and my-ssl-creds), the same way you would for a manually created secret:
```shell
helm install mydb voltdb/voltdb \
  --values myconfig.yaml \
  --set cluster.config.deployment.ssl.sslSecret.passwordSecretName=my-ssl-password \
  --set cluster.config.deployment.ssl.sslSecret.certSecretName=my-ssl-creds
```
VoltDB uses Log4J for logging messages while the database is running. The chapter on "Logging and Analyzing Activity in a VoltDB Database" in the VoltDB Administrator's Guide describes some of the ways you can customize the logging to meet your needs, including changing the logging level or adding appenders. Logging is also available in the Kubernetes environment and is configured using a Log4J XML configuration file. However, the default configuration, and how you set the configuration when starting or updating the database in Kubernetes, is different from what is described in the Administrator's Guide.
Before you attempt to customize the logging, you should familiarize yourself with the default settings. The easiest way to do this is to extract a copy of the default configuration from the Docker image you will be using. The following commands create a Docker container without actually starting the image, extract the configuration file to a local file (k8s-log4j.xml in the example), then delete the container.
```shell
$ ID=$(docker create voltdb/voltdb-enterprise)
$ docker cp ${ID}:/opt/voltdb/tools/kubernetes/server-log4j.xml k8s-log4j.xml
$ docker rm $ID
```
Once you have extracted the default configuration and made the changes you want, you are ready to specify your new configuration on the Helm command that starts the database. You do this by setting the cluster.config.log4jcfgFile property. For example:
```shell
$ helm install mydb voltdb/voltdb \
    --values myconfig.yaml \
    --set cluster.clusterSpec.replicas=5 \
    --set-file cluster.config.licenseXMLFile=license.xml \
    --set-file cluster.config.log4jcfgFile=my-log4j.xml
```
Similarly, you can update the logging configuration on a running cluster by using the --set-file argument on the Helm upgrade command:
```shell
$ helm upgrade mydb voltdb/voltdb --reuse-values \
    --set-file cluster.config.log4jcfgFile=my-log4j.xml
```