The following properties affect the size and structure of the Kubernetes cluster that gets started, as well as the startup attributes of the VoltDB cluster running on those pods.
Table B.2. Options Starting with cluster.clusterSpec...
Parameter | Description | Default |
---|---|---|
.additionalAnnotations | Additional custom pod annotations | { } |
.additionalLabels | Additional custom pod labels | { } |
.additionalStartArgs | Additional arguments for the 'voltdb start' command issued in the pod container | [ ] |
.additionalVolumeMounts | Pod volumes to mount into the container's filesystem; cannot be modified once set | [ ] |
.additionalVolumes | Additional list of volumes that can be mounted by node containers | [ ] |
.affinity | Kubernetes node affinity | { } |
.allowRestartDuringUpdate | Allow VoltDB cluster restarts if necessary to apply user-requested configuration changes. May include an automatic save and restore of the database. | false |
.adminOperationTimeout | Timeout, in seconds, for the activity check performed during admin actions such as pause/stop/shutdown. Values below 120, including the default of 0, are treated as 120 seconds. | 0 |
.autoScaling.enabled | Enable/disable auto-scaling. Also used to reset a failed state by disable/enable sequence | false |
.autoScaling.maxReplicas | Maximum scale up limit. Effective value will be rounded up to nearest multiple of kfactor+1 | 16 |
.autoScaling.maxRetries | Maximum number of times a failed elastic operation will be retried. 0 means no retries | 0 |
.autoScaling.metrics.cpu.scaleDown | The threshold that the value of the CPU metric must cross downwards for a cluster scale down | 0 |
.autoScaling.metrics.cpu.scaleUp | The threshold that the value of the CPU metric must cross upwards for a cluster scale up | 0 |
.autoScaling.metrics.cpu | The 'CPU percent usage' metric: the average value of the PERCENT_USED values reported in the CPU statistics | { } |
.autoScaling.metrics.idletime.scaleDown | The threshold that the value of the idle time metric must cross upwards for a cluster scale down | 0 |
.autoScaling.metrics.idletime.scaleUp | The threshold that the value of the idle time metric must cross downwards for a cluster scale up | 0 |
.autoScaling.metrics.idletime | The 'idle time' metric: the average value of the PERCENT values reported in the IDLETIME statistics. Note: lower values require scale up, higher values require scale down | { } |
.autoScaling.metrics.rss.scaleDown | The threshold that the value of the RSS metric must cross downwards for a cluster scale down | 0 |
.autoScaling.metrics.rss.scaleUp | The threshold that the value of the RSS metric must cross upwards for a cluster scale up | 0 |
.autoScaling.metrics.rss | The 'resident set size' metric: the average value of the RSS values reported in the MEMORY statistics | { } |
.autoScaling.metrics.tps.scaleDown | The threshold that the value of the TPS metric must cross downwards for a cluster scale down | 0 |
.autoScaling.metrics.tps.scaleUp | The threshold that the value of the TPS metric must cross upwards for a cluster scale up | 0 |
.autoScaling.metrics.tps | The 'transactions per second' metric: the average value of the TPS values reported in the LATENCY statistics | { } |
.autoScaling.metrics | Lists the thresholds for the monitored metrics, indexed by metric name: cpu, idletime, rss, tps | { } |
.autoScaling.minReplicas | Minimum scale down limit. Effective value will be rounded up to nearest multiple of kfactor+1 | kfactor + 1 |
.autoScaling.notificationInterval | The duration, in seconds, between notification events reporting that an elastic operation is ongoing. 0 means no notification | 0 |
.autoScaling.retryTimeout | Defines the duration, in seconds, to wait for a retried operation to start. If the timeout expires and the operation didn't start, auto-scaling will be stopped. | 60 |
.autoScaling.stabilizationWindow.scaleDown | The duration, in seconds, that a 'scaleDown threshold crossed' condition must remain true in order to trigger an elastic remove operation | 300 |
.autoScaling.stabilizationWindow.scaleUp | The duration, in seconds, that a 'scaleUp threshold crossed' condition must remain true in order to trigger an elastic add operation | 300 |
.clusterInit.classesConfigMapRefName | Name of pre-created Kubernetes configmap containing stored procedure classes | "" |
.clusterInit.initSecretRefName | Name of pre-created Kubernetes secret containing init configuration, using key 'deployment.xml'. When set, any other init configuration is ignored. Deprecated. | "" |
.clusterInit.licenseSecretRefName | Name of pre-created Kubernetes secret containing Volt license, using key 'license.xml' | "" |
.clusterInit.logConfigMapName | Name of pre-created Kubernetes configmap containing custom logging configuration, using key 'log4j.xml' | "" |
.clusterInit.schemaConfigMapRefName | Name of pre-created Kubernetes configmap containing schema configuration | "" |
.customEnv | Key-value map of additional environment variables to set in all VoltDB node containers | { } |
.disableFinalizers | Disables Helm finalizers to permit cluster deletion. WARNING: many resources will require manual cleanup. | false |
.deletePVC | Delete and cleanup generated PVCs when VoltDBCluster is deleted, requires finalizers to be enabled (on by default) | false |
.dr.excludeClusters | User-specified list of clusters not part of XDCR | [ ] |
.dr.forceDrop | Indicates whether to drop the cluster from XDCR without waiting for the producer to drain. | false |
.elasticRemove.checkInterval | Time in seconds to wait between checks of the status of an ongoing elastic remove operation. A value of 10 seconds or more is recommended to let other workflows be executed by the operator | 10 |
.elasticRemove.ignore | Can be set to disabled_export to force ignoring of disabled exports; otherwise, elastic remove waits for all exports to drain before removing the nodes | "" |
.elasticRemove.restart | Requests the restart of an elastic remove operation currently in the FAILED state. Value must be nonzero and also different from the last value used for restart | 0 |
.elasticRemove.shutdownDelay | Specifies the number of minutes to wait before shutting down the nodes being removed. Must be greater than 0 if topics are being used, otherwise the elastic remove fails | 0 |
.elasticRemove.update | Requests an update of the parameters of an ongoing elastic remove operation, e.g. ignore or shutdownDelay. Value must be nonzero and also different from the last value used for update | 0 |
.elasticReset | Requests a reset of the elastic remove information in the cluster status. Reserved for VoltDB support. Value must be nonzero and also different from the last value used for reset | 0 |
.enableInServiceUpgrade | Enable rolling upgrade of software version rather than requiring full cluster restart (V13.1.0 or later). | false |
.env.VOLTDB_GC_OPTS | VoltDB cluster java runtime garbage collector options (VOLTDB_GC_OPTS) | "" |
.env.VOLTDB_HEAPCOMMIT | Commit VoltDB cluster heap at startup, true/false (VOLTDB_HEAPCOMMIT) | "" |
.env.VOLTDB_HEAPMAX | VoltDB cluster heap size, integer number of megabytes (VOLTDB_HEAPMAX) | "" |
.env.VOLTDB_OPTS | VoltDB cluster additional java runtime options (VOLTDB_OPTS) | "" |
.env.VOLTDB_REGION_LABEL_NAME | Override for region label on node | "" |
.env.VOLTDB_ZONE_LABEL_NAME | Override for zone label on node | "" |
.forceStopNode | Enable or disable force stop node (V12.2 or later) | false |
.image.pullPolicy | Image pull policy | Always |
.image.registry | Image registry | docker.io |
.image.repository | Image repository | voltdb/voltdb-enterprise |
.image.tag | Image tag | Same as global.voltdbVersion |
.inServiceUpgrade.delay | FOR TESTING PURPOSES ONLY: Specifies the delay, in seconds, before upgrading pods to the new image. | false |
.initForce | Always init --force on VoltDB node start/restart. WARNING: This will destroy VoltDB data on PVCs except snapshots. | false |
.livenessProbe.enabled | Enable/disable livenessProbe; see Kubernetes documentation for probe settings | true |
.maintenanceMode | VoltDB Cluster maintenance mode (pause all nodes) | false |
.maxPodUnavailable | Maximum pods allowed to be unavailable in Pod Disruption Budget | kfactor |
.nodeSelector | Node labels for pod assignment | { } |
.persistentVolume.hostpath.enabled | Use a HostPath volume for local storage of VoltDB. When enabled, PVC storage classes are not used; note that hostPath node storage is often ephemeral. | false |
.persistentVolume.hostpath.path | HostPath mount point. | "/data/voltdb/" |
.persistentVolume.size | Persistent Volume size per pod (VoltDB Node) | 32Gi |
.persistentVolume.storageClassName | Storage Class name to use, otherwise use default | "" |
.podSecurityContext | Pod security context defined by Kubernetes | See file values.yaml |
.podTerminationGracePeriodSeconds | Duration in seconds the pod needs to terminate gracefully. | 30 |
.priorityClassName | Pod priority defined by an existing PriorityClass | "" |
.readinessProbe.enabled | Enable/disable readinessProbe; see Kubernetes documentation for probe settings | true |
.replicas | Pod (VoltDB Node) replica count; scaling to 0 will shutdown the cluster gracefully | 3 |
.resources | CPU/Memory resource requests/limits | { } |
.securityContext | Container security context defined by Kubernetes | See file values.yaml |
.ssl.certificateFile | PEM-encoded certificate chain used by the operator to connect to VoltDB when TLS/SSL is enabled | "" |
.ssl.insecure | If true, skip VoltDB certificate verification by the operator when TLS/SSL is enabled | false |
.startupProbe.enabled | Enable/disable startupProbe; see Kubernetes documentation for probe settings | true |
.stoppedNodes | User-specified list of stopped VoltDB nodes, by pod ordinal (0, 1, ...) | [ ] |
.storageConfigs | Optional storage configs for provisioning additional persistent volume claims automatically | [ ] |
.takeSnapshotOnShutdown | Takes a snapshot when cluster is shut down by scaling to 0. One of: NoCommandLogging, Always, Never. NoCommandLogging means 'only if command logging is disabled'. | "NoCommandLogging" |
.tolerations | Pod tolerations for node assignment (see Kubernetes documentation) | [ ] |
.topologySpreadConstraints | Describes how a group of pods ought to spread across topology (see Kubernetes documentation) | [ ] |
.useCloudNativePlacementGroup | Enable or disable cloud native placement group in VoltDB | false |
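
The properties above are set under `cluster.clusterSpec` in a Helm values file (or via `--set` flags). The following snippet is a minimal sketch for orientation only; the specific values chosen here, and the release and chart names in the command below, are illustrative assumptions, not recommendations:

```yaml
# values-sketch.yaml -- illustrative only; property paths from Table B.2
cluster:
  clusterSpec:
    replicas: 5                    # pod (VoltDB node) count
    maxPodUnavailable: 1           # Pod Disruption Budget limit
    persistentVolume:
      size: 64Gi
      storageClassName: ""         # "" selects the default storage class
    takeSnapshotOnShutdown: "Always"
    autoScaling:
      enabled: true
      minReplicas: 3               # rounded up to a multiple of kfactor+1
      maxReplicas: 9
      metrics:
        cpu:
          scaleUp: 70              # average PERCENT_USED that triggers scale up
          scaleDown: 30            # average PERCENT_USED that triggers scale down
```

Such a file would typically be applied with a command along the lines of `helm upgrade --install mydb voltdb/voltdb --values values-sketch.yaml`, where `mydb` and the chart reference are placeholders for your own release name and chart.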