1. Release V10.2.17 (June 6, 2023)

1.1. Security updates

Various packages within Volt Active Data have been updated to eliminate known security vulnerabilities, including: CVE-2021-3712, CVE-2021-23840, CVE-2022-0778, CVE-2022-1292, CVE-2022-1304, CVE-2022-2068, CVE-2022-41723, and CVE-2023-29491.
1.2. Additional improvements

The following limitations in previous versions have been resolved:

- Previously, if a client JAR file contained additional unexpected entries, the sqlcmd utility would stall attempting to load information from the JAR. The utility now ignores unexpected entries, resolving this issue.
- There was an edge case where a voltadmin dr reset command could result in a deadlock, causing the database to hang. The issue has been resolved.
2. Release V10.2.16 (February 18, 2023)

2.1. Security updates

Various packages within Volt Active Data have been updated to eliminate known security vulnerabilities, including: CVE-2022-41721 and CVE-2022-41881.
2.2. Additional improvements

The following limitations in previous versions have been resolved:

- In previous releases, frequent client connection attempts could result in excessive messages in the log file, although the messages were meant to be limited to one every 60 seconds. This issue has been resolved and the rapidly repeated messages are now muted.
- There was a race condition where a problem pausing export connections during a schema or configuration change could result in a deadlock. This issue has been resolved.
- Under normal conditions, after elastically shrinking the cluster (that is, removing nodes), the cluster saves a snapshot as a final step. If the snapshot accidentally started before the nodes were completely removed, later attempts to shrink the cluster could fail, reporting that an elastic operation was already in progress. This issue has been resolved.
- In certain cases when attempting to shut down a cluster in Kubernetes, if the nodes took too long to stop, the shutdown could fail. This issue has been resolved.
3. Release V10.2.15 (November 15, 2022)

3.1. New Prometheus metrics added

Information related to the configuration and status of the cluster, also available from the @SystemInformation system procedure, is now available as metrics shared through the Prometheus agent. See the sections on integrating with Prometheus in the Volt Administrator's Guide and Volt Kubernetes Administrator's Guide for more information about using Volt Active Data with Prometheus.
3.2. Log4J replaced by reload4J

VoltDB does not use any of the components implicated in the published CVEs related to Log4J. However, to avoid any confusion, VoltDB has replaced the Log4J library with reload4J, a drop-in replacement that replicates the Log4J namespace and functionality but eliminates all known security vulnerabilities.
3.3. Security updates

Various packages within Volt Active Data have been updated to eliminate known security vulnerabilities, including: CVE-2020-26160, CVE-2021-28165, CVE-2021-38561, CVE-2022-1996, CVE-2022-21698, CVE-2022-3171, and CVE-2022-42003.
3.4. Additional improvements

The following limitations in previous versions have been resolved:

- The HTTP export connector has been improved to cancel all pending export messages if the connection to the export target times out. This allows the connection to be reset and the blocked requests to be resubmitted.
- Previously, if the cluster encountered corrupted command log files during restart, the nodes could repeatedly report remote hangups and a missing partition list. This issue has been resolved and the server now correctly reports a failure due to corrupted command logs.
4. Release V10.2.14 (September 1, 2022)

4.1. Security Notice

The following package updates have been added to the Kubernetes release of Volt Active Data to address known security vulnerabilities:

- AdoptOpenJDK 11.0.16_8
- Alpine 3.16.2
4.2. Additional improvements

The following limitations in previous versions have been resolved:

- Previously, when configuring VoltDB on Kubernetes with security enabled, if you specified a username and password for the Operator but did not define any other users, installing the Helm release would fail. This issue has been resolved and the Operator now automatically adds the specified user definition to the database configuration.
- There was an edge case when using XDCR where, if a cluster stopped and rejoined the XDCR environment, then stopped again before any XDCR data was exchanged, replication was broken and the cluster had to be reinitialized and rejoin the XDCR environment from scratch to reestablish communication. This issue has been resolved.
5. Release V10.2.13 (July 20, 2022)

5.1. Additional statistics for tracking communication between XDCR clusters

Several additional columns have been added to the first results table for the @Statistics DRPRODUCER selector (and the corresponding Prometheus agent metrics) to help evaluate the time between when binary logs are ready for transmission and when acknowledgement is received from the consumer. The new columns, reported in milliseconds, are:

- DR_ROUNDTRIPTIME_1MINUTE_MAX: The maximum time it took to receive acknowledgement from the consumer, over the past minute.
- DR_ROUNDTRIPTIME_1MINUTE_AVG: The average time it took to receive acknowledgement from the consumer, over the past minute.
- DR_ROUNDTRIPTIME_5MINUTE_MAX: The maximum time it took to receive acknowledgement from the consumer, over the past five minutes.
- DR_ROUNDTRIPTIME_5MINUTE_AVG: The average time it took to receive acknowledgement from the consumer, over the past five minutes.

The corresponding metrics in the Prometheus agent are: replication_roundtriptime_1m_max, replication_roundtriptime_1m_avg, replication_roundtriptime_5m_max, and replication_roundtriptime_5m_avg.
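As a quick way to inspect the new columns, you can query the DRPRODUCER selector from the command line. A minimal sketch (connection options such as servers and credentials are omitted):

    # Query DRPRODUCER statistics; the first results table includes the
    # new DR_ROUNDTRIPTIME_* columns.
    echo "exec @Statistics DRPRODUCER 0;" | sqlcmd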
5.2. Additional improvements

The following limitations in previous versions have been resolved:

- In the situation where a cluster failed or was forcibly shut down while a node was being added or removed, attempting to restart the cluster could result in an error claiming there were "incomplete command logs", even if command logging was not enabled. This was caused by an incomplete snapshot left by the interrupted cluster expansion. The issue has been resolved.
- Previously, the voltadmin release command did not always release export on all partitions within the cluster. This issue has been resolved.
- The statistics and warning messages related to "missing" export data (that is, rows that have not been exported but are not currently available in the export buffers) have been significantly improved to provide a more accurate view of the actual state of export. Previously, under certain conditions, the statistics on missing rows could be misleading due to overcounting.
- When certain errors interrupted communication between XDCR clusters, a voltadmin dr reset command could hang and never complete. A timeout has been added to allow the DR RESET operation to complete.
- There was an issue where, if an export stream was dropped and recreated, then the database was immediately shut down and restored, the newly created export stream would have an inaccurate pointer (associated with its previous incarnation). The consequence of this problem was that any records subsequently inserted into the export source were never written to the associated target. This issue has been resolved.
- The timeout period associated with export block operations has been extended to avoid erroneously timing out operations for slower export targets, such as JDBC.
- Recently, issues have surfaced related to the use of replicated tables in database replication (DR) where certain conditions can cause DR processing to stop consuming data. When this happens, the console and log report that "no new DR transactions have been processed." In one case, if a replicated table is defined with the MIGRATE TO TARGET clause, migrating rows can cause an error in the multi-partition initiator, which subsequently stalls DR traffic. In another case, a race condition while processing multiple multi-partition procedures in a row followed by a partitioned procedure could also trigger a failure in DR. These issues have been resolved.
- There was a problem where, if a properties file in the database root was corrupted, the database would issue a fatal error with no explanation. The error now identifies the corrupted file and the names of the missing properties.
- There was an issue where, if a stored procedure queued more than 200 SQL statements before calling voltExecuteSQL() and at least one of the statements was a SELECT statement that returned data, the result buffer could become corrupted, causing one or more nodes to crash. This issue has been resolved.
- Previously, the database would periodically report an error indicating that a VoltPort had "died". As drastic as it sounds, the message did not indicate a serious problem (just that a connection had been closed) and was usually followed shortly by the client reconnecting. Therefore, the message has been downgraded to a warning and rewritten to more accurately reflect that a connection closed unexpectedly.
6. Release V10.2.12 (May 6, 2022)

6.1. Additional improvements

The following limitations in previous versions have been resolved:

- Previously, if Kubernetes pods were started with IPv6 disabled, the VoltDB Operator did not detect it and the database failed to start when it tried using IPv6. The Operator now recognizes this situation and acts accordingly, so the issue no longer exists.
- The binaries of AdoptOpenJDK and Alpine in the Volt Docker image for Kubernetes have been updated to versions 11.0.14 and 3.15.4, respectively, to eliminate potential security vulnerabilities.
- In previous releases, restarting a database with many export connectors could take a significant amount of time, and the delay was particularly noticeable if the connectors had fallen behind, leaving large numbers of files in the export overflow directory. The startup process (as well as the contents of the export_overflow directory) has been restructured to dramatically reduce the time required to validate these files and thereby speed up database startup itself. Also, the log messages related to export startup have been streamlined and rewritten to be less intrusive and more informational.
7. Release V10.2.11 (April 26, 2022)

7.1. Additional improvements

The following limitations in previous versions have been resolved:

- Previously, changing the property cluster.config.deployment.dr.connection.enabled from true to false would cause the cluster to restart unnecessarily. This issue has been resolved.
- There was a problem in previous releases where restarting a cluster with large volumes of unprocessed export and topic data could fail with I/O errors from too many open files. This only occurred in extreme cases: hundreds of export connectors or topics with thousands of overflow files due to their targets being down prior to the database stopping. This issue has been resolved.
- VoltDB uses a special prefix, VOLTDB_AUTOGEN, for indexes that are not explicitly named in the CREATE TABLE statement. Previously, if a user explicitly defined an index name using the VOLTDB_AUTOGEN prefix, the CREATE TABLE statement would succeed, but any subsequent attempts to modify the schema in any way would fail. This issue has been resolved.
8. Release V10.2.10 (March 8, 2022)

8.1. Additional improvement

The following limitation in previous versions has been resolved:

- There was an issue related to cross datacenter replication (XDCR) with three or more clusters. If a cluster crashed and the remaining clusters were under heavy load when the missing cluster was reinitialized and attempted to rejoin, the rejoin might fail. When this happened, the running clusters reported an "unrecoverable replication error" during the reload. This issue has been resolved.
9. Release V10.2.9 (February 15, 2022)

9.1. Additional improvements

The following limitations in previous versions have been resolved:

- There was an issue where an attempt to modify specific export characteristics of a table with ALTER TABLE... ALTER EXPORT... ON UPDATE_NEW would result in a bad table definition in the schema that could no longer be modified. This issue has been resolved.
- There was an unusual edge case where, if a database with a large number of tables was left idle for an extended period of time, memory allocation would slowly increase until a node could crash. This condition required hundreds or thousands of tables with no activity at all; any transaction or update would reset the memory usage. This issue has now been resolved.
10. Release V10.2.8 (January 25, 2022, updated June 1, 2022)

10.1. Database Replication (DR) improvements

A number of improvements to database replication (DR) developed in the follow-on release (V11.x) have been backported to V10.2.8 to increase stability and reliability. These changes include:

- The time allowed for a DR snapshot to initiate a new connection has been increased from 30 seconds to 90 seconds.
- Additional logging of DR and XDCR activity on initiation and teardown to aid in debugging connection issues.
- Previously, if a DR reset command did not complete its cleanup activities, attempting to create a connection from a newly initialized cluster with the same DR ID could result in a null pointer exception on the producer cluster. This problem has been resolved.
10.2. Recent improvements

The following limitations in previous versions have been resolved:

- There was an issue where, if a topic was configured specifying the consumer.key property but initially there was no stream defined to export to that topic, the cluster would crash on startup with an error indicating that the topic is "not using a stream." This issue has been resolved.
- The VoltDB Management Center lets you use a web browser to perform administrative functions. However, in previous releases, if you attempted to connect to two database instances with security enabled from separate browser tabs, logging on to one database would log you out of the other and vice versa. This problem was erroneously reported as fixed in 10.2.5; the appropriate code was accidentally left out until now. The problem is now resolved.
11. Release V10.2.7 (January 4, 2022)

11.1. Security Notice

The jQuery libraries used by the VoltDB Management Center have been updated to the following versions to address security vulnerabilities:

- jQuery V3.5.1
- jQuery UI V1.12.1
- jQuery Slimscroll V1.3.8
- jQuery Validate V1.19.2
11.2. VoltDB Management Center improvements

In addition to the security updates, a number of functional improvements have been made to the VoltDB Management Center (VMC), including:

- Ability to enable and disable security in VMC
- Improved user management: adding and modifying users, assigning multiple roles, and support for user-defined roles
- Execution of stored procedures in the SQL Query tab
11.3. Additional improvements

The following limitations in previous versions have been resolved:

- There was a rare condition where the VoltDB network process could report an index out of bounds error, causing the cluster to hang. This condition is now caught. As a consequence of the error, one of the nodes will stop, but the cluster as a whole will continue and not be deadlocked.
- There was an issue where using the CAST function to convert a VARCHAR column to a BIGINT could generate incorrect values if the number in question had more than 18 digits. This issue has been resolved.
- VoltDB constrains the size of messages sent between cluster nodes and will cancel transactions that exceed the limit. However, in rare situations, the system itself can generate overly large messages and cause a "bad message length" error. This release adds additional hexadecimal information to the logs when this happens, to help identify the root cause of the error.
- VoltDB V9.1 changed how VoltTables are read to improve access by column name. However, if only one or two columns were accessed from a large VoltTable, performance actually decreased. The current release adjusts read access to optimize all cases where columns are fetched by name.
- There was an issue where altering the stream associated with a topic to remove a column could cause a subsequent hash mismatch and crash the cluster. The issue has been resolved.
- Additional information is now logged if the SQL compiler encounters an unexpected error while processing a data definition language (DDL) statement.
12. Release V10.2.6 (September 3, 2021)

12.1. New --credentials argument added to Prometheus agent

The Prometheus agent for VoltDB has a new argument available when starting the agent from the shell command line. The --credentials argument lets you specify a text file containing the authentication credentials for accessing the database when security is enabled. The file must define two properties, username and password. Using the --credentials argument instead of --username and --password avoids exposing your credentials on the command line to the ps command. Note that the file path must be specified as a full pathname, not a relative path.
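For example, here is a minimal sketch of the credentials file and agent invocation; the file path, the property syntax shown, and the agent launch script name are illustrative assumptions, not confirmed by these notes:

    # Hypothetical credentials file defining the two required properties.
    cat > /etc/voltdb/credentials.txt <<'EOF'
    username: admin
    password: mysecret
    EOF

    # Start the Prometheus agent, passing the file by its full pathname.
    # The script name "voltdb-prometheus" is an assumption.
    ./voltdb-prometheus --credentials /etc/voltdb/credentials.txt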
12.2. General release of VoltDB Topics for production use

VoltDB topics, which were released as a beta feature in V10.2, are now ready for production use. See the chapter on Streaming Data in the Using VoltDB manual for more information.
12.3. Additional improvements

The following limitations in previous versions have been resolved:

- Previously, when using IPv6, it was possible for the @SystemInformation system procedure to return the string "localhost" as the server's IP address, which also interfered with the server's ability to join a cluster. This problem has been resolved.
- There was an issue with the VoltDB Management Center where, if security was enabled, the user could not log in through the web browser. This problem has been resolved.
13. Release V10.2.5 (June 16, 2021)

13.1. IMPORTANT: Limit partition row feature to be removed in VoltDB V11.0

The LIMIT PARTITION ROWS feature was deprecated in Version 9 and will be removed in Version 11. This is a change to the VoltDB schema syntax that is not forward compatible. This means that if your database schema still contains the LIMIT PARTITION ROWS syntax, you need to remove the offending clause before upgrading to the upcoming major release. Fortunately, there is a simple process for doing this. You can use the ALTER TABLE {table-name} DROP LIMIT PARTITION ROWS statement to correct the table schema while the database is running and with no impact to the database contents.
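A minimal sketch of removing the clause from a live database with sqlcmd; the table name "sessions" is hypothetical:

    # Drop the deprecated LIMIT PARTITION ROWS clause while the database
    # keeps running; table contents are unaffected.
    echo "ALTER TABLE sessions DROP LIMIT PARTITION ROWS;" | sqlcmd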
13.2. Improved Java client handles both topology awareness and reconnections

The VoltDB Java client has two separate features that let you enable topology awareness (setTopologyChangeAware) and reconnection for lost connections (setReconnectOnConnectionLoss). Topology awareness uses existing connections to determine whether there have been any changes to the servers or ports available, and creates connections to all servers in the cluster. Reconnection periodically attempts to reconnect to a specific server and port if a connection is lost. Previously, these features were mutually exclusive. However, there are times when you might require both. For example, topology awareness fails if there are no remaining connections (such as during a cluster reboot), whereas reconnection can only reconnect to addresses it already knows; it cannot detect if a failed server restarts with a new address. To cover this situation, the client has been improved to allow both features to be enabled at the same time.
13.3. The VoltDB Kubernetes Operator now logs all interaction with the individual VoltDB processes

To aid in debugging, the VoltDB Operator now logs all of the commands it issues to the VoltDB processes running on the Kubernetes pods.
13.4. The Java client autotune feature is deprecated

The VoltDB Java client has an autotune feature (with methods in the ClientConfig class) that was originally designed to assist in developing demo applications. This feature is now deprecated and will be removed in a future release.
13.5. The Java client send-reads-to-replicas feature is deprecated

Previously, VoltDB had a feature to enable complete read consistency (to protect against various failure scenarios). The ClientConfig method setSendReadsToReplicasByDefault was associated with that feature. However, read consistency is now always enabled, so this method is obsolete and has been deprecated. It will be removed in a future release.
13.6. Additional improvements

The following limitations in previous versions have been resolved:

- Previously, if the snapshot rate limit was set (using the Java property SNAPSHOT_RATELIMIT_MEGABYTES), requesting a CSV-formatted snapshot could raise an illegal argument exception stating that "requested permits must be positive" and the resulting snapshot files would be empty. This only affected CSV-formatted snapshots. This problem has been resolved.
- In Kubernetes, if you set the property cluster.clusterSpec.deletePVC to false, then uninstalled and reinstalled a release with the same name, some of the characteristics of the previous instance of the release would be reused, creating problems for the new instance. This problem has been resolved.
- In previous releases, there was an issue when using XDCR in Kubernetes, where repetitive health checks on the DR port could flood the logs with warnings and interfere with regular client connections. A similar condition could occur when enabling SSL on the VoltDB cluster. These problems have been resolved.
- When using the VoltDB Java client with setTopologyChangeAware enabled, the service could generate two calls to the client status listener callback when a connection was created, rather than one. This problem has been resolved.
- Previously, if both setTopologyChangeAware and setReconnectOnConnectionLoss were enabled, and the last connection was lost long enough to trigger backpressure and a query timed out, the procedure callbacks were called repeatedly, causing unnecessary thrashing and CPU consumption. The new, improved client now supports use of these features together and this problem has been resolved.
14. Release V10.2.4 (May 7, 2021)

14.1. New license improvements

This release includes a number of improvements to the licensing and management of VoltDB software. These improvements include:

- A new voltadmin license command, which updates the license on a running VoltDB cluster
- A new voltadmin inspect command used by VoltDB product support to display summary information about the cluster operating environment, including the current license

The new voltadmin license command is the most important of these changes for users, since it allows you to update the license for a cluster without having to restart. Note that the cluster must be complete, with no missing nodes, when you update the license.
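For example, a minimal sketch of the two commands; the license file name is hypothetical:

    # Apply a new license to the running cluster without a restart.
    voltadmin license new-license.xml

    # Display summary information about the cluster operating
    # environment, including the current license.
    voltadmin inspect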
14.2. Beta utility voltsql deprecated

There is a beta utility, voltsql, that extends the standard sqlcmd utility, adding command completion and other interactive aids. The added functionality never fully met its goals, and maintaining two separate utilities is both impractical from a product perspective and confusing from a customer perspective. For that reason, voltsql is being deprecated and will be removed in the next major release.
14.3. Improved connectivity for XDCR in Kubernetes

In environments such as Kubernetes, where IP addresses are transient, XDCR could take an extended period of time to reconnect to a server on a remote cluster if the server restarted with a different address. The connection logic has been rewritten to accommodate these environments, eliminating the delay.
14.4. Additional improvements

The following limitations in previous versions have been resolved:

- The snapshotconverter utility lets you generate CSV files from VoltDB snapshot files. These files can be used to recover and reload data from individual tables through the csvloader utility. However, for certain data (such as XDCR tables, tables defined with MIGRATE, or views with no COUNT(*) column) the snapshotconverter utility includes hidden columns in its output, which can be confusing. A new command flag has been added, --filter-hidden, that lets you exclude these hidden columns from the utility's output.
- The Java method TaskHelper.getTaskScepe has been replaced by the method getTaskScope. The older method is now deprecated and will be removed in a future release.
- Previously, if a cluster with command logging enabled stopped and restarted multiple times, with the --missing argument used during at least one of the restarts, it was possible for the recovery of the command logs to fail with an index out of bounds error. The problem was that the database could not identify the original topology of the cluster. This issue has been resolved. If the same situation occurs now, the cluster assigns a new arrangement to the partitions during recovery.
- There was an issue regarding tasks and directed procedures, where modifying the class (with LOAD CLASSES) for a directed procedure associated with a task that was already running could cause the database to fail with an error stating that active transactions were "moving backwards". This issue has been resolved.
- There was an issue where Prometheus was randomly reporting additional database replication (DR) producer statistics with an invalid timestamp. This problem has been resolved.
- Previously, a problem could occur if a node became detached from the cluster (for example, due to network issues) and did not immediately fail but timed out. The result was that the remote cluster might stop replication, reporting a "replica ahead of master" error. This issue has also been resolved.
- There was an issue in the export subsystem where it was possible that releasing an export queue with missing records could result in more records being deleted from the queue than necessary. Normally, releasing an export queue with a gap means the export connector "jumps" to the next record after the missing data. However, if, after the queue paused at a gap, the database schema was updated before the release command was issued, it was possible for additional records unaffected by the gap to be deleted from the queue. This issue has been resolved.
- There was a potential situation where, if a cluster used for cross datacenter replication (XDCR) suffered one or more node failures, then was shut down and restarted using command logs to recover, replication might later fail with a "replica ahead of master" error. This underlying issue was related to recovery using the failed node's command logs, which did not match the current state of the remote cluster. This problem has been resolved.
- Previously, integer columns (such as INTEGER and BIGINT) were allowed as TTL columns. However, they did not produce the correct results. TTL columns are now constrained to TIMESTAMP columns only (see the sketch at the end of this section).
- In recent releases (since 10.2.2), the command voltdb get license failed to run, returning a Java error message instead. This problem has been resolved.
- Recent improvements to VoltDB allow clusters to continue running in a "reduced" K-safety mode after a hash mismatch occurs, rather than shutting down. In reduced mode the extra partition copies are stopped to avoid any data divergence. However, in certain cases when this happened, CPU usage could eventually spike on individual nodes in the cluster. This problem has been resolved.
- There was a race condition where, when using database replication (either passive DR or XDCR), applying multiple schema changes to the consumer cluster could cause the cluster to crash with a SIGSEGV error. This problem has been resolved. However, when applying schema changes on DR clusters, it is still strongly recommended to process the DDL statements in batch mode using the sqlcmd file -batch directive. Batch processing can greatly reduce the possibility of divergence occurring between the clusters.
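For reference, a minimal sketch of a TTL declaration on a TIMESTAMP column, as now required; the table and column names are hypothetical:

    # Rows expire 30 minutes after the value in last_updated.
    echo "CREATE TABLE session (
        id BIGINT NOT NULL,
        last_updated TIMESTAMP NOT NULL
    ) USING TTL 30 MINUTES ON COLUMN last_updated;" | sqlcmd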
15. Release V10.2.3 (March 25, 2021)

15.1. Support for Kubernetes 1.19

VoltDB and the VoltDB Operator now support Kubernetes 1.19.

15.2. Recent improvements

The following limitations in previous versions have been resolved:

- There was an issue regarding tasks and directed procedures, where modifying the class (with LOAD CLASSES) for a directed procedure associated with a task that was already running could cause the database to fail with an error stating that active transactions were "moving backwards". This issue has been resolved.
- In certain situations, if an XDCR cluster stopped and recovered using command logs, some partitions on the restarted cluster would not resume consuming data from the other clusters in the XDCR relationship. A possible workaround was to perform a rolling restart of the cluster nodes. However, this issue has now been resolved.
- There was a problem in the Prometheus agent for VoltDB, where database replication (DR) statistics for the DR consumer were not being reported correctly. This issue has been resolved.
16. Release V10.2.2 (March 2, 2021)

16.1. Support for including additional content through Kubernetes persistent volumes

You can now identify additional content (such as schema files, stored procedure classes, and third-party JAR files) to be included when initializing a VoltDB database on Kubernetes by specifying their location in the additionalVolumes and additionalVolumeMounts properties. Mounting persistent volume claims to /etc/voltdb/schema, /etc/voltdb/classes, and /etc/voltdb/extension is equivalent to using the voltdb init --schema argument, the --classes argument, or including JAR files in the /lib/extension folder where VoltDB is installed on non-Kubernetes servers.
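One way this might be wired up, as a hedged sketch; the placement of the properties under cluster.clusterSpec and the PVC name are assumptions for illustration, so check the chart's values for your release:

    # Hypothetical values file mounting an existing PVC of schema files
    # at /etc/voltdb/schema (equivalent to voltdb init --schema).
    cat > extra-content.yaml <<'EOF'
    cluster:
      clusterSpec:
        additionalVolumes:
          - name: schema-files
            persistentVolumeClaim:
              claimName: my-schema-pvc
        additionalVolumeMounts:
          - name: schema-files
            mountPath: /etc/voltdb/schema
    EOF

    helm install mydb voltdb/voltdb --values extra-content.yaml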
16.2. Removed requirement for Python 2.7.13 inadvertently added in an earlier release

Improvements associated with SSL/TLS and IPv6 inadvertently added a requirement for Python version 2.7.13 in VoltDB versions 10.2 and 10.1.1. This constraint has been corrected, and VoltDB now accepts Python version 2.7.5 and later.
16.3. Additional improvements

The following limitations in previous versions have been resolved:

- Previously, it was possible for a final shutdown snapshot to stall due to "unacknowledged transactions" in export. This could happen if an export stream was declared, but the associated export connector was set to enabled="false" in the configuration. If data was then written into the stream and a final shutdown snapshot requested (using the voltadmin shutdown --save command), the shutdown could not finish due to the pending data in the queue. This issue has been resolved, and pending data in disabled queues is now ignored.
- There was a rare condition where, if a node in a K-safe cluster failed while a snapshot was being initiated, the cluster did not properly clean up the aborted snapshot. As a result, no subsequent snapshots could be started, including the snapshot needed to transfer data to the failed node when it tried to rejoin. This issue has now been resolved.
- There was an issue in the cron scheduler for user-defined tasks (that is, tasks defined using CREATE TASK ON SCHEDULE CRON...). As a consequence of the error, the tasks were always scheduled for immediate execution. This issue has now been resolved.
17. Release V10.2.1 (January 21, 2021)

17.1. Initial Kubernetes release corrected

The initial release of 10.2 on Kubernetes included the wrong Docker image. This issue is resolved by the 10.2.1 point release. Do not use the initial application and Helm chart versions (10.2.0 and 1.3.0). Please be sure to use the latest releases, which are 10.2.1 and 1.3.1, respectively. This change affects the Kubernetes release of VoltDB only.
18. Release V10.2 (January 19, 2021)

18.1. Configuration updates available in Kubernetes

The VoltDB Operator for Kubernetes now supports changes to cluster and database configuration properties while the database is running. For properties that can be changed dynamically, the change occurs immediately. For other properties, the Operator orchestrates a cluster restart or rolling upgrade, as needed. See the chapter on updates and upgrades in the VoltDB Kubernetes Administrator's Guide for details.
18.2. DR initialization snapshots changed to asynchronous processing

At the beginning of database replication (DR), a snapshot of the database is created and sent to the joining cluster. Previously, the initialization snapshot was created as a synchronous snapshot, blocking transactions on the existing database until initialization was complete. However, depending on the size of the database, the snapshot could take a significant amount of time, stalling ongoing database transactions until the snapshot was complete. This release changes the processing of DR initialization snapshots from synchronous to asynchronous. The asynchronous snapshot eliminates the interruption to ongoing work on the active cluster. The one drawback to this change is that, when using cross datacenter replication (XDCR) with more than two clusters, if a node fails on the active cluster during the initialization snapshot, existing XDCR connections to other clusters may be lost and need to be reset.
18.3. DR binary log handling improved for multi-cluster XDCR

Database replication (DR) is managed by passing binary logs between the participating clusters. The DR consumer acknowledges packets after they have been applied. If the consumer falls behind and has no room in its queue, it throws away additional packets and waits to request them again when it is ready. For multi-cluster XDCR environments, this previously meant all clusters were constrained by the latency of the slowest cluster. Starting with VoltDB V10.0, the management of binary logs has been enhanced to track the queuing and acknowledgement of packets for each cluster separately, so each DR consumer can process packets at an optimal speed. To help understand the impact of this change, extra fields have been added to the return results of the DRCONSUMER and DRPRODUCER selectors for the @Statistics system procedure. See the description of @Statistics in the Using VoltDB manual for more information.
18.4. Additional improvements

The following limitations in previous versions have been resolved:

- There was an issue where a stream could stop writing data to its export target after having more than two billion rows inserted into any one partition. The problem surfaced only after the necessary number of records (approximately 2.15 billion) were written to the export connector and the database was saved, shut down, restarted, and restored. After the snapshot was restored, no further records were written to the target by the export connector. This issue has now been resolved. In fact, upgrading to this release using the standard voltadmin shutdown --save command, installing 10.2, and then restarting the database will automatically circumvent the issue.
- There was a rare condition where using the CAST function to convert a VARCHAR column to an integer for numeric comparison (for example, CAST(IQ AS INT) > 140 where IQ is a VARCHAR column) could produce an incorrect result. This would only occur if the table containing the column had an index and that index was selected to optimize the query. This issue has been resolved.
- The New Relic latency graph data has been adjusted to improve accuracy.
- The VoltDB Prometheus agent supports monitoring a subset of available statistics, using the --stats and --skipstats options. However, in earlier VoltDB 10.1 releases, use of these options could cause the agent to hang. This issue was resolved in VoltDB 10.1.2.
- Previously, when running VoltDB in Kubernetes, there were situations when the Helm charts would ignore the serviceAccountName if the global.rbac.create property was set to false. This issue has been resolved. To use a separately created service account, you must:
  - Set the properties operator.serviceAccount.name and cluster.serviceAccount.name to match the account in question
  - Set the properties operator.serviceAccount.create and cluster.serviceAccount.create to false.
- Under certain circumstances, previous versions of the VoltDB Operator for Kubernetes mistakenly used the underlying system, instead of the virtualized container, when calculating available memory. This issue has been resolved.
19. Release V10.1.3 (December 18, 2020)

19.1. Internal improvements to VoltDB Operator

Code improvements to optimize the software upgrade process.
20. Release V10.1.2 (December 15, 2020)

20.1. Adjustments and optimizations for Kubernetes settings

Several settings associated with Kubernetes have been adjusted to provide a better experience when starting and running VoltDB in a Kubernetes environment.
20.2. Using load balancers to connect XDCR clusters in different Kubernetes domains

The Helm charts for Kubernetes now allow for alternate methods of establishing a network mesh between clusters for cross datacenter replication (XDCR). In particular, you can now use per-pod load balancers so the clusters can connect to each other through externally available IP addresses. See the VoltDB Kubernetes Administrator's Guide for details.
20.3. Security Notice

A number of libraries included in the VoltDB distribution have been updated to eliminate security vulnerabilities, including Guava, Jackson, Jetty, Kafka, Log4J, and Netty.
20.4. Additional improvements

The following limitations in previous versions have been resolved:

- There was a problem with the Kinesis importer where the importer could fail with a "no class found" error. This issue has been resolved.
- There was a rare situation where, if a schema change failed, causing a deadlock, subsequent attempts to rejoin nodes to the cluster would fail. This issue has been resolved.
- Two issues associated with the JDBC export connector were identified and fixed. First, when inserting into an Oracle database via the JDBC export connector, it was possible for the export threads to get blocked if the commit failed. Second, it was possible for an insert into MySQL via the JDBC connector to fail if the table definition required duplicate keys. These issues have now been resolved.
- There was an issue in the export subsystem where it was possible that releasing an export queue with missing records could result in more records being deleted from the queue than necessary. Normally, releasing an export queue with a gap means the export connector "jumps" to the next record after the missing data. However, if, after the queue paused at a gap, the database schema was updated before the release command was issued, it was possible for additional records unaffected by the gap to be deleted from the queue. This issue has been resolved.
21. Release V10.1.1 (November 13, 2020)

21.1. Support for IPv6

VoltDB now supports both IPv4 and IPv6 networking. This includes support for IPv6-only environments. When entering IPv6 network addresses, be sure to enclose the address in square brackets. See the implementation note concerning IPv6 addresses for details.
22. Release V10.1 (October 30, 2020)

22.1. Support for multiple VoltDB databases in the same Kubernetes cluster

With the original release of VoltDB V10.0 and the VoltDB Operator, you could run multiple VoltDB databases in separate Kubernetes clusters or in separate namespaces within a single cluster. You can now run multiple databases within the same Kubernetes cluster and namespace. To do this, you start by running a single copy of the VoltDB Operator, using the following steps (collected as a runnable sketch below):

- Start the VoltDB Operator by itself (helm install operator voltdb/voltdb --set cluster.enabled=false) and wait for the Operator pod to be ready.
- Start the first database without an Operator (helm install db1 voltdb/voltdb --set operator.enabled=false).
- Start the second database without an Operator (helm install db2 voltdb/voltdb --set operator.enabled=false).
- And so on.

When running multiple databases within the same namespace, the only proviso is that you must not stop and delete the Operator until all of the databases it supports are stopped and deleted.
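The steps above, collected using the commands quoted in the list:

    # Run a single Operator for the namespace.
    helm install operator voltdb/voltdb --set cluster.enabled=false

    # Once the Operator pod is ready, start each database without its
    # own Operator.
    helm install db1 voltdb/voltdb --set operator.enabled=false
    helm install db2 voltdb/voltdb --set operator.enabled=false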
22.2. Support for future upgrades in Kubernetes

Another change to the VoltDB Operator for Kubernetes provides support for future upgrades to VoltDB installations. Although not available for upgrading V10.0 to V10.1, this new functionality will allow scripted upgrades for all future versions of VoltDB in Kubernetes. For the initial V10.0 release of the VoltDB Operator, the one-time process for upgrading to V10.1 is as follows (a sketch of the corresponding commands appears after the list):

- Update the VoltDB charts in Helm.
- Verify that you have the latest charts. The verification command should show version 10.1 for VoltDB, and version 1.1.0 or later for the VoltDB Operator and the Helm chart.
- Shut down VoltDB, taking a snapshot and making sure you do not delete the persistent volume on which the database root directory is stored.
- Wait for all the cluster pods to be removed from Kubernetes. Then delete the Helm release.
- Wait for the VoltDB Operator pod to be removed from Kubernetes. Then reinstall the Helm release with the latest version.
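A hedged sketch of the commands these steps refer to, for a database released as mydb; the pod name and the use of voltadmin shutdown --save (mentioned elsewhere in these notes) are assumptions, not the verbatim procedure:

    # 1. Update the local copy of the VoltDB charts.
    helm repo update

    # 2. Verify chart and application versions (expect VoltDB 10.1 and
    #    Operator/chart 1.1.0 or later).
    helm search repo voltdb

    # 3. Shut down with a final snapshot, keeping the persistent volume
    #    holding the database root directory. Pod name is an assumption.
    kubectl exec mydb-voltdb-cluster-0 -- voltadmin shutdown --save

    # 4. After the cluster pods are gone, delete the Helm release.
    helm delete mydb

    # 5. After the Operator pod is gone, reinstall with the latest charts.
    helm install mydb voltdb/voltdb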
22.3. Improved SHOW TABLES and DESCRIBE information in sqlcmd

The sqlcmd directives SHOW TABLES and DESCRIBE have been enhanced to provide additional information about the tables in the database schema. The SHOW TABLES directive now sorts the schema objects into regular tables, data replication (DR) tables, streams, and views. Similarly, the DESCRIBE directive now distinguishes between regular tables and DR tables.
22.4. Additional information in the @Statistics and @SystemInformation system procedures

Both the @Statistics and @SystemInformation system procedures have been enhanced to provide additional information. The @Statistics TABLE selector now includes two additional columns indicating whether the table is a DR table or not and, if it is defined as an export table, the name of its export target. The @SystemInformation DEPLOYMENT selector now includes rows for additional paths, such as DR and export overflow and cursors, when appropriate.
22.5. Kafka import and export support Kafka version 2.6.0 and later

The Kafka services within VoltDB (including Kafka import and export) support Kafka version 2.6.0 and later. Support for earlier versions of Kafka is deprecated.
22.6. Additional improvements

The following limitations in previous versions have been resolved:

- The snapshotconverter utility has been corrected to interpret null values as end-of-file, rather than reporting an error. At the same time, general error handling has been enhanced and extended to report more detailed information when a failure occurs.
- The VoltDB bulk loader (available in the client API and used in the loader utilities such as csvloader) has been optimized to remove an unnecessary regular expression evaluation of string columns. This change produces a noticeable improvement in load times for large data sets.
- A number of edge cases were discovered that could cause a database deadlock. These situations (some race conditions, some the consequence of unusual failures during a schema change) have now been resolved.
- VoltDB V10.0 introduced a change that caused the New Relic monitoring plugin to fail. This issue has been resolved.
- For its original release, the VoltDB Operator supported Kubernetes versions 1.16.2 through 1.17.x. The Operator now supports Kubernetes versions 1.18.x as well.
- Previously, when attempting to configure XDCR in a Rancher Kubernetes environment, the nodes would not initialize properly. This issue is now resolved.
- The Prometheus agent for VoltDB has been updated to improve the accuracy of the information being reported.
- VoltDB Operator V10.1 changes the location of the VoltDB root directory under Kubernetes from /pvc/voltdb/{release}-voltdb-cluster/voltdbroot in V10.0 to /pvc/voltdb/voltdbroot/ in V10.1. When creating a new database, the Operator creates the root directory in the new location. For existing instances (upgraded using the process described above), the Operator keeps the older existing location.
23. Release V10.0 (August 12, 2020)

23.1. New VoltDB Operator for Kubernetes

VoltDB now offers a complete solution for running VoltDB databases in a Kubernetes cloud environment. VoltDB V10.0 provides managed control of the database startup process, a new VoltDB Operator for coordinating cluster activities, and Helm charts for managing the relationship between Kubernetes, VoltDB, and the Operator. The VoltDB Kubernetes solution is available to Enterprise customers and includes support for all VoltDB functionality, including cross data center replication (XDCR). See the VoltDB Kubernetes Administrator's Guide for more information.
23.2. New Prometheus agent for VoltDB

For customers who use Prometheus to monitor their systems, VoltDB now provides a Prometheus agent that can collect statistics from a running cluster and make them available to the Prometheus engine. The Prometheus agent is available as a Kubernetes container or as a separate process that can run either on one of the VoltDB servers or remotely, and it makes itself available through port 1234 by default. See the README file in the /tools/monitoring/prometheus folder in the directory where you install VoltDB for details.
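A quick way to confirm the agent is serving data; the /metrics endpoint path follows Prometheus convention and is an assumption here:

    # Fetch the metrics exposed by the agent on its default port.
    curl http://localhost:1234/metrics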
23.3. Enhancements to Export

Recent updates to export provide significant improvements to reliability and performance. The key advantages of the new export subsystem are:

- Better throughput: Initial performance tests demonstrate significantly better throughput on export queues using the new subsystem over previous versions of VoltDB.
- Adjustable thread pools: The new subsystem lets you set the thread pool size for export as a whole or define thread pools for individual connectors.
- Fewer duplicate rows: When cluster nodes fail and rejoin the cluster, the export subsystem resubmits certain rows to ensure they are delivered. The new subsystem keeps better track of the acknowledged rows and does not need to send as many duplicates to maintain the same level of durability.
23.4. Improved license management

Starting with VoltDB V10.0, specifying the product license has moved from the voltdb start command to the voltdb init command. In other words, you only have to specify the license once, when you initialize the database root directory, rather than every time you start the database. When you specify the license on the init command, it is stored in the root directory the same way the configuration is. The same rules apply about the default location of the license as before. So if you store your license in your current working directory, your home directory, or the /voltdb subfolder where VoltDB is installed, you do not need to include the --license argument when initializing the database. Also note that the --license argument on the voltdb start command is now deprecated but still operational. So if you have scripts to start VoltDB that include --license on the start command, they will continue to work. However, we recommend you change to the new syntax whenever convenient, because support for voltdb start --license may be removed in some future major release.
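A minimal sketch of the new workflow; the directory and license file paths are hypothetical:

    # Initialize the database root directory once, storing the license.
    voltdb init --dir=/var/voltdb --license /home/admin/license.xml

    # Later starts no longer need the --license argument.
    voltdb start --dir=/var/voltdb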
23.5. Support for RHEL and CentOS V8

After internal testing and validation, RHEL and CentOS V8 are now supported platforms for production use of VoltDB.
23.6. RabbitMQ export connector removed

The export connector for RabbitMQ was deprecated in VoltDB version 9 and has now been removed from the product.
23.7. Ubuntu 14.04 no longer supported as production platform

Ubuntu 14.04, which is no longer supported by Canonical, has been dropped as a production platform for VoltDB.
23.8. Additional improvements

The following limitations in previous versions have been resolved:

- There was a rare edge case where, if a schema change failed due to an internal error and was retried, the cluster could crash with a null pointer exception. This issue has been resolved.
- There was an issue where attempting to insert a row with all null values into a stream with at least one column that allows null values could crash the server. This issue has been resolved.
- Due to issues in the underlying library used, it was possible for the JSON functions to return results in a different order on different servers, causing a hash mismatch error. This inconsistency, and the resulting issue, have now been resolved.
- Under certain conditions while using the JDBC export connector, altering the stream associated with the connector could cause export to fail. The problem was that the schema change requires an update to the prepared statement used to write to the JDBC target. But if the createtable property was set to false or the ignoregenerations property was set to true, the prepared statement was not updated. This issue has been resolved.
- There was an issue where a query with a complex ORDER BY clause with two separate column expressions, both using the DECODE() function, could return incorrect results. This issue has been resolved.
- Previously, if a user-defined aggregate function threw an exception, the function failed but the specific exception was not passed back to the calling application. Instead, a generic exception was returned. This issue has been resolved and user-defined aggregate functions now return the correct exception.