MySQL is a powerful open-source database server that is especially popular as a back end for websites and forums. A few days ago, Oracle released MySQL Cluster 7.1.3 with the General Availability label, marking this version as stable. MySQL Cluster is a real-time transactional database designed to deliver a high degree of availability. According to the developers, it contains no single points of failure and can be expanded with additional nodes without downtime. More information about MySQL Cluster can be found in this PDF document. Oracle's announcement can be read on this page, and the list of changes for this release is as follows:
Changes in MySQL Cluster NDB 7.1.3 (5.1.44-ndb-7.1.3)
MySQL Cluster NDB 7.1.3 is the first General Availability release in the MySQL Cluster NDB 7.1 release series, incorporating new features in the NDBCLUSTER storage engine and fixing recently discovered bugs in MySQL Cluster NDB 7.1.2-beta and previous MySQL Cluster releases.
Obtaining MySQL Cluster NDB 7.1.3. The latest MySQL Cluster NDB 7.1 binaries for supported platforms can be obtained from http://dev.mysql.com/downloads/select.php?id=14. Source code for the latest MySQL Cluster NDB 7.1 release can be obtained from the same location.
This release also incorporates all bugfixes and changes made in previous MySQL Cluster NDB 6.x, MySQL Cluster NDB 7.0, and MySQL Cluster NDB 7.1 releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.44 (see Section C.1.4, “Changes in MySQL 5.1.44 (04 February 2010)”).
Functionality added or changed:
- Important Change: The experimental ndbinfo.pools table has been removed. Information useful to MySQL Cluster administration that was contained in this table should be available from other ndbinfo tables. See Section 17.5.8, “The ndbinfo MySQL Cluster Information Database”, for additional information.
- Cluster Replication: MySQL Cluster Replication now supports attribute promotion and demotion for row-based replication between columns of different but similar types on the master and the slave. For example, it is possible to promote an INT column on the master to a BIGINT column on the slave, and to demote a TEXT column to a VARCHAR column. The implementation of type demotion distinguishes between lossy and non-lossy type conversions, and their use on the slave can be controlled by setting the slave_type_conversions global server system variable. For more information about attribute promotion and demotion for row-based replication in MySQL Cluster, see Attribute promotion and demotion (MySQL Cluster). (Bug#47163, Bug#46584)
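As a sketch of how this might be enabled (the host name is a placeholder, and the permitted values follow the standard slave_type_conversions documentation rather than this changelog):

```shell
# Hypothetical slave session: permit both lossless and lossy type
# conversions for row-based replication. An empty value (the default)
# disallows all conversions and causes replication to stop on mismatch.
mysql --host=slave-host -e \
  "SET GLOBAL slave_type_conversions = 'ALL_LOSSY,ALL_NON_LOSSY';"
```

Setting only 'ALL_NON_LOSSY' would allow promotions such as INT to BIGINT while still rejecting lossy demotions such as TEXT to VARCHAR.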
- If a node or cluster failure occurred while mysqld was scanning the ndb.ndb_schema table (which it does when attempting to connect to the cluster), insufficient error handling could lead to a crash by mysqld in certain cases. This could happen in a MySQL Cluster with a great many tables, when trying to restart data nodes while one or more mysqld processes were restarting. (Bug#52325)
- In MySQL Cluster NDB 7.0 and later, DDL operations are performed within schema transactions; the NDB kernel code for starting a schema transaction checks that all data nodes are at the same version before allowing a schema transaction to start. However, when a version mismatch was detected, the client was not actually informed of this problem, which caused the client to hang. (Bug#52228)
- After running a mixed series of node and system restarts, a system restart could hang or fail altogether. This was caused by setting the value of the newest completed global checkpoint too low for a data node performing a node restart, which led to the node reporting incorrect GCI intervals for its first local checkpoint. (Bug#52217)
- When performing a complex mix of node restarts and system restarts, the node that was elected as master sometimes required optimized node recovery due to missing REDO information. When this happened, the node crashed with Failure to recreate object ... during restart, error 721 (because the DBDICT restart code was run twice). Now when this occurs, node takeover is executed immediately, rather than being made to wait until the remaining data nodes have started. (Bug#52135) See also Bug#48436.
- The internal variable ndb_new_handler, which is no longer used, has been removed. (Bug#51858)
- ha_ndbcluster.cc was not compiled with the same SAFEMALLOC and SAFE_MUTEX flags as the MySQL Server. (Bug#51857)
- When debug compiling MySQL Cluster on Windows, the mysys library was not compiled with -DSAFEMALLOC and -DSAFE_MUTEX, due to the fact that my_socket.c was misnamed as my_socket.cc. (Bug#51856)
- Values shown for the DATA_MEMORY column of the ndbinfo.memoryusage table did not match those shown by the ndb_mgm client ALL REPORT MEMORYUSAGE command. (Bug#51735)
- The redo log protects itself from being filled up by periodically checking how much space remains free. If insufficient redo log space is available, it sets the state TAIL_PROBLEM which results in transactions being aborted with error code 410 (out of redo log). However, this state was not set following a node restart, which meant that if a data node had insufficient redo log space following a node restart, it could crash a short time later with Fatal error due to end of REDO log. Now, this space is checked during node restarts. (Bug#51723)
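The amount of redo log space available is fixed by the cluster configuration; a minimal config.ini sketch of the relevant parameters (the values shown are illustrative assumptions, not recommendations from this release):

```ini
[ndbd default]
# Total redo log per data node = NoOfFragmentLogFiles * 4 * FragmentLogFileSize
NoOfFragmentLogFiles=32    ; number of sets of 4 fragment log files
FragmentLogFileSize=16M    ; size of each individual fragment log file
```

If transactions abort with error 410, increasing these values (followed by a rolling restart with --initial on the data nodes) is the usual remedy.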
- Restoring a MySQL Cluster backup between platforms having different endianness failed when also restoring metadata and the backup contained a hashmap not already present in the database being restored to. This issue was discovered when trying to restore a backup made on Solaris/SPARC to a MySQL Cluster running on Solaris/x86, but could conceivably occur in other cases where the endianness of the platform on which the backup was taken differed from that of the platform being restored to. (Bug#51432)
- A mysqld, when attempting to access the ndbinfo database, crashed if it could not contact the management server. (Bug#51067)
- The mysql client system command did not work properly. This issue was only known to affect the version of the mysql client that was included with MySQL Cluster NDB 7.0 and MySQL Cluster NDB 7.1 releases. (Bug#48574)
- Cluster API: Packaging: The file META-INF/services/org.apache.openjpa.lib.conf.ProductDerivation was missing from the clusterjpa JAR file. This could cause setting openjpa.BrokerFactory to “ndb” to be rejected. (Bug#52106)
- Disk Data: Inserts of blob column values into a MySQL Cluster Disk Data table that exhausted the tablespace resulted in misleading no such tuple error messages rather than the expected error tablespace full. This issue appeared similar to Bug#48113, but had a different underlying cause. (Bug#52201)
- Disk Data: DDL operations on Disk Data tables having a relatively small UNDO_BUFFER_SIZE could fail unexpectedly.
- Cluster Replication: The --ndb-log-empty-epochs option did not work correctly. (Bug#49559)
- Cluster API: A number of issues were corrected in the NDB API coding examples found in the storage/ndb/ndbapi-examples directory in the MySQL Cluster source tree. These included possible endless recursion in ndbapi_scan.cpp as well as problems running some of the examples on systems using Windows or Mac OS due to the letter case used for some table names. (Bug#30552, Bug#30737)