JGroups, formerly JavaGroups, is a group communication toolkit based on multicasting, designed for IP multicast, although other transports are also supported. Processes can join multiple groups and send messages to the whole group or to individual members. When a member joins a group, leaves it, or crashes, a notification is sent to all members of that group. Processes may run on the same host, within a LAN, or across a WAN. The toolkit is written in Java and released under the LGPL license. The second beta of JGroups 2.2.9 has been available for a few days now, announced as follows:
This release contains some fixes for critical bugs in beta1. All ~ 360 unit tests pass. The last bug I need to fix is related to a merge bug: when many members join at the same time, and we need to do a merge (which shouldn't happen since 2.2.8 anyways), the merge may not be correct. Besides that, 2.2.9beta2 is very stable and I hope to release 2.2.9 final soon, definitely before the end of the year. Give it a try and feedback to jg-dev/jg-users please!
Lead JGroups / JBossCache
Release Notes JGroups 2.2.9:
The channel and most protocols can now be accessed via JMX. This can be used in any environment that provides an MBeanServer, e.g. JBoss or JDK 5. With JDK 5's jconsole, for example, retransmission counters can be viewed in realtime, or operations can be invoked that dump the retransmission windows for NAKACK etc.
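As a rough illustration of what this JMX access involves (a sketch, not JGroups code: the MBean, its attribute, and the ObjectName below are made-up stand-ins), the following registers a standard MBean on the platform MBeanServer and reads an attribute the way a tool like jconsole would:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxSketch {

    // Hypothetical management interface; the real JGroups MBeans expose
    // similar retransmission counters and dump operations.
    public interface RetransmitStatsMBean {
        long getXmitRequestsReceived();
        String dumpRetransmitWindow();
    }

    // Standard MBean pattern: the interface name is the class name + "MBean".
    public static class RetransmitStats implements RetransmitStatsMBean {
        public long getXmitRequestsReceived() { return 42; }  // placeholder counter
        public String dumpRetransmitWindow()  { return "[]"; } // placeholder dump
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Illustrative name, not the one JGroups actually registers under.
        ObjectName name = new ObjectName("jgroups:protocol=NAKACK");
        server.registerMBean(new RetransmitStats(), name);

        // Reading an attribute by name is what jconsole does under the hood.
        Object value = server.getAttribute(name, "XmitRequestsReceived");
        System.out.println("XmitRequestsReceived = " + value);
    }
}
```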
Fine-grained interface binding
The attributes receive_on_all_interfaces and receive_interfaces enable receiving multicast packets on all interfaces or on a given list of interfaces, e.g. receive_interfaces="hme0,hme1,192.168.5.3".
Retransmission from random member
[NAKACK] This is helpful if we have a large group and want to avoid having to ask the original sender of a message for retransmission. By asking a random member instead, we take some potential load off the original sender.
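The selection logic can be sketched like this (a hypothetical helper, not the actual NAKACK implementation):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class RetransmitTarget {

    // Pick a member to ask for a retransmission: a random member other than
    // ourselves, so that repeated retransmit requests are spread over the
    // group instead of always hitting one node.
    public static String pickTarget(List<String> members, String self, Random rnd) {
        List<String> candidates = new ArrayList<>(members);
        candidates.remove(self);
        return candidates.get(rnd.nextInt(candidates.size()));
    }
}
```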
Added payload to MethodCall
This is needed to pass additional information along with a method call, as required by JBossCache.
Common transport protocol TP
UDP and TCP now derive from this common transport, so shared functionality has to be implemented and tested only once. TCP now supports many more of the properties provided by TP.
Performance improvements
50% speed improvement for RpcDispatcher/MessageDispatcher/RequestCorrelator/MethodCall. Most headers now support size() and Streamable, making marshalling and unmarshalling faster.
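The idea behind Streamable-style headers can be sketched as follows: a header writes its fields directly to a data stream and knows its exact wire size up front, avoiding the class descriptors and reflection of default Java serialization. The method names writeTo/readFrom/size() follow the description above, but this is a standalone illustration, not JGroups' actual interface:

```java
import java.io.*;

// Hypothetical header carrying a single sequence number.
class SeqnoHeader {
    long seqno;

    SeqnoHeader() {}                          // needed before readFrom()
    SeqnoHeader(long seqno) { this.seqno = seqno; }

    int size() { return 8; }                  // exact wire size: one long

    void writeTo(DataOutput out) throws IOException { out.writeLong(seqno); }

    void readFrom(DataInput in) throws IOException { seqno = in.readLong(); }
}

public class StreamableSketch {
    public static void main(String[] args) throws Exception {
        SeqnoHeader hdr = new SeqnoHeader(17);

        // size() lets the marshaller allocate an exactly-sized buffer.
        ByteArrayOutputStream baos = new ByteArrayOutputStream(hdr.size());
        hdr.writeTo(new DataOutputStream(baos));

        SeqnoHeader copy = new SeqnoHeader();
        copy.readFrom(new DataInputStream(new ByteArrayInputStream(baos.toByteArray())));
        System.out.println(copy.seqno); // round-trips back to 17
    }
}
```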
Discovery of all clusters in a network
With JGroups/bin/probe.sh or probe.bat it is now possible to discover *all* clusters running in a network. This is useful for
- management tools that need to discover the running clusters and then drill down into each individual cluster
- diagnostics and troubleshooting
View reconciliation (VIEW_SYNC protocol)
When a coordinator sends out a new view V2 and then leaves (or crashes), it is possible that not all members receive that view. We could then end up with some members still having V1 and others having V2, and the members with V2 will discard all messages from members with V1. This is a very rare case, but when it happens the cluster ends up in an inconsistent state. VIEW_SYNC solves this by having each member periodically broadcast its view: when a member receives a view that is greater than its own, it installs it. Thus, should the above problem occur, all members will eventually end up with the same view. The view is broadcast every 60 seconds by default, but the broadcast can also be triggered through JMX by calling the sendView() method directly. See JGroups/doc/ReliableViewInstallation for details.
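The reconciliation rule can be sketched in a few lines. This is a simplified model, not the VIEW_SYNC implementation; real view identifiers carry a coordinator address as well, not just a counter:

```java
// Hypothetical model of a member's reaction to a periodic view broadcast:
// install the received view only if it is newer than the current one.
public class ViewSyncSketch {
    public long currentViewId = 1; // e.g. this member is still on V1

    // Returns true if the received view was installed.
    public boolean onViewBroadcast(long receivedViewId) {
        if (receivedViewId > currentViewId) {
            currentViewId = receivedViewId; // catch up, e.g. install V2
            return true;
        }
        return false; // ignore older or identical views
    }
}
```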
Bug fixes
Critical: in rare cases the digests could be computed incorrectly, leading to more message buffering than necessary.
Critical: message bundling (in TP) changed the destination address, so when unicast messages had to be retransmitted, the receiver would drop them because dest=null. This would cause UNICAST to stop delivering messages, which would then accumulate forever! This happened only in very rare cases under high sustained throughput (e.g. 20 million messages sent at the highest possible speed). Workaround: set enable_bundling="false" in UDP.
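In an XML protocol stack configuration the workaround could look as follows (the multicast address and port are illustrative, and this is only a fragment of a UDP element, not a complete configuration):

```xml
<UDP mcast_addr="228.8.8.8"
     mcast_port="45566"
     enable_bundling="false" />
```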
Many smaller bug fixes.