This Plexus Message Broker solution features WebSphere® MQ connectivity, bursts of 500 transactions per second, local and geographic redundancy, message routing, and message translation. The site and Plexus Message Broker configurations described below show how these features are realized.
As illustrated below, the primary site has three physical servers, each running two instances of the Plexus Message Broker. A load balancer distributes message traffic across all broker instances. For redundancy, two queue managers handle the message traffic to the IBM system. A similar configuration is maintained at the backup site.
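The distribution step can be sketched as a minimal round-robin balancer. The instance names and the round-robin policy are assumptions for illustration; the text does not state which algorithm the Load Balancer actually uses.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Minimal sketch of distributing traffic across all Plexus
    Message Broker instances (three servers x two instances each)."""

    def __init__(self, instances):
        self._cycle = cycle(list(instances))

    def next_instance(self):
        """Return the broker instance that should take the next message."""
        return next(self._cycle)
```

Each call to `next_instance` returns the next broker instance in turn, wrapping around after the last one.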
Steps have been taken at this site to support high message volume as well as local redundancy. A health check facility has been added in which the Plexus Message Broker tests the critical data paths in two respects: 1) it validates that the data path is clear all the way to the IBM application, and 2) it validates that the response time of the data path is within acceptable limits. If either test fails, the health check mechanism signals the Load Balancer to take the compromised data path out of commission.
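The two validations above can be sketched as follows. The latency threshold and the `send_probe` callable (which would send a test transaction to the IBM application and return its reply) are assumptions for illustration only.

```python
import time

LATENCY_LIMIT_SECONDS = 2.0  # assumed acceptable response time; site-specific in practice

def probe_data_path(send_probe):
    """Run both checks from the text: 1) the path is clear all the way
    to the IBM application (the probe gets a reply without error), and
    2) the round-trip time is within the acceptable limit."""
    start = time.monotonic()
    try:
        reply = send_probe()           # hypothetical end-to-end test transaction
    except Exception:
        return "NOT OK"                # path is not clear
    elapsed = time.monotonic() - start
    if reply is None or elapsed > LATENCY_LIMIT_SECONDS:
        return "NOT OK"                # no reply, or reply too slow
    return "OK"                        # Load Balancer keeps the path in service
```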
Plexus Message Broker Configuration
Due to the high volume of terminal traffic from the Linux servers, there are redundant Linux message servers (LinuxToIBMConverter), as illustrated below. Each Linux message server is paired with a HealthCheck Message Server that continually tests its data path. There are also separate message servers for inquiry and update transactions (UnixToIBMConverter). Finally, a separate message server (WMBToIBMConverter) handles WebSphere® Message Broker (WMB) traffic.
Plexus Message Server Functions
|LinuxToIBM||Receives a TCP connect request, establishes a TCP session, extracts the message from the TCP session, converts the character set, creates an MQ message containing the TCP message, and inserts the message into the appropriate MQ queue. Once the message is stored in the MQ queue, the LinuxToIBM Message Server waits for a message insert with the correct Message ID, picks up that response message, and sends the message back to the Linux server. The TCP session is then closed.|
|HealthCheck||There is one HealthCheck Message Server for each LinuxToIBM Message Server. Every 20 seconds, the Load Balancer sends a HealthCheck request to the HealthCheck Message Server. The HealthCheck Message Server analyzes the performance metrics of the associated LinuxToIBM Message Server and either responds to the Load Balancer with OK or Not OK, or does not respond at all. If the Load Balancer does not receive an OK response, the data path to the associated LinuxToIBM Message Server is disabled. The HealthCheck is also used by a separate automated monitoring facility, which alerts an Operations Center whenever the HealthCheck does not return OK or fails to respond.|
|UnixToIBM||Removes an MQ message from the input queue, converts the message into the IBM format while retaining the message origination information, inserts the message into the IBM MQ queue, and waits on the IBM MQ queue for a message with that correlation ID. Once that message is received, the UnixToIBM Message Server converts the message into Unix format, creates an MQ message containing the output message, and inserts the message into the appropriate application queue.|
|WMBToIBM||Converts a WMB message format to a proprietary internal message format.|
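The LinuxToIBM cycle can be sketched with in-memory queues standing in for the WebSphere MQ queues. The `cp037` EBCDIC code page and the echoing IBM side are assumptions for illustration, not part of the described configuration.

```python
import queue
import uuid

request_queue = queue.Queue()   # stand-in for the outbound MQ queue
reply_queues = {}               # Message ID -> per-request reply queue

def linux_to_ibm(payload: bytes, timeout: float = 5.0) -> bytes:
    """One LinuxToIBM cycle: convert the character set (ASCII to
    EBCDIC cp037 here), wrap the TCP payload in a message tagged with
    a Message ID, enqueue it, then block until the reply carrying the
    same Message ID arrives and convert it back."""
    msg_id = uuid.uuid4().hex
    reply_q = reply_queues.setdefault(msg_id, queue.Queue(maxsize=1))
    request_queue.put({"msg_id": msg_id,
                       "body": payload.decode("ascii").encode("cp037")})
    reply = reply_q.get(timeout=timeout)      # wait on the correct Message ID
    del reply_queues[msg_id]
    return reply["body"].decode("cp037").encode("ascii")

def ibm_side_echo():
    """Hypothetical IBM application for the sketch: uppercases one
    request and replies under the same Message ID."""
    msg = request_queue.get()
    text = msg["body"].decode("cp037").upper().encode("cp037")
    reply_queues[msg["msg_id"]].put({"msg_id": msg["msg_id"], "body": text})
```

In the real server the wait is a blocking MQ get by Message ID; the per-request reply queue here plays that role.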
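A HealthCheck responder can be sketched as below. The sample window and the one-second average-latency threshold are assumptions, since the text only says the server "analyzes the performance metrics" of its paired LinuxToIBM Message Server.

```python
from collections import deque

class HealthCheckServer:
    """Answers the Load Balancer's 20-second poll for one LinuxToIBM
    Message Server based on its recent round-trip times."""

    WINDOW = 50              # assumed number of recent round trips kept
    MAX_AVG_SECONDS = 1.0    # assumed acceptable mean response time

    def __init__(self):
        self.samples = deque(maxlen=self.WINDOW)

    def record_round_trip(self, seconds):
        """Called by the paired LinuxToIBM server after each transaction."""
        self.samples.append(seconds)

    def poll(self):
        """Return the answer sent back to the Load Balancer."""
        if not self.samples:
            return "OK"      # no traffic yet, nothing to flag
        average = sum(self.samples) / len(self.samples)
        return "OK" if average <= self.MAX_AVG_SECONDS else "NOT OK"
```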
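The UnixToIBM wait on a correlation ID can be sketched as a selective get. The requeue-on-mismatch loop below stands in for MQ's selective get by correlation ID and is an assumption of this sketch.

```python
import queue
import time

def get_by_correlation_id(reply_queue, correl_id, timeout=5.0):
    """Wait on a shared reply queue for the message whose correlation
    ID matches; messages for other requests are put back so another
    UnixToIBM instance can pick them up."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            msg = reply_queue.get(timeout=0.1)
        except queue.Empty:
            continue
        if msg["correl_id"] == correl_id:
            return msg
        reply_queue.put(msg)   # not ours: requeue for its real consumer
    raise TimeoutError(f"no reply for correlation ID {correl_id}")
```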