Advanced Configuration
The default settings of the Data Hub allow for very high throughput while balancing system resources (CPU, memory, disk, network) on most hosts, both during normal delivery and during recovery scenarios (e.g. after a destination that was offline comes back online). Changing these settings can have unexpected consequences, so we recommend leaving these settings at their default values unless you first consult with PreEmptive support about any issues you are having.
The most common scenarios where you might need to make changes are for a slow network connection or for a destination that is getting overwhelmed when it comes online after an outage.
For performance considerations before installing, see High-volume Performance Considerations.
The Dispatch Service operates in a highly-concurrent manner, using a combination of .NET threads and .NET tasks. At the application level, these concurrent activities are rolled up into a single concept (in e.g. the Data Hub configuration) called tasks. There can be many concurrent tasks, each responsible for reading a batch of messages from a source queue and dispatching them appropriately.
There are two types of tasks:

- Endpoint tasks, which read batches of new messages from the endpoint queue and dispatch them to all relevant destinations. The number of these tasks is limited by the MaxEndpointTasks setting.
- Destination tasks, which deliver previously-queued messages to a single destination (for example, during recovery after an outage). The number of these tasks per destination is limited by the MaxTasksPerDestination setting.
At any given time, each task holds at most one outgoing HTTP connection to a particular destination.
While a task is processing, its messages are not actually removed from the source queue. Instead, they are reserved by the task and marked unacknowledged in RabbitMQ. When the task has processed all of its messages for all destinations, these messages are removed from their source queue. If the Dispatch Service is stopped or interrupted during processing, the set of messages will go back onto their source queue, preventing data loss.
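The reserve-then-acknowledge lifecycle described above can be sketched with a toy in-memory queue. This is not RabbitMQ's actual API (nor the Dispatch Service's implementation); it only illustrates the message lifecycle: reserved while a task processes, removed on acknowledgement, and returned to the queue if the task is interrupted.

```python
from collections import deque

class SourceQueue:
    """Toy model of the reserve/acknowledge pattern (illustration only)."""

    def __init__(self, messages):
        self._ready = deque(messages)
        self._unacked = {}      # delivery tag -> reserved message
        self._next_tag = 0

    def reserve(self, batch_size):
        """A task reserves up to batch_size messages; they are marked
        unacknowledged rather than removed from the queue."""
        batch = []
        while self._ready and len(batch) < batch_size:
            tag, self._next_tag = self._next_tag, self._next_tag + 1
            msg = self._ready.popleft()
            self._unacked[tag] = msg
            batch.append((tag, msg))
        return batch

    def ack(self, tag):
        """Called once a message has been processed for all destinations;
        only now is it actually removed."""
        del self._unacked[tag]

    def requeue_all(self):
        """Simulates the service stopping mid-task: all reserved messages
        go back onto the queue, so none are lost."""
        for tag in sorted(self._unacked):
            self._ready.append(self._unacked.pop(tag))
```

For example, reserving two messages, acknowledging one, and then stopping mid-task leaves the acknowledged message gone and the other back on the queue.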
The number of tasks in use at a time is limited to prevent resource overuse and contention, as is the number of messages that each task can process. These limits can be configured to tune throughput and concurrency.
The Data Hub operates most efficiently when all (durable) destinations are accepting all messages, so that no messages are being queued. In this state, the Data Hub should keep the endpoint queue near-empty, and CPU, memory, disk, and network overhead should be small.
A large number of error responses, or an offline destination, will affect performance upon recovery: in addition to new incoming messages, the Dispatch Service (and the destination) must handle the backlog of previously-queued messages. A similar issue occurs if the Dispatch Service is itself stopped for an extended period, as the endpoint queue will need to be cleared when the service resumes. (These scenarios also lead to greater disk usage.) In these scenarios, the Data Hub will attempt to deliver the backlog as fast as it possibly can, typically throttled by CPU or network bandwidth until the recovery is complete.
Each new message, when it is first processed by the Dispatch Service, must be evaluated to determine which destination(s) it will be dispatched to. Thus, high levels of complexity in include and exclude criteria may cause high CPU use.
Destination latency can impact performance: a connection attempt continues until either a response is received or the Delivery Timeout elapses, at which point the destination is marked offline. Because all messages in a task must be processed for all relevant destinations before the task is complete, and because endpoint tasks are not destination-specific, a single high-latency destination can slow the throughput of new messages to other destinations.
Large message sizes can also impact performance and increase memory usage by IIS, the Dispatch Service, and RabbitMQ. Overall memory usage by the Data Hub can be controlled by adjusting the Dispatch Service Concurrency Settings.
In addition to performance considerations on the Data Hub, also consider the impact on destinations. Because the Data Hub performs no processing on the actual bodies of messages, it may dispatch messages faster than a destination (or the connection to the destination) can handle. If the destination either begins refusing connections or does not respond within the Delivery Timeout, it will be marked offline. While this alleviates pressure on the destination temporarily, upon a successful retry the destination must handle both new incoming messages and the accrued backlog.
Specific concurrency and delivery settings can be configured for the Dispatch Service in the <appSettings> section of [Application folder]\DispatchService\HubDispatchService.exe.config.
You must restart the Dispatch Service for the settings to take effect.
The settings are as follows:

- DeliveryTimeout (see also settings.SecondsBeforeDispatch in Dispatch.config)
- MaxMessagesPerTask
- MaxEndpointTasks
- MaxTasksPerDestination
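Assembled into the <appSettings> section, these settings might look like the following sketch. The key names are taken from the list above; the values shown are illustrative placeholders, not necessarily the shipped defaults.

```xml
<configuration>
  <appSettings>
    <!-- Values below are illustrative placeholders only. -->
    <add key="DeliveryTimeout" value="60" />
    <add key="MaxMessagesPerTask" value="100" />
    <add key="MaxEndpointTasks" value="16" />
    <add key="MaxTasksPerDestination" value="8" />
  </appSettings>
</configuration>
```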
The maximum number of outgoing requests to a given destination (from the Dispatch Service) can be configured. Higher values for this setting may increase performance, but can also lead to higher load on destinations. Lowering this value might help prevent timeouts from destinations that are being overwhelmed after coming back online.
To change this value:

1. Open [Application folder]\DispatchService\HubDispatchService.exe.config.
2. In the <system.net> section, locate the <connectionManagement> subsection.
3. Adjust the <add address="*"> tag's maxconnection attribute appropriately.

The default value, 24, was chosen based on the total number of connections that would be needed (in the worst case), by summing the default MaxEndpointTasks and MaxTasksPerDestination values. This is because each task uses, at most, one outgoing connection to a particular destination at a time. To prevent resource contention, we recommend keeping this pattern as you adjust any of these three values:

maxconnection = MaxEndpointTasks + MaxTasksPerDestination
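This relationship can be expressed directly in the configuration file. The fragment below shows the <connectionManagement> subsection of <system.net> with the default value of 24 (which corresponds, for example, to a MaxEndpointTasks of 16 plus a MaxTasksPerDestination of 8; the split shown is illustrative):

```xml
<system.net>
  <connectionManagement>
    <!-- Keep maxconnection = MaxEndpointTasks + MaxTasksPerDestination:
         each task holds at most one connection to a given destination. -->
    <add address="*" maxconnection="24" />
  </connectionManagement>
</system.net>
```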
To balance a large load of incoming messages, multiple Data Hub instances can be configured to accept requests in a round-robin fashion using third-party load balancers. However, there are some considerations to address when doing so:
Data Hub User Guide Version 1.3.0. Copyright © 2014 PreEmptive Solutions, LLC