Performance & Scalability
This guide covers strategies for optimizing OpcSharp when managing hundreds or thousands of subscriptions and monitored items with high-frequency data updates.
Key Principle: Fewer Subscriptions, More Monitored Items
The most impactful optimization is to minimize the number of subscriptions and instead group monitored items into shared subscriptions. Each subscription maintains its own publish cycle — fewer subscriptions means fewer round-trips and less protocol overhead.
```csharp
// BAD: one subscription per variable (1,000 subscriptions = 1,000 publish cycles)
foreach (var nodeId in nodeIds)
{
    var sub = await client.CreateSubscriptionAsync(publishingInterval: 1000);
    await client.CreateMonitoredItemsAsync(sub.SubscriptionId, new[]
    {
        new MonitoredItemCreateRequest
        {
            ItemToMonitor = new ReadValueId { NodeId = nodeId, AttributeId = AttributeIds.Value },
            MonitoringMode = MonitoringMode.Reporting,
            RequestedParameters = new MonitoringParameters { SamplingInterval = 500, QueueSize = 10, DiscardOldest = true }
        }
    });
}
```
```csharp
// GOOD: one subscription, many monitored items (1 publish cycle for all 1,000 items)
var sub = await client.CreateSubscriptionAsync(publishingInterval: 1000);
var items = nodeIds.Select(id => new MonitoredItemCreateRequest
{
    ItemToMonitor = new ReadValueId { NodeId = id, AttributeId = AttributeIds.Value },
    MonitoringMode = MonitoringMode.Reporting,
    RequestedParameters = new MonitoringParameters
    {
        SamplingInterval = 500,
        QueueSize = 10,
        DiscardOldest = true
    }
}).ToArray();
await client.CreateMonitoredItemsAsync(sub.SubscriptionId, items);
```

When to Use Multiple Subscriptions
Use separate subscriptions when items have different update rates or priorities:
```csharp
// Fast subscription for critical process variables (100ms updates)
var fastSub = await client.CreateSubscriptionAsync(
    publishingInterval: 100,
    priority: 200);

// Slow subscription for diagnostic/status values (10s updates)
var slowSub = await client.CreateSubscriptionAsync(
    publishingInterval: 10_000,
    priority: 50);

// Batch-create all items per subscription
await client.CreateMonitoredItemsAsync(fastSub.SubscriptionId, criticalItems);
await client.CreateMonitoredItemsAsync(slowSub.SubscriptionId, diagnosticItems);
```

Batch All Operations
All OpcSharp service methods accept arrays. Always batch creates, modifications, and deletes rather than calling them one at a time.
```csharp
// BAD: N round-trips
foreach (var item in itemsToDelete)
    await client.DeleteMonitoredItemsAsync(subId, new[] { item });

// GOOD: 1 round-trip
await client.DeleteMonitoredItemsAsync(subId, itemsToDelete);
```

This applies equally to `CreateMonitoredItemsAsync`, `ModifyMonitoredItemsAsync`, `DeleteMonitoredItemsAsync`, `DeleteSubscriptionsAsync`, `ReadAsync`, `WriteAsync`, `BrowseAsync`, and `CallAsync`.
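The same pattern applies to reads. As a sketch only — this assumes `ReadAsync` accepts an array of `ReadValueId`, mirroring the monitored-item calls above; check the OpcSharp API reference for the exact signature:

```csharp
// One batched read for many nodes instead of N single-node round-trips.
// Sketch — the precise ReadAsync signature is an assumption.
var readIds = nodeIds.Select(id => new ReadValueId
{
    NodeId = id,
    AttributeId = AttributeIds.Value
}).ToArray();

var values = await client.ReadAsync(readIds); // single round-trip
```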
Tuning Subscription Parameters
Publishing Interval
The publishing interval controls how often the server bundles notifications and sends them to the client. The server may revise your requested interval upward.
| Scenario | Suggested Interval |
|---|---|
| Real-time HMI / process control | 100–500 ms |
| Trending / data logging | 1,000–5,000 ms |
| Status monitoring / dashboards | 5,000–60,000 ms |
```csharp
var sub = await client.CreateSubscriptionAsync(
    publishingInterval: 500,  // requested — server may revise
    lifetimeCount: 600,       // subscription expires after 600 × interval with no publish
    maxKeepAliveCount: 20);   // server sends empty notification after 20 × interval of silence

// Check the server's revised values
Console.WriteLine($"Revised interval: {sub.RevisedPublishingInterval} ms");
```

Lifetime and KeepAlive Counts
- `lifetimeCount` — the subscription expires (server-side) if no Publish request arrives within `lifetimeCount × publishingInterval`. Set higher for slow intervals to avoid premature expiry.
- `maxKeepAliveCount` — the server sends an empty notification after `maxKeepAliveCount × publishingInterval` of silence, confirming the subscription is alive.
Rule of thumb: lifetimeCount should be at least 3× maxKeepAliveCount.
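To pick concrete values, work backward from how long the subscription should survive without a Publish request. A minimal sketch of that arithmetic (the numbers are illustrative, not library defaults):

```csharp
using System;

// Illustrative sizing: keep a 5-second-interval subscription alive
// through at least 60 seconds of publish silence.
double publishingIntervalMs = 5_000;
double desiredLifetimeMs = 60_000;

// Lifetime: enough publishing cycles to cover the survival window.
uint lifetimeCount = (uint)Math.Ceiling(desiredLifetimeMs / publishingIntervalMs);

// Keep-alive: at most a third of the lifetime, per the rule of thumb above.
uint maxKeepAliveCount = Math.Max(1, lifetimeCount / 3);

Console.WriteLine($"lifetimeCount={lifetimeCount}, maxKeepAliveCount={maxKeepAliveCount}");
```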
MaxNotificationsPerPublish
Controls how many data change notifications the server includes in a single Publish response. Use this to bound memory usage per publish cycle:
```csharp
var sub = await client.CreateSubscriptionAsync(
    publishingInterval: 1000,
    maxNotificationsPerPublish: 500); // cap at 500 notifications per response
```

When the cap is reached, the server sets `MoreNotifications = true` and the client automatically sends another Publish request to drain the remaining notifications.
Set to 0 (default) for no limit.
Tuning Monitored Item Parameters
Sampling Interval
The sampling interval controls how often the server checks the data source for changes. It is independent of the publishing interval — the server samples at this rate and queues changes, then delivers them at the publishing interval.
```csharp
new MonitoringParameters
{
    SamplingInterval = 100, // server checks every 100ms
    QueueSize = 10,         // buffer up to 10 values between publishes
    DiscardOldest = true    // drop oldest when queue is full
}
```

- Setting `SamplingInterval = -1` tells the server to use the subscription’s publishing interval.
- Setting `SamplingInterval = 0` requests the server’s fastest supported rate.
Queue Size and Discard Policy
For high-frequency data where the sampling interval is faster than the publishing interval, the queue buffers values between publish cycles:
| Scenario | QueueSize | DiscardOldest |
|---|---|---|
| Latest value only (HMI) | 1 | true |
| Full history capture | 50–100 | true |
| Alert/event — never drop | 50–100 | false |
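A useful sizing check for the table above: to deliver every sample between publishes, the queue must hold at least `publishingInterval / samplingInterval` values. A quick sketch of the arithmetic (values are illustrative):

```csharp
using System;

// Illustrative: sample every 50 ms, publish every 1,000 ms.
double samplingIntervalMs = 50;
double publishingIntervalMs = 1_000;

// Values produced per publish cycle — the minimum queue depth
// needed so that no sample is discarded between publishes.
uint minQueueSize = (uint)Math.Ceiling(publishingIntervalMs / samplingIntervalMs);

Console.WriteLine($"QueueSize should be >= {minQueueSize}");
```

This matches the high-frequency capture example below, which pairs a 50 ms sampling interval and 1 s publishing interval with `QueueSize = 20`.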
```csharp
// High-frequency capture: sample at 50ms, publish at 1s, buffer 20 values
new MonitoringParameters
{
    SamplingInterval = 50,
    QueueSize = 20,
    DiscardOldest = true
}
```

Modifying Items at Runtime
Adjust sampling rates without recreating monitored items:
```csharp
// Slow down sampling during off-peak hours
var results = await client.ModifyMonitoredItemsAsync(
    sub.SubscriptionId,
    new[]
    {
        new MonitoredItemModifyRequest
        {
            MonitoredItemId = itemId,
            RequestedParameters = new MonitoringParameters
            {
                SamplingInterval = 5000, // was 100ms, now 5s
                QueueSize = 1
            }
        }
    });
```

Efficient DataChanged Handling
The DataChanged event fires on the Publish response processing path. Long-running handlers delay the next Publish request and can cause notification queue overflow on the server.
```csharp
// BAD: blocking handler delays publish cycle
client.DataChanged += (sender, e) =>
{
    SaveToDatabase(e); // slow I/O blocks the publish loop
};

// GOOD: offload to a background channel
var channel = Channel.CreateBounded<DataChangeEventArgs>(
    new BoundedChannelOptions(10_000)
    {
        FullMode = BoundedChannelFullMode.DropOldest,
        SingleReader = true,
        SingleWriter = true
    });

client.DataChanged += (sender, e) =>
{
    channel.Writer.TryWrite(e); // non-blocking enqueue
};

// Background consumer
_ = Task.Run(async () =>
{
    await foreach (var item in channel.Reader.ReadAllAsync())
    {
        await ProcessDataChangeAsync(item);
    }
});
```

Transport Buffer Tuning
OpcSharp negotiates TCP buffer sizes with the server during the Hello/ACK handshake. For high-throughput scenarios with large Publish responses, increase the buffer sizes by configuring a custom transport. The defaults (64 KB) work well for most cases but may limit throughput when the server sends large notification batches.
Default transport settings:
| Setting | Default | Description |
|---|---|---|
| `ReceiveBufferSize` | 65,535 bytes | TCP receive buffer per message chunk |
| `SendBufferSize` | 65,535 bytes | TCP send buffer per message chunk |
| `MaxMessageSize` | 0 (unlimited) | Maximum reassembled message size |
| `MaxChunkCount` | 0 (unlimited) | Maximum chunks per message |
| `ConnectTimeout` | 30 seconds | TCP connection timeout |
Messages larger than SendBufferSize are automatically split into chunks by the message chunker and reassembled on receive.
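As a sketch of what the configuration might look like — `WithTransportOptions` and the `TransportOptions` type are assumed names for illustration, not confirmed OpcSharp API; consult the library's transport documentation for the real surface:

```csharp
// Hypothetical sketch: enlarge the receive buffer for large Publish
// responses. Builder method and options type are assumptions.
var client = new OpcSharpClientBuilder()
    .WithEndpoint("opc.tcp://server:4840")
    .WithTransportOptions(new TransportOptions
    {
        ReceiveBufferSize = 1_048_576, // 1 MB for large notification batches
        SendBufferSize = 65_535,       // default is usually fine for requests
        MaxMessageSize = 0,            // unlimited reassembled message size
        MaxChunkCount = 0              // unlimited chunks per message
    })
    .Build();
```

Note that the negotiated sizes are still capped by whatever the server offers during the Hello/ACK handshake.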
Multi-Client Scaling
OpcSharp uses a single session per client model. For extremely large deployments, partition your workload across multiple client instances:
```csharp
// Partition nodes across clients to stay within server session limits
var clients = new List<IOpcSharpClient>();
var partitions = nodeIds.Chunk(5000); // 5,000 items per client

foreach (var partition in partitions)
{
    var client = new OpcSharpClientBuilder()
        .WithEndpoint("opc.tcp://server:4840")
        .WithSessionName($"Worker-{clients.Count}")
        .WithAutoAcceptUntrustedCertificates(true)
        .Build();
    await client.ConnectAsync();

    var sub = await client.CreateSubscriptionAsync(publishingInterval: 1000);
    await client.CreateMonitoredItemsAsync(sub.SubscriptionId,
        partition.Select(id => new MonitoredItemCreateRequest
        {
            ItemToMonitor = new ReadValueId { NodeId = id, AttributeId = AttributeIds.Value },
            MonitoringMode = MonitoringMode.Reporting,
            RequestedParameters = new MonitoringParameters
            {
                SamplingInterval = 500,
                QueueSize = 10,
                DiscardOldest = true
            }
        }).ToArray());

    clients.Add(client);
}
```

Be aware of the server’s `MaxSessions` limit — check with `GetEndpointsAsync()` or server documentation.
Logging for Performance Diagnostics
Enable structured logging to identify bottlenecks:
```csharp
using Microsoft.Extensions.Logging;

var loggerFactory = LoggerFactory.Create(builder =>
{
    builder
        .SetMinimumLevel(LogLevel.Debug)       // Debug shows token renewal timing
        .AddFilter("OpcSharp", LogLevel.Trace) // Trace shows keepalive details
        .AddConsole();
});

var client = new OpcSharpClientBuilder()
    .WithEndpoint("opc.tcp://server:4840")
    .WithLogger(loggerFactory.CreateLogger("OpcSharp"))
    .Build();
```

Key log events to watch:
| Level | Event | What It Means |
|---|---|---|
| Trace | Keepalive success | Server responding normally |
| Warning | Keepalive failure | Possible connection degradation |
| Debug | Token renewal scheduled | Renewal timing and jitter values |
| Information | Reconnect phase | Session recovery in progress |
Quick Reference: Optimization Checklist
| Strategy | Impact | Effort |
|---|---|---|
| Group items into fewer subscriptions | High | Low |
| Batch create/modify/delete calls | High | Low |
| Use appropriate publishing intervals per tier | High | Low |
| Offload DataChanged handlers to background channel | High | Medium |
| Tune QueueSize and DiscardOldest per item | Medium | Low |
| Set MaxNotificationsPerPublish to cap memory | Medium | Low |
| Use SamplingInterval = -1 to match publishing interval | Medium | Low |
| Partition across multiple clients for 10K+ items | Medium | Medium |
| Increase transport buffer sizes for large payloads | Low | Low |
| Enable structured logging for diagnostics | Low | Low |