Banks Taking Holistic Approach to Finding Weak Links Within a Trade's Lifecycle
Finding the weakest links within a trade’s lifecycle requires a holistic approach to latency monitoring, one that focuses on infrastructure bottlenecks, capacity constraints, outdated application code and incompatible inter-application messaging systems, according to a new report by Tabb Group.
Will Rhode, research analyst in London and author of the report, ‘Holistic Latency Monitoring: Finding the Chain’s Weakest Link’, said that the internal decision-making process has become the fat end of the latency wedge. “Latency is a challenge comprised of inter-related parts, which, when addressed together by banks, deliver success in excess of their sum. As a result, more revolutionary, holistic approaches allowing for greater efficiencies and cross-fertilisation of latency reduction skill sets are now emerging,” he said.
Advances in reducing latency across external networks have cleared the path for banks and trading firms to think more clearly about the more complex challenge of internal processing latency, prompting the adoption of new, more holistic approaches to latency reduction that tackle the internal compute challenge. Specifically, said Rhode, internal compute latency is a multi-faceted challenge covering infrastructure capacity and distribution; quality of data and absolute latency; application performance and coding latency; and inter-application messaging and data translation delay.
Many prevalent latency monitoring solutions offer ‘out of band’ techniques, such as passive hardware clocks or packet-capture tools, that measure application-layer latency non-invasively via the network. “But application log files are also a useful tool for measuring internal processing latency. While it may seem like an outdated approach, log files have a key advantage over other, more modern measurement techniques – namely, they are readily available and free,” said Rhode.
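The log-file approach Rhode describes can be illustrated with a minimal sketch. The log format, event names and field layout below are hypothetical, assumed purely for illustration; the idea is simply to pair timestamped entry and exit events for each order and compute the processing delay between them.

```python
import re
from datetime import datetime

# Hypothetical log format, assumed for illustration:
#   2013-05-01 09:30:00.123456 ORDER_RECEIVED id=42
#   2013-05-01 09:30:00.124912 ORDER_SENT id=42
LINE_RE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{6}) "
    r"(?P<event>ORDER_RECEIVED|ORDER_SENT) id=(?P<id>\d+)"
)

def latencies_from_log(lines):
    """Pair RECEIVED/SENT events per order id; return latency in microseconds."""
    received = {}
    results = {}
    for line in lines:
        m = LINE_RE.match(line)
        if not m:
            continue  # ignore unrelated log lines
        ts = datetime.strptime(m.group("ts"), "%Y-%m-%d %H:%M:%S.%f")
        oid = m.group("id")
        if m.group("event") == "ORDER_RECEIVED":
            received[oid] = ts
        elif oid in received:
            delta = ts - received.pop(oid)
            # exact integer microseconds, avoiding float rounding
            results[oid] = (delta.days * 86_400_000_000
                            + delta.seconds * 1_000_000
                            + delta.microseconds)
    return results
```

In practice the parsing would have to cope with many divergent formats across applications, which is exactly the scalability problem Rhode flags below.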
Once log files are aligned with server loads, it becomes possible to see how applications weigh on the infrastructure according to load, and to statistically model the impact that future business volume growth will have on the overall architecture, so that capacity constraints can be addressed before they occur. Rhode cautioned, however, that “log data files vary, tend to be spread over a wide area and attempts to mine this vast, complex resource pool need a hugely scalable analytical system that can bleed key performance indicators from the raft of log files efficiently.” He said the benefits to be gained include the cross-fertilisation of skill sets between asset classes and trading firms with mutual low-latency interests.
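The statistical modelling step described above can be sketched as follows. The observations and the linear fit are hypothetical, chosen for illustration only: real latency/load curves are rarely linear, especially near saturation, so this is a sketch of the idea, not a capacity-planning method endorsed by the report.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

# Hypothetical hourly observations: (messages/sec, median latency in microseconds)
volume = [1000, 2000, 3000, 4000]
latency = [200, 260, 320, 380]

a, b = fit_line(volume, latency)

# Project latency at double today's peak volume to spot a future constraint
projected = a + b * 8000
```

The same fit, run per application against its own log-derived latencies, is what lets an architect see which component will throttle growth first.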
The new holistic approach to latency reduction is also leading to the breakdown of internal operations divisional silos; new partnerships with tech-savvy buy-side firms for new application build-outs; a greater level of co-operation between competing third-party latency solution providers; and the introduction of standard terminology for latency measurement. “More revolutionary approaches that allow for greater efficiencies and the cross-fertilisation of latency reduction skill sets are expected to emerge as the shortcomings of piecemeal approaches to latency reduction become further exposed,” said Rhode.
Advances in reducing latency across the external network, such as propagation and transmission delay, have given firms the line of sight to ask new and more important questions: what will increasing speeds do to overall system performance, and how should the business build out its infrastructure, new application development and connectivity to avoid growth-throttling bottlenecks? According to Rhode: “The answers lie within.”