
Complex Event Processing for Financial Applications

Event processing has helped companies identify and react to situations quickly and effectively, and many solutions exist to monitor events occurring both within an enterprise and outside it. However, some situations still require manual effort and intelligence to identify and act upon.

Such situations could include:

  • The price for a stock may come from multiple sources, and a client may want to automate the process of selecting the exchange that offers the best price.
  • Accounting processes must be checked for compliance with government regulations.
  • Trade orders can be split or aggregated, based on market conditions, in order to maximise gains.
  • A single price update for a stock from the stock market is a routine event, but updates over a span of time may represent market sentiment for that stock, so orders can be placed in different directions based on the market movement.
  • A quote for CHF/JPY can be calculated from EUR/CHF, EUR/USD and USD/JPY every time an update is received for any of these pairs, using the last update received for the others; an order is then placed after comparing the spread against the current resistance and support thresholds for CHF/JPY (a minimal sketch of this cross-rate calculation follows the list).
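
To make the last scenario concrete, the following is a minimal Python sketch, with hypothetical names, of triangulating a CHF/JPY quote from the last cached quotes of the other pairs. The formula follows from the pair definitions: CHF/JPY = (EUR/USD × USD/JPY) / EUR/CHF.

    # Minimal sketch (hypothetical names): cache the last quote for each pair
    # and recompute the CHF/JPY cross rate on every incoming update.
    last_quote = {"EUR/CHF": None, "EUR/USD": None, "USD/JPY": None}

    def on_update(pair, rate):
        last_quote[pair] = rate
        if all(last_quote.values()):        # all three pairs seen at least once
            eur_jpy = last_quote["EUR/USD"] * last_quote["USD/JPY"]
            return eur_jpy / last_quote["EUR/CHF"]   # CHF/JPY cross rate
        return None                          # not enough data cached yet

    on_update("EUR/CHF", 0.93)
    on_update("EUR/USD", 1.08)
    print(on_update("USD/JPY", 150.0))       # -> approximately 174.19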

One basic attribute common to all the above situations is effective data management. With the advent of electronic communication, the amount of data shared in the financial world has grown exponentially, and this data needs to be analysed in order to leverage the information hidden within it.

The existing methodologies follow a conventional approach – storing all information in a data warehouse and then mining it – however, the nature of these problems is such that the earlier they are identified and solved, the lower the risk involved and the greater the profits.

What is Complex Event Processing?

An event represents a business change and therefore does not exist without the business. Every event needs to be interpreted so that subsequent business-related actions can be triggered. This interpretation and triggering of subsequent actions is known as simple event processing (SEP).

There can be scenarios where information is hidden across multiple events. This information is known as business intelligence (BI), and the identification of this intelligence and the initiation of subsequent actions is known as complex event processing (CEP).

Figure 1: The Level 0 Flow Diagram for CEP Solution

Source: Polaris

 

What is Different in CEP?

Typically, current solutions store data in a database management system (DBMS) and then fire queries across this data. CEP inverts this common design pattern: it first stores and indexes the queries/rules in an efficient structure and then streams data through those structures.
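
As an illustration, here is a minimal Python sketch (hypothetical names) of this inverted pattern: queries are registered up front and every arriving event is streamed through them, rather than being stored and queried later.

    # Minimal sketch (hypothetical names): register queries first, then stream
    # each incoming event through every registered query.
    registered_queries = []

    def register_query(name, predicate, action):
        # predicate: event -> bool; action: event -> None
        registered_queries.append((name, predicate, action))

    def on_event(event):
        for name, predicate, action in registered_queries:
            if predicate(event):
                action(event)

    register_query(
        "large-trade-alert",
        lambda e: e["type"] == "trade" and e["qty"] > 10_000,
        lambda e: print("ALERT:", e),
    )
    on_event({"type": "trade", "symbol": "XYZ", "qty": 25_000})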

This approach has the following advantages:

  • CEP helps to analyse information as soon as it is available.
  • Time and effort spent on database interaction and management is saved.
  • In CEP, most of the comparisons happen in memory, which makes it a real-time solution.
  • Dynamic maintenance of rules/queries helps the solution to scale dynamically and effectively.

CEP Agents

There are multiple methodologies, known as CEP agents, on which CEP solutions are based. The following diagram illustrates three of them.

Figure 2: CEP Agents

Source: Polaris

Stream query engine

Stream query engines are SQL-based solutions which, like a database, involve queries, data and results; the difference lies in the way these interact with each other.

These agents work in a three-step process:

  1. Data received as events is converted into streams and fed to a query engine.
  2. Then the query engine runs these streams across pre-stored queries.
  3. The results from running streams against the queries are transformed and fed to other applications.

Streams are analogous to tables in relational database management systems (RDBMS), and every record in a stream is similar to a record in a database table. The difference is that stream records are ordered, i.e. they carry a timestamp, whereas records in an RDBMS are not. Information received from external sources is converted into streams and then fed to the query engine.

Stream queries are continuous in nature and are generally configured with time as a dimension, i.e. the query runs repeatedly, each time only on the stream records received after the last run.

Example 1:
Find the maximum value of attribute A1 over the stream events received in the last minute. This query runs every minute and finds the maximum value of A1 across all stream records received since the last run.
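
A minimal Python sketch of Example 1 (hypothetical names; a real engine would schedule the query itself) might look like this:

    # Minimal sketch: a tumbling one-minute window that keeps only the records
    # received since the last run and discards them after computing max(A1).
    window = []  # records received since the last query run

    def on_record(record):
        window.append(record)       # called for every incoming stream record

    def run_query():                # invoked by a scheduler once a minute
        global window
        batch, window = window, []  # take the current window, then reset it
        if batch:
            return max(r["A1"] for r in batch)
        return None

    on_record({"A1": 101.5})
    on_record({"A1": 103.2})
    print(run_query())  # -> 103.2; the processed records are now discarded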

The result is the output received from firing queries on the stream of data. It is transformed into events and passed on to the next system in line.

Advantages of Stream Query Approach Over Conventional Approach

The stream query approach offers the following advantages:

  • Performance: an RDBMS loads and runs queries on the complete data set, discards irrelevant data and provides the results, whereas a stream query runs only on the new data. The engine holds the data from the last run until the current run; once the run is complete, the data is discarded. Because it holds only the data it needs, memory usage and disk I/O are minimised, giving better performance.
  • Concurrency: an RDBMS stores data in a central repository, and all applications that need data access the same copy. So if the RDBMS is writing data and some other application needs to update the same data, it may have to perform concurrency control. The stream query approach evades any requirement for concurrency control, as data coming from other applications is temporarily parked on an engine-specific queue and read only when the engine is ready to process it.
  • Extensibility: a modification or addition to a query or table may be time consuming in an RDBMS, as it needs a build and redeployment of the changed code base. Most stream query engine products, however, provide mechanisms to dynamically modify or add streams and stream queries, which take effect as soon as they are made. This allows the solution to react positively and quickly to changing business needs and, in turn, provides flexibility to the business.

Rules inference engine

CEP solutions based on an inference rules engine run predefined rules (representing business functionality) against incoming events (known as facts). Most of the rules engines in the market today are based on the Rete algorithm – an efficient pattern-matching algorithm designed by Dr Charles Forgy in 1979.

In the Rete algorithm, rules are a 'collection of pattern-matching constructs that are kept as nodes of directed acyclic graphs' (refer to Figure 3). Nodes can be shared across rules, provided they do not introduce any cycles. When a fact arrives, it passes through the nodes in the graph for a given rule; if the fact reaches the leaf node for a rule, it passes the rule. Once a rule is passed, further action can be taken – an event to an external system, or the application of some other set of rules to the same fact.
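
The following is a highly simplified Python sketch (hypothetical names) of facts flowing through shared pattern nodes towards a rule's leaf; a real Rete network additionally indexes partial matches, which this sketch omits.

    # Minimal sketch: each rule is a path of pattern nodes; a fact that passes
    # every node on the path has reached the rule's leaf, so the rule fires.
    is_trade   = lambda f: f["type"] == "trade"    # node shared by both rules
    is_large   = lambda f: f["qty"] > 10_000
    is_foreign = lambda f: f["venue"] != "NYSE"

    rules = {
        "large-trade":   [is_trade, is_large],     # path to leaf of rule 1
        "foreign-trade": [is_trade, is_foreign],   # reuses the is_trade node
    }

    def assert_fact(fact):
        for name, nodes in rules.items():
            if all(node(fact) for node in nodes):  # fact reached the leaf
                print("rule fired:", name, fact)

    assert_fact({"type": "trade", "qty": 25_000, "venue": "LSE"})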

 

Figure 3: Rule Defined by Nodes/Pattern

Source: Polaris

 

Figure 4: Nodes Having Working Memory

Source: Polaris

 

Generally each node has an associated working memory, which helps in problems where time is a dimension (see Figure 4).

Example 2:

Suppose a pattern must be matched against incoming facts continuously for ten minutes, and only if it is found consistently over that period does execution proceed to the subsequent patterns in the rule. In such a scenario, nodes can store the intermediate results in their working memory. Working memory is generally held in memory and brings a near real-time quality to pattern-matching solutions.
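
A minimal Python sketch (hypothetical names) of a node whose working memory tracks how long a pattern has held:

    # Minimal sketch: a node passes a fact onward only once its predicate has
    # held continuously for ten minutes; the streak start is its working memory.
    import time

    class TimedNode:
        WINDOW = 600.0                       # ten minutes, in seconds

        def __init__(self, predicate):
            self.predicate = predicate
            self.matched_since = None        # working memory: start of streak

        def accept(self, fact, now=None):
            if now is None:
                now = time.time()
            if not self.predicate(fact):
                self.matched_since = None    # streak broken; reset memory
                return False
            if self.matched_since is None:
                self.matched_since = now     # streak starts with this fact
            return now - self.matched_since >= self.WINDOW

    node = TimedNode(lambda f: f["spread"] < 0.05)
    node.accept({"spread": 0.03}, now=0.0)          # streak begins
    print(node.accept({"spread": 0.02}, now=650.0)) # True: held > 10 minutes
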
In most of the inference rules engines used in CEP, the matching constructs are if-then-else blocks, which are defined declaratively and loaded into the engine dynamically. Some engines even provide rule maintenance systems with interactive interfaces, through which business users can configure and maintain rules. These features allow an inference rules engine to change dynamically in quick response to changing business needs and, in turn, provide flexibility to the business.
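
Declarative rule definitions of this kind might, in a simplified Python sketch with hypothetical names, be represented as data that can be added or replaced while the engine is running:

    # Minimal sketch: a rule expressed declaratively as data rather than
    # compiled code, so it can be swapped at runtime without redeployment.
    rule_definitions = [
        {"name": "wide-spread",
         "if":   {"field": "spread", "op": ">", "value": 0.10},
         "then": "raise_alert"},
    ]

    operators = {">": lambda a, b: a > b, "<": lambda a, b: a < b}

    def evaluate(fact, rules):
        for rule in rules:
            cond = rule["if"]
            if operators[cond["op"]](fact[cond["field"]], cond["value"]):
                print(rule["then"], "triggered by", fact)

    evaluate({"spread": 0.15}, rule_definitions)   # -> raise_alert triggered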

Applications of CEP

CEP has found many applications in the financial world. The following are a few examples:

High frequency trading

In the stock market, traders generally follow market data and trade (buy/sell) based on their position against that data. High-frequency trading (HFT) aims to capture just a fraction of a penny per share or currency unit on every trade; traders move in and out of short-term positions several times each day.

With CEP, programmes take over the job of analysing market data. Not only do they analyse the market data, they also take trading decisions and exploit trading opportunities.

The following statistics from The New York Times reveal the great impact that the application of CEP has had on stock trading in the US:

  • NYSE trading volumes have increased by 164% since 2005 due to high-frequency trading.
  • In 2010, HFT accounted for 50-75% of the equity trades taking place in the US on a daily basis.

Figure 5: HFT Trends in NYSE

Source: Polaris

Smart order routing

In order to reduce market impact and to get the best prices, trade orders can be split or aggregated and placed at different market centres.

CEP solutions known as smart order routers (SOR) help traders achieve the above-mentioned goals. An SOR receives and caches information such as market liquidity from various venues, takes orders from multiple sources (OMSs, trading systems, ECNs, etc.), splits or aggregates them, and places orders across the venues based on the cached information and configured rules.
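
A minimal Python sketch (hypothetical venue names and figures) of the splitting step:

    # Minimal sketch: split a parent order across venues in proportion to
    # their cached liquidity; the last venue absorbs any rounding remainder.
    cached_liquidity = {"NYSE": 60_000, "BATS": 25_000, "ARCA": 15_000}

    def route_order(symbol, qty):
        total = sum(cached_liquidity.values())
        venues = list(cached_liquidity.items())
        child_orders, routed = [], 0
        for venue, liquidity in venues[:-1]:
            slice_qty = qty * liquidity // total   # proportional split
            child_orders.append((venue, symbol, slice_qty))
            routed += slice_qty
        child_orders.append((venues[-1][0], symbol, qty - routed))
        return child_orders

    print(route_order("XYZ", 10_000))
    # -> [('NYSE', 'XYZ', 6000), ('BATS', 'XYZ', 2500), ('ARCA', 'XYZ', 1500)]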

Risk management

Most current risk management applications provide post-trade risk analysis, but with CEP traders can perform pre-trade analysis as well.

CEP risk management solutions can sit on top of existing systems and augment their capability by virtually running a trade against historical data (covering various market conditions over a period of time) and providing statistics even before the trade is captured by the trading system. This allows traders to predict the impact of a trade on their portfolios under various market conditions before entering into it. Risk management is currently the second most common type of CEP implementation, behind trading.
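
A minimal Python sketch (hypothetical names and figures) of such a pre-trade check:

    # Minimal sketch: replay a candidate trade against historical return
    # scenarios and report the hypothetical P&L spread before placing it.
    historical_returns = [0.012, -0.034, 0.005, -0.051, 0.020]  # per scenario

    def pre_trade_stats(qty, price, scenarios):
        notional = qty * price
        pnls = [notional * r for r in scenarios]  # hypothetical P&L per scenario
        return {"worst": min(pnls), "best": max(pnls),
                "mean": sum(pnls) / len(pnls)}

    print(pre_trade_stats(1_000, 50.0, historical_returns))
    # -> {'worst': -2550.0, 'best': 1000.0, 'mean': -480.0}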

How Does CEP Fit Into Existing Infrastructure?

A CEP solution sits alongside the other solutions an enterprise runs and listens to the information the enterprise receives from external systems (exchanges, brokers, news agencies, price sources) or generates within its own systems. It consumes and processes this information and publishes the results, again as events, onto the event bus. These resultant events can be picked up by other systems – trading platforms, order management systems (OMS), risk management solutions – which initiate further actions.
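
A minimal Python sketch (hypothetical names; an in-process dictionary stands in for the enterprise event bus) of this consume-process-publish cycle:

    # Minimal sketch: the CEP engine subscribes to raw events, processes them,
    # and publishes derived events back to the bus for downstream systems.
    subscribers = {}   # topic -> list of callbacks; stands in for the bus

    def subscribe(topic, callback):
        subscribers.setdefault(topic, []).append(callback)

    def publish(topic, event):
        for callback in subscribers.get(topic, []):
            callback(event)

    def cep_engine(event):                    # consumes raw market data
        if event["bid"] == event["ask"]:      # trivial example condition
            publish("signals", {"symbol": event["symbol"],
                                "signal": "locked-market"})

    subscribe("market-data", cep_engine)
    subscribe("signals", lambda e: print("OMS received:", e))
    publish("market-data", {"symbol": "XYZ", "bid": 100.0, "ask": 100.0})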

The following diagram depicts how a CEP solution can fit into an existing infrastructure.

Figure 6: How a CEP Solution Can Fit Into An Existing Infrastructure

Source: Polaris

Vendors for CEP

Currently, almost every software vendor has a CEP solution. The following is a list of some of these vendors. Each solution is based on a different agent and hence needs to be evaluated against specific requirements.

  • Oracle Continuous Query Language (CQL) – part of the Oracle CEP solution.
  • StreamInsight – part of the Microsoft CEP solution.
  • StreamBase StreamSQL.
  • SQLstream.
  • Coral8.
  • Aleri.
  • Apama – Progress Software.
  • TIBCO CEP suite.
  • IBM CEP suite.

History and Times Ahead for CEP

The following is a synopsis of the history and future of CEP as seen by David Luckham.

Figure 7: CEP History and Future Trends

Source: Polaris

 

The first stage has been termed the 'early struggle for market traction', where, like any other initiative, CEP had to fight to establish a place on the IT horizon – be it through the dot-com implosion of 2001 or the struggle to create awareness among potential users such as stockbroking and other related financial players. During this stage, all CEP initiatives were either university research and development (R&D) projects or small startups founded by people who had identified and understood the strength of event processing. Most developers at this stage came from a database background, hence most of the early CEP solutions were stream query based.

The second stage has been termed 'creeping CEP', where people realise the potential of CEP and start incorporating it into their existing solutions. Another noticeable trend is the entrance of big vendors, which either buy small vendors or create their own solutions and use them as add-ons to their existing service-oriented architecture (SOA) based offerings. Though business activity monitoring (BAM) was introduced in the first stage, it started gaining prominence as a CEP solution around 2005.

We are currently in the third stage, during which CEP will become a key part of information technology and help solve complex use cases spread across various fields such as airlines, traffic control and data security. It will help us process, analyse and relate information so that we can identify various situations as they happen and react to them.

In the last stage, CEP will become holistic event processing, covering all areas where information exchange or processing is involved.
