3. The Industry Scenarios

Scenario 1: A measurement value from an equipment goes beyond its threshold.
Scenario 2: Measurement values from multiple equipment exceed their respective thresholds.
Scenario 3: At a given instant of time, 'n' out of 'm' equipment in a group are above their respective threshold values for a certain period of time.
Scenario 4: Equipment going down: absence of data for a certain period of time.
Scenario 5: Frozen equipment: in a realistic scenario, white noise around the mean data points is expected. Its absence, in other words a flat curve of values from the equipment, should raise an alarm.
Scenario 6: Bad equipment: data values not meeting the expected quality are an indicator of the equipment going bad.
Scenario 7: The rate of rise or fall of data points over a period of time is higher than expected.
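A few of the scenarios above can be sketched as simple predicate checks. This is an illustrative sketch only; the helper names and signatures are assumptions, not part of the system described in the paper.

```python
from statistics import pstdev

def exceeds_threshold(value, threshold):
    # Scenario 1: a single measurement goes beyond its threshold.
    return value > threshold

def n_of_m_exceed(values, thresholds, n):
    # Scenario 3: at least n of the m equipment are above their thresholds.
    return sum(v > t for v, t in zip(values, thresholds)) >= n

def is_frozen(values, tolerance=1e-9):
    # Scenario 5: a flat curve (no white noise around the mean) is suspect.
    return pstdev(values) <= tolerance
```

Scenarios 4, 6, and 7 would need a notion of time (absence of data, quality over a window, rate of change) and are omitted from this sketch.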
10. Dynamic Realization of Templates

Template: Measurement value M1 from equipment E1 goes beyond the threshold T1
Instance: Measurement value 'temperature' from equipment 'temperatureTransmitter' goes beyond the threshold '100'
Instance: Measurement value 'pH' from equipment 'waterAcidity' goes beyond the threshold '6.5'
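The template/instance idea above can be sketched minimally: one template with placeholder parameters (M1, E1, T1) realized into multiple concrete instances. The class names and fields are illustrative assumptions, not the paper's actual API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ComplexEventTemplate:
    # e.g. "Measurement M1 from equipment E1 goes beyond threshold T1"
    description: str

    def instantiate(self, measurement, equipment, threshold):
        # Realize the template by binding concrete parameter values.
        return ComplexEventInstance(self, measurement, equipment, threshold)

@dataclass(frozen=True)
class ComplexEventInstance:
    template: ComplexEventTemplate
    measurement: str
    equipment: str
    threshold: float

    def matches(self, value):
        return value > self.threshold

# Two instances realized dynamically from the same template:
template = ComplexEventTemplate(
    "Measurement M1 from equipment E1 goes beyond threshold T1")
temp_alert = template.instantiate("temperature", "temperatureTransmitter", 100)
ph_alert = template.instantiate("pH", "waterAcidity", 6.5)
```

Both instances share the single template definition; only the bound parameter values differ.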
11. As-is vs. Modified

The as-is approach to realizing business events was heavily dependent on IT, time consuming, and tedious. The notion of complex event templates and instances significantly limits IT intervention, simplifies business event realization, and allows dynamic life-cycle management of business events.
The chemical and petroleum industry faces many challenges that are economic, environmental, and technical or operations related. There are overwhelming amounts of complex data from instrumented equipment. Near real-time collaborative decision making, production costs, integrated operations, and improved asset management are some of the main focus areas in this industry. RFID tags and sensors help in gathering various data points from the equipment. Continuous monitoring of this equipment for performance and downtime is critical from an operations perspective. This involves continuously monitoring the three-phase flow of sediments (water, oil, and gas) retrieved from reservoirs, calculating the gas-to-oil ratio (GOR) and comparing the well potential with actual output, detecting wells that are not performing properly, computing flow rates at multiple choke valves, and identifying unacceptable drift during well performance monitoring.

Water is increasingly becoming a scarce resource with no substitute. It is ridden with many issues, mainly concerning ineffective usage, manual quality and quantity readings, and close linkage with energy and carbon management, as pumping water consumes power and generates greenhouse gases. In the water treatment facilities of the future, it will be imperative to apply advanced analytics to water quality in real time. Water quality analyzers collect many instrumented parameters such as turbidity, conductivity, pH, chlorine residual, pressure, temperature, ammonia, oxidation-reduction potential, and total organic carbon. It is desirable, for example, to monitor the pH variance and detect when it goes beyond acceptable thresholds. The input to such a system can come from, and the output can go to, advanced analytics systems, enterprise asset management systems, or real-time control systems.
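The pH-variance monitoring mentioned above could be sketched as a sliding-window check. This is a minimal illustration only; the window size, variance limit, and class name are assumptions, not values from any real water-treatment system.

```python
from collections import deque

class PhVarianceMonitor:
    """Raise an alert when pH variance over a sliding window is too high."""

    def __init__(self, window=5, max_variance=0.25):
        self.readings = deque(maxlen=window)  # keeps only the last N readings
        self.max_variance = max_variance

    def add(self, ph):
        # Record the reading and return True if the window's variance
        # exceeds the acceptable limit (i.e. an alert should be raised).
        self.readings.append(ph)
        n = len(self.readings)
        mean = sum(self.readings) / n
        variance = sum((x - mean) ** 2 for x in self.readings) / n
        return variance > self.max_variance
```

The same shape of check applies to the other analyzer parameters (turbidity, conductivity, chlorine residual, and so on).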
In the world of integrated operations, there is also a strong driver to bridge the gap between business and IT. The business would like to continuously monitor its operations and be able to change what it is looking at, in real time, with minimal or no intervention from IT. It is like looking through a kaleidoscope at different patterns by varying the mirror angle. Of course, it is not always a pretty picture that the patterns show. Sometimes the patterns reveal interesting trends that might help the business to cross-sell, which is especially true of product sales, retail, supply-chain, and similar businesses. At other times, the event patterns disclose potentially harmful trends, faulty equipment, or equipment about to go down. And at yet other times, they aid in identifying fraud or its likelihood. By observing these events, the business can control its inventory, change its manufacturing process, raise an asset maintenance order, alert key stakeholders, or take other appropriate action. The ability to control and vary what they are looking at not only saves IT services cost but lets the business take immediate advantage of opportunities and react to impending breakdowns and failures.

In the complex event scenarios identified above, we observed that the business was often interested in varying certain facets of the complex event scenario definition. Some wanted to monitor the pH of water against a threshold of 6.8, while others wanted to monitor it against a lower threshold. Some wanted a level 1 alert when the temperature of a transmitter reached a certain value, a level 2 alert when the temperature breached a second value, and the ability to vary these thresholds on a temporal or need basis.
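The two-level temperature alert just described, with business-adjustable thresholds, can be sketched as follows. The class and the threshold values are illustrative assumptions, not the paper's implementation.

```python
class MultiLevelAlert:
    """Classify a temperature reading against two adjustable thresholds."""

    def __init__(self, level1, level2):
        self.level1 = level1  # level 1 alert threshold
        self.level2 = level2  # level 2 (more severe) alert threshold

    def set_thresholds(self, level1, level2):
        # The business varies the thresholds on a temporal or need basis,
        # without redeploying anything.
        self.level1, self.level2 = level1, level2

    def classify(self, temperature):
        if temperature >= self.level2:
            return "level-2 alert"
        if temperature >= self.level1:
            return "level-1 alert"
        return "normal"
```

Changing the thresholds via `set_thresholds` reclassifies subsequent readings immediately, which mirrors the minimal-IT-intervention goal described above.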
A complex event definition is composed of a set of input events and a set of rules. The input event comes from an Event Source. Looking back at business event scenario 1 identified above, the measurement M1 from equipment E1 is the input event, the equipment being the event source. The rule is to check whether the measurement value exceeds the threshold. Similarly, we can identify the measurements and rules for the other business event scenarios. Alternately, we can say that the scenario parameters are of two types: Measurement type and Rule type. This distinction is quite useful, as it enables us to define a publication-subscription architecture for the measurement type parameters. When a new complex event instance is created and activated, we subscribe to the scenario parameter instances of the measurement type parameters. There is a corresponding publish model around the event source as well. Whenever the event source generates a measurement value, it is published for all the subscribing complex event instances to receive and act on. There is also an intermediate step of Event Adaptation. Event adaptation involves creating an event from the measurement value in the format expected by the underlying CEP runtime engine.

The other important blocks in the figure are the Event Information Management Services, which map to the CRUD operations around the complex event templates and instances. The Query Services allow querying for the complex event templates, instances, scenario parameters, parameter instances, and other artifacts. Event Orchestration Services map to the CEP tooling for creating complex event templates, and the Notification Services enable notification when a complex event is detected. There is an adaptation before the Event Sink to transform the complex event into the format expected by the sink; the event sink is a consumer of the business event. The Event Repository maps to the persistent store that we discussed.
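The publish-subscribe flow with the event adaptation step can be sketched minimally as below. All names here (the channel class, the dictionary event format) are illustrative assumptions, not the architecture's actual interfaces.

```python
from collections import defaultdict

class EventChannel:
    """Toy pub-sub channel: source publishes, instances subscribe."""

    def __init__(self):
        self.subscribers = defaultdict(list)  # measurement topic -> callbacks

    def subscribe(self, topic, callback):
        # Called when a complex event instance is created and activated.
        self.subscribers[topic].append(callback)

    def publish(self, topic, raw_value):
        # The event source generated a measurement value; adapt it and
        # deliver it to every subscribing complex event instance.
        event = self.adapt(topic, raw_value)
        for callback in self.subscribers[topic]:
            callback(event)

    @staticmethod
    def adapt(topic, raw_value):
        # Event Adaptation: wrap the raw measurement in the format
        # the underlying CEP runtime expects (assumed shape here).
        return {"measurement": topic, "value": raw_value}

channel = EventChannel()
alerts = []
# A subscribing instance applying the scenario-1 rule (pH > 6.5):
channel.subscribe("pH", lambda e: alerts.append(e) if e["value"] > 6.5 else None)
channel.publish("pH", 6.9)  # adapted, delivered, rule fires
channel.publish("pH", 6.0)  # adapted, delivered, rule does not fire
```

The separation is the point: the source only publishes raw values, the adapter owns the event format, and each instance applies its own rule.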
Together with the Event Topic/Channel Registry, it helps maintain the measurement subscriptions. A subscription is live as long as at least one complex event instance is receiving events through it. When all the instances subscribing to a measurement are deactivated, the subscription is no longer needed and can be removed.
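The subscription-lifetime rule above amounts to reference counting per measurement topic. A minimal sketch, with the registry class and method names as assumptions:

```python
class TopicRegistry:
    """Keep a measurement subscription live while any instance is active."""

    def __init__(self):
        self.active = {}  # measurement topic -> count of active instances

    def activate(self, topic):
        # A complex event instance subscribing to this measurement activates.
        self.active[topic] = self.active.get(topic, 0) + 1

    def deactivate(self, topic):
        # When the last subscribing instance deactivates, the
        # subscription is no longer needed and is removed.
        self.active[topic] -= 1
        if self.active[topic] == 0:
            del self.active[topic]

    def is_live(self, topic):
        return topic in self.active
```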