The Business Process Engine

How can IT keep up with all the revolutions in computing? There are two developments in software that will meet the challenge: SOA and BPM.

Barry Briggs

July 5, 2005


Revolutions in computing seem to come in the middle of the decade, and 2005 is no exception. The scandals that have rocked boardrooms recently and the tidal wave of government regulation that followed, the increasing need for business agility (that is, the ability to quickly change the way the business operates in response to changing conditions), and globalization, in which the boundaries of business transcend national borders, are all rapidly raising corporate expectations of computing and of IT. How can IT keep up?

SOA and BPM


Fortunately, two developments in software will meet these challenges. The first is one that's been talked about for a couple of years and is just now reaching maturity: service-oriented architecture. SOA, based on standardized Web Services protocols and XML, enables the normalization of coarse-grained services regardless of the technology of the underlying platform. It will take some time, but the promise is that IT professionals will eventually be able to worry less about legacy protocols, data formats, and throwaway custom code, and more about business-relevant issues.

SOA in and of itself solves no business problems, and it's sometimes hard to justify an SOA project purely on the basis of technical elegance (although with Microsoft's forthcoming "Indigo" technology, SOA deployments will become far less expensive).

But SOA provides the underpinnings for Business Process Management (BPM). Simplistically, SOA can be thought of as creating a set of enterprise APIs; BPM is the application that takes advantage of them.

What characterizes a BPM application, and how should IT professionals evaluate BPM solutions?

From a technical perspective, the most important word in BPM is the middle one: process. Increasingly, business applications are process-centric: they require the secure interoperation of several different applications, and humans at various points are expected to participate and intervene when necessary.

Consider a "simple" order handling process: an order is received from one of a number of customer channels (Web site, EDI, point-of-sale, for example); it then goes through a number of stages before the product is shipped and money collected. Those stages include things like credit approval, which in some cases may require a manager to manually examine the order, it may include rule-driven cross-sells and upsells (which could change daily or daily), additional promotions – but only subject to corporate privacy policies, and logging at various critical points to demonstrate regulatory compliance. Furthermore, it may invoke subprocesses to replenish inventory from foreign suppliers, may ask customers if they want expedited shipping, and so on; and all these may spawn more and more subprocesses, some within the firewall, some outside according to standardized business protocols. Some of these activities can run in parallel with others; others must be run sequentially; and given the complex nature of such processes simple transaction rollback semantics are not enough to handle errors.

The simple act of taking an order in today's business-at-the-speed-of-light world has become very complex indeed.

In the old days, we used to think of integration, workflow, collaboration, business process and reporting as separate applications; but now it's easy to see that all of these are coalescing into one product category. And we are surrounded by process: from ordering new office supplies to mortgage applications to driver's license renewals to income taxes, in our personal and professional lives we increasingly act as participants in some larger process.

Service-oriented environments give us a platform on which to build such applications. But we need to do more. We need to make sure steps are handled in order, and efficiently; we need to support sophisticated notions of error handling; we need the ability to rapidly adapt to changing conditions. What else then do we need to build such applications?

The Process Engine


Let's think for a moment about what the requirements of such a computing environment might be. First and foremost, we require a process engine that can manage the complex interactions between humans, systems and partners. And it's important to realize that deeply deterministic models like those covered by Business Process Execution Language (BPEL) only cover a portion of cases. BPEL, for example, has no notion of dynamically created paths through a process – critical for processes in which humans may decide to reroute an approval, or to escalate an issue.

Ideally, the process engine receives messages or events from the outside world and initiates sequences of activities or tasks. It enables sophisticated branching, parallelism and join semantics, as Figure 1 shows.
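To make the branching, parallelism, and join semantics concrete, here is a toy sketch of an engine-style flow; the step names and the use of Python's asyncio are illustrative assumptions, not any vendor's implementation.

```python
# A toy flow illustrating branch / parallel / join semantics.
import asyncio

async def credit_check(order):
    # branch: manual review for large orders, automatic approval otherwise
    return "manual_review" if order["amount"] > 10_000 else "approved"

async def reserve_inventory(order):
    return f"reserved {order['sku']}"

async def log_for_compliance(order):
    return f"logged order {order['id']}"

async def run_process(order):
    decision = await credit_check(order)          # sequential step
    if decision == "manual_review":
        return "waiting for human approval"
    # parallel steps, then join: both must finish before shipping
    inventory, audit = await asyncio.gather(
        reserve_inventory(order), log_for_compliance(order)
    )
    return f"ship: {inventory}; {audit}"

print(asyncio.run(run_process({"id": 1, "sku": "WIDGET-1", "amount": 250})))
```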

As necessary, the process engine implements both atomic transaction and compensation capabilities. The difference? Transactions can be rolled back automatically by executing the operation(s) in reverse.

But many business processes take extended periods of time, up to months in some cases. In such cases, maintaining a transaction context (e.g., locks) is not practical. It also means that process state must be persisted, both to make room for other process instances and to protect it against system failures, and then "rehydrated" as needed. Further, reversing such a process may require business logic to be run, or may require that the steps not simply be rerun in reverse order. This is called "compensation," and it is a key feature of the process engine.
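A minimal sketch of compensation, as opposed to transactional rollback: each completed step registers a compensating action that carries real business logic, and on failure the engine runs those actions (here in reverse order, though as noted a process may define a different order). The step names are hypothetical, and persistence and rehydration are omitted for brevity.

```python
# Compensation sketch: completed steps register compensating business logic.

def charge_card(ctx):
    ctx["charged"] = True
    return lambda: refund_card(ctx)          # compensating business logic

def refund_card(ctx):
    ctx["charged"] = False
    print("issued refund")

def reserve_stock(ctx):
    ctx["reserved"] = True
    return lambda: release_stock(ctx)

def release_stock(ctx):
    ctx["reserved"] = False
    print("released reservation")

def ship(ctx):
    raise RuntimeError("carrier unavailable")   # simulate a late failure

def run(steps, ctx):
    compensations = []
    try:
        for step in steps:
            comp = step(ctx)
            if comp:
                compensations.append(comp)
    except Exception as exc:
        print(f"process failed: {exc}; compensating")
        for comp in reversed(compensations):
            comp()

run([charge_card, reserve_stock, ship], {})
```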

Another important set of requirements on the process engine falls under the category of visibility. With monitoring capabilities, it becomes possible to track, in real time, the operation of your business processes, and thus of your business. Such visibility has aspects relevant to the IT professional (which Web Services have been exercised, whether they are conforming to SLAs, and so on) as well as to the business analyst: how many widgets did I sell? How much did I spend on shipping today? What is the average throughput time for orders? Figure 2 shows BizTalk Server 2004 business activity monitoring.
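As a sketch of the idea (not BizTalk's actual tracking infrastructure), the following shows a process emitting tracking events at critical milestones and an analyst-style query computing average order throughput from them; the event shape and in-memory store are assumptions.

```python
# Business activity monitoring sketch: emit tracking events, then aggregate.
from datetime import datetime, timedelta

events = []   # in a real deployment this would be a tracking database

def track(process_id, milestone, when):
    events.append({"process": process_id, "milestone": milestone, "time": when})

# two orders flowing through the process
start = datetime(2005, 7, 5, 9, 0)
track(1, "order_received", start)
track(1, "order_shipped", start + timedelta(minutes=42))
track(2, "order_received", start)
track(2, "order_shipped", start + timedelta(minutes=90))

# business-analyst question: what is the average throughput time for orders?
received = {e["process"]: e["time"] for e in events if e["milestone"] == "order_received"}
shipped = {e["process"]: e["time"] for e in events if e["milestone"] == "order_shipped"}
avg = sum(((shipped[p] - received[p]).total_seconds() for p in shipped), 0.0) / len(shipped)
print(f"average throughput: {avg / 60:.0f} minutes")
```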

But visibility functions serve another purpose: they help to demonstrate regulatory compliance. Laws such as Sarbanes-Oxley Section 404 levy heavy reporting requirements on businesses with steep penalties – including sending the CEO to jail! – for noncompliance. Frequent and seemingly innocuous tasks like approving a supplier's pricing change require Sarbanes-Oxley recording. The process engine, the hub of business operations, is the perfect place to monitor for Sarbanes-Oxley events.

Finally, business rules are a key component of process engines. Simple business rules have incredible value in that they are approachable by business users: if a platinum customer buys a widget, offer her free shipping. Rules permit business processes to be changed without coding, and can be updated frequently. Rules engines often serve as the runtime technology behind "diamond" (decision) shapes in process flow diagrams.
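For illustration, a rule like the one above can be expressed as data rather than code, so it can be changed frequently without redeployment; this minimal sketch assumes hypothetical field names and is not any particular rules engine's format.

```python
# Rule-as-data sketch: "if a platinum customer buys a widget, offer free shipping."

RULES = [
    {"if": {"customer_tier": "platinum", "item": "widget"},
     "then": {"free_shipping": True}},
]

def apply_rules(order, rules):
    for rule in rules:
        if all(order.get(k) == v for k, v in rule["if"].items()):
            order.update(rule["then"])
    return order

print(apply_rules({"customer_tier": "platinum", "item": "widget"}, RULES))
# -> {'customer_tier': 'platinum', 'item': 'widget', 'free_shipping': True}
```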

To make a safe, and comprehensible, sandbox for business users, the rules engine relies on IT to create "vocabularies," which are effectively natural-language facades over lower-level artifacts such as database fields or Web Services. For example, "platinum customer" may under the covers refer to a field in a database, or issue a call to some business application. The vocabulary however provides a layer of both semantic and syntactic insulation.
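A minimal sketch of the vocabulary idea, with hypothetical names: the business-friendly term "platinum customer" is bound by IT to a lower-level lookup that the business user never sees.

```python
# Vocabulary sketch: natural-language terms as facades over low-level artifacts.

def _lifetime_spend_from_database(customer_id):
    # stand-in for a query against a system of record
    return {42: 125_000}.get(customer_id, 0)

VOCABULARY = {
    # natural-language term -> implementation the business user never sees
    "platinum customer": lambda customer_id: _lifetime_spend_from_database(customer_id) > 100_000,
}

def evaluate(term, *args):
    return VOCABULARY[term](*args)

print(evaluate("platinum customer", 42))   # True: spend exceeds the threshold
```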

The Process Engine as Core of BPM


Modern business applications are increasingly process-centric, and so the process engine should be considered a key element of the IT architect's toolkit; indeed, many consider the process engine a key component of a service-oriented architecture.

The existence of an SOA fabric and a process engine suggests that we can start to quickly build and deploy composite applications, that is, applications that take advantage of, and above all integrate, capabilities of widely varying applications and technologies. This enables a "whole-is-greater-than-the-sum-of-the-parts" effect, with all the applications in a computing ecosystem working together toward achieving business goals.

The holy grail, of course, is that all business applications – and in many companies there are literally hundreds of them – work together to satisfy business objectives. This is how we achieve real ROI from SOA.

Composite Applications


Let's talk a little more about composite applications. We can think of them as falling into one of two categories (in fact, this distinction is more than a little arbitrary, but it helps to illustrate our point).

The first type we call a "vertical" application, not because it is confined to a particular domain but because it is driven by a user at the top down through business applications at the bottom. Figure 3 shows an example of a "vertical" composite application.

Here our user asks the server hosting the process engine to gather up information about customers. In our example this data is distributed across multiple systems. Various steps in the process involve transforming the data from the unique syntax of a given system into a normalized layout for presentation to the user, joining it together, and returning it to the user.

Here, we want the process engine to run with as little latency as possible, since we want the user interface to be as responsive as possible.
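Here is a sketch of such a "vertical" composite, under the assumption of two hypothetical back-end systems: the data is fetched in parallel (to keep latency low), transformed out of each system's own syntax into a normalized layout, and joined for presentation.

```python
# "Vertical" composite sketch: parallel fetch, transform, and join of customer data.
import asyncio

async def fetch_from_crm(customer_id):
    return {"CUST_NM": "Contoso Ltd.", "CUST_ID": customer_id}      # CRM's own syntax

async def fetch_from_billing(customer_id):
    return {"acct": customer_id, "balance_cents": 1250000}          # billing's own syntax

def normalize(crm, billing):
    # transform both shapes into one layout for presentation to the user
    return {
        "id": crm["CUST_ID"],
        "name": crm["CUST_NM"],
        "balance": billing["balance_cents"] / 100,
    }

async def customer_view(customer_id):
    crm, billing = await asyncio.gather(
        fetch_from_crm(customer_id), fetch_from_billing(customer_id)
    )
    return normalize(crm, billing)

print(asyncio.run(customer_view(42)))
```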

On the other hand, a traditional EAI-centric business process is an example of a "horizontal" composite application, which Figure 4 shows.

Here it is not the human being who initiates or drives the process; rather, the arrival of an order activates the process. The human being is a participant in the process, an executor of a particular task rather than the initiator of the process.

Such processes often require significant periods of time to complete (that is, more than a few seconds), and thus the process engine may invoke services like dehydration (persisting process state) and compensation, as described above.

Context


Associated with any instance of a business process is its context, that is, its instance data: the particular order in question, the customer's name, credit card number and address, and so on. One can think of this context as analogous to a stack frame.

The process engine has a formal notion of context and can use it in powerful ways. For example, it can use it to link to information in systems of record such as external databases or business applications.

Conversely, it can "extrude" the context to an XML representation, which can then be rendered to a familiar interface (a Microsoft Office™ document, for example). This allows documents, say, approval forms, to be autogenerated by the process as needed; other information (properties) in the context can direct the process as to which mailbox or portal site the document should be deposited in.
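As an illustration only, here is a minimal sketch of extruding a process context to XML and using a context property to decide where the resulting form goes; the element names and the approver_mailbox property are assumptions, not part of any product's schema.

```python
# Sketch: extrude process context to XML and route it using a context property.
import xml.etree.ElementTree as ET

context = {
    "order_id": "PO-1001",
    "customer": "Contoso Ltd.",
    "amount": "12,500.00",
    "approver_mailbox": "purchasing-managers",   # directs where the form is deposited
}

def extrude_to_xml(ctx):
    root = ET.Element("ApprovalForm")
    for key, value in ctx.items():
        ET.SubElement(root, key).text = value
    return ET.tostring(root, encoding="unicode")

form = extrude_to_xml(context)
print(f"deposit in mailbox '{context['approver_mailbox']}':")
print(form)
```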

For centuries business processes have run on the concept of forms; and because of XML's extraordinary flexibility we can easily maintain this metaphor in even the most automated processes.

Putting It All Together


It's easy now to see how important the process engine is. One engine can support both the "vertical" and "horizontal" modalities of interacting with distributed systems (and everything in between), can provide visibility into their operation, and can interact with end users in a contextually relevant and intuitive way. It's very likely, then, that the process engine will, over the next decade, become the core of enterprise computing.
