Solving the UI Framework Puzzle

I was recently tasked with recommending and specifying the components of a thin-client architecture for a large web-based application (1000+ views!). It had to be Java-based, so the first obvious questions that sprang up were:

  – What is the best way to build such a large web application in Java? ...which in turn can be broken down into:
          – What are the best frameworks available in the world today that facilitate web-app development? ...which in turn can be broken down into:
                 – Which framework is most suitable in terms of technical, functional & commercial criteria? Clearly, each of these 3 criteria classes can further be exploded into detailed, independent hierarchies.
                 – Which framework will result in code/artifacts that are well-factored & maintainable?
                 – Which framework will support well-known Usability patterns?

       – What development methodology will be most suitable? ...which can further be broken down into:
               – Which framework will guarantee high levels of productivity? ...which can further be broken down into:
                    – Which framework/tool(set) supports a clean separation of, and smooth hand-offs between, UI design & coding concerns? ...which can further be broken down into:
                            – Which framework will allow parallelism of work-streams?
   
So, simply put, one just needs to pick out the best “techno-functional-commercial” framework out there that also ensures that the application is delivered at an acceptable cost!

Sounds simple? ...hmmm, maybe yes, if one had to choose from only 2 or 3 UI frameworks, and if it were the year 2001!
Today if one scans the Internet for UI frameworks, the information found can be overwhelming and quite difficult to navigate effectively. The UI framework puzzle, as it were, is also a rather pertinent question, as seen from the number of Google results returned for keywords such as:
  – “ui frameworks comparison” – 5.19 million.
  – “ui frameworks” – 5.8 million.
  – “ui framework java” – 1.76 million.

So here’s a first-hand account of an approach that can help one solve the UI framework puzzle. Mind you, there is no single “best” solution; there is only what is most suitable & acceptable for a given need, with the (relative :-)) peace of mind that you have (wisely) considered and understood the rationale for all the choices you have made & not made!

I use an age old principle of dealing with complexity/information overload – “Abstraction”.

To quote Wikipedia – “Abstraction is the process or result of generalization by reducing the information content of a concept or an observable phenomenon, typically to retain only information which is relevant for a particular purpose”.
    
Following this principle one can easily derive a few “UI Framework Abstractions” and start placing/sorting available frameworks/toolkits/technologies into these “Abstraction buckets”. So here goes..

1. Web 1.0 Technologies (JavaScript, DOM, HTML & CSS)
These address client-side needs only. This is essentially the pre-2004 way of building Web applications, characterized by:
 – Open standards.
 – No asynchronous calls and, as a result, complete page refreshes.
 – No rich widget library.
 – Requires a good server-side MVC framework as described in point 5 below.
2. Lightweight Ajax Toolkits
Again, these address client-side needs only. Ajax is not a technology in itself, but rather a group of technologies such as HTML, CSS, JavaScript, DOM, XML and JSON, plus a method for exchanging data asynchronously between browser and server, thereby avoiding complete page reloads. Key features of this framework type are listed below (a small sketch of the server-side end of such an asynchronous exchange follows the list of popular toolkits):
 
 – Ajax applied correctly, increases Responsiveness of the application by avoiding complete page refreshes.
 – It is based on open standards mentioned above.  
 – It usually provides a rich widget library, and if not, may combine with other widget libraries.
 – Screen developers code in HTML & JavaScript with API support for widgets and data exchange via asynchronous server calls.
 – Requires a good server-side MVC framework as described in point 5 below.

The popular ones are:
 – Dojo
 – Yahoo UI
 – Ext-JS
 – jQuery
 – MooTools
 – Prototype + Scriptaculous
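
Whatever the toolkit, the asynchronous exchange ultimately hits a plain server-side endpoint that returns JSON or XML. Here is a minimal, toolkit-agnostic sketch of such an endpoint in Java; the servlet name, request parameter and hand-built JSON are illustrative assumptions only:

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical endpoint that a Dojo/YUI/jQuery client could call asynchronously,
// e.g. via XMLHttpRequest to /app/accountSummary?custId=42
public class AccountSummaryServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String custId = req.getParameter("custId");

        // In a real application this would come from the service/domain layer;
        // the JSON is hand-built here only to keep the sketch self-contained.
        String json = "{\"custId\":\"" + custId + "\",\"balance\":1520.75}";

        resp.setContentType("application/json");
        resp.setCharacterEncoding("UTF-8");
        resp.getWriter().write(json);
    }
}
```

The widget library on the client then simply parses the JSON and updates the relevant portion of the page, with no full page refresh.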

  
3. Java Component-based Ajax Toolkits
These address both Client & Server side needs. These frameworks have essentially adapted the Java Swing or Eclipse SWT programming model to the Web. They differ from Lightweight Ajax Toolkits in that:

 – The development language is Java. Developers need to code HTML & JavaScript only in exceptional cases.
 – The framework compiles screen layout & behaviour code written in Java into JavaScript that is rendered by the browser (see the sketch after the list of toolkits below).
 – Server-side application development is also framework driven. No need for a separate server-side MVC framework.
 – It usually provides a rich widget library, and if not, may combine with other widget libraries.

The popular ones are:
 – GWT (Ext)
 – Ext-GWT
 – Echo3
 – Apache Wicket
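
To give a flavour of the programming model, here is a minimal sketch of what screen layout & behaviour look like when coded purely in Java. It assumes a GWT 2.x style API; the class name, widget labels and module wiring are illustrative:

```java
import com.google.gwt.core.client.EntryPoint;
import com.google.gwt.event.dom.client.ClickEvent;
import com.google.gwt.event.dom.client.ClickHandler;
import com.google.gwt.user.client.ui.Button;
import com.google.gwt.user.client.ui.Label;
import com.google.gwt.user.client.ui.RootPanel;
import com.google.gwt.user.client.ui.VerticalPanel;

// The GWT compiler turns this Java class into browser-specific JavaScript;
// the developer never hand-codes the HTML/JS for this widget tree.
public class AccountSearchModule implements EntryPoint {

    public void onModuleLoad() {
        final Label status = new Label("Ready");
        Button search = new Button("Search accounts");

        // Layout and behaviour are both expressed in strongly typed Java,
        // which is what enables IDE-driven refactoring and debugging.
        search.addClickHandler(new ClickHandler() {
            public void onClick(ClickEvent event) {
                // An asynchronous call to the server (GWT-RPC or JSON) would go here.
                status.setText("Searching...");
            }
        });

        VerticalPanel panel = new VerticalPanel();
        panel.add(search);
        panel.add(status);
        RootPanel.get().add(panel);
    }
}
```

This is exactly the fine-grained, strongly typed componentization of layout & behaviour referred to later under Style C.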
  
 
4. RIA Toolkits
These address both Client & Server side needs. These are complete Web application toolkits promoted by companies such as Adobe, Microsoft & Sun. Their USP is rich internet applications using rapid development tools. Other key features are:
 – It requires a runtime interpreter to be installed as a plug-in within each browser.
 – It is usually based on custom scripting and layout languages and data exchange formats.
 – Framework drives development of both client & server-side application components. No need for separate MVC server-side framework.

The popular ones are:
 – Flex (Cairngorm + BlazeDS)
 – JavaFX
 – Silverlight

5. Server-side-only UI Frameworks
These address server-side needs only. This is a basket of frameworks that not only aid the development of application components that accept and process a web request, but also aggregate and render page views back to the browser. They are meant to be used in conjunction with Lightweight Ajax Toolkits OR Web 1.0 Technologies (a small Spring MVC sketch follows the framework lists below). The popular ones are:
 – Struts2
 – Spring MVC & Webflow
 – JSF
 – Seam Framework
 – Tapestry
 – Stripes
 
 Some of the popular scripting-language-based frameworks are:
 – Ruby On Rails
 – Grails
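
To make the server-side hand-off concrete, here is a minimal sketch of a controller written against one of the Java frameworks above, using Spring MVC's annotation style (2.5+); the URL, view name and model attributes are illustrative assumptions:

```java
import org.springframework.stereotype.Controller;
import org.springframework.ui.ModelMap;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RequestParam;

// The framework maps the request, binds parameters and resolves the returned
// logical view name (e.g. to a JSP/Velocity template as per section 5A below).
@Controller
public class AccountController {

    @RequestMapping(value = "/accounts/summary", method = RequestMethod.GET)
    public String summary(@RequestParam("custId") String custId, ModelMap model) {
        // Fetching from a service layer is elided; literals keep the sketch self-contained.
        model.addAttribute("custId", custId);
        model.addAttribute("balance", 1520.75);
        return "accountSummary"; // resolved by the configured ViewResolver
    }
}
```

The controller stays free of rendering concerns; the View technology of section 5A takes over from the returned model and view name.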
 

5A. View Technology
Though this is essentially a Server-side framework/component and represents the ‘V’ part of an MVC framework, it deserves a separate section of its own on account of the varied choices available in this area too. The key responsibility of a View technology is to manage dynamic web content. This could involve, say, completely dynamic HTML generation, or merging dynamic content with static HTML + JavaScript.

The popular View Technologies are:
 – JSP (Java Server Pages) + JSTL  (JSP Standard Tag Library)
 – Velocity & Freemarker (Templating Engine)
 – XSLT (Xml –> Html)
  
5Aa. View Aggregation Technology
This is essentially a web-page layout and decoration framework that provides a consistent look and feel, navigation and layout scheme. It works on top of any of the View technologies mentioned in 5A.
 – SiteMesh
 – Tiles
 
This throws up a number of interesting combinations or styles from an architecture perspective. Some of the valid ones are:

A. Traditional & Conservative – (1) Web 1.0 tech +  (5) Server-side-only UI Frameworks      

B. Modern yet Conservative – (2) Lightweight Ajax Toolkits + (5) Server-side-only UI Frameworks

C. Modern & Not-so-conservative – (3) Java Component-based Ajax Toolkits

D. Modern & Cutting-edge – (4) RIA Toolkits

So which one combination/style do I choose? Or can I make 2 choices? What are the trade-offs I need to be aware of when I make a choice? Will a particular style help achieve my GOAL (hierarchy)? Soul-searching questions, right? 🙂

The level of abstraction that we are dealing in can only help so far as to eliminate or put aside “Styles” one is not “comfortable” with. For example:

 – Style A is probably totally superseded by the other styles, and one really should look at modernizing with the changing times.

 – Some of us may “feel” that Style C is actually a Leap of Faith: as a general practice, how can you expect me NOT to touch JavaScript & HTML? Basically, I don’t trust the Java-to-JavaScript compiler well enough to handle all the things I have done all these years with DHTML. This may gravitate one towards the slightly conservative Style B, though one would lose out on Style C’s BIG benefit of fine-grained componentization of UI element layout & behaviour, coded in the Java language itself. Debugging, refactoring & code maintenance are so much better with a strongly typed language like Java than with JavaScript. And mind you, as a rough estimate, in a large enterprise business application at least 35% of the effort goes into UI layer development, and within that at least 70% goes into screen layout and behaviour development.
 
 – Some of us may not want to surrender to the proprietary (licensed) IDE-driven approach of Style D, since one would really need to use the IDE to reap the productivity benefits it espouses. However, the richness in UI features offered by RIA Toolkits is much greater than that offered by any of the other Styles, and that alone may favour Style D, especially if your competitor’s product already offers it.

 – Style B is kind of a small evolutionary step forward, which I think most Enterprises would prefer for their back-office applications. It has many more moving parts than any of the other Styles, and there are a number of sub-combinations possible within Style B itself. Still, it is the most robust on account of its reliance on time-tested design patterns. A BIG issue that creeps up gradually, especially in large applications, is “JavaScript Mayhem”: simply too much gets written, and it quickly becomes unwieldy and unmanageable, duplicated across a number of screens and difficult to change, test & refactor. All is not lost on this front though. Upfront thought and effort can be put into structuring JavaScript usage in an object-oriented fashion. Here are some links:
            http://www.amazon.com/Object-Oriented-JavaScript-high-quality-applications-libraries/dp/1847194141
            http://my.safaribooksonline.com/9781590599082
            http://www.webreference.com/programming/javascript/ncz/column5/
            http://www.jsunit.net/
                            
So there we have it, a few architectural styles to choose from. There may be more; please do share your thoughts & comments. Once you’ve chosen a style, the next step is the evaluation of frameworks within a Style, say between Dojo & YUI, Struts 2 & Spring MVC, or GWT & Wicket. The Internet abounds with such comparisons, and one should back them up with PoCs of one’s own. Here are some links:

http://en.wikipedia.org/wiki/Comparison_of_JavaScript_frameworks
http://www.oreillynet.com/onjava/blog/2008/03/spring_mvc_javafx_google_web_t.html


Model-driven development – Embedded v/s Enterprise software

I recently attended a session on MDD organized by IBM. Following are my take-aways (besides a sumptuous meal, leather wallet & keychain :-)) from the session. I also try to arrive at a set of foundational MDD principles that are key to successful implementation in Enterprise product development, mostly from a tooling support perspective.

As it turned out, it was directed at the “Embedded Systems” community, and the lead presenter was from Telelogic (now an IBM company). He was well supported, especially on the demo front, by an experienced IBMer.

The main showcase was the capabilities of Telelogic’s Rhapsody as an MDD tool for Embedded software development, along with some other tools that support a complete process.

I was pleasantly surprised to see the level of sophistication as well as the adoption of MDD in the Embedded field. Dig deeper though, and it makes sense, given the strict quality demands we (the customers) place on embedded software. As basic hardware systems become more and more “intelligent” via embedded software, it was probably imperative that companies like Telelogic adapt, for example, the auto-industry product development practices used in bringing a new car model to market, to the software as well.

Key message 1 – There is a need to diligently and efficiently perform a lot of modeling, prototyping & testing for a huge number of conditions (and platforms) before eventual product rollout.

Again, in the auto industry, when we look at the Costs involved in setting right either an electro-mechanical or embedded software glitch on a production vehicle, they can be quite huge, varying significantly with how long the particular make has already been in production, because that many more units could be in circulation, possibly with a global footprint. Respectable auto manufacturers actually issue a free recall of the defective instances (e.g. Mercedes-Benz, Honda, VW).

Key message 2 – There is a need to apply the same strict quality requirements irrespective of whether hardware or software components of the product are involved, implying a drastic rise in the quality of the software component. The earlier defects are detected in the product lifecycle, the more significant the cost savings.

 So we do understand some of the imperatives of the Embedded software industry and it starts making sense to a good extent to bind our Enterprise software products to these same imperatives.

But, the way things are today, Enterprise software may not face the ire and scale of, say, a car-maker having to recall a batch of failed units or a phone-maker having to recall millions of faulty batteries. The vendor usually issues a patch schedule, all covered by a support contract, and this money-spinning method may not go away so soon. For the equivalent of, say, GM India offering a 3-yr warranty with NO maintenance costs on its vehicles to happen in the Enterprise product world seems like a really distant dream.

The support-based operating model actually negatively impacts the business case for MDD in Enterprise products. So we might ignore key message 2, or restrict its scope to the product development lifecycle as opposed to the entire product lifecycle. That leaves us with key message 1, which I believe is imperative enough to adopt MDD.

So here’s a list of “MDD tenets or principles” that the embedded software community has evolved and refined to a great extent, and that can readily be used for Enterprise software products.

  1. MDD needs a highly structured approach to Requirements Gathering that can then be easily traced (in an automated fashion) to every single Model & Code artifact. Ignoring this and working only out of the Model can lead to the same problems we face with traditional software development, ultimately affecting the quality of the product.

  2. MDD needs to enforce exhaustive use of a design process & design language. E.g. Use case models should be linked to structured requirement ids, which are linked to Object models, sequence/flow and state models, which are in turn linked to the respective code and configuration artifacts.

  3. The primary MDD tool must use a single unified repository representing all requirement, model & code artifacts, ensuring very tight synchronization between them. This core artifact-base could be represented in a common format, with each of the Models and the Code simply being different views of it. This is actually the crux of solving the Round-trip Engineering problem. E.g. as soon as you drop a class object onto a model area, the tool near-instantaneously creates the code behind; any code written immediately updates the Model views. One can see the likeness with the GoF Observer design pattern (a bare-bones illustration of this follows the list).

  4. The MDD tool must support a Comprehensive Testing strategy. This includes automated, developer-level Unit & Integration style test case generation from Model & Code artifacts, executing & reporting on code coverage, and linking the coverage & test pass/fail metrics back to Requirements coverage.

  5. The MDD tool must provide Dynamic modeling capabilities such as Sequence diagram generation derived from runtime code, profiling etc.

  6. The MDD tool must support clear separation/abstraction of environment or platform specific feature usage from the Core product. It may do so by providing customizable APIs or macros.
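
Here is a bare-bones, purely illustrative plain-Java sketch of the repository-with-views idea from point 3 above. Every class name is hypothetical, and real tools such as Rhapsody implement this far more elaborately; the point is simply that the model view and the code view observe one shared artifact base, in the spirit of the GoF Observer pattern:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical, simplified sketch of a single unified artifact repository whose
// "views" (diagram editors, code editors) are Observers of the same artifact base.
interface ArtifactView {
    void artifactChanged(String artifactId);
}

class ArtifactRepository {
    private final List<ArtifactView> views = new ArrayList<ArtifactView>();

    void register(ArtifactView view) {
        views.add(view);
    }

    // Called when either the model view or the code view edits an artifact;
    // every other view refreshes itself from the common representation.
    void update(String artifactId) {
        for (ArtifactView v : views) {
            v.artifactChanged(artifactId);
        }
    }
}

public class RoundTripDemo {
    public static void main(String[] args) {
        ArtifactRepository repo = new ArtifactRepository();
        repo.register(new ArtifactView() { // the "class diagram" view
            public void artifactChanged(String id) {
                System.out.println("Model view refreshed for " + id);
            }
        });
        repo.register(new ArtifactView() { // the "generated code" view
            public void artifactChanged(String id) {
                System.out.println("Code view regenerated for " + id);
            }
        });
        repo.update("com.example.Order"); // e.g. a class dropped onto a diagram
    }
}
```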

Enterprise software poses a few unique challenges of its own that need further innovation & techniques to deal with. The single biggest challenge, I believe, is the following, which impacts and magnifies complexity in every other principle mentioned above.

  • We usually build enterprise applications on the base of popular open source frameworks such as Swing, Struts, Spring, EJB, Hibernate, iBATIS, etc., and even advanced frameworks such as jBPM, Drools & Mule ESB. We also use, at times, home-grown equivalents of these frameworks. The Enterprise MDD tool must support Model, Code & Test Case generation according to the “terms” dictated by these frameworks. In other words, the MDD tool needs to understand the artifacts required by these frameworks just as it would normally understand a Java class.

 Concluding Notes

I believe there exists a significant gap even in basic MDD capabilities between the Embedded world and the Enterprise world. Solutions to Enterprise MDD challenges may still be evolving. There may be cases where Enterprise users have cracked these problems, though it is very likely that in such cases they had only their own home-grown frameworks to contend with. Still, their learnings could be very useful in adapting the techniques to mainstream open-source-framework-based Enterprise software development. Would the Enterprise MDD gurus please care to share?

The BPM Technology Convergence – II

A Converged or Unified Process & Rules Engine brings forth a few striking capabilities by virtue of it being “Rule driven”. Let us examine the key features.

Dynamic Process Configurability

It is possible for such an engine to allow a process/rule designer to define process and rule objects that behave or evaluate differently based on multiple possible Contexts, specified using variables such as locale, customer type, time, version, rating, etc., combined in multiple possible ways. So, along with the main process and rule definition, one can define specializations of them that are applicable only in defined Contexts. At runtime, the Engine evaluates the incoming context instance and, based on the outcome, assembles the pre-configured process & rule specializations and executes them.

The traditional (Camp 1) process engine approach tends to build service (or process) configurability by embedding more decision points and branches in the process itself. In other words, configurability tends to get codified in the process flow. Over time this results in bloated process diagrams that are difficult to comprehend and maintain.
In contrast, a Rule-driven process engine could evaluate the rules based on contextual information and dynamically assemble the optimal process(es), decisions and data sources for that particular case. System building then is an exercise in defining the different contexts and the responses to them as integrated components (process & rules) that are brought together as needed, rather than developing an individual process that handles all potential situations.
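
A crude, plain-Java caricature of the idea (the types, keys and resolution strategy are hypothetical, purely to make the contrast concrete): rather than hard-coding branches inside one big flow, specializations are registered against context keys and the most specific match is resolved at runtime.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: process/rule specializations are registered against context
// keys and resolved at runtime, instead of being coded as branches in one big flow.
public class ContextResolutionDemo {

    static final Map<String, String> PROCESS_VARIANTS = new HashMap<String, String>();
    static {
        PROCESS_VARIANTS.put("default", "OrderFulfilment.v1");
        PROCESS_VARIANTS.put("locale=IN|customerType=GOLD", "OrderFulfilment.v1.IN.Gold");
        PROCESS_VARIANTS.put("locale=IN", "OrderFulfilment.v1.IN");
    }

    // Resolve the most specific variant for the incoming context instance.
    static String resolve(String locale, String customerType) {
        String specific = "locale=" + locale + "|customerType=" + customerType;
        if (PROCESS_VARIANTS.containsKey(specific)) return PROCESS_VARIANTS.get(specific);
        String byLocale = "locale=" + locale;
        if (PROCESS_VARIANTS.containsKey(byLocale)) return PROCESS_VARIANTS.get(byLocale);
        return PROCESS_VARIANTS.get("default");
    }

    public static void main(String[] args) {
        System.out.println(resolve("IN", "GOLD"));   // OrderFulfilment.v1.IN.Gold
        System.out.println(resolve("IN", "SILVER")); // OrderFulfilment.v1.IN
        System.out.println(resolve("US", "GOLD"));   // OrderFulfilment.v1
    }
}
```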

 

Consistent Artifact Versioning 

A unified process & rules engine, based on contextual information, could resolve at runtime the correct version of Process as well as Rules to be executed.

This is required to support, for example, a smooth transition from old to new process & rule definitions, where existing in-flight component instances are required to continue through to completion based on the old definitions, while newly instantiated cases are based on the latest version definitions.
Another, simpler example could be supporting concurrent operations of the same service for 2 different clients.

Traditional (Camp 1) process engines provide runtime version resolution support for “Process” definitions only.
A rule-driven process engine, being intimately aware of the process-rule combinations for the case, can natively provide consistent artifact versioning capability.

 

Integrated Domain Object Model 

Most business applications define a Business logic layer where “application” logic (Domain-Driven Design, Eric Evans) and domain logic reside. The Business logic layer usually tends to take the shape of one of the following structures:

1. A Rich Domain model comprising fine-grained objects, interfaces & strategies, typically supported by a well-defined O-R mapping layer and encapsulated by a thin service layer of coarse-grained interfaces that implements application logic and exposes business functionality to the world.

2. An Anaemic Domain model (Martin Fowler) comprising mostly non-behavioural objects designed as elementary state-capturing attributes, typically supported by a DAO layer that implements data access logic, and a thick service layer that implements business (domain + application) logic in addition to exposing business functionality to the world.

3. The third form is a variation of the Anaemic domain model wherein the service layer is kept thin by moving the bulk of the domain logic into RDBMS-dependent stored procedures, following a performance-oriented architectural principle: process data as close to its source as possible.

I have witnessed all 3 forms of organizing business logic mentioned above, and each has its pros & cons. I will reserve a personal account of this for a later post. So, moving on: how do Traditional v/s Unified BPM fare in the context of the structures described above? After all, both process & rules engines help codify a significant portion of the Business logic of a Business application. Processes codify “application” logic, responsible for orchestrating domain operations, controlling transactions and third-party system interactions; whereas Rules codify components of domain logic, the kernel of business logic.

Traditional (Camp 1) BPM design takes the approach that processes invoke rule services that govern certain processing steps, as and when required. Such rule invocations require that the domain model objects (either Rich or Anaemic) be explicitly passed back & forth between the Process & Rules engines as invocation/response parameters. There are a couple of considerations here, which are actually overheads if the process & rules engines are from different vendors, or if process & rule artifacts are not managed from a single development environment:

1. The domain model versions used by both engines need to be explicitly kept in sync.  

2. Business logic (in the case of an Anaemic domain model, see structure 2 above) may get distributed between Process & Rule artifacts. To avoid this, policies that enforce localization of business logic to the process engine (preferably) need to be implemented. Practically, this results in only stateless invocations of rule types that are nothing more than decision tables or if-condition-action statements, again with the “action” part not really responsible for any domain operations. A typical invocation by a Process will ask the rules engine: given this context specified by these (anaemic) domain objects (also known as fact instances), help decide the next course of action by evaluating this specific bunch of if-conditions (operating on the facts) that I have externalized as rules, so that my (process) code looks clean!

From the interaction pattern described above it is clear that one does not get to use the full power of the Rules engine, such as Inferencing (Backward and Forward chaining), unless we allow significant business logic to also reside in the rules engine, which can quickly lead to a maintenance nightmare.
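
Coded against the vendor-neutral JSR 94 API (mentioned in Part I), the interaction pattern above looks roughly like the sketch below; the provider URI, rule execution set URI and fact objects are illustrative assumptions, and provider/rule-set registration is assumed to have happened at deployment time:

```java
import java.util.Arrays;
import java.util.List;
import javax.rules.RuleRuntime;
import javax.rules.RuleServiceProvider;
import javax.rules.RuleServiceProviderManager;
import javax.rules.StatelessRuleSession;

// Camp 1 style: the process engine hands (anaemic) facts to an external rule
// service, gets decisions back, and carries on with the flow itself.
public class EligibilityDecisionStep {

    public List checkEligibility(Object customerFact, Object orderFact) throws Exception {
        // Provider and rule-set URIs are hypothetical.
        RuleServiceProvider provider =
                RuleServiceProviderManager.getRuleServiceProvider("com.example.rulesprovider");
        RuleRuntime runtime = provider.getRuleRuntime();

        StatelessRuleSession session = (StatelessRuleSession) runtime.createRuleSession(
                "com/example/rules/EligibilityRules", null, RuleRuntime.STATELESS_SESSION_TYPE);
        try {
            // Facts in, decision objects out; no domain behaviour executes in the rule engine.
            return session.executeRules(Arrays.asList(customerFact, orderFact));
        } finally {
            session.release();
        }
    }
}
```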

A Unified Process & Rules engine does not suffer from the overheads & limitations described above, since it provides an integrated environment, both development and runtime; in fact, a process artifact is treated as just another rule type, albeit one with special features.
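
To make this concrete with an open-source example, here is roughly what the unified style looks like using the Drools 5 API; the rule/flow resource names, process id and fact are illustrative assumptions. The point is that one knowledge base and one session hold both the flow and the rules:

```java
import org.drools.KnowledgeBase;
import org.drools.KnowledgeBaseFactory;
import org.drools.builder.KnowledgeBuilder;
import org.drools.builder.KnowledgeBuilderFactory;
import org.drools.builder.ResourceType;
import org.drools.io.ResourceFactory;
import org.drools.runtime.StatefulKnowledgeSession;

// Drools 5 style: process (flow) and rules are compiled into the same knowledge base
// and executed by the same session; the process is effectively another rule artifact.
public class UnifiedEngineDemo {

    public static void main(String[] args) {
        KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
        // Resource names are illustrative.
        kbuilder.add(ResourceFactory.newClassPathResource("orderRules.drl"), ResourceType.DRL);
        kbuilder.add(ResourceFactory.newClassPathResource("orderFlow.rf"), ResourceType.DRF);

        KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
        kbase.addKnowledgePackages(kbuilder.getKnowledgePackages());

        StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
        try {
            ksession.insert(new Object()); // a domain fact shared by rules and process
            ksession.startProcess("com.example.orderFlow"); // the process id is illustrative
            ksession.fireAllRules(); // rules and ruleflow run against the same working memory
        } finally {
            ksession.dispose();
        }
    }
}
```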

Concluding Notes

The BPM industry is taking note of innovations such as the one we have discussed – a Unified process & rules engine platform for deploying BPM solutions. However, whether a widespread adoption of this technology will happen remains to be seen. Two major factors that will be instrumental in this regard are:
1. Backing from Industry heavy-weights such as IBM & Oracle marketing & delivering their own adaptations of this technology.
2. Backing from the developer community at large, primarily driven by adaptations in the open source arena – e.g. JBoss Drools 5, which employs a similar technology.
The technology, however, has some compelling merits. It seems to directly address the heart of real-world BPM problems: dynamic process configurability (context-driven, adaptable/agile process flows) and consistent process and rule versioning. In the bargain, one should consider the additional skills required to architect, develop, deploy, maintain and govern a solution based on this technology.

The BPM Technology Convergence – I

A number of Businesses today acknowledge the relevance of BPM & SOA and are increasingly looking to derive value from their adoption and application. Why, even the most important CIO on this planet considers SOA (overseeing a $70 billion spend is quite an awesome IT job). Anyways, BPM has clearly seen through the trough of disillusionment, with a number of standards organizations, vendors, system integrators, customers and of course the analysts (the Gartners & Forresters) applying focused efforts in their own ways towards pushing BPM/SOA further up the plateau of productivity.

Well, at least one of the key enablers has been Technology, be it in terms of various technical standards formulations, proven architectural frameworks or vendor product innovations. With this write-up I try to create just one perspective of today’s BPM technology landscape. This actually leads me to investigate (in Part II of this article) whether a general convergence of technology is possible within the frame of this perspective, and whether that is really where we are going to find the best possible solutions to BPM & SOA.

To narrow down the scope of the BPM technology landscape, I consider three core foundational technology components required to put together any comprehensive BPM or SOA solution:

  • Process Engine – BPEL/WSDL being the dominant standard.
  • Rules Engine – No real standard here, except JSR 94, which defines interfacing APIs.
  • Enterprise Service Bus – A variety of standards such as JBI and SCA-SDO (unfortunately incompatible with each other).

The BPM provider community as we know it today is broadly split across two (or maybe three) camps:

  • The more dominant** camp, which envisions a BPM ecosystem where Process & Rules are loosely-coupled (service-oriented) components. This camp includes heavy-weights such as IBM, Tibco, Oracle-WebLogic and Microsoft. Each of them offers a basic Rules Engine alongside its primary component – a BPEL engine. This Rules engine typically supports only stateless, service-oriented, synchronous rule(set) invocations called out from decision points within a BPEL-based process flow.

  • The emergent, less dominant** camp, led by Pegasystems, which envisions a BPM ecosystem where Process & Rules are unified, tightly-coupled “first class citizens”. Pega cites an analogy with relational databases, where the core capabilities of query optimization, integrity constraints & concurrency simply cannot be addressed by separate engines – an engine for integrity constraints, a separate query optimizer and a third engine for concurrency. Their BPM solutions have evolved from a strong Rules engine foundation that has been extended to treat Process steps as “specialized” rule types, hence the term – Rule-driven processes.

  • The 3rd camp (now more or less morphed into/aligned with one of the 2 camps mentioned above), which believed all Business logic could be modeled purely in a Rules Engine. ILOG was probably one such vendor, and it was recently snapped up by IBM.

Both these models (loosely-coupled v/s unified Process & Rules) have their merits as well as demerits. Before we get into a detailed analysis of these, we need to probe a bit deeper into the “Emergent” camp, for the benefit of folks who are not so familiar with it. I myself have burnt my hands working only with the “Dominant” camp, so it turned out quite interesting trying to explore the other camp.

…continued in Part – II

** IBM claims to sell 3 times as much SOA/BPM software as its nearest rival, probably running into a few $ billion in revenue. Pegasystems’ 2007-08 revenue was around $200 million.