Friday, August 21, 2009

SOA Enlightenment

This month, Gartner released the 2009 "hype cycle" curve, showing SOA has finally emerged from the "trough of disillusionment". As Gartner tells us, SOA is no different than other technologies, moving from initial inception, to extreme vendor hype, to disillusionment as users feel swindled when reality falls far short of expectations. See Joe McKendrick's article, which includes the hype cycle graph. SOA is now marching forward on the "slope of enlightenment". At this stage in its lifecycle, the over-hype of past years has been resoundingly squelched, and the fundamental benefits of SOA, without the vendor exaggerations we've come to expect, have become apparent to the masses of IT groups across many industries. These IT groups are now "rolling up their sleeves", in Joe's terms, implementing projects using service oriented principles, and actually getting real value from those efforts. SOA is not saving the world in and of itself, but rather has become an essential tool for those companies seeking greater productivity. SOA makes IT more efficient, and helps focus energy on the essential purpose of IT: making the business run better, both now and in the future when inevitable business changes demand rapid change from software and systems.

Our data services project at XAware continues to gain strength. As companies build business applications using service-oriented principles, accessing data in a service-oriented manner becomes critical. Software from the XAware project addresses this need elegantly, especially when a company has complex data structures and varied formats spread across different systems. XAware is open source, and can be found at www.xaware.org.

Monday, June 22, 2009

Cloud-based Integration

There has been a lot of press over the last year about cloud computing. Web-based applications like Salesforce.com have demonstrated the utility and cost-effectiveness of cloud-based resources. Integration vendors have also begun providing products designed for the cloud. Like the few other cloud-based integration products, the XAware engine can be installed and run in a cloud computing environment. This architecture lets you avoid local infrastructure investments, and is most beneficial when the data sources and applications you need to integrate include other cloud-based or SaaS resources rather than being entirely premise-based. If you are using SaaS or other internet-based resources, placing an integration component like XAware on the internet makes sense. It makes less sense if all your application components are premise-based, depending on performance requirements. In that scenario, multiple premise-based components would send data to a cloud-based integration application, where the data is combined and transformed, then sent back into premise-based resources. Only a few types of integration applications can afford the performance cost of such a round-trip to and from the cloud.

If you're interested in experimenting with XAware in the cloud, you might read the new Wiki article on installing the XAware engine in Amazon's Elastic Compute Cloud (EC2) environment. This environment lets you pay Amazon for hardware and software resources as you go, providing a very cost-effective and flexible environment for many integration applications.

Tuesday, April 21, 2009

A recent front-page article by Jeff Feinman in SD Times, "Bottom Line: Software had better pay", warns that economic conditions mandate that any funded project begin paying back immediately. Supply chain management and automation improvements are cited as types of projects that continue to be funded during this economic downturn. Automation in particular is an area ripe with opportunities aligned with service-oriented architecture (SOA). Even more attractive is the fact that many automation opportunities are focused on improving a small number of business processes. This means that a project has a manageable scope. Project costs are more predictable, thus return on investment (ROI) is more definite. Projects with predictable and immediate ROI are much more attractive than those with fuzzy costs and ROI, and thus are more likely to get funded when budgets are tight.

Why is automation well-aligned with SOA? Since the early days of SOA hype, the main goal of SOA has been to achieve business agility through the organization of computing functions as interchangeable parts, called services. A business process is implemented by orchestrating services to accomplish the goals of the process. Services are designed to be reusable, so a particular service, like "Create new customer", can participate in many different business processes. Most importantly, new business processes can largely be orchestrated from a comprehensive library of existing services. Traditional development cycles of 12-18 months are replaced with the creation of a new orchestration, a process that may take just a few weeks.
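As a purely hypothetical sketch of this idea, the fragment below models a few reusable services as plain Python functions and orchestrates them into a new "customer onboarding" process. All service names and signatures here are illustrative, not drawn from any particular SOA toolkit; in practice each function would wrap a remote service invocation.

```python
# Hypothetical sketch: composing reusable services into a new business process.
# Each function stands in for a remote service invocation.

def create_customer(name):
    """A reusable service: creates a customer record and returns it."""
    return {"id": 1, "name": name}

def open_account(customer_id):
    """Another reusable service: opens an account for an existing customer."""
    return {"account": f"ACCT-{customer_id}", "customer_id": customer_id}

def send_welcome(customer):
    """A notification service, reusable across many processes."""
    return f"Welcome, {customer['name']}!"

def onboard_customer(name):
    """A new business process, orchestrated entirely from existing services."""
    customer = create_customer(name)
    account = open_account(customer["id"])
    message = send_welcome(customer)
    return customer, account, message
```

The point is that `onboard_customer` is the only new artifact; the services it calls already existed and continue to serve other processes unchanged.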

While most companies are years away from having a comprehensive library of services, automation projects present an attractive "delivery vehicle" to begin or continue growing the service inventory. Automation is the process of using computer resources to augment or replace manual processes traditionally performed by humans. Architected in a service-oriented manner, a process or orchestration environment provides a visual tool to design and manage a business process. Activities in the process generally are implemented by services. Candidate tools for the process layer include ActiveEndpoints, Apache ODE, and ProcessMaker. Tools in the services layer would of course include XAware for information-oriented services, and Eclipse-based tools when custom-coded services are required.

So, while times are tough and budgets are tight, development work continues in key areas. Companies can't afford to stop reacting to market demands or improving operations. Service-oriented implementation strategies will continue to play a key role in the important work currently underway. And, if architected properly, service-oriented projects help lay the groundwork for more strategic benefits in the future.

Wednesday, January 7, 2009

The Essence of SOA

Anne Thomas Manes’ recent article proclaiming the death of SOA has started a firestorm across many SOA and IT-related blogs and forums. I wrote about it here. In her post, Anne refers to the severe disillusionment some feel towards the over-hyped term “SOA”, and proposes we drop the term altogether and simply refer to the core concept as “services”.

The term “SOA” is at the same time ubiquitous and ambiguous. So much energy has gone into molding it into a marketing story that it seems no two people share the same definition. The truth is, SOA is really just a simple evolution in IT development strategy. I believe the essence of SOA is building software components as “interchangeable parts”, an idea preceding Eli Whitney himself. It is evolutionary (not revolutionary), because we’ve tried to do this for decades, culminating in object oriented and component-based software development. SOA is simply the next stepping stone, which loosens the chains of platform and vendor lock-in, catalyzed by internet and XML-based standards. It’s still about interchangeable parts, components that can be recombined and reused to either build new systems, or rapidly change existing ones. And it’s a superior strategy even if you don’t plan to reuse or recombine, because componentized systems are easier to build, manage, troubleshoot, and support.

In the software industry, I believe we are travelling a similar path as other industries, and finding that interchangeable parts are easy to design in the small, but exponentially more difficult to design in the large. Wiper blades and radios are easily replaced in your car. But I would just love to install a new hybrid-electric engine in my beloved ’96 Jeep Cherokee. Interchangeable parts on such a grand scale are much more problematic. I’m sure there’s marketing mechanics at work here, too. GM wraps their latest electric engine in the Chevy Volt, available next year for $42,000. They don’t want to sell just an engine.

So, I think the term we use to describe the concepts behind SOA is less important than agreeing on the core, underlying essence of SOA. I believe this to be extending the idea of “interchangeable parts”. Personally, I find myself avoiding the term “SOA” more and more, perhaps in a subconscious effort to avoid the pained or befuddled look on those faces in the room. Instead, I gravitate towards the term “service” or “service oriented”. Above all, we need to understand and accept that we are not revolutionaries. We are just carrying forward what others have already set in motion. By communicating this point, we gain credibility in our conversations with business people who control budgets. The alternative is to position this concept as “the next big thing”, something business people seem to immediately distrust.

Tuesday, January 6, 2009

Is SOA Dead?

Burton Group analyst and SOA guru Anne Thomas Manes recently blogged that “SOA is Dead” (http://apsblog.burtongroup.com/2009/01/soa-is-dead-long-live-services.html), referring to the disillusionment and even disgust some feel towards the over-hyped term. But she was referring just to the term SOA, not the concept itself. On the contrary, service orientation has seemed to find firm roots in diverse areas such as mashups, RIA, BPM, cloud computing, and others.

I completely agree that service orientation is here to stay. But I don’t agree that the term SOA is going away any time soon. We are in the typical “trough of disillusionment” Gartner speaks about, as a huge wave of over-hype sets high expectations for a technology. Industry buys into the vision, then slowly comes to realize it is not a silver bullet. Hard work is still to be done to extract the benefits of the new technology. When so many people express disappointment in a technology, negative momentum builds, and soon a consensus develops that the new technology is a failure at best, and evil at worst. Such is the case with SOA.

To be sure, some technologies never fully emerge from the trough. Artificial Intelligence and Object Databases are two examples that never achieved widespread adoption after huge early stage hype. Others fare much better, like EAI and even Java. I remember the early Java days working at MCI circa 1997. Despite huge investments including the best consultants Sun had to offer, projects were massively under-performing, or even failing altogether. But the gradual maturation of the platform and supporting tools pulled Java from the trough of disillusionment to eventually make it the most popular programming language ever.

Anne concludes by saying that we need to move away from the term SOA and simply use the term “services”, since that is the core foundation of the concept. I think that’s fine for the time being. In fact, I’ve recently found myself using the term “service orientation” instead of SOA anyway. But I believe this is a temporary diversion. Eventually, market noise will settle down, and we in the software industry will finally develop a consensus on what this concept really is. When that happens, I believe, it will mark the emergence from the trough of disillusionment, and the return to calling this concept “SOA” once again, without fear of scorn.

See my related post here.

Wednesday, December 3, 2008

The Long Tail of IT

In these days of recession and shrinking IT budgets, development groups are forced to do more with less. This appears to be an opportunity for growth for Open Source projects, as companies find it difficult to purchase products, or even expand use of products they currently own. Open source products are available to address a wide range of IT problems. And with a very low cost of entry, development groups can kick the tires, and even implement an entire project, without awaiting the decision of an enterprise architecture group or budget committee. In a recent meeting with AMR Research, analyst Dave Brown talked about “Long Tail” effects within IT, where large, mainstream projects are still getting funding, but the large number of smaller, tactical projects are left to fend for themselves. This is exactly where Open Source can make the biggest impact… the large number of tactical projects going on within a company, often “flying under the radar” of the corporate enterprise architects. And it is not just the stealth projects benefiting from Open Source. Many projects designated as “tactical” or short term solutions have the flexibility to select the most expedient solution, which often turns out to include Open Source. I discussed Long Tail effects in the creation of services here.

So, in addition to areas where Open Source has a solid beachhead, like Linux usage for corporate servers, it certainly appears that Open Source is making additional headway, filling many nooks and crannies in the IT development space.

SOAWorld

This fall’s version of SOAWorld ran November 19-21. I attended all three days, and presented a session on creating and managing data services for both SOA and RIA environments. The show was a combined conference that also included the named conferences Cloud Computing World, Virtualization World, and Data Services World. David Linthicum gave the keynote address on Wednesday, with the theme “it’s time to make something work”. The hype is officially over, with 53% of companies now having SOA up and running. As an industry, we’ve proven SOA works if you approach it right. He reiterated several concepts he’s conveyed in the past, such as “SOA is something you do, you can’t just buy it”, “understanding your data is key to success”, and “data is the foundation of SOA”. Architects should apply a layered approach to data, with an abstraction layer over the raw data sources, and services then binding into the abstraction layer. David spent a good portion of his presentation outlining a full SOA process that he helps clients implement. To summarize, if you follow the right process, SOA works.

Some other interesting thoughts and ideas from the show:

Paul Lipton from CSC warned about ensuring governance is in place fairly early in an SOA initiative. Once your services are reused, you risk becoming a support group, so you need to implement a plan for reuse and support.

Michael at Active Endpoints had an interesting presentation on Complex Event Processing, which I wrote about here.

Werner Vogels, CTO at Amazon, gave the Day 2 keynote. He spoke about Amazon’s cloud infrastructure for both storage and processing capabilities, and noted that many startups are using these services to gain scalability for very little investment. He likened the cloud infrastructure business to early 20th-century Belgian brewing companies. At that time, each brewery had its own power plant. Power companies centralized this infrastructure, freeing brewers to concentrate on their core business. Computing infrastructure seems to be following the same path. Vogels also revealed Amazon’s deep adoption of service orientation, noting that a typical web page on Amazon.com involves the invocation of 200-300 services to build the page.

Dr. Michael Carey, who formerly led much of the development for BEA’s Aqua Logic, described some of the major design concepts in Aqua Logic, which map fairly closely to the capabilities of XAware. One primary goal of Aqua Logic is to create a data abstraction layer, and expose data, wherever it resides physically, as “entity” services. Business information objects, like a customer, order, or invoice, are represented as services, with multiple operations to read, write, update, and delete, as well as other more business-centric operations like “process order”. Without this entity service layer, orchestration is very difficult, as every data access invocation requires multiple calls. Carey made a compelling case that any composite app built on services requires an entity service layer.
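To make the entity-service idea concrete, here is a minimal hypothetical sketch in Python (this is not Aqua Logic or XAware code; all names are illustrative): an order entity exposed as a service with CRUD operations plus one business-centric operation, with the physical storage hidden behind the service.

```python
# Hypothetical sketch of an "entity" service for an Order, in the spirit of
# the design described above. The in-memory dict stands in for whatever
# back-end systems actually hold the data.

class OrderEntityService:
    """Exposes orders as a service with CRUD plus business operations,
    hiding where the underlying data physically lives."""

    def __init__(self):
        self._store = {}      # stands in for one or more physical data sources
        self._next_id = 1

    def create(self, items):
        order_id = self._next_id
        self._next_id += 1
        self._store[order_id] = {"items": items, "status": "new"}
        return order_id

    def read(self, order_id):
        return self._store[order_id]

    def update(self, order_id, **fields):
        self._store[order_id].update(fields)

    def delete(self, order_id):
        del self._store[order_id]

    def process_order(self, order_id):
        # A business-centric operation layered over the raw CRUD calls,
        # so callers need a single invocation rather than several.
        self.update(order_id, status="processed")
        return self.read(order_id)

service = OrderEntityService()
order_id = service.create(items=["widget", "gadget"])
processed = service.process_order(order_id)
```

The orchestration benefit Carey described falls out of `process_order`: a composite application makes one call against the entity, instead of issuing several low-level data access calls against each physical source.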

Rob Steward of Data Direct discussed Data Direct’s SOA Data Access tools. Like Dr. Carey, Rob discussed the need for a data services layer, and provided a definition of data services that included abstracting physical location away from the logical model, ideas that are core to XAware as well. Rob described typical client application operations in terms of queries into a data services layer implemented by Service Data Objects (SDO), which are then manipulated, possibly off-line, then updated back to the services layer.
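The query / modify-offline / update-back pattern Rob described can be sketched as follows. This is plain Python loosely in the spirit of SDO's disconnected data graphs, not the SDO API itself; the class and method names are my own illustrations.

```python
# Hypothetical sketch of the disconnected data-access pattern: the client
# queries a data services layer, works on a detached copy (possibly off-line),
# then submits its changes back through the layer.

import copy

class DataService:
    """A data services layer the client queries and later writes back to."""

    def __init__(self, records):
        self._records = records   # stands in for the physical data sources

    def query(self, key):
        # Hand the client a detached copy it can manipulate off-line.
        return copy.deepcopy(self._records[key])

    def apply_changes(self, key, changed):
        # Later, the client reconnects and submits the modified copy.
        self._records[key] = changed

service = DataService({"cust1": {"name": "Acme", "tier": "silver"}})

# Client queries, disconnects, and manipulates its local copy...
snapshot = service.query("cust1")
snapshot["tier"] = "gold"

# ...then pushes the accumulated changes back through the service layer.
service.apply_changes("cust1", snapshot)
```

Real SDO additionally tracks a change summary so the layer can apply only the deltas (and detect conflicts); the deep copy here is just the simplest way to show the detachment.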

Jeff Davis of HireRight, Inc. gave an interesting talk on his company’s services implementation using Apache Tuscany, a framework for implementing the Service Component Architecture (SCA), which allows services to be defined once and invoked over any supported transport. HireRight is mainly using JMS as its channel to invoke services.

Glen Daniels of WSO2 spoke about their company’s Registry product, and the influence of social web site features on its design. The registry manages access to service definitions, and provides social features like ratings, comments, and labeling. I thought this tack was interesting, because I believe the main reason software reuse has never been what it should be revolves around human trust issues. Can you trust the developer or his code? Who else is using it that you might talk to? If you trust the developer (based on a high rating), and a component is used by others with good reviews, then you are more likely to use that component. This registry product includes core features common in most web 2.0 and community sites, to increase the trust factor, hopefully leading to better reuse.

While this is just a sampling of the sessions I attended, it does represent the most interesting of the bunch in my mind.