Seven Steps to SOA
This blog describes the steps an organization must take before it can be truly successful in realizing the cost and agility benefits offered by enterprise service-oriented architecture. It discusses the various stages of SOA adoption, describing the technologies, processes, and best practices available to help companies succeed in their SOA initiatives.
What Are the Seven SOA Steps?
The seven steps of SOA are as follows:
- Create/Expose Services
- Register Services
- Secure Services
- Manage (monitor) Services
- Mediate and Virtualize Services
- Govern the SOA
- Integrate Services (ESB)
Steps 2 through 6 describe cross-enterprise concerns that should be addressed with a centralized SOA infrastructure platform. Steps 1 and 7 address specific needs and are often delivered as part of an existing enterprise application stack. Following these steps will lead an organization down the right path to realizing the benefits of SOA.
What Does SOA Stand For?
The definition of SOA, as provided by Wikipedia:
Service-oriented architecture (SOA) is a style of software design where services are provided to the other components by application components, through a communication protocol over a network. The basic principles of service-oriented architecture are independent of vendors, products and technologies. A service is a discrete unit of functionality that can be accessed remotely and acted upon and updated independently, such as retrieving a credit card statement online.
A service has four properties according to one of many definitions of SOA:
- It logically represents a business activity with a specified outcome.
- It is self-contained.
- It is a black box for its consumers.
- It may consist of other underlying services.
Different services can be used in conjunction to provide the functionality of a large software application, a principle SOA shares with modular programming. Service-oriented architecture integrates distributed, separately-maintained and -deployed software components. It is enabled by technologies and standards that facilitate components' communication and cooperation over a network, especially over an IP network.
This provides a concise and accurate definition of what SOA stands for, but it does not describe the reasons why an organization would want to move towards an SOA, or the benefits such an architecture can offer.
Also from Wikipedia’s definition:
Some enterprise architects believe that SOA can help businesses respond more quickly and more cost-effectively to changing market conditions. This style of architecture promotes reuse at the macro (service) level rather than micro (classes) level. It can also simplify interconnection to—and usage of—existing IT (legacy) assets.
In some respects, SOA could be regarded as an architectural evolution rather than as a revolution. It captures many of the best practices of previous software architectures. In communications systems, for example, little development of solutions that use truly static bindings to talk to other equipment in the network has taken place. By embracing a SOA approach, such systems can position themselves to stress the importance of well-defined, highly inter-operable interfaces.
Drilling into these concepts we can see that there is a set of basic capabilities required to achieve the potential benefits. Wikipedia discusses a number of these guiding principles, amongst which are:
- Reuse – the ability to encapsulate and expose coarse-grained business functions as services
- Loose-coupling – ensuring that service consumers are sufficiently abstracted from the physical implementation of a service
- Identification and categorization (discoverability) – making sure that potential consumers can find the services they need
These fundamental principles lead to a natural order of activities an organization must complete to incrementally realize the benefits of service-oriented architecture.
Step One - Create/Expose Services
The first step in moving an organization towards SOA is obvious. There can be no SOA without services, so the first step must be to expose or create services that can readily be consumed by Web services enabled applications.
There are a number of interesting decisions companies face as they begin this process, not least of which is whether to rebuild existing applications using a service-oriented paradigm, or to expose existing application logic as a set of services. This is not a binary decision, of course, and most organizations will build new services from scratch, often using J2EE and/or .NET, and will also expose services from existing mainframe and other business applications.
There is a wide range of solutions for companies looking to expose existing applications as services, including Akana's market-leading SOLA, which allows mainframe CICS applications to expose and consume reliable, secure, high-performance services.
Any company with a significant (more than 1000 MIPS according to Gartner) investment in mainframe should be investigating ways to better leverage the advantages of this platform, and should be using Web services to easily integrate their mainframe applications with modern systems. Akana’s SOLA is production-proven at customers like Merrill Lynch, where it is processing more than 2 million transactions per day from more than 600 Web services. SOLA is the only product that offers mainframe programmers an easy-to-use, Web-based interface for exposing and consuming services from CICS applications. It includes built-in support for advanced Web services capabilities like WS-Security, XML-Encryption, and XML-Signature.
Most of the traditional enterprise application integration (EAI) companies are also delivering versions of their adaptors that expose Web services rather than their traditional proprietary protocols. In fact many of these EAI platforms are re-emerging as Enterprise Service Bus products. The ESB addresses multiple different issues, one of which is the commodity data services type functionality used to expose Web services interfaces from traditional adaptors.
What is a Web Service?
The W3C defines a web service as: "A web service is a software system designed to support interoperable machine-to-machine interaction over a network. It has an interface described in a machine-processable format (specifically WSDL). Other systems interact with the web service in a manner prescribed by its description using SOAP-messages, typically conveyed using HTTP with an XML serialization in conjunction with other web-related standards."
This definition is interesting from a technical perspective, but it doesn’t really get to the heart of the value that can be derived from SOA and Web services. A fundamental aspect of Web services is that they should expose business logic via a standards-based interface. One important aspect of exposing or creating a service is its granularity. There are many different schools of thought on what constitutes a service (see above), but most enterprise architects would agree that to be useful in an SOA, a service must be sufficiently coarse-grained to be useful to multiple different applications.
This, of course, gels with one of the fundamental tenets of SOA: in order for a service to be reused, it must be generally useful and usable. An example of a generally useful service might be a “getCustomerInfo” service that will return details about a customer from a customer identifier. A more fine-grained service that may not be generally useful could be something specific to a particular application, “setApplicationState” for example.
It is important to consider the granularity of services both when building new services and when exposing existing business logic as a service. For example, it would be easy to take a CICS transaction and expose multiple different services to set and get various parameters, when in reality a more coarse-grained service that exposes the whole transaction as a single service might be much more generally useful, and therefore more valuable.
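To make the granularity trade-off concrete, here is a minimal Python sketch contrasting a coarse-grained, broadly reusable service with a fine-grained, application-specific one. The data store, field names, and function names are illustrative assumptions, not part of any real system or product:

```python
# Illustrative in-memory "customer" store; a real service would front a
# database or a CICS transaction.
CUSTOMERS = {
    "C100": {"name": "Ada Lopez", "tier": "gold", "balance": 2500.0},
}

def get_customer_info(customer_id: str) -> dict:
    """Coarse-grained service: one call returns the whole customer record,
    which is useful to many different consuming applications."""
    customer = CUSTOMERS.get(customer_id)
    if customer is None:
        raise KeyError(f"unknown customer: {customer_id}")
    return dict(customer)

def set_application_state(app_id: str, state: str) -> None:
    """Fine-grained, application-specific operation: rarely reusable outside
    the one application that defines its state model."""
    ...
```

The coarse-grained `get_customer_info` is the kind of operation many applications can share; the fine-grained `set_application_state` only makes sense to one consumer.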
Related Reading: Find out how Akana's SOLA Mainframe API provides customers with a fast and easy process to expose mainframe applications as secure web services, and allows mainframe applications to consume web services.
Step Two - Register
OK, so you have one or more services available in your enterprise. Now what?
In order for a service to be used, let alone reused, application architects and developers that might benefit from this service need to know that it exists. This is where a registry comes in. At its simplest level, a registry is nothing more than a library index that helps potential users of services find services they might be interested in. The registry should offer both search and browse interfaces, and should be organized logically to facilitate quick and accurate discovery of services.
In today’s SOA the accepted standard for basic registry services is UDDI (Universal Description, Discovery, and Integration). The UDDI specification provides a data model and a set of interfaces (all Web services themselves) for publishing and discovering services, as well as a further set of interfaces for managing the registry server itself.
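As an illustration of the registry concept (and not the actual UDDI inquiry and publication APIs, which are Web service interfaces defined by the UDDI specification), a minimal in-memory registry with publish, search, and browse might look like this Python sketch:

```python
# Minimal sketch of a service registry offering publish, search, and browse,
# loosely modeled on the role UDDI plays. Method and field names here are
# illustrative assumptions.

class ServiceRegistry:
    def __init__(self):
        self._services = []  # each entry: {name, category, wsdl_url}

    def publish(self, name: str, category: str, wsdl_url: str) -> None:
        """Register a service so potential consumers can find it."""
        self._services.append(
            {"name": name, "category": category, "wsdl_url": wsdl_url})

    def search(self, keyword: str) -> list:
        """Keyword search over service names."""
        return [s for s in self._services
                if keyword.lower() in s["name"].lower()]

    def browse(self, category: str) -> list:
        """Browse services by business category."""
        return [s for s in self._services if s["category"] == category]

registry = ServiceRegistry()
registry.publish("getCustomerInfo", "CRM", "http://example.com/customer?wsdl")
registry.publish("getOrderStatus", "Orders", "http://example.com/orders?wsdl")
print([s["name"] for s in registry.search("customer")])  # ['getCustomerInfo']
```

The point is simply that discovery needs both search (keyword) and browse (category) paths, exactly as described above.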
Many registry vendors have taken the original concept of registry quite a bit further and are using registry technologies as the base for a more complete repository. Registry vendors are also adding “policy” based filters to their publishing tools, and offering “design-time governance solutions”.
Akana considers registry to be a fundamental building block of SOA. All our products leverage UDDI as a single central store for authoritative information about the services in an SOA. The products can work with any UDDI compliant registry, and Akana includes its own UDDIv3 registry server with its Service Manager product for companies that do not already have a registry. In this regard registry is filling the role for SOA that LDAP filled for Identity and Access Management solutions.
Akana’s products focus on run-time governance and leverage the registry as a central place to find run-time policies to implement and enforce. Of course, the embedded registry is a fully functional UDDIv3 server, and so also makes an ideal design-time service repository.
Beyond registry is the whole concept of policy and metadata services that provide comprehensive repositories for all design-time and run-time artifacts for the services that make up the SOA. This concept is covered in more detail in Akana’s whitepaper The SOA Infrastructure Reference Model.
Step Three - Secure
The world of SOA and Web services is not all wine and roses. Assuming you have now completed steps one and two, take a step back and consider what you have done. Assuming you are going for maximum business value, you are likely to have taken mission-critical applications, broken them down into coarse-grained pieces of functionality, exposed this functionality as services, and then published these services to a universally accessible registry.
This may all seem like a good thing, and in most cases it is a great thing, but you may have inadvertently created some gaping security holes in your organization and exposed sensitive information, or high-value transactions to anyone with rudimentary Web services skills. Ensuring the security of services means building a security enforcement layer at the service providers, and a security implementation layer at the service consumers.
A good way to understand the security risks with Web services is to think back to traditional green-screen mainframe applications. The only security required by these applications was login security: if you could access the application, you were in. These applications contained the same basic pieces as a modern application (interface, business logic, data services), but only the interface was accessible outside the application. In the world of Web services, it is likely that some of the core data services will be exposed as services, some of the business logic certainly should be exposed, and the application was not written with either of these access points in mind. This means that it is critical to secure the services you expose individually; you cannot rely on the internals of the application to ensure its own security.
So what does it mean to secure a Web service? The same five principles of security that apply to other applications also apply to Web services:
- Authentication – ensure that you know the identity of the service requestor (and also ensure that the requestor knows the identity of the service it is contacting). There are multiple different Web services standards for authenticating requests, ranging from simple approaches like http basic authentication, to more complex mechanisms like SAML or X.509 signature. A good Web services security tool should support all of these various options.
- Authorization – ensure that the requestor is authorized to access the service or operation it is requesting. Authorization is a complex issue and there are a number of policy decision products available. Most large enterprises will have an existing solution for authorization (CA SiteMinder, IBM TAM, etc.), and a good Web services security solution should be able to integrate with these as well as provide its own authorization capabilities.
- Privacy – ensure that request and response messages cannot be read by unauthorized 3rd parties. This is where standards like XML-Encryption come into their own, but in order to successfully use XML-Encryption, the Web services security solution must provide effective key and certificate management and distribution capabilities, and ideally some client side tooling to aid consumer developers.
- Non-Repudiation – ensure that the requestor cannot deny sending a request and ensure that the service cannot deny sending its response. This is the role of XML-Signature, with the same key management and distribution provision as for privacy.
- Auditing – ensure that the system can maintain an accurate and timely record of request and responses as needed.
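The first two principles can be illustrated in miniature. The sketch below uses Python's stdlib `hmac` module with a shared secret; real Web services security would use WS-Security with XML-Signature and X.509 certificates (and true non-repudiation requires asymmetric keys, since a shared secret lets either party forge a signature). The message and key are illustrative:

```python
import hashlib
import hmac

# Illustrative shared secret; never hard-code secrets in real systems.
SECRET = b"shared-secret-key"

def sign_request(body: bytes) -> str:
    """Compute an HMAC-SHA256 signature over the message body."""
    return hmac.new(SECRET, body, hashlib.sha256).hexdigest()

def verify_request(body: bytes, signature: str) -> bool:
    """Recompute the signature and compare in constant time, proving both
    who sent the message and that it was not altered in transit."""
    return hmac.compare_digest(sign_request(body), signature)

msg = b"<getCustomerInfo><id>C100</id></getCustomerInfo>"
sig = sign_request(msg)
print(verify_request(msg, sig))          # True: authentic message
print(verify_request(msg + b"x", sig))   # False: tampered message
```

The same verify-before-process pattern is what a security agent or proxy applies to every inbound request, whatever the underlying signature standard.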
One interesting question, and point of debate, that arises around Web services security is where this security should be applied. Some vendors will argue that security should be a central function enforced through a cluster of centrally deployed proxies, while others will argue that it should be exclusively the domain of the service provider or container. The reality of course is somewhere between these two positions. There is certainly an important role for a proxy (or cluster of proxies) in providing authentication and threat detection services at the edge of the network to protect against external threats. There is also an equally important role for provider or container based security to ensure the security of the service to the last mile.
Many companies tend to overlook the research that shows that the majority of security threats are internal to the enterprise, not external, and therefore delegate security to a firewall type of solution. Akana offers a high-performance proxy that performs centralized or network edge security functions, and a broad range of in container agents to ensure the last mile security of deployed services.
Securing Web services requires organizations to:
- Deploy fully-functional in-container agents to ensure last mile security and distribute the load of cryptographic operations
- Deploy high-performance network intermediaries for routing and network edge security enforcement
- Provide comprehensive key and certificate management and distribution to ensure that service provider and consumer developers can successfully implement required security policies
- Provide consumer developer toolkits to abstract consumer developers from the complexity of discovering and implementing enterprise security policies
Related Reading: Find out more about Akana's Solutions for API Security.
Step Four - Manage
Three steps into the seven-step approach to SOA and we’re beginning to make some progress towards achieving real value from enterprise SOA. We have some services, they are published in a registry and can be discovered by potential users, and we have ensured that they are secure. What else could we possibly need?
To answer this question we should first look at a potential disaster scenario. What happens when my services become victims of their own success? That is, a service becomes so popular that several (tens, or hundreds, or thousands of) different applications consume it, and it starts to buckle under the load. We suddenly have tens, or hundreds, or thousands of applications that are failing or performing poorly, and we don’t know why, or how to stop it.
We need to monitor our services so that we know if they are performing within normal operating parameters, and so that we see potential problems before they occur by implementing capacity planning models.
The monitoring solution we deploy should be able to monitor services for basic availability, performance (response time), throughput, and even extend to content and user based monitoring. It should be able to monitor and alert on specific SLA thresholds, and should be able to apply different SLAs to different users of the same service or operation.
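A per-consumer SLA check of the kind described above can be sketched as follows. The consumer names and thresholds are illustrative assumptions; a real monitoring solution would also track availability, throughput, and content-based metrics:

```python
from collections import defaultdict

class SlaMonitor:
    """Records response times per consumer and flags SLA breaches,
    applying a different threshold to different users of the same service."""

    def __init__(self, thresholds_ms: dict):
        self.thresholds_ms = thresholds_ms   # consumer -> max response time
        self.samples = defaultdict(list)
        self.alerts = []

    def record(self, consumer: str, response_ms: int) -> None:
        self.samples[consumer].append(response_ms)
        limit = self.thresholds_ms.get(consumer)
        if limit is not None and response_ms > limit:
            self.alerts.append((consumer, response_ms, limit))

monitor = SlaMonitor({"internal-app": 500, "partner-app": 200})
monitor.record("internal-app", 350)   # within its SLA, no alert
monitor.record("partner-app", 350)    # breaches the tighter partner SLA
print(monitor.alerts)                 # [('partner-app', 350, 200)]
```

The same 350 ms response is fine for one consumer and a breach for another, which is exactly why SLAs must be applied per consumer rather than per service.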
Many vendors define this category of product as a Web Services Management solution, although many analysts concur that management is much broader than a simple monitoring solution, and should include much of the functionality described in the next step.
Ideally, of course, we should not have to deploy separate solutions for security and monitoring, because each time we intercept messages through an agent or intermediary we add another potential performance bottleneck. This is why Akana’s Service Manager combines with Network Director to provide a comprehensive SOA security, monitoring, and management platform in a single, high-performance, easy-to-deploy solution.
Some Web services management (monitoring) vendors position their solutions as Business Activity Monitoring (BAM) platforms, although BAM is a complex solution requiring integration with many back- and front-office systems and databases. Monitoring Web services interactions for content should not be considered as an alternative to an enterprise BAM solution.
Related Reading: Find out more about Akana's End-to-End API Management Solution.
Step Five - Mediate and Virtualize
At this point it would seem that our SOA is ready for primetime. And so it is, for a while at least...
The next set of challenges occurs as your SOA matures. You need to introduce new versions of services, or increase the capacity of a service by running multiple instances of it, you need to provision applications to use specific instances of services, and you need to offer services that expose a wider range of different interface types.
This is where service virtualization comes in. Virtualization seems complex, but the reality is quite simple. A virtual service is an entirely new service, defined by its own WSDL, with its own network address and transport parameters. It doesn’t implement any business logic; it simply acts as a proxy to one or more physical services, routing, load-balancing, transforming, and mediating between different request message types and standards.
While simple on the surface, the concept of virtualization is extremely powerful. It provides the ability to fully abstract service consumers from any direct knowledge of the physical service. For example, through a virtual service, a .NET client using a Microsoft implementation of a reliable messaging protocol with SOAP over http could consume a plain old XML service exposed via IBM WebSphere MQ series from a mainframe application. The .NET client would believe it was communicating with an http service that supported its reliable messaging protocol, while the mainframe application would believe it was being queried by another native MQ series application.
Virtual services offer a wide range of functionality:
- They can use XML transformation technologies to allow consumers to continue using an old version of a service that no longer exists by transforming request and response messages between the old version and the new version that has been deployed.
- They can select specific operations from multiple different services and combine them into a single functional WSDL for a specific class of consuming application.
- They can provide different policy requirements for different classes of user (i.e. internal users with one policy set, and another implementation of the service that enforces a different policy set for external consumers).
- They can provide transport bridging for services and consumers on incompatible transports (e.g. http and JMS).
- They can mediate between different standards implementations, or even versions of standards.
- They can mediate between different messaging styles - RSS, SOAP, REST, POX (Plain Old XML).
- They can provide powerful content- or context-based routing to deliver advanced load-balancing and high-availability capabilities.
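The version-mediation case from the list above can be sketched in a few lines: a virtual service that implements no business logic itself, but transforms an old (v1) request shape into the new (v2) shape, routes it to the physical endpoint, and transforms the response back. The field names and service functions are illustrative assumptions:

```python
def physical_service_v2(request: dict) -> dict:
    """Stand-in for the deployed v2 service, which expects 'customerId'
    rather than v1's 'custNo'."""
    return {"customerId": request["customerId"], "status": "active"}

def virtual_service_v1(request: dict) -> dict:
    """Virtual facade that keeps v1 consumers working against the v2
    backend: transform the request, route it, transform the response."""
    v2_request = {"customerId": request["custNo"]}       # v1 -> v2 transform
    v2_response = physical_service_v2(v2_request)        # route to physical
    return {"custNo": v2_response["customerId"],         # v2 -> v1 transform
            "status": v2_response["status"]}

print(virtual_service_v1({"custNo": "C100"}))
# {'custNo': 'C100', 'status': 'active'}
```

In production the transformations would be XSLT or similar applied to XML messages, but the shape of the mediation is the same: the consumer never learns that the service it depends on no longer exists.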
The bottom line with virtual services is that they are required for routing and other advanced Web services capabilities, and they will quickly become an essential part of any enterprise SOA.
Step Six - Govern
With five of the seven steps completed, the enterprise SOA is pretty much ready to go. You now have secure, managed services that are available to a wide audience in a reliable, load-balanced manner and are easily discoverable.
The next step is to tie together all the capabilities delivered through the first five steps with a governance framework. Governance is an overused and much-misused word, so let’s briefly look at a definition of governance. Again from Wikipedia:
The term governance deals with the processes and systems by which an organization or society operates. Frequently a government is established to administer these processes and systems.
The key point to take from this definition is that governance is about processes and systems. In the case of SOA governance, we are discussing organizational processes like an architecture review board, and systems like the registry, security, and management technologies discussed in this blog.
Governance for SOA is often divided into two separate parts, design-time governance, and run-time governance. At design-time, organizations want to control (govern) the types of services that can be published, who can publish them, what types of schema and messages these services can accept, and a host of other rules about services. At run-time, organizations need to ensure the security, reliability, and performance of their services, and need to ensure that services comply with defined enterprise policies. Design-time governance is interesting and does help ensure an organized, well designed SOA, but it pales into insignificance when compared to active control of the SOA at run-time.
Akana’s Service Manager is the industry’s leading run-time governance platform, offering a comprehensive solution for security, monitoring, management, mediation, and registry. Run-time governance essentially provides a single framework for controlling and monitoring the application of the various steps to SOA described in this document. It provides a single overarching solution providing oversight into service registry, security, management, and mediation. A good run-time governance solution will manifest itself as a form of workbench providing tools for all of the active participants in an SOA:
- Service Developer: tools to publish, categorize, define meta-data, download “agent”, virtualize, version, choose policy, choose visibility/access rights, participate in capacity planning and access workflow
- Service Consumer: Tools to facilitate service discovery, selection, and access workflow
- Operations Staff: Tools to monitor service performance, troubleshoot problems, monitor dependencies, version services, virtualization, proxy management
- Security Staff: Tools for policy management, policy reporting, compliance checking, security auditing
- Enterprise Architect: Tools for application monitoring, relationship management, design policy validation and definition, service version management, service to proxy assignment, service virtualization
- Enterprise IT Management: Reuse metric management, service reuse metrics, SOA statistics
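The run-time side of this workbench can be illustrated in miniature: a policy compliance check that validates a service registration against enterprise policies before it goes live. The policy rules and service fields below are illustrative assumptions, not Akana's actual policy model:

```python
# Each policy is a (name, rule) pair; a rule returns True when the
# service description complies. These rules are illustrative examples.
POLICIES = [
    ("https required",
     lambda svc: svc["endpoint"].startswith("https://")),
    ("authentication required",
     lambda svc: svc.get("auth") in {"SAML", "X.509"}),
]

def check_compliance(service: dict) -> list:
    """Return the names of any policies the service violates."""
    return [name for name, rule in POLICIES if not rule(service)]

svc = {"name": "getCustomerInfo",
       "endpoint": "http://example.com/cust",   # violates the https policy
       "auth": "SAML"}
print(check_compliance(svc))  # ['https required']
```

A real governance platform evaluates far richer policies (schema restrictions, WS-Security requirements, SLA definitions) and enforces them continuously at run-time, not just at publication.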
Step Seven - Integrate Services (ESB)
Looking back at the results of the last six steps towards an SOA, we are left wondering what else there could possibly be. By now we have a set of services that are published in a registry, secure, managed, reliable, and loosely coupled, all wrapped in a solid design- and run-time governance solution.
The final step for some enterprises is to deploy one or more Enterprise Services Bus (ESB) implementations to integrate services into higher-level composite or orchestrated services. These ESBs will often be delivered as part of a broader application suite, such as Oracle eBusiness applications, or SAP. Some specialized ESBs may be used to create complex composite services, and will extend into the realm of Business Process Management (BPM).
Related Reading: Find out more about Akana's Integration Solutions.
Enterprise Service Bus (ESB) is an increasingly popular concept. Originally conceived as the evolution of both message-oriented middleware and EAI (enterprise application integration) solutions, the ESB means very different things to different organizations. As analysts, vendors, and customers come to terms with the idea of an ESB, it appears that an ESB encapsulates three functional concepts: adapters taken from an EAI tool to ensure connectivity with legacy applications, messaging services taken from a message-oriented middleware platform to ensure reliable delivery, and process orchestration to build agile applications.
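The orchestration concept is the easiest of the three to sketch: a composite service that invokes two lower-level services in sequence and combines their results. The service functions and fields are illustrative stand-ins for real endpoints, not any particular ESB's API:

```python
def get_customer_info(customer_id: str) -> dict:
    """Stand-in for a deployed customer service."""
    return {"id": customer_id, "name": "Ada Lopez"}

def get_open_orders(customer_id: str) -> list:
    """Stand-in for a deployed order service."""
    return [{"order": "O-1", "total": 99.0}]

def customer_summary(customer_id: str) -> dict:
    """Composite (orchestrated) service built from the two services above:
    the ESB-style layer adds no business logic of its own, it sequences
    calls and aggregates their results."""
    customer = get_customer_info(customer_id)
    orders = get_open_orders(customer_id)
    return {"customer": customer["name"], "open_orders": len(orders)}

print(customer_summary("C100"))
# {'customer': 'Ada Lopez', 'open_orders': 1}
```

In a real ESB this sequencing would typically be expressed declaratively (for example in BPEL) rather than in code, but the composite-from-atomic pattern is the same.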
The consensus amongst analysts is that most organizations will end up with multiple ESB platforms from multiple vendors, providing services in their own context. In this environment it is critical that the ESBs use a central SOA infrastructure to ensure consistent compliance with security and messaging policies for the services they expose and consume.
Akana’s Service Manager and Network Director products combine to provide a complete solution for governance of, and mediation between, multiple ESB instances in an enterprise.