One of the most interesting presentations in the Architecture & Design
world was the eBay Architecture presentation by Randy Shoup and Dan Pritchett.
The presentation was only one hour long, so Randy and Dan didn't cover
all the topics in the slides. Here are some of the insights I took from it:
Architecture evolution - eBay actually went through several architecture revolutions.
Their initial architecture could not even begin to scale to their current
loads. It was, however, a very good fit for their initial quality
attributes - specifically, the emphasis on time to market and costs.
This shows the importance of balancing quality attributes. Sure, an
architectural change is painful, but if they'd future-proofed too much I
doubt they would ever have gotten something working.
Not surprisingly, the traditional 3-tier architecture would only scale so far. It was
nice to see how it evolved, though. Also, with the move from version 2.4
to 2.5 and later to 3, we see eBay learning about CAP
the hard way. In its final (current) incarnation, eBay's data
architecture prefers partitioning and availability over consistency.
This doesn't mean they forgo consistency altogether - just that they
trade the comfort zone of ACID transactions for the BASE approach,
where BASE stands for Basically Available, Scalable/Soft state &
Eventually consistent.
eBay partitions their data on two levels:
one is an SOA-like division by business areas (users, items, etc.) and
the second is a horizontal partitioning based on access
paths. This BASE approach to data was dubbed "sharding" by Dathan Pattishall (of
Flickr and Friendster).
This approach means things like high partitioning, no distributed
transactions (also see below), denormalization, etc. (you might also
want to read the item I wrote on denormalization in InfoQ).
The more major implication here is that when it comes to internet scale, the database loses its importance - or as Bill de Hora
nicely puts it:
use of RDBMSes as data backbones have to be rethought under these
volumes; as a result system designs and programming toolchains will be
altered. When the likes of Adam Bosworth, Mike Stonebraker, Pat Helland and Werner Vogels are saying as much, it behooves us to listen.
I said the data architecture of eBay is SOAish - they partitioned their
components and data along business lines, and they apply many SOA
principles. They don't, however, unite data and components to create a
service, and they don't seem to have the same contract boundaries
that SOA promotes (Randy told me that they are currently contemplating this).
Returning to the fact that eBay does not use transactions -
"no transactions" seems very controversial, but if we just
consider some of the points I made on transactions between services in
previous posts, it is the only logical way to ensure scaling. By the
way, as can be expected, they do use transactions when they are local -
e.g. if the users table is spread over a couple of tables, both will be
updated within the same local transaction.
The application layers also follow the
segmentation by business areas. eBay caches metadata/immutable data as
much as possible and keeps the application stateless (i.e. state comes from
the client/DB) - e.g. they don't use sessions. The DAL virtualizes the
horizontal partitioning mentioned above for the rest of the code.
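To illustrate what such a DAL might do, here is a minimal sketch of my own (not eBay's actual code - the shard count, naming, and routing scheme are all illustrative assumptions) showing how horizontal partitioning by an access path, say the user id, can be hidden behind a routing function:

```python
# Sketch of a DAL that hides horizontal partitioning (sharding) from callers.
# The shard for a row is derived from the access path (here: the user id),
# so the rest of the code never needs to know which physical partition is used.

NUM_SHARDS = 4

def shard_for(user_id: int) -> str:
    """Map a user id to one of the physical partitions of the users table."""
    return f"users_{user_id % NUM_SHARDS}"

class UserDAL:
    def __init__(self):
        # In real life these would be connections to different hosts;
        # here each shard is just an in-memory dict.
        self.shards = {f"users_{i}": {} for i in range(NUM_SHARDS)}

    def save(self, user_id: int, record: dict) -> None:
        self.shards[shard_for(user_id)][user_id] = record

    def load(self, user_id: int) -> dict:
        return self.shards[shard_for(user_id)][user_id]

dal = UserDAL()
dal.save(42, {"name": "alice"})
assert dal.load(42) == {"name": "alice"}
```

The point is that callers only ever see `save` and `load` - the partitioning scheme can change behind the DAL without touching the rest of the code.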
It was also interesting to learn that eBay developed its own messaging
infrastructure - though Randy and Dan did not provide a lot of details.
Development process - it seems that eBay is using some
hybrid of feature-driven development with waterfall (i.e. the
development is feature by feature, but the development of a feature is
waterfallish). They do have a constant delivery rate, which they
synchronize using the concept of a train: a feature is added to the
train that is scheduled to depart around the time the feature will be
ready. Several features are delivered as a package, which gives a
predictable (weekly) delivery cadence. I guess it also gives
them some nice metaphors to use, such as a feature that doesn't make it
"misses the train", or "the train leaves on time", etc.
The slides of the presentation can be downloaded from Dan Pritchett's site
(they are not from the same event, but they are pretty much the same slides). You can also read Elliotte Rusty Harold's account of the presentation.
I won't say anything about my presentations (that's for others to say :) ). The point of this post is just to let you download them. So here they are:
- SOA Patterns (2.14mb) - Takes a look at different strategies (patterns) to solve common SOA pitfalls
- Getting SPAMMED for architecture (4.56mb) - Takes a look at the activities architects can/should do when they think about software architectures. The presentation also covers architecture in agile projects.
While I am getting ready to fly to A&D world 2007
where I'll present both SOA patterns and the SPAMMED architecture framework, I thought I'd throw in a little update on the book
I've made a small change to the way chapters 5-7 are organized.
They are now grouped under a separate part called "Service Interaction
Patterns" (and chapters 2-4 are grouped under "Structural Patterns").
- Chapter 5 is focused on Message Exchange Patterns (MEPs): synchronous,
asynchronous, events and transactional. The patterns there are not
new for SOA; instead, the focus is on the meaning of implementing the
usual MEPs under SOA constraints. I sent it to Manning early last week,
so hopefully it will be available on MEAP soon.
- Chapter 6 is called "Consumer Interaction Patterns" and
includes the UI interaction patterns as well as interaction patterns with
other types of consumers. This is the chapter I am currently working on.
- Chapter 7 is unchanged for now
Lastly, as you may
remember, I publish online one pattern from each chapter, so I'd be
happy to get comments on which of the following three patterns (from
chapter 6) you'd like to see online: Reservation (making
partial commitments), Client/Server/Service (integrating legacy or thin
clients with SOA), or Client/Service (integrating rich clients with SOA).
If you want to vote, just send me an email or leave a comment.
Following the previous post, I had a chance to exchange a few emails with Mark Little
(director of engineering in the JBoss division of Red Hat). Mark thinks
the topic of transactions and SOA has been beaten to death already and
wonders why it needs to resurface (see his post "Is anyone out there?").
I don't see a problem with discussions resurfacing when new people
are faced with situations others have already solved (but that's a matter
for another post).
Anyway, the reason we're here is that I
think that during this conversation Mark made a few interesting
observations, and I think the end result is pretty interesting. I
decided (with his permission) to post it here. (It is only minimally
edited: no deletions, a few additions (in [square brackets]) and a few time
shifts to make it more coherent as a single conversation.)
Mark: From what I can see it's [the arguments on transactions and services
are] the same old arguments that have gone round and round, ignoring
the important fundamental issues and not doing enough background
reading. Sagas are transactional - it's just an extended
transaction model and not an ACID transaction model. Don't get hung up
on the word "transaction", which is way too overloaded in our industry
to actually mean anything by itself. Plus, 2PC is a consensus protocol
too; it does not impose any aspect of ACID other than the A. Even the D
is optional until/unless you want to tolerate failures.

Arnon: I know this is an old argument - but that doesn't mean it isn't worthwhile.

Mark: It isn't worthwhile if people aren't going to listen. I've been
involved in these debates so many times over the past 7 years (for Web
Services transactions), and longer for extended transactions, that it
gets a bit old after a while. Maybe we should create a wiki page and
point people at that.
Arnon: I guess, but you should
keep in mind that people who are solely in the .NET camp only got WS-AT
recently with Windows Communication Foundation, so you can expect the
issues to resurface. By the way, a wiki might not be a bad idea.
Arnon: [Regarding 2PC] 2PC is a distributed consensus protocol and in principle doesn't
have to be related to ACID transactions. But I think the common view and
use of it is for ensuring distributed ACID behavior. Looking back at my
experience with XA and COM+ transactions, it seems it does a good job at
achieving this ACIDness.
Mark: This is an education
issue. The literature is clear on this. People who know and understand
transactional protocols don't make the mistake of equating 2PC to ACID.
Arnon: Yes, it is an educational issue. But
I am not sure it is that common knowledge. It is expected that
middleware vendors who build the tools to support these protocols
understand it better - I don't think it is that widely known outside
these circles. Most of the architects I've met don't (maybe it's time to look
for new friends :) ).
Arnon: By the way, since 2PC is not resilient to
failures of the coordinator, in a highly distributed environment like
SOA it might have been a better idea to go with Paxos commit, if you go
down that path at all.
Mark: The reason WS-AT and WS-ACID
chose 2PC is interoperability. All TP monitors support it. Try getting
IBM, MSFT, Oracle, BEA etc. to change to Paxos, 3PC, flat-commit, or
anything else and you'll be waiting for the heat death of the universe.
Arnon: Can't argue with that.
Mark: 2PC is resilient to failures if the coordinator eventually recovers.
Paxos has its own failure assumptions too - Jim never disputed this.
Same as 3PC and other consensus protocols. As with *any* fault
tolerance approach (transactions, recovery blocks, replication, etc.)
it's always probabilistic. All we're doing is making it highly unlikely
that the system cannot complete, but we can never make it entirely
safe. Even in the airline industry they can "only" get to a failure
probability of .000000001.
Arnon: [You're] probably right that in SOA situations the chances of not getting an
ACID transaction are worse than in a controlled environment - which
actually makes the situation even worse, since people using WS-AT
perceive it as allowing them ACID interaction (e.g. Juval's podcast).
Mark: [WS-AT] is *all* about ACID, in the same way WS-ACID is about ACID
transactions. It has *nothing* to do with SOA though. Web Services are
not purely the domain of SOA implementations!
Arnon: I totally agree that Web Services and SOA are not directly related and
can each exist independently of the other. Again, this is an
educational issue, but SOA==Web Services is a very common
misconception (I guess the word "service" in web service doesn't help).
[In any event] I think distributed transactions in general should be used carefully, period.
Mark: Absolutely. They are not a global panacea, and people who push them as such do more harm than good.
Arnon: [SOA] is more problematic than regular distributed transactions, as by
definition in an SOA you do not know who and how many other services
will participate in your transactions, so you are much more likely to
run into problems.
Sagas, which embrace the temporal shift, don't
give an illusion of ACIDness and allow you to focus on achieving
distributed consensus while keeping all parties involved consistent. I
think that is a much better option if you need transaction-like behavior.
Mark: For SOA, yes. Although Sagas are only good
for a certain type of use case. That's why we've always tried to
develop "live documents" that allow people to add new models when/if
needed. With a couple of exceptions during the BTP days, there has
always been consensus that one size does not fit all. Web
Services *anything*, whether it's WS-AT, WS-Sec, or WS-Addressing, all
have their non-SOA aspects, because Web Services aren't developed purely
with SOA in mind. If that were to happen, Web Services as a
technology would lose some of their important benefits immediately.
Arnon: [The] whole discussion is in the context of SOA (at least from my side) -
naturally there's a place for ACID transactions for other uses.
Regarding Sagas - calling them "extended transactions that are not ACID" is just
semantics; my point was that they are not ACID transactions. I think
most people equate transactions with ACID transactions as well (but I
may be wrong).
Mark: Many people do, and that again is an
education problem. The term Extended Transactions (no need to say
"that are not ACID") has a well defined meaning in the R&D
community. There have been many good models and implementations around
Extended Transactions. They really took off in the vendor community
through the Additional Structuring Mechanisms for the OTS, back in the
1990's. If you check that out you'll see that it formed the basis of
WS-TX and WS-CAF. Even in Jim's original technical report he discussed
relaxing all of the ACID properties in a controlled manner to get more
flexibility. That was the first extended transaction. In fact, ACID
transactions are just one type of extended transaction. There are many,
many others, including nested transactions, coloured actions, epsilon
transactions, sagas etc.
Unless I qualify it beforehand, I try
never to use the term "transaction" in isolation because it has
different meanings to different people. For example, when talking to
developers working in trading infrastructures, a "transaction" isn't an
ACID transaction at all. In telcos it's different again.
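To make the saga idea discussed above concrete, here is a minimal sketch of my own (not Mark's or anyone's production code - the step names and structure are illustrative): a saga runs a series of local steps, and on failure runs compensations for the steps that already completed, in reverse order, instead of holding locks across participants.

```python
# Minimal saga sketch: each step pairs an action with a compensating action.
# On failure, completed steps are compensated in reverse order, keeping all
# parties consistent without the illusion of a distributed ACID transaction.

class Saga:
    def __init__(self):
        self.steps = []  # list of (action, compensation) pairs

    def add_step(self, action, compensation):
        self.steps.append((action, compensation))

    def run(self):
        done = []
        try:
            for action, compensation in self.steps:
                action()
                done.append(compensation)
            return True
        except Exception:
            for compensation in reversed(done):
                compensation()  # semantically undo what already happened
            return False

def fail():
    raise RuntimeError("shipping failed")

log = []
saga = Saga()
saga.add_step(lambda: log.append("reserve stock"), lambda: log.append("release stock"))
saga.add_step(lambda: log.append("charge card"), lambda: log.append("refund card"))
saga.add_step(fail, lambda: None)
ok = saga.run()
# ok is False; log shows the two completed steps were compensated in reverse:
# ["reserve stock", "charge card", "refund card", "release stock"]
```

Note that compensation is a business-level undo (a refund, a release), not a rollback - which is exactly the "temporal shift" trade-off versus ACID.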
Evan H asked a question about distributed transactions and services in the MSDN architecture forum:
Are distributed transactions (i.e. WS-Transaction)
a violation of the "Autonomous" tenet of service orientation? Yes or
no, and why? Kudos if you can address concurrency and scalability (in
an enterprise with multiple interacting services).
I answered this question back in April when I wrote a couple of posts
that explained why cross-service transactions are a bad idea: cross service transactions
and some more thoughts on cross service transactions.
Roger Sessions also agrees with this view (well, actually, it seems he wrote about it well before I did :) ):
When the WS-Transaction specification was first proposed, back in 2002, I
wrote an article explaining why I thought the idea of allowing true
transactions to span services was a bad idea. I published the article
in The ObjectWatch Newsletter, #41: http://www.objectwatch.com/newsletters/issue_41.htm.
Nothing since then has changed my mind. Atomic transactions require
holding locks, and spanning transactions across services requires
allowing a foreign, untrusted service to determine how long you will
hold your very precious database locks. Bad idea. Just because IBM and
Microsoft agreed on something doesn't make it good!
The reason I am bringing this issue back is that Juval Lowy (who wrote the article that triggered my first post on the subject) has recorded an ARCast with Ron Jacobs,
where he reiterated the idea that "transactions are categorically the
only viable programming model" and that you should strive to use them whenever
you can. It seems Juval admits you sometimes need to use Sagas
(which he calls "long running transactions" - you can see in my link
why I think that's a wrong name). He also agrees that you can use
a transactional transport and then only do internal transactions from
each service to the transport (a pattern I call "Transactional
Service"). However, at the end of the day, he still thinks you should
use WS-AtomicTransaction whenever you can.
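A rough sketch of the "Transactional Service" idea as I describe it (my own illustration - the names and the in-memory "queue" are stand-ins for a real durable, transactional transport): each service performs only a local transaction that includes the transport, and no transaction ever spans both services.

```python
# "Transactional Service" sketch: instead of a distributed transaction
# spanning two services, each service performs a *local* transaction that
# includes a durable queue. The queue (a plain list here, a transactional
# transport in real life) bridges the two local transactions.

queue = []          # stands in for a durable, transactional transport
orders_db = []      # local state of the ordering service
invoices_db = []    # local state of the invoicing service

def place_order(order):
    # Local transaction #1: write the order AND enqueue the message together.
    orders_db.append(order)
    queue.append(order)

def process_invoices():
    # Local transaction #2: dequeue AND write the invoice together.
    while queue:
        order = queue.pop(0)
        invoices_db.append({"order_id": order["id"], "amount": order["amount"]})

place_order({"id": 1, "amount": 99})
process_invoices()
# No lock was ever held across the two services, yet both end up consistent.
```

The invoicing service can run minutes later (or after a crash and restart) and the outcome is the same - which is exactly the temporal decoupling that a distributed transaction forbids.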
I agree that
transactional programming is important. I think it is the simplest
programming model (from the developer's side). I would probably never
write an interaction with a database that is not transactional, and I look
very favorably at initiatives for in-memory ACI (no Durability)
transactions such as the one Ralf talks about.
That is, until we get to distributed transactions...
First, we should note that transactions are not "the only viable" option. As Martin Fowler notes,
eBay seems to be doing fine without distributed transactions. Not only
that, they abandoned distributed transactions and went
"transactionless" because they needed one simple thing... scalability.
In most COM+ scenarios you have a single server or a few internal
servers where the distributed transaction happens - and even there you
should plan your transactions carefully if you want to get any kind of
decent performance. In SOA scenarios the situation is more complicated,
as the distribution level is expected to be higher (even if you don't
involve services from other companies). More distribution means longer
times to complete transactions (especially if a participant can flow
the transaction and extend it). It also means increasing the chances of
failure (see Steve Jones' series of posts on five nines for SOA).
In my opinion, the more distributed components you have, the more you
want their interaction to be decoupled in time - i.e. the opposite of
what distributed transactions require.
Juval also said he doesn't buy the denial-of-service
problem I mentioned (supporting a transaction means you allow
locks - if an external party doesn't commit, you retain the lock).
Juval said he assumes that a solution has both authentication and
authorization, so this shouldn't be an issue. For one, I have seen too
many projects where security was neglected or
quickly patched in at the last moment - so I would hardly assume
security. Even with security on, you increase your attack surface.
And that's just half of it. Even if all your service consumers have
good intentions, you still don't know anything about their code. SOA
is not like the "good old days" where you owned the whole application -
this means you cannot trust their security to be ample. You also
don't know anything about their code quality. Services are likely (in
the general case) to be deployed on different machines, even if they
start co-located. I think that a service boundary should be treated as
a trust boundary, just like a tier boundary.
I strongly believe you should make minimal assumptions about what's on the
other side of the service's boundary - and transactions are not a minimal
assumption. SOA and distributed transactions do not go hand in hand - it isn't just
autonomy at stake here. It is a problem for performance and scalability,
and even security, period.
To finish this post - I would also highly recommend looking at Pat Helland's paper "Life Beyond Distributed Transactions: an Apostate's Opinion"
and a post he recently made called "SOA and Newton's Universe
", where he explains more eloquently than I ever could why SOA is not a good fit for distributed transactions.
Steve Jones has (yet another) great post called "Le Tour SOA - why support services are critical, but not important".
You should go read the article, but in a nutshell, Steve explains that
important services are the ones that bring business value, and critical
services are the supporting ones that keep the lights on so the
important services can function properly.
While the post has SOA in the title, I think it is more general and is
also applicable to applications or any other IT-generated components.
In fact, it can also be applied to IT itself, as Nicholas Carr noted
in 2003 when he published his paper "IT doesn't matter".
Nicholas argues that IT will become akin to electricity and as such will be
critical for the business to continue operating, but not important. As a
side note, I'd say this might be true for traditional
businesses but not for businesses where IT is the business (such as
banks, insurance companies, etc.).
Back to critical vs. important - I think it is important for
architects to make this distinction, to be able to prioritize work and
not confuse business value with a semblance of business value due to
criticality for operations. This doesn't mean you can neglect critical
tasks (after all, they are critical...), but it is the important stuff that
will bring your business its competitive edge.
You raise an event when something interesting happens to you; you think
it is important, but you don't care enough to know who is interested.
You are even less interested in personally going to each and
every interested party and letting them know. So instead, you raise
an event and let the poor buggers take care of any implications by
themselves. We raise the event "now", when the change happened - it is
only important now anyway...
Looking from the "poor buggers'"
- the event consumers' - point of view, things are more complicated. There
are events which are cyclic in nature, like stock price updates, the
blips from a sonar, etc. If you missed one, it isn't really
important; you'd get the right information in the next update (actually,
that isn't entirely true - see later in this post). Then there are the
events which only occur once. Sometimes it isn't important for you to
listen to them if you are not up and running at the same time. Other
times you can't afford to lose an event - for instance, if your ordering
service (or component, for that matter) communicates with the invoicing
one using events, you don't want to miss the event of a new order, or else
you would lose money.
This basically means that the event
producer and the event consumer are coupled in time. One way to solve
this is to make sure both of these services are available at the same
time, i.e. if the invoicing service crashed, then processing orders should be
suspended (note that this doesn't mean that you don't accept orders,
just that you don't process them).
Ok - maybe we can just raise the
event "transactionally". This would probably work, but we need to
remember that the event producer doesn't really care about the event
consumers - why would it want to fail because of them?!
A better way would be to "raise" the event over some reliable transport
- but this has a few problems. One is that we've just moved the problem to the
connection between the event producer and the transport. It might be
acceptable to have a transaction between the event producer and the
transport. However, as I've already said, the producer doesn't care much
about the consumers...
We can have persistent subscriptions for
existing consumers to prevent events from getting lost. This both
creates a, er, minor problem that new consumers can't see past events,
and carries the risk of existing subscribers disappearing and their queues
then growing endlessly (or until an administrator removes them).
Ok, let's try to look at the problem from a
different angle. Looking at the events, what we can really see is that
an event has a time-to-live (TTL) as far as the event consumer is
concerned. For instance, in the case of the cyclic events, the TTL is the
interval until the next event. Actually, even with cyclic events the
TTL might be larger - if we are also interested in analyzing trends or
abnormal occurrences (which is why I said it isn't entirely true that we
don't care about old events). In the case of one-time events, the TTL might
be indefinite, or it might be some definite value (one
day, week, year, etc.). Since we can't know the TTL of consumers,
it can be a good idea to make past events available somehow.
So when you design an event-centric architecture like EDA (whether on top
of SOA or not), it is important to think about event consumers. We
don't want to think about specific consumers, since that negates the
benefits of thinking in events, but I would say that you want to think
about event consumers in general - after all, your component is also an
event consumer (do unto others as you would have them do unto you).
One option, which I already talked about,
is to make past events available as a feed. Event consumers can then
come at their own leisure and consume past events (this can be in
addition to raising the events in real time). This provides a
partial solution, as the maximal TTL is determined by the event producer
(after which the event is deleted from the feed). This may be
acceptable, but you must be aware of it.
The other option is to
log all the events and provide an API to retrieve past events. In a
sense, the max TTL is still in the hands of the event producer, but if
you use a database it would probably be a long time compared with a
feed. Alternatively, the events can be logged by a central "always
present" event aggregator (in a manner similar to the aggregated reporting
pattern I described for SOA).
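A tiny sketch of this logging option (my own illustration, with made-up event names and timestamps): the producer or aggregator keeps timestamped events, and each consumer retrieves past events according to its *own* TTL, which is the point of the whole exercise.

```python
# Sketch of logging events with timestamps so consumers with different TTLs
# can each retrieve the past events that are still relevant to them.

import time

event_log = []  # (timestamp, event) pairs kept by the producer/aggregator

def publish(event, now=None):
    event_log.append((now if now is not None else time.time(), event))

def events_within_ttl(ttl_seconds, now=None):
    """Return events still 'alive' from this consumer's point of view."""
    now = now if now is not None else time.time()
    return [e for (t, e) in event_log if now - t <= ttl_seconds]

publish("price=10", now=100)
publish("price=11", now=160)
publish("new-order #7", now=170)

# A trend analyzer with a long TTL sees everything...
assert events_within_ttl(1000, now=200) == ["price=10", "price=11", "new-order #7"]
# ...while a ticker that only cares about the latest cycle sees recent events.
assert events_within_ttl(60, now=200) == ["price=11", "new-order #7"]
```

The producer still bounds the maximum TTL (by how long it keeps the log), but within that bound each consumer decides for itself what is still relevant.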
To sum all this up - while events seem to only matter at the instant in
time they are created (we are used to that thinking from building OO
systems, where all the components are co-located in the same
address space and time, though even there I can think of scenarios where we
would want past events), in a distributed world events need to have a
TTL; the TTLs can vary and are determined by the event consumers.
Lastly, as I demonstrated above, there are several
strategies we can use to help solve the event TTL dilemma (and there
are probably a few others).
A few months ago I wrote here about solving the mismatch between Service
Oriented Architecture (SOA) and Business Intelligence (BI) (see the papers and articles section).
Recently I got the following question from Ben:
The major question I have is around large data sets. As an experienced
BI/DW architect and developer I have worked on a number of large-scale
data warehouses. Retrieving large data sets (i.e. millions of records)
doesn't seem to fit well into SOA. As you state in your article, we
could have another point-to-point interface, where the service which
houses the data we need gets a request and writes out a batch file (XML or
plain ASCII text). Then, using typical ETL, we grab the file and load
it. The underlying source system (service) can use optimization in
generating a large data set (vs. record by record), and
the data warehouse can correspondingly load in bulk.
Like most architectural questions, the answer is "it depends". For
instance, if you do a run-of-the-mill ETL as a one-time setup, then it is
just that - a one-time setup - and I, personally, don't see any
contradiction between SOA goals or tenets and that.
I do think
that it is better to enhance SOA with EDA interactions to provide a
long-term solution to the BI problem. You can also have a dedicated
component that aggregates the information that flows in these events
and builds batch files that are suited for the ETL you've used during
the setup phase (mentioned above).
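Such an aggregation component could be sketched roughly like this (my own illustration - the CSV format and field names are assumptions standing in for whatever flat format your existing ETL already loads):

```python
# Sketch of a dedicated aggregation component: it consumes the events that
# flow through the EDA layer and periodically writes them out as a batch
# file in the flat format the existing ETL already knows how to load.

import csv
import io

buffered_events = []

def on_event(event):
    # Called for each business event flowing through the EDA layer.
    buffered_events.append(event)

def flush_batch() -> str:
    """Render buffered events as a CSV batch (the 'file' the ETL would grab)."""
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=["order_id", "amount"])
    writer.writeheader()
    for event in buffered_events:
        writer.writerow(event)
    buffered_events.clear()
    return out.getvalue()

on_event({"order_id": 1, "amount": 10})
on_event({"order_id": 2, "amount": 20})
batch = flush_batch()
# batch now holds "order_id,amount" followed by one row per event
```

The point is that the data warehouse side keeps its bulk-load path untouched; only the way the batch file gets produced changes.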
It is true, though, that moving an
SOA which is already in place to EDA is not a small feat, but adding
EDA layers does not have to mean that the old interfaces go away -
especially not immediately (remember to treat services as products). If
you have a business that generates millions of records on a daily basis,
then the situation is more complicated. Now you have to think about
the trade-offs between "compromising" SOA by adding a dedicated
interface (or a backdoor to the database) for the ETL vs. the
implications for performance, bandwidth, transition costs, ROI, etc. of
pushing that information with EDA.
I personally believe in pragmatism and the "no-silver-bullet"
approach, so I can't say that EDA is always the best solution (as an aside, this is part of the reason I write my book
as patterns and not as "best-practices guidance"). You may find that ETL is
the best trade-off in your situation. Yes, I know that isn't a
definitive answer - but real life is (usually) a little more
complicated than black-and-white solutions. As architects we need to
find the best trade-off for the situation at hand.
In addition to the drafts of selected patterns I publish on my site,
you can now purchase my book via the Manning Early Access Program (MEAP). This
means you can get chapter drafts as I write them and the complete book
when it's done (ebook or printed). Here is Manning's explanation:
Order now through MEAP (Manning Early Access Program) and get early access to
the book, chapter by chapter, as soon as the chapters become available. You
choose the format - PDF or ThoutReader - or both. By subscribing to
MEAP chapters, you get an opportunity to participate in the most
sensitive, final piece of the publishing cycle by offering feedback to
the author. Reader feedback to the author is welcome in the Author
Online forum. As new chapters are released, announcements are made in
the MEAP Announcement Forum. After all chapters are released, you will
be able to download the complete edited ebook. If you order the print
edition, we will ship it to you upon release, direct from the bindery,
weeks before it is widely available elsewhere.
By the way, this is probably also a good time to mention that I'll be speaking about quite a few of the patterns in Architecture & Design World 2007
which will take place this July.
There's still a lot of work ahead, but I'd already like to thank all the people at
Manning that helped me get this far, especially Cynthia Kane, my
editor (hey, maybe now she'll give me more slack :) ).
Ok, 'nuff blubbering, back to completing chapter 5...
While I am on the topic of REST,
it is probably a good time to comment on my (first) post on InfoQ, "Debate: Does REST need a Description Language?".
I think there's merit in services publishing their message structures
in a machine-readable format. When a service has a machine-readable
contract, generated stubs allow you to build the interaction with fewer
bugs vs. hand-crafted interactions. It also makes it easier to test the
interaction.
I do agree with Stefan's views on runtime interface dependency,
where he said that if a service consumer needs just 20% of the
information in a service, it shouldn't be forced to deserialize (i.e.
know or care about) the whole message. However, I think this is a
weakness of the tooling, not the concept. What if you had a tool that reads
the machine-readable contract, allows you to pick the 20% you need, and
generates for you a stub that ignores the other 80% and "hand picks"
the 20% you need? This is what you would do yourself anyway,
and since the code is generated from the service's definition, it would
be more resilient and error-free. This is effectively designing a
personalized mini-contract from the published general one. It does mean
that when that 20% changes you will be affected, but that is something
you'd have anyway.
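The "personalized mini-contract" idea can be sketched like this (a hand-rolled illustration with made-up field names - a real tool would generate the stub from the service's WSDL/schema rather than from a field list):

```python
# Sketch of a "personalized mini-contract": a consumer declares the few
# fields it needs, and the generated stub extracts only those, ignoring
# the rest of the (possibly large) published message.

import json

def make_stub(fields):
    """'Generate' a stub that picks only the declared fields from a message."""
    def stub(raw_message: str) -> dict:
        full = json.loads(raw_message)
        return {name: full[name] for name in fields}
    return stub

# The service's full message carries many fields...
full_message = json.dumps({
    "order_id": 7, "status": "shipped", "carrier": "acme",
    "weight": 1.2, "internal_route": "X9", "audit_trail": ["a", "b"],
})

# ...but this consumer's mini-contract only needs two of them.
order_status_stub = make_stub(["order_id", "status"])
assert order_status_stub(full_message) == {"order_id": 7, "status": "shipped"}
```

The consumer is only coupled to the two fields it declared; the other fields can change freely without affecting it.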
I also agree that the WS-* standards and the
resulting contracts are complicated (and getting more so). Much of this can
probably be attributed to the "design by committee"
effect. However, there are also some real challenges that the SOA and
ROA architectural styles do not address, and we still need to solve
those. Trying to solve these challenges is, by the way, what prompted
me to write my SOA patterns book.
DevHawk (Harry Pierson) raised a question today
that I have been toying with myself for a while now - if REST is an architectural
style, can it exist without the specific technologies that define it
today? Or as Harry put it:
- REST is a an "architectural style for distributed hypermedia systems".
- REST "has been used to guide the design and development" of HTTP and URI.
- Therefore REST as an architectural style is independent of HTTP and URI.
I get the feeling that the REST community would consider a solution
that uses the REST architectural style but not HTTP and/or URI as "not
REST". What I had in mind, for example, is to use messaging, where the equivalent of the URI would be a topic hierarchy.
Topic hierarchy allows you to have a unique "URI" for each resource.
The next thing we need to take care of are the PUT, GET, POST and DELETE verbs - we can do that by making the verbs part of the message headers.
As an aside, I'll also say that if we try to think about it as an
architectural constraint, then we don't necessarily have to use these
specific verbs; a more general rule would say that the verbs are uniform and
well known, rather than being these specific ones.
The rest (no pun intended) of the concerns, like specifying related states etc., can be dealt with by making conventions on the message formats.
Is that still REST?! I wonder...
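As a thought experiment (hypothetical code, not an existing library - the message shape and topic names are my own invention), the mapping above - topic hierarchy as the URI, uniform verbs carried as message headers - might look like:

```python
# Thought experiment: a REST-ish uniform interface over messaging.
# The topic hierarchy plays the role of the URI (one topic per resource),
# and the uniform verbs travel as a message header instead of an HTTP method.

resources = {}  # topic -> representation, standing in for resource state

def handle(message):
    verb = message["headers"]["verb"]    # uniform, well-known verbs
    topic = message["headers"]["topic"]  # e.g. "shop/orders/7"
    if verb == "PUT":
        resources[topic] = message["body"]
        return {"status": "ok"}
    if verb == "GET":
        return {"status": "ok", "body": resources[topic]}
    if verb == "DELETE":
        resources.pop(topic, None)
        return {"status": "ok"}
    return {"status": "unknown-verb"}

handle({"headers": {"verb": "PUT", "topic": "shop/orders/7"},
        "body": {"item": "book"}})
reply = handle({"headers": {"verb": "GET", "topic": "shop/orders/7"}, "body": None})
assert reply["body"] == {"item": "book"}
```

Whether this still satisfies REST's constraints (uniform interface, yes; but what about hypermedia?) is exactly the open question.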
In any event, what worries me the most in regard to REST is
the religious manner in which some people seem to treat it. By the way, that
is the same phenomenon we see with some of the Agile folks. As for me? Well, I don't really care if I fit this label or the other. I am just paid
to deliver working and viable software :) - but hey, that's another discussion.
Yesterday I attended an SOA governance presentation by Brent Carlson.
The presentation was basically an updated version of an article he authored in 2006, "SOA Governance Best Practices - Architectural, Organizational and SDLC Implications".
As a tool vendor, Brent has a lot of focus on the governance processes,
which I don't completely agree with (I prefer Jim Coplien's
organizational patterns approach - see my post from last week).
I also think the reuse figures he cites
(registration required) are a little optimistic, at least for what I consider the right granularity for services.
He also made a few points I strongly agree with:
- Brent talked about the difference between the needs of a run-time
service repository (e.g. UDDI or an ESB) and a development-time one.
You need to address the services and their interactions during
development, and you need to do that in a way that would be easy for the
development teams. For example, one thing you want to log is usage - who
is using the services - since that will let you perform impact analysis
when you have to make a change.
- Building an SOA for an organization is an iterative process, not a
"big-bang" effort. This means you can't do just top-down design; you
need to be pragmatic and also roll out working services.
The reason for this post, however, is the insight Brent gave regarding treating services as products rather than applications.
Treating services as products is important because, even if you don't
believe that the SOA initiative should be an iterative process, once
the move is finished you would have quite a few services deployed in
your organization. These services would integrate and interact with
other services - some of which are outside of your organization. You would
also want to capitalize on the flexibility claim that SOA makes and adapt
your services to the changing business needs.
The challenges you face regarding updating and upgrading functionality,
anticipating consumers' needs, allowing consumers to get used to
changes etc. are exactly the challenges that product management techniques
and principles come to answer.
Treating services as products means a lot of things. Let's look at a
few examples. For one, it means predictable release cycles: services,
like products, get updated over time, and you want service users to be able
to cope with these changes; predictable release cycles mean they can
get organized in advance. Another aspect is the emphasis on backward
compatibility, e.g. orderly deprecation of features and version
management. One other thing is introducing a "product manager":
someone whose responsibility is to interact with customers, and
potential customers, understand their needs and build a release road
map for the services.
You might be used to doing some of that with applications, but
thinking about services as products makes all this more explicit, and
that in itself is also important.
Udi Dahan writes that ".NET/Java Interop is not a reason for SOA".
Udi writes that companies that need to integrate two technologies turn to web-services, and that
"The only problem is that in order for things to work right, they really must
have a chatty interface, and flow transaction context between these
“services”, and all the other things I describe as anti-patterns"
Udi is right that if you don't rethink and remodel your systems you
will (probably) not have an SOA, as you are likely to find yourself
implementing anti-patterns such as the ones he mentions.
However, using web-services does not automatically mean that you are
doing SOA. If you don't think about moving to SOA, you can still opt
to use web-services as a remoting or RPC technology to connect two
systems. The advantage over the other proprietary products Udi mentions
is that web-services are a standard technology. Whether this will work well or
fail is orthogonal to the technology choice; it depends on the
architectures of the systems you integrate. If you need to flow
transactions between the systems, you'd need that even if you
cross-compile one of the applications in the other environment.
Another thing I don't agree with is the word must.
First, while it is likely that older systems have chatty interfaces, it
is not a must. The designers of the legacy system may have thought
about the consequences of distribution even without regard to SOA. Also, you
can still wrap an existing system with a service contract (using
web-services or any other technology) and not get chatty interfaces
etc. However, that means that the wrapper should have some substance or
business logic inside it to mask the old system's behavior. This is
especially important if you are thinking about moving to SOA and you
take into consideration that the business will not just halt and wait
there until you are done. You have to think about interim solutions;
such interim solutions can include wrapping a legacy system with an
Edge Component and an SOA facade (a pattern I call Legacy Bridge) while
you move in the general direction of a full-blown SOA.
I just read Shy Cohen's (via Nick Malik) article in Microsoft's Architecture Journal, entitled "Ontology and Taxonomy of Services in a Service-Oriented Architecture".
Shy provides a list of what he calls service types. He identifies two major types, bus services and application services. He then continues to sub-divide them: the bus services are divided into communication and utility services, and the application services are divided into entity, capability, activity and process services.
I have to say it was quite alarming to see this coming from someone who had deep involvement in defining
Windows Communication Foundation (Indigo).
Where do I start?
Well, for one, it seems he completely fails to make the distinction between Services as in Service Oriented Architecture and services as in capabilities or features an infrastructure provides. The "communication services" are for the most part capabilities that a service infrastructure (such as an ESB) provides, not services you would define in an SOA initiative.
And then there's the matter of service granularity and the difference between remote objects and SOA. For instance, the example Shy gives for a "method" (his word) on a Customer service (entity service):
"An example of a domain-specific operation is a customers service that exposes a method called FindCustomerByLocation that can locate a customer's ID given the customer's address"
Why would a service return a customer ID? This is the kind of call you would make on an object you hold a reference to, not some remote "something" that also wants to authorize your call and may reside in a different company. This kind of thinking is what made remote objects fail. Gregor Hohpe explained that nicely in a paper called Developing Software in a Service Oriented World:
The Transparency Illusion. Distributed components promised to hide remote communication from the developer by making the remoteness "transparent". While the basic syntactic interaction between remote components can be wrapped inside a proxy object, it turned out that dealing with partial failures, latency, and remote exceptions could not be hidden from the developer. It turned out that 90% transparency was actually worse than no transparency because it gave developers a false sense of comfort.
As a side note, Gregor recently gave a presentation that covers this paper at JavaZone which you can watch online at InfoQ
Returning to Shy's article, let's take a look at another quote:
"A Capability Service may flow an atomic transaction in which it is included to the Entity Services that it uses. Capability Services are also used to implement a Reservation Pattern over Entity Services that do not support that pattern, and to a much lesser extent over other Capability Services that do not support that pattern."
I already explained why cross-service transactions, and especially flowing transactions, are not a good idea in SOA, so I won't do it again here - but you can read about it both here
("Transactions Between Services? No, No, No!")
and here.
Also, I truly hope Shy didn't mean .NET data sets when he said "In some cases, typically for convenience reasons, Entity Service implementers choose to expose the underlying data as data sets rather than strongly-schematized XML data. Even though data sets are not entities in the strict sense, those services are still considered Entity Services for classification purposes."
In any event, the whole decomposition of services into fine-grained "capability", "activity" and "process" services takes no account of the fact that SOA is a distributed architecture... maybe Microsoft is not affected by the fallacies of distributed computing.
* ad nauseam (Latin) - to the point of disgust
Pat Helland is back at Microsoft (after a two-year vacation at Amazon) and, more importantly, he has also restarted blogging. I have only met him in person a few times - but he is definitely one of the few people really worth listening to - especially when it comes to distributed computing. Not only does he make interesting observations, he is also capable of explaining them in a crisp and interesting manner. Indeed, it didn't take too long (his second post) before he blogged some valuable content. The post is called Memories, Guesses and Apologies
(go read it).
Pat talks about how the notion of time in a distributed environment is subjective, how you can't really know what happened before what, and what we can do about it (I really think you should just go read it :) ).
Another related aspect of the phenomenon Pat mentions is that, at any snapshot in time, the chances of having a single unified truth in a distributed system degrade in proportion to the system's load. I had a chance to work on a few systems where some of the sites were either occasionally connected or connected over low-bandwidth networks. This situation makes the whole notion of guessing the state and compensating and/or apologizing for wrong conclusions much more explicit than in an always-connected, high-bandwidth system. Nevertheless, latency still exists even in connected systems, and you should really be wary of assuming a universal truth - unless you can stop the business long enough to allow complete synchronization.
As I mentioned a few days ago, we can't afford to have cross-service transactions
(I also think we can't afford too many distributed transactions in non-SOA architectures, but this is especially true for SOA), which makes things even worse in this sense. One thing we can do in an SOA to achieve distributed consensus is to run a Saga. A Saga, which is a long-running conversation between services, is probably one of the most important interaction patterns for SOA.
You know what? Instead of trying to explain it here in haste, I'll just publish the pattern draft - I'll try to do that before the end of the week.
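Until the draft is out, here is a tiny sketch of the core idea (my own toy illustration, not the pattern draft): each completed step in the conversation registers a compensating business action instead of holding a lock, and on failure the completed steps are compensated in reverse order. The step names (stock reservation, payment) are invented for illustration.

```python
# Toy saga: (action, compensation) pairs. No locks are held across steps;
# on failure we compensate completed steps in reverse order.

def fail(msg):
    raise RuntimeError(msg)

def run_saga(steps):
    """Run each action; on failure, run compensations for completed steps."""
    done = []
    try:
        for action, compensation in steps:
            action()
            done.append(compensation)
    except Exception:
        for compensation in reversed(done):
            compensation()  # undo by business action, not by DB rollback
        return False
    return True

log = []
steps = [
    (lambda: log.append("stock reserved"),
     lambda: log.append("stock released")),
    (lambda: fail("payment refused"),        # this step fails...
     lambda: log.append("payment voided")),  # ...so its compensation never runs
]
ok = run_saga(steps)
# ok is False; log holds ["stock reserved", "stock released"]
```

The point of the pattern is exactly that the "undo" is a first-class business message between services, not a transaction abort.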
In the article
mentioned in the previous post, I talk about adding EDA to SOA and how you can use Complex Event Processing (CEP) to process the event streams, infer trends and enhance the understanding of what happens inside your business. All the tools I knew about were Java tools, but now I've found out (via Nauman Leghari's blog)
that there's also a .NET CEP engine, and it is even open source - it is called NEsper,
and like many other tools it is a port of a Java tool.
I am not sure how good it is - but at least I'll have an interesting evening today :)
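For readers who haven't met CEP before, here is a toy illustration (plain Python, not NEsper's actual API - real engines express this as a continuous query) of the kind of standing query a CEP engine evaluates over an event stream: a sliding window that flags a trend, here a moving average of order amounts crossing a threshold. All field names and thresholds are made up.

```python
# Toy CEP-style query: flag events where the moving average of the last
# `window` order amounts exceeds `threshold`. Real engines run such
# queries continuously; this just replays a finite stream.
from collections import deque

def detect(events, window=3, threshold=100):
    recent = deque(maxlen=window)   # sliding window over the stream
    alerts = []
    for e in events:
        recent.append(e["amount"])
        if len(recent) == window and sum(recent) / window > threshold:
            alerts.append(e["id"])  # "complex event" inferred from simple ones
    return alerts

stream = [{"id": i, "amount": a} for i, a in enumerate([50, 80, 90, 150, 200])]
alerts = detect(stream)  # ids where the 3-event moving average exceeds 100
```

The value of a real CEP engine is that such windows, joins and patterns are declared rather than hand-coded, and evaluated over live streams.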
An article I wrote on Business Intelligence (BI) and Service Oriented Architecture (SOA) has just been published on MSDN.
You can find it here
The article explains the SOA & BI mismatch and how to bridge it by adding EDA to SOA. (I blogged about it here before, but the article is more ordered and complete.)
I saw the following question on one of the forums I follow:
"I have studied up on the SOA approach and it all sounds good. But most articles stop at the theory.
Say I sell things. I have a CustomerProfileService. The application
does CRUD through this service to a back end database. It's autonomous.
I have another service, InventoryItemProfileService.
Again, the application does CRUD through this service to a back end
database. It is autonomous from the CustomerProfileService. Not only
may it live on a different DB from the CustomerProfileService, it might
exist on a different platform.
Now let's get to the InvoiceService.
Let's say, from the client side, I would guess that I would have a
CreateInvoice(custID, itemID) method. The InvoiceService would then
call out to the CustomerProfileService for the profile that meets the needs
of the invoice, then another call out to the
InventoryItemProfileService for the item descriptions and such.
Here is the question: it would seem like in the back end (the db) of the
InvoiceService there would be tables to support the customer info and
the item info from the invoice. Prior to SOA, when everything
was in the same db, these requirements would be largely satisfied by
joins. Now a logical join across services just seems radically
expensive (every time you touch the invoice) - hence the need for the
customer and item tables local to the invoice service.
Does this sound right? Just how often does the InvoiceService have to go back to these other supporting services?"
I also got a comment with a similar theme on my Cross Service Transactions post.
I see a few problems with the way the services in the question are
modeled (like the CRUDy interface), but in the end it all boils down to the
root cause - and the real problem: the granularity of the services.
When "a service" is too small, it doesn't make sense to separate its
tables from those of other services. It doesn't make sense to have
transactions that span only what's internal to the service. It doesn't
make sense to pay the price to make a service autonomous (like caching
reference data from other services). When the granularity is too small,
you will often find that you need to make a lot of interactions with
other so-called services. You are more likely to have CRUDy interfaces.
You are also more likely to have a slow-performing solution and suffer from low availability.
Services at the granularity mentioned above are, in my opinion, a
nightmare that would probably make you work very hard to keep the
SOA principles in place - or, the more likely option, you would
circumvent the principles so that you can get something maintainable,
usable and performing (and flip the bozo bit on this whole SOA thing).
So what is the right granularity? Well, it is not a one-size-fits-all kind
of thing, but as a rule of thumb I would say anything just shy of a
sub-system and up. A service has to have enough meat so that it would
make sense having it autonomous; that the transactions would fit nicely
inside its boundaries; that it would be worthwhile making it
highly available; that you can pass a complete task/document to it and
it won't have to talk to a gazillion other services to complete
processing it; etc.
If your application's idea of invoices is
two tables, one with a header and one with invoice details - then don't
make that a service. If invoicing is a sub-system with complex business
rules, a lot of options and what-not - then it can be a good candidate.
Think about it next time you design a service :)
After seeing Juval Lowy's article on WCF transaction propagation in the May issue of MSDN Magazine, I posted "Transactions
Between Services? No, No, No!" in my DDJ blog. I got a few comments which I thought warrant a post in their own right.
The previous post was triggered by an article that promoted flowing
transactions (i.e. you perform a transaction against one or two services, and
then one of the services calls an additional service and it joins the
transaction). It is important to say that I think transactions between services
should be discouraged regardless of the automatic extension of transactions;
transaction propagation just makes matters worse.
There might still be some edge case where you have to have an atomic
transaction from a service consumer to the service. I think that in the vast
majority of SOA implementations you shouldn't do that, and I would think real
hard about the other options before allowing it in my architecture. In general, I think cross-service transactions are an antipattern (and that's the way you'd find them documented in my SOA patterns book :) )
The comments I received began with:
"Cross service transactions are a sure way to introduce coupling and
performance problems into your SOA." I'm not sure I agree with that thought.
Logically speaking, cross service transactions are a must. The question is how
to implement them. There are two mechanisms we can use for implementing TXs: (1)
ACID TXs; (2) Long-running TXs. The latter is preferable for the cases Arnon is
talking about (large geographical distances, multiple trust authorities, and
distinct execution environments). ACID TXs are more suitable for what Guy has
mentioned (DeleteCustomer service invokes the DeleteCustomerOrder service
internally). I agree with Arnon the a-synchronicity is preferable, but we all
have encountered use-cases where ACID-ness is required from a business
requirement level... [snipped]
One minor point in regard to this comment is that I don't like the term
"long-running transaction" - there are long-running interactions between services,
and I think the term Saga describes them better. Sagas are made of a series of
business activities that flow back and forth between services to realize a
larger business process. Note that these interactions don't necessarily have
transactional semantics.
Which brings me to the more important point of looking at the statement
"Logically speaking, cross service transactions are a must". I don't think so.
For instance, say a service that manages the inventory in a warehouse receives a
request for some items and later a cancellation of that request. The first
request can trigger the inventory service to order some more items from a
supplier. Whether or not the cancellation would cause a cancellation of the
order from the supplier depends on the business rules of the inventory service for
inventory levels for the items ordered. It might also depend on whether or not
the items have already been received, etc. The cancellation (the "abort") of the
original request does not have to translate to an abort (or compensation) on the
request receiver. Furthermore, if the service communications model is based on
the push model (e.g. using EDA with SOA), the cancellation notice would just be
propagated without special regard to the inventory service. It is the inventory
service's responsibility to understand the ramifications of this event and act
accordingly.

Even the example given in the comment, "DeleteCustomer service
invokes the DeleteCustomerOrder service internally", is not a good candidate for
ACID transactions (there's also a problem of service granularity here - I'll
talk about it later). When the customer service decides to delete a
customer and requests the Orders service to delete orders, there's a reasonable
chance that some of the orders are already paid for but not delivered. In this
case the customer service cannot really delete the customer until all the paid orders
are resolved. Or maybe the order service is a facade to a nightly batch that does
the actual deletion. I know I am just fantasizing with these examples, but the
point is that the customer service has no knowledge of the order service, or the
inventory service above, except the messages supported in their contract. To
assume something about the internal behavior is problematic. Even if you know
about the internal structure at the onset, the whole idea of SOA is that the
services can evolve independently from each other...
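The inventory example above can be sketched in a few lines. This is my own hypothetical illustration (the function, field names and rules are all invented): the point is that a cancellation message is not an "abort" - the receiving service applies its own business rules to decide whether any compensating action (cancelling the supplier order) is warranted.

```python
# Hypothetical sketch: the inventory service's local decision when it
# receives a cancellation event. The "compensation" is a business decision,
# not a transaction rollback. Thresholds and names are invented.

def on_cancellation(item, inventory_level, reorder_point, supplier_order_open):
    """Decide how to react to a cancelled request for `item`."""
    if not supplier_order_open:
        return "keep-stock"              # nothing to compensate
    if inventory_level >= reorder_point:
        return "cancel-supplier-order"   # compensation makes business sense
    return "keep-supplier-order"         # stock is still needed anyway

decision = on_cancellation("widget", inventory_level=20,
                           reorder_point=10, supplier_order_open=True)
# here the rules say the supplier order can be cancelled
```

Note that the sender of the cancellation never sees, and cannot assume, which branch was taken - only the contract's messages are shared.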
Another thought, triggered by the granularity of the services in the comment's
example (a DeleteCustomer service vs. a Customer service that
also supports deleting customers), is that we should be really conscious of the
difference between SOA and other architectures like 3-tier client/server. SOA is
actually more distributed than 3-tier - we cross a distribution boundary every
time we pass a message from a service to a service, and not just when we move a
message from a client tier to an application server. We add this distribution to
gain advantages in flexibility and agility. However, this is also
a weakness of SOA (consider, for example, that Martin Fowler's first law
of distributed object design is "Don't distribute your objects!"), which means we
should really pay attention to the way services interact with each other:
- The granularity of services - having a lot of fine-grained services means
there will be a lot of interactions over the wire (even if you don't go out to
the network, you still have to serialize/deserialize, follow the security policy
etc.) rather than internal interactions that are much faster.
- The granularity of messages - the same considerations should also guide us
to try to create larger and fewer messages. For the example above, instead of a
DeleteCustomerOrder message, maybe something like an UpdateCustomersOrders
message that can hold a list of customers and orders and the status changes.
By the way, this would also support off-line clients better, since they can
batch their updates and send them when they reconnect.
- The assumptions we can make on the other service's availability,
performance, internal structure, the trust we have for it etc. - we should try
to minimize the assumptions we make and concentrate on what can be inferred from
the contract. Remember that policies can change externally, so the business logic
within a service cannot count on them being constant. This brings us back to the
issue of transactions: every cross-wire interaction increases the chances of
failure, and in a transaction one failure invalidates the whole
transaction. Every cross-wire interaction within a transaction also increases the
length of time we lock internal resources (even if we do trust all the involved
parties) - especially if that transaction can extend itself automatically. Also,
as I've mentioned in the previous post, transactions open the door for
denial-of-service attacks.
If we want to reap the benefits that are sold under the SOA moniker, like
flexibility and agility, we really have to pay attention to this extra
distribution and design our services differently than we would components in a
3-tier architecture - but hey, that's why they pay us the big bucks, right? :)
I should probably also add that building SOAs is not a goal in itself.
We can build perfectly good solutions using other architectures - but if we find
that we do need SOA (or any other architecture for that matter), we have to pay
attention to the way we implement it, to both keep its benefits and not harm
other quality attributes like performance, security etc.
I've updated the draft for the Edge Component pattern to a more legible version (thanks to Cynthia Cane, my editor @ Manning).
The Edge Component pattern solves the following dilemma: how do we allow the business aspects of the service, technological concerns and other cross-cutting concerns like security, logging etc. to evolve at their own pace and independently of each other?
I was going to try to explain why it took me so long since I posted the last pattern draft on-line, when I saw that a couple of my fellow Manning authors have already done that. See Roy Osherove's "Writing a book is like developing Software" and Fabrice Marguerie's "My Writing Experience". I have a similar experience here - there are a few commonalities between writing software and writing books, and it seems that the countermeasures of shorter iterations, refactorings (which I guess writers know as rephrasing) and increased inspections work here as well.
Finally, I am back to writing new stuff and I am completing Chapter 4 now. Chapter 4 deals with SOA security patterns, and I've decided to release the "Service Firewall" pattern as a free draft. Note that it is a draft and it can change by the time it gets to publication; for example, the Edge Component pattern, which I published a few months ago, has already gone through some extensive rewrite (maybe I'll post the updated draft..)
The Service Firewall helps deal with malicious "service consumers" and protects the services from several types of attack, including, for example, XDoS (XML Denial of Service) and malicious content, as well as preventing private information from leaking out of the service.
You can download the draft for Service Firewall pattern from here .
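To give a feel for what such a component does (this is my own minimal sketch, not the book's implementation - a real service firewall would validate against the service's actual schema, typically XML), here are two of the checks it would apply before a message reaches the service: a size limit against XDoS-style oversized payloads, and shape validation against malformed or unexpected content. All limits and field names are invented.

```python
# Toy service-firewall sketch: reject oversized or malformed messages
# before they reach the service. Limits and message shape are illustrative;
# JSON stands in for the XML a real deployment would inspect.
import json

MAX_SIZE = 10_000                          # bytes; illustrative XDoS limit
REQUIRED_FIELDS = {"operation", "payload"}

def firewall(raw_message: bytes):
    """Return (True, parsed message) or (False, rejection reason)."""
    if len(raw_message) > MAX_SIZE:
        return (False, "rejected: oversized message (possible XDoS)")
    try:
        msg = json.loads(raw_message)
    except ValueError:
        return (False, "rejected: malformed content")
    if not isinstance(msg, dict) or not REQUIRED_FIELDS.issubset(msg):
        return (False, "rejected: unexpected message shape")
    return (True, msg)                     # safe to forward to the service

ok, result = firewall(b'{"operation": "getQuote", "payload": {"symbol": "X"}}')
```

The key design point is that these checks run at the edge, so the service's own business logic never sees hostile input.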
Following my post on SOA definition, Alex left the following comment
“One question - how can an organization achieve "agility" through an SOA, if not through "re-use"? Isn't re-use really the ROI for implementing a Service?”
The way I see it, agility means the ability to change rapidly, and it doesn't have to mean reuse - for instance, it can come from the ability to replace a component without disturbing other dependent components - though you can say that this is reuse, as you are reusing the interface (contract).
When you replace or update a service you may reuse some, or maybe even all, of the previous version of the service - as long as the context for that service didn't change significantly; if it did, the granularity of the reusable components will be much smaller than a "service".
I would also note that I think there's a difference between reuse and use. If you take the same ordering capabilities and include them in two business processes, that's just using them. I've seen reuse of services in product companies, where services were reused with few modifications between two or more solutions, but this isn't very common.
Regarding the ROI of SOA - that doesn't have to be reuse, or just reuse; it is also things like easier connectivity, so that you can integrate faster with partners or newly developed components. Another way to measure ROI is to measure the gains in easier replaceability and adaptability, so you can respond faster to changing business requirements (e.g., changing what counts as a VIP customer without letting any of the service's consumers know that something changed).
Udi has some comments on my SOA definition. Udi says that the definition I provided does not support the notion of publish/subscribe using topics for services. My answer to this is yes and no :)
First things first: I never said (or at least I never meant to say) that contracts are limited to only incoming messages. Contracts contain incoming and outgoing messages. I probably should have stated it more clearly, though.
Udi says “Contract: Who owns the message type being published? The publisher or the subscriber? Common SOA knowledge would say that the message belongs to the contract of the service that receives it”
I don't know who "Common SOA knowledge" is. In my opinion, this thinking is wrong even for request/reply: the reply message belongs to the service that sends the reply.
Regarding endpoints - if the subscribers go to a topic as in "ServiceName\TopicName", then yes, I would call that an endpoint, since this is a well-known address consumers (subscribers) go to in order to find messages published by a service.
Regarding consumers Udi says “ Is the publishing service “using” the subscriber when it publishes a message? I don’t think so, and the subscriber definitely isn’t using the publisher at that point either. So, we’ve got some inter-service message-based communication going on and it isn’t clear if we even have a service consumer. In fact, if all a service ever did was subscribe to some topics, and publish messages on other topics, it looks like we’d have very loose-coupling but be straying from the common SOA wisdom.”
Maybe that's just semantics, but I don't see why the subscriber isn't using the publisher. The publisher publishes a message on a topic; this is part of its offering. The subscriber chooses to consume that information and maybe do some stuff with it - possibly publishing some other messages. That's a "using" relationship to me.
Nevertheless, SOA is not a synonym for "distributed system", so there are cases where distributed components that communicate through messages aren't SOA. For example, publish/subscribe using topics, where the topics are common and shared between components so that multiple services can publish on the same topic, does not, in my opinion, fall under the definition of SOA. This doesn't say that this is a bad architecture in any way - but it isn't SOA either.
As I said in the "What is SOA" posts, for an architecture to be SOA you need autonomous components that publish and accept messages, defined in contracts, delivered at endpoints and governed by policies, to service consumers - no more, but no less either.
I've been talking about SOA for a while now; it's finally time to (try to) properly define it.
I published this as 5 posts on my DDJ blog, and I thought it was good enough to be published as a single whitepaper:
"Service Oriented Architecture, or SOA for short, has been with us for quite a while. Yefim V. Natis, a Gartner analyst, first talked about SOA back in 1996. However, it seems that only in the recent year or so has SOA matured enough for real systems based on the SOA concepts to start to appear - or has it? There is so much hype and misconception surrounding SOA that we first have to clear them all up before we can explain what SOA is - let alone identify who really uses it...." (Download full PDF (670K))
You can see additional presentations and papers I wrote here
[originally published in my DDJ blog]
You may have read my BI and SOA post where I suggested using EDA within SOA to solve the BI/SOA impedance mismatch. Jack Van Hoof made the following comment on that post:
Many people think of SOA as synchronous RPC (mostly over Web Services). Others say EDA is SOA. And there are many people who say that the best of EDA and SOA is combined in SOA 2.0. Everybody will agree that there is a request-and-reply pattern and a publish-and-subscribe pattern. It is easy to see that both patterns are an inverse of each other….
I think that "synchronous RPC" is not a very good (or useful) definition for SOA (see my series on what is SOA anyway). Nevertheless, I also think that even if all you have is synchronous request/reply, you can still implement both asynchronous messaging and EDA. How can we implement asynchronous messaging?
Option 1: Duplex channel
Let's say you are a service consumer. You send me your request. Instead of a reply, I just acknowledge that I got the message. I put the message into a queue and process it in my "spare" time. I then call you with the answer.
Option 2: One-way channel
Again you send the request. Instead of a reply, I give you a token or a ticket for the answer. When you think it is time - for example, when the time promised in the SLA elapses (or whenever) - you call me again, give me the ticket, and I look up the answer and give you your reply. If we hide all this protocol inside some software infrastructure, the applications see asynchronous messaging even though we have synchronous request/reply on the lower levels.
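Option 2 can be sketched in a few lines. This is an illustration of the protocol only - the class and method names (`submit`, `work`, `collect`) are invented, and in a real system both calls would be synchronous web-service operations and the "spare time" processing would run on the service's side.

```python
# Ticket-based asynchrony over two synchronous calls:
# call 1 returns only a ticket; call 2 redeems the ticket for the answer.
import uuid

class SlowService:
    def __init__(self):
        self.pending = {}   # ticket -> request awaiting processing
        self.answers = {}   # ticket -> finished result

    def submit(self, request):
        """Synchronous call #1: accept the request, reply with a ticket."""
        ticket = str(uuid.uuid4())
        self.pending[ticket] = request
        return ticket

    def work(self):
        """Processing done on the service's own time (here: uppercasing)."""
        for ticket, request in self.pending.items():
            self.answers[ticket] = request.upper()
        self.pending.clear()

    def collect(self, ticket):
        """Synchronous call #2: redeem the ticket; None if not ready yet."""
        return self.answers.get(ticket)

svc = SlowService()
ticket = svc.submit("price me")
early = svc.collect(ticket)   # too early: the answer is not ready (None)
svc.work()                    # the service processes in its "spare" time
answer = svc.collect(ticket)  # now the consumer gets the real reply
```

Hiding `submit`/`collect` behind an infrastructure proxy is what makes this look like asynchronous messaging to the application.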
Okay, so what about events? How can we publish events using just request/reply? The previous examples would not work, since we could miss out on important events.
If you are reading this blog -- chances are you already have the answer working on your computer -- yes, it is RSS. Think about it: using an RSS reader that polls the server that publishes this blog, you reach out using synchronous request/reply and get all the posts (events) that were added since the last time you asked.
There are a few additional architectural benefits to working this way. For one, the service does not have to manage subscribers. Secondly, the consumer doesn't have to be there the moment the event occurred to be able to consume it -- and the management and set up are easier and simpler than using queuing engines.
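The RSS idea above boils down to an append-only event feed that consumers poll. Here is a minimal sketch (my own toy `Feed` class, invented for illustration - a real feed would use RSS/Atom over HTTP) showing the two benefits just mentioned: the publisher tracks no subscribers, and a consumer that was away simply catches up on the events it missed.

```python
# Pub/sub over request/reply, RSS-style: the publisher appends events to a
# feed; each consumer polls with the position it last saw.

class Feed:
    def __init__(self):
        self.events = []      # append-only event log; no subscriber list

    def publish(self, event):
        self.events.append(event)

    def poll(self, since=0):
        """Return the events after position `since` and the new position."""
        return self.events[since:], len(self.events)

feed = Feed()
feed.publish("order-created")
feed.publish("order-shipped")

fresh, pos = feed.poll(since=0)     # consumer catches up on everything so far
later, pos = feed.poll(since=pos)   # nothing new yet: empty list
feed.publish("order-billed")        # event occurs while consumer is away
newer, pos = feed.poll(since=pos)   # next poll delivers only what was missed
```

The consumer owns its cursor (`pos`), which is exactly why the publisher needs no subscriber management.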
Jack Van Hoof left the following comment on my post on BI & SOA:
"Many people think of SOA as synchronous RPC (mostly over Web Services). Others say EDA is SOA. And there are many people who say that the best of EDA and SOA is combined in SOA 2.0.
Everybody will agree that there is a request-and-reply pattern and a publish-and-subscribe pattern. It is easy to see that both patterns are an inverse of each other. In my article 'How EDA extends SOA and why it is important' I explained the differences between the two patterns and when to use the one or the other.
Because of the completely different nature and use of the two patterns, it is necessary to be able to distinguish between the both and to name them. You might say making such a distinction is a universal architectural principle. Combining both of the patterns into an increment of the version number of one of them is - IMHO - not a very clever act. I believe it is appropriate and desirable to use the acronyms SOA and EDA to make this distinction, because SOA and EDA are both positioned in the same architectural domain; SOA focusing on (the decomposition of) business functions and EDA focusing on business events."
I agree with some of the things Jack says, but not all of them. The way I see it, EDA and SOA are two different architectural styles, but I guess I see it a little differently than Jack does.
EDA is an evolution of the publish/subscribe style and can exist independently of SOA, i.e. you can implement it with other architectural styles. SOA is an evolution of the component-based development style, with an emphasis on interoperability and adaptability.
However, I don't agree that SOA is "Synchronous RPC". That's just the initial "wave" of SOA implementations, since synchronous interactions are easier to grasp and implement. I think that while adhering to SOA principles you can also implement additional interaction patterns, including asynchronous messaging, publish/subscribe and EDA (and combining SOA with EDA is what I suggested for solving BI in an SOA).
I don't like the SOA 2.0 term either, but that's just because I don't see a need for defining a new term :)
I'll post more about this once I finish the "What is SOA anyway" series on DDJ, where I explain the way I see SOA.
[based on a few posts from my DDJ blog]
Implementing a Business Intelligence (BI) solution on top of Service Oriented Architecture (SOA) is not a simple feat. A recent survey by Ventana Research shows that "...only one-third of respondents reported they believe their internal IT personnel have the knowledge and skills to implement BI services." There's a good reason for that: there is an inherent impedance mismatch between BI and SOA which takes some effort to overcome. The purpose of this paper is to explain the problem as well as look at the possible solutions.
Service-Oriented Architecture is about autonomous, loosely coupled components. These traits give you lots of benefits, such as greater flexibility and agility, but they also mean that services have private data. Data that you don't want to expose to the outside, as exposing it will decrease autonomy and increase coupling. This is why services only expose data and processes via contracts rather than exposing their internal structure.
That is all fine until you start to think about business intelligence. The cornerstone of any business intelligence initiative is gathering, collecting and consolidating data from all over the place. Once you have the data, you can use tools to analyze it, data mine it, slice, splice, aggregate, and whatnot. Traditionally, BI builds on ETL (Extract, Transform, Load), which goes directly to the databases of the involved sources.
And here lies the problem: On the one hand we have services that want to keep their data private, and on the other we have a datamart or warehouse that wants that data badly.
What are our options?
- If you go with traditional ETL, you introduce coupling into your service.
- If you only rely on contracts that were constructed for business processes you may be missing out on important data.
- If you build a specific contract that exposes "all" the data, you are back at point-to-point integration -- and solving point-to-point integration is one of the reasons we want SOA in the first place.
The second option seems the most reasonable of the three, but it also has several problems. One problem is that the BI needs to know about all the contracts. The second was already mentioned: important data might be missing. The third problem is that the BI system needs to fetch data from the services, which means it may miss out on data in the intervals between requests. On the other hand, make the requests too frequent and you can easily congest your network, as well as cause a DOS on your own services.
Clearly we need a fourth option
In my opinion, the best way to tackle BI in SOA is to add publication messages into the contract. By "publication messages", I mean that the service will publish its state, either periodically or per event, to anyone who is listening. This is a service communication pattern which I call "Inversion of Communications", since it reverses the request/reply communication style which is common in SOA.
To make the solution complete, you can add additional request/reply or request/reaction messages to allow consumers to retrieve initial snapshots. Following this approach, you get an event stream of the changes within the service in a manner that is not specific to the BI. In fact, having other services react to the event stream can increase the overall loose coupling in the system, for instance by caching results of other services.
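A toy sketch of the idea, with an in-process callback standing in for the real wire-level subscription; the CustomerService name and message shapes are made up for illustration:

```python
class CustomerService:
    def __init__(self):
        self._customers = {}
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def snapshot(self):
        """Request/reply message: initial state for late-joining consumers."""
        return dict(self._customers)

    def update_customer(self, cid, record):
        self._customers[cid] = record
        for notify in self._subscribers:       # publication message
            notify({"customer": cid, "state": record})

events = []
service = CustomerService()
service.update_customer("c1", {"name": "Ada"})  # happens before we subscribe
service.subscribe(events.append)                # e.g. the BI listener
baseline = service.snapshot()                   # catch up on missed history
service.update_customer("c2", {"name": "Lin"})
print(baseline)  # -> {'c1': {'name': 'Ada'}}
print(events)    # -> [{'customer': 'c2', 'state': {'name': 'Lin'}}]
```

The snapshot request plus the event stream together give a consumer the complete picture, even if it came on-line after some changes already happened.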
Why is this better than the other three approaches? For one, you get a good picture of what happens within the service. Moreover, the contract is not specific to the BI and can be used by other services to cache the service state (thus increasing their own autonomy), for reporting (you can see an early draft of the Aggregated Reporting pattern), and for BI purposes. By working against a steady stream of events, the BI platforms can analyze trends, keep history and get the complete picture they need.
The approach above is sometimes referred to as "Event Driven Architecture" (EDA), and while I (and others) see EDA as another facet of SOA, not everyone agrees. Gartner, for instance, sees EDA as another paradigm and SOA as just request/reply, or client/server. Recently, however, they published a paper that calls the approach described here "Advanced SOA". I tend to agree more with the "Advanced SOA" definition and don't see a contradiction between the EDA and SOA definitions. We are still using the same components and the same relations, only adding an additional message exchange pattern to our toolbox.
A note on implementation: if you are implementing SOA over an ESB, this is rather easy, as most ESBs support publishing events out of the box. On the WS-* stack of protocols, you have the WS-BaseNotification, WS-BrokeredNotification and WS-Topics set of standards. If you are in the REST camp, then I guess you will need to implement publish/subscribe yourself.
Once you have event streams on the network, the BI components grab that data, scrub it as much as they like and push it to their data marts and data warehouses. However, event streams can also enable much more complex and interesting analysis of real-time events and real-time trend data, using complex event processing (CEP) tools to get real-time business activity monitoring (BAM).
You can also get this post as a presentation, downloadable from the papers section on my site or directly from here. (The download is about 3MB.)
One unique aspect of SOA vs. other architectural styles like Object Orientation, Client/Server or even 3-Tier architecture is that it is built for highly distributed systems. Each and every service is a sub-system in itself; it can run on its own machine and be located anywhere in the world. Many times, the service itself needs to be distributed in its own right. One reason to use distributed computing inside the service is computationally intensive tasks.
One of my recent projects was the development of a biometric platform. The platform can be used for many usage scenarios. A simple scenario is an access control system, e.g. authorizing entrance into a secure building or area. This is relatively simple, as you usually only have to deal with a few thousand people, and as a person requests entry she also declares who she is (e.g. using an RFID card with her ID). In these cases you can go to the database, look up the appropriate record, run the biometric algorithm or algorithms and verify the person is who she says she is. However, the same platform also has to work for other, much more demanding and computationally intensive scenarios. For example, consider a forensics scenario where you have a fingerprint collected at a crime scene. In this case you don't know who the person you are looking for is, and you have to run your search on basically the whole database, which can contain millions of records. Keep in mind that when you match a biometric template[1] you calculate the probability of a match (based on the internal structure of the template), and that each template weighs about one kilobyte, and you quickly realize that this can be quite a CPU-intensive task.
Sometimes when you develop your SOAs you will have algorithmic or other computationally heavy tasks such as the one mentioned above, and the question is:
How can a Service handle computational heavy tasks in a scalable manner?
You can get the full pattern from here
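The full pattern isn't reproduced in this post, but the core idea it addresses (fanning a heavy search out over a partitioned template database and gathering the best candidate) can be sketched roughly like this; the scoring function is a toy stand-in for a real biometric matcher, and every name here is illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def match_score(probe, template):
    """Toy stand-in for a biometric comparison: higher is more similar."""
    return sum(a == b for a, b in zip(probe, template))

def search_partition(probe, partition):
    """Each worker scans one shard of the template database."""
    return max(partition, key=lambda t: match_score(probe, t))

def identify(probe, partitions, workers=4):
    # Scatter: one task per shard; gather: pick the overall best candidate.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        candidates = pool.map(lambda p: search_partition(probe, p), partitions)
    return max(candidates, key=lambda t: match_score(probe, t))

# Four shards of a (tiny) template database; real shards hold millions of rows.
partitions = [
    ["aaaa", "abcd"],
    ["zzzz", "abcf"],
    ["abce", "qqqq"],
    ["xxxx", "yyyy"],
]
print(identify("abcd", partitions))  # -> abcd
```

In a real deployment each partition would live on its own node (or grid worker) rather than in a thread pool, but the scatter/gather shape stays the same.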
[This is an early draft of one of the Performance, Scalability and availability Patterns from my SOA Patterns book]
I've added a section called SOA Patterns to the site, which holds the current draft of the table of contents of the SOA Patterns book I am writing. The section lists the problem each pattern addresses as well as links to published patterns. You can also use it to monitor my progress (patterns that already have their problem written down already have drafts; the others are in progress or not started).
I am currently working on chapter 4: Security & Manageability patterns (not counting delays mentioned in the previous post).
Also, as I think I've already mentioned, I'll make at least one pattern per month public. If you are interested in a specific pattern (from those which are ready, currently chapters 2 & 3), drop me a note and I'll publish the one that gets the most votes.
My editors at Manning think that chapter 1 of my SOA Patterns book is not good enough.
They basically say that the chapter contains too much theory vs. the other chapters, which contain much more down-to-earth stuff (e.g. the Edge pattern, Aggregated Reporting pattern, Decoupled Invocation pattern). They've also said that I spend too many pages explaining what architecture is, or talking about distributed systems, before I get to SOA, which is the topic of the book.
The way I see it, understanding architecture and distributed systems is essential to understanding SOA (from the development side, i.e. when you want to design and build services). For example, the discussion on quality attributes explains how you can use scenarios to find architectural requirements (and each pattern then has a section on relevant scenarios to help you decide if the pattern is applicable to your needs).
I would be very interested in hearing what you have to say (either as comments here or emails to me) about the chapter's structure and content (considering most of the book will be patterns like the Edge pattern).
Thanks in advance
The business rationale behind going down the SOA road is increasing the alignment of business and IT: we divide the business into a bunch of business services and everything is just fine. However, the minute we start diving into the SOA implementation details we are swamped by a horde of technologies, cross-cutting concerns (auditing, security, etc.) and whatnot.
For example, in one project I was involved with, we implemented an SOA over a messaging middleware (Tibco's Rendezvous). Just when everything was fine and dandy, along came another project which could potentially use a few of the services. Well, almost: it needed a slightly different contract and it also used a completely different wire protocol, WSE 3.0 (Microsoft's interim solution for the WS-* stack before Windows Communication Foundation). And that's just one simple example; cross-cutting concerns and implementation details are everywhere. The question then is:
How can you handle cross-cutting concerns like multiple technologies, protocols, changing policies etc. while keeping the service focused on its core concern, i.e. the business logic?
You can get the full pattern from here
[This is an early draft of one of the Service Structural Patterns from my SOA Patterns book]
I am going to present SOA in one of our internal forums next week, so I thought it would be a good opportunity to dust off my SOA presentation and give it a little facelift. You can download a copy from the papers and articles section (or get it directly from here).
As always, any comments are welcome
The draft for the first chapter of my SOA Patterns book is available on-line from Manning Publications Co.
The first chapter talks about software architecture and the inputs the architect can/should use to design one (emphasizing quality attributes), explains the challenges of distributed systems, and takes a look at SOA from an architectural perspective.
You can download the chapter from here
Any comments are welcome (you can also leave your comments at firstname.lastname@example.org)
[Will also be cross-posted on my DDJ blog]
Working on my SOA Patterns book, I thought of a rule for contract versioning which my shameless ego wanted to dub Arnon's Contract Versioning Principle. I was happily playing with this thought until I realized that there isn't some profound new understanding here; this is just an application of LSP to service contracts.
The Liskov Substitution Principle (LSP), which I recently blogged about here as part of a series of posts on object-oriented principles, basically states that a subclass should be usable instead of its parent class. To put it in other words, a subclass should meet the expectations that users of the parent class have come to expect from the parent class's observable behavior.
So LSP applied to SOA would state that:
When changing the internal behavior of a service, you don't need to create a new version of the contract if, for each operation defined in the contract, the preconditions are the same or weaker and the postconditions (i.e. the outcome of the request) are the same or stronger. In other words: to retain the same contract version, the new version of the service should meet the expectations that consumers of the service have come to expect from the old version's observable behavior.
For example, let's say you have a customer service and the contract lets you get a customer's VIP status. If you changed the way the VIP status is calculated (e.g. in the old version the customer had to have 1 million dollars in her account, but now she must have 10 million dollars), there's no need to create a new contract version. However, if you introduced a new level of VIP status (e.g. 1 million = Gold, 10 million = Platinum), you do need a new version of the contract.
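The VIP example can be made concrete with a small sketch. This is my illustration, not code from the book; the thresholds, function names and the conforms helper are all made up:

```python
# The contract's postcondition: the status is drawn from this agreed set.
CONTRACT_STATUSES = {"REGULAR", "VIP"}

def get_vip_status_v1(balance):
    return "VIP" if balance >= 1_000_000 else "REGULAR"

def get_vip_status_v2(balance):
    # Internal rule changed (10M threshold) but the observable outcome is
    # still drawn from the same status set -> same contract version is fine.
    return "VIP" if balance >= 10_000_000 else "REGULAR"

def get_vip_status_v3(balance):
    # New statuses weaken the postcondition: old consumers don't understand
    # "GOLD"/"PLATINUM" -> this needs a new contract version.
    if balance >= 10_000_000:
        return "PLATINUM"
    if balance >= 1_000_000:
        return "GOLD"
    return "REGULAR"

def conforms(impl, samples):
    """Does every observable result stay inside the contracted status set?"""
    return all(impl(b) in CONTRACT_STATUSES for b in samples)

samples = [0, 1_000_000, 10_000_000]
print(conforms(get_vip_status_v2, samples))  # -> True  (no new version needed)
print(conforms(get_vip_status_v3, samples))  # -> False (new version required)
```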
I've added a new section to the site, www.rgoarchitects.com/Papers, to allow easy access to all the papers, presentations and articles I have published (and will publish, e.g. I'll add a paper on architects' soft skills in a month or so).
Udi Dahan, who is one of the best architects I know, has recently created an excellent course on SOA; you can find the details of the syllabus on Udi's site.
I had a chance to work with Udi in the past, and the solution we implemented utilized many of the patterns and techniques Udi covers in his course, so these are not just nice theories but real stuff that works.
[crossposted from DDJ]
Yesterday I attended an interesting presentation on SOA by Dr. Donald F. Ferguson, chief architect for IBM's software group.
I was happy to hear him validate some of my thoughts on SOA (e.g., workflows are better kept inside services rather than outside, transaction boundaries should be inside a service, and so on). He introduced a couple of things I didn't know much about (for example, OSGi, an SOA platform for networked devices that's not based on web services) and presented some nice insights (for instance, looking at the middleware as an infrastructure service and thus nicely unifying SOA and EDA).
One of the insights Donald presented was the use of heuristics as an aid to modeling and validating architectures. Some of the heuristics he mentioned include:
- Occam's Razor -- avoid needless repetition
- Don't create something new if you can compose existing stuff to get the same result
- Externalize volatility -- don't put in the code things that are likely to change
- Focus on "name, value" programming, not "offset programming" -- make things easy to understand
- Different is hard
If you look at heuristics as an abstraction of experience, they can serve as a good tool for keeping yourself on the right track. Some heuristics are universal (maybe the ones mentioned above and a few others, like "simplify, simplify, simplify" or "the original statement of a problem is probably not the best one, and it may not even be the right one"), but the problem is, as always, deciding (in advance) which heuristics to apply to a problem.
If you are interested in using heuristics as an architect's tool, you may want to look at "The Art of Systems Architecting" by Mark Maier and Eberhardt Rechtin. The book discusses the architectures of different system types (collaborative, IT, manufacturing, etc.) and provides heuristics for each of these systems.
Heuristics are a good tool to use when you design an architecture, and in a way the different design principles (e.g., the single responsibility principle) can also be considered heuristics. Nevertheless, it is very important to verify designs by additional methods, like code reviews and formal evaluation, and not rely on heuristics as the only tool.
I have amassed more than 30 patterns related to SOA (e.g. SOA Patterns - Decoupled Invocation and SOA Patterns - Aggregated Reporting, which I previously published here). I have patterns around security, availability, scalability, composition, adding a UI etc. Some of the patterns are original (I think) and some are based on other people's work.
I am trying to decide whether it would be worthwhile putting all these patterns into a book. Writing a book is a very time-consuming task (or so I am told), so I thought I'd run a quick poll among the readers of this blog to see how many of you would be interested in reading (and buying) this book if it gets published.
I know this is not a representative crowd - but it can give me a (very) rough idea on the interest in such a book.
Please send any comments (comments like "forget it, no one would ever want to read anything you write" are also ok) to email@example.com (or leave a comment here)
Thanks in advance - Arnon.
[Edited version of post in DDJ]
I have been blogging for about a year now, on this blog and lately also on the DDJ blog, and I think it is time to try something with more two-way communication.
Consequently, I am going to run a little experiment for a few weeks and see how it goes.
The idea is as follows: if you have an interesting architectural or design dilemma, drop me an email at firstname.lastname@example.org. I'll pick one issue per week and post (on the DDJ blog) the dilemma (anonymously), plus voice my opinion (and/or suggested solution), and then everyone else can chime in with their comments and insights, which hopefully will shed some light on the subject.
I'd be interested to hear both your opinions on this initiative and, of course, interesting dilemmas you are facing. Again, send your dilemmas to email@example.com)
Here is another SOA pattern from the list of patterns I am publishing.
One of the core goals of going with SOA is to enable loose coupling. The request/reply communication pattern, which is very prevalent, inhibits this decoupling. The problem is for the caller or consumer of the service: the consumer is dependent on the timely response of the called service for its normal operations. To help alleviate the consequences of this dependency, the service that is consumed should maintain QOS (Quality of Service) as part of its contract (it doesn't have to be part of the machine-readable contract, but it needs to be defined and adhered to). Consider, for example, an on-line music store. A normal business day can see a few thousand purchases nicely distributed around the clock. Then, when a new <name your favorite band here> album debuts, the store can see much higher peaks than its usual request load. It still needs to be able to handle all incoming requests, or the (potential) buyers will take their business elsewhere.
How can I maintain a level of QOS, handle peaks and high-loads without my service failing?
One option is to estimate the peak loads and get enough computation power to ensure you can handle them, but this causes problems. One is waste: you can have machines just sitting there twiddling their thumbs, so to speak, yet the idle computers still have purchase, maintenance and operational costs. The other problem is unexpected loads (e.g. a Harry Potter craze for an Amazon-like site): the estimated load might not be enough.
Ensuring QOS gets even more problematic when some of the actions performed in the service access resources or services that are not under the service's control (e.g. talking to a credit card clearing service in the e-commerce example mentioned earlier).
Another issue that needs to be taken care of is prioritizing requests. A service most likely handles several types of requests, and not all of them need the same level of QOS. You can set the QOS according to the most demanding request type, but then you may need more resources.
Decouple the invocation, i.e. separate the reply from the request: acknowledge receipt in the edge, pass the incoming request to a queue, and load-balance and prioritize behind the queue.
Making the Edge acknowledge receipt of the request (for our e-commerce example this can translate to "Your order has been received and is being processed; you will get a confirmation email when the transaction completes") allows hiding long-running operations from the service consumers (be they other services or end-users).
Writing requests to the Queue is a relatively low-cost operation that can be performed fast, thus allowing the service to handle request peaks. The actual handling of the incoming requests can then be performed more slowly, according to the available resources of the service. Load balancing can be done by setting a different number of readers working against the queue.
Making the Queue a Priority Queue (or having several queues according to priority) allows for maintaining different levels of QOS for different message types.
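The solution can be sketched, single-threaded and greatly simplified, with a priority queue standing in for real queuing middleware; names and priority values are illustrative:

```python
import queue

requests = queue.PriorityQueue()
HIGH, LOW = 0, 1  # lower number = drained first

def edge_receive(priority, request):
    """The edge: enqueue cheaply and acknowledge right away."""
    requests.put((priority, request))
    return "Your order has been received and is being processed"

ack = edge_receive(LOW, "browse-catalog")
edge_receive(HIGH, "purchase-album")
print(ack)

# A worker behind the queue handles purchases before lower-priority traffic,
# however bursty the arrivals were.
processed = []
while not requests.empty():
    _, request = requests.get()
    processed.append(request)
print(processed)  # -> ['purchase-album', 'browse-catalog']
```

In a real system the drain loop would run in several reader threads or processes, and their number is exactly the load-balancing knob mentioned above.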
Decoupled Invocation can be combined with the Gateway pattern to allow scaling out the service.
Decoupled Invocation is enhanced by the use of Correlated Messages pattern which helps relate the request and the reactions.
Acknowledge in the Service
Sometimes the initial response needs to involve some business logic and is not just an acknowledgment. In this case the Edge doesn't respond; it just passes the request to the service, and the service sends both the initial reaction and the final reaction.
Michael Platt talks about SOA vs. Web 2.0. He provides several links to blogs and articles that basically claim that SOA is dead; long live the new king, Web 2.0.
One thing I have to say is that if the hype around SOA is indeed starting to calm, that is a very good sign. Finally we can go about adding SOA to our toolset and use it when it is appropriate (not just because management has got to have an SOA). It can also be a sign that SOA is maturing.
Another point I'd like to make is that SOA and Web 2.0 are not really related; there is no reason why one should compete with the other. Why would using an AJAX front-end make it impossible to have services in the backend? It may be appropriate to have a Client/Server/Service scenario, where the front-ends don't hit the services directly (the other option is Peer/Service); I may talk about these two mini-patterns in my SOA patterns series. Another example where SOA and Web 2.0 can work together is RSS. A service can expose its list of recent changes as an RSS feed (as well as providing the more "traditional" web-services API). Exposing an RSS feed can be one way to implement the Inversion of Communications pattern I mentioned in a previous post.
To sum things up: Web 2.0 may be more hyped today than SOA, but Web 2.0 and SOA can co-exist and actually complement each other.
In any event, I think we (as an industry) should focus more on delivering great applications and solutions rather than fighting about whose trend-du-jour is fancier or sexier.
After writing about the example of using RSS for service communication, I stumbled today upon RSSBus, which is an effort to create an ESB on top of the RSS protocol...
As promised, here is the first pattern. If you like this pattern but think there is something missing that would give a better understanding, please drop me an email: arnon at rgoarchitects.com. Naturally, any other comments are also welcome :)
Getting an SOA right is very hard, not so much because of the technical problems (we know how to deal with those, don't we?), but rather because it is very hard to figure out where to put the borders and keep the right business alignment. Assuming you somehow managed that, the real fun begins: you now have to produce reports, dozens and dozens of reports. Many reports will fall within the boundaries of single services (if you have a good partition); however, many reports will also require adding data from several services. For example, in a Telco scenario you may have a Customer, a Billing and a Provisioning service (a real-life example would have dozens of additional services). Now a customer is calling customer care and you want the CRM to show everything about the customer: what outstanding invoices she has, what equipment and services (GPRS, UMTS, friends and family etc.) she has, what her status as a customer is (loyal, VIP, senior citizen...), open service requests etc. Things get much more complicated when you need to summarize or group data from multiple services.
How do you get a decent cross-business-entity report with the data scattered about in all those services?
One possible solution would be to create the report at the consuming end (e.g. the UI): visit all of the services involved, then do all the grouping, cross-cuts etc. This solution is not very good from the performance perspective (you need to get more data than needed and you have to post-process it). It is also problematic from the flexibility perspective: each service involved has to expose interfaces to get the data for the specific query (otherwise you mobilize even more data).
Another option is to go straight to the data. You may still need to hit multiple database servers to get to the data, but the performance will be better. The problem is that this throws your service boundaries down the drain and introduces a lot of dependencies.
A third option is to create interim services ("Entity Aggregation"). This works fine as long as you have real business reasons for the aggregations (there is an overhead to adding business logic to handle the aggregated data) and as long as you only have a few of those (or you might end up with a single "service" holding all the data).
Create an Aggregated Reporting service by building an Operational Data Store (ODS) to enable creating sophisticated reports on otherwise dispersed data.
The ODS is similar in concept to a data mart, e.g. the data is subject-based, integrated, scrubbed etc. The main differences are that the data is up-to-date and that there is little or no history. For incoming data, the Aggregated Reporting Edge performs the data transformations from contract data into reporting data. The service updates the ODS by scrubbing the data (this can be limited unless the data has to go on to a data mart / data warehouse), then integrating it and de-normalizing it into subjects. Incoming report requests fill in parameters for the pre-prepared reports.
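A rough sketch of that flow, assuming in-memory dictionaries stand in for the ODS and for the incoming contract messages; all field names are illustrative:

```python
ods_customers = {}  # subject: one denormalized row per customer

def on_billing_event(event):
    # Edge transformation: contract data -> reporting data.
    row = ods_customers.setdefault(event["customer_id"], {"invoices": []})
    row["invoices"].append(event["invoice"])

def on_customer_event(event):
    row = ods_customers.setdefault(event["customer_id"], {"invoices": []})
    row["name"] = event["name"].strip().title()  # scrub on the way in

def report_outstanding(customer_id):
    """A pre-prepared report: the request just fills in parameters."""
    row = ods_customers[customer_id]
    return row["name"], sum(i["amount"] for i in row["invoices"])

on_customer_event({"customer_id": "c1", "name": "  ada lovelace "})
on_billing_event({"customer_id": "c1", "invoice": {"amount": 120}})
on_billing_event({"customer_id": "c1", "invoice": {"amount": 80}})
print(report_outstanding("c1"))  # -> ('Ada Lovelace', 200)
```

The point to notice is that the report never touches the Customer or Billing services; it reads the already-integrated, subject-oriented rows.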
One problem with Aggregated Reporting is that it is not a business service (i.e. it is a technical solution rather than a business-oriented one). However, since, unlike Entity Aggregation, the data in Aggregated Reporting is read-only, this doesn't affect the overall architecture much. Aggregated Reporting is easier to implement when combined with Inversion of Communications.
Aggregated Reporting with a Data Mart / Data Warehouse
Instead of just storing recent operational data, this variant enhances the depth and complexity of the queries that can be executed against the service. The downside is the increased complexity of setting up the data mart, both from the operational costs perspective (e.g. additional storage) and from the design and development perspective (you need to think about long-term aspects, indexing etc.), as you also need to scrub the data and consider the structure of your schemas much more carefully.
Operational Data Store (ODS)
The ODS is probably the best-kept secret of data warehousing technology. It has been around almost as long, but it isn't as famous. The data in the ODS is operational, live data and not static data. The ODS can be thought of as the cache memory of the data mart / data warehouse. It is important to note that while it doesn't need the same amount of planning and set-up as a data mart, an ODS still requires careful planning in order to bring real business value.
The figure below shows the classical usage of an ODS in an OLTP / data mart environment.
Classically, it was thought there would be four types of ODS:
- Class I - near real-time synchronization of the ODS with operational data from the OLTP databases. An implementation of Class I is the preferred type for the Aggregated Reporting pattern.
- Class II - update the ODS every four hours or so.
- Class III - overnight updates of the ODS.
- Class IV - the ODS is updated from the data mart / data warehouse.
In reality there are more variants; for example, a powerful (and complex to build) option is to merge a Class IV ODS with one of the other classes.
I've decided to write a short series of blog posts on SOA patterns. These are not patterns that are only usable for SOA; however, I have found them particularly useful in implementing SOAs.
This isn't an exhaustive list of patterns; on the contrary, I'll try not to repeat patterns which are well known (like Entity Aggregation http://patternshare.org/default.aspx/Home.PP.EntityAggregation).
I am a little busy these days (e.g. I have to complete an architecture document for one of my projects), so this post will only introduce the (first batch of) patterns, and the following posts in the series will expand on each one (i.e. explain what to do, usage context, consequences etc.). Then, if I get good feedback, maybe I'll publish some more.
So what patterns are we talking about here?
- Gateway - How do you scale a service without exposing too many endpoints?
- Inversion of Communications - How do I get the data from other services without too much coupling?
- Biztalkize - How do I control volatile behavior inside the service?
- Aggregated Reporting - How do you get a decent cross-business-entity report with the data scattered about in all those services?
- Emergence - How do I know where to find a service?
- Decoupled Invocation - How can I handle peaks and high loads without my service failing?
- Choreography - How do I expand the behavior of a hard-to-change service (e.g. legacy systems exposed as services)?
I hope this sparks enough interest to make you follow the rest of the posts on this subject :)
I've just read an excellent post by Gregor Hohpe talking about the motivation for Event Driven semantics for services. He gives an example of a shipping service listening on order events and address change events to produce shipments.
It is nice to see how well architectural approaches transcend business domains - in the Naval C4I project Udi Dahan and I are working on, we basically try to take the same approach. For example: a Sensors service publishes its status at predefined intervals - the sensor knows if something is wrong with its state. A sensor, however, doesn't know if the problem is important or not. We designed an Alerts service that listens in on status messages; based on (changing) business rules, a certain status may trigger an alert event (which a UI can then choose to display); a severe alert may result in an SMS alerting a technician to come and have a look.
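The Sensors/Alerts split can be sketched with a toy in-process event bus. Everything here - the EventBus class, the topic names, the severity-threshold rule - is illustrative and made up for the sketch, not the actual C4I design:

```python
from collections import defaultdict

class EventBus:
    """Minimal in-process stand-in for a real message bus."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self._subscribers[topic]:
            handler(event)

class AlertsService:
    """Listens to sensor status events and raises alerts per business rules."""
    def __init__(self, bus, severity_threshold=3):
        self.severity_threshold = severity_threshold  # the (changeable) business rule
        self.alerts = []
        self._bus = bus
        bus.subscribe("sensor.status", self.on_status)

    def on_status(self, status):
        # The sensor only reports its state; deciding whether that state
        # matters is the Alerts service's job.
        if status["severity"] >= self.severity_threshold:
            alert = {"sensor": status["sensor_id"], "severity": status["severity"]}
            self.alerts.append(alert)
            self._bus.publish("alert.raised", alert)  # a UI could listen here

bus = EventBus()
alerts = AlertsService(bus)
bus.publish("sensor.status", {"sensor_id": "radar-1", "severity": 1})  # below threshold
bus.publish("sensor.status", {"sensor_id": "radar-1", "severity": 4})  # triggers an alert
```

The point of the shape is that the sensor never knows the threshold - the rule lives entirely in the Alerts service and can change without touching the publisher.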
While this approach is very good for inter-service communication, things aren't as rosy when it comes to interacting with UIs. The point is that UIs are based on interaction, so the request-reply idiom (which should actually be implemented as request-reaction) is much more prevalent. Users really want to know their request is being taken care of.
Another lesson we learnt is that since services may go on-line and off-line independently of each other, it is not enough just to support event listening for event aggregators to stay up-to-date. One option is to rely on reliable messaging so that any event posted will eventually get to the listener - but there are several problems with this approach, for example:
- For one, you need a reliable message transport, which might be a problem - e.g. you may not be able to use JMS/MSMQ between enterprises, and/or the protocols you use don't support it (e.g. WS-RM is not durable, see here and here).
- Even if you have reliable communication, if one service has been offline for a long period of time (where "long" is defined by the communication load) it may be a waste of time (or plainly wrong) to process old events that are no longer relevant.
Another option to handle this situation is to include in the contract a request for current state (the current state can be published using the same message structure used by the matching event). The advantage here is that a service coming on-line can quickly and efficiently get up-to-speed on the current situation.
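A rough sketch of that contract idea, with the state snapshot reusing the event's message shape. The PriceService/Consumer names and field names are invented for illustration:

```python
class PriceService:
    """Publishes price-changed events and answers current-state requests,
    both using the same message structure."""
    def __init__(self):
        self.prices = {}
        self.listeners = []

    def _message(self, item):
        # One message shape serves both the change event and the snapshot.
        return {"type": "price", "item": item, "price": self.prices[item]}

    def set_price(self, item, price):
        self.prices[item] = price
        for listener in self.listeners:
            listener(self._message(item))

    def current_state(self):
        # The contract-level "give me the current state" request.
        return [self._message(item) for item in self.prices]

class Consumer:
    def __init__(self):
        self.view = {}

    def on_event(self, event):
        self.view[event["item"]] = event["price"]

service = PriceService()
service.set_price("widget", 10)            # published while the consumer was offline

consumer = Consumer()                      # comes on-line late...
for event in service.current_state():      # ...and catches up via the state request
    consumer.on_event(event)
service.listeners.append(consumer.on_event)
service.set_price("widget", 12)            # subsequent events arrive normally
```

Because the snapshot and the event share one structure, the consumer needs only a single handler for both the catch-up and the live stream.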
Event thinking is relatively on par with the take-it-or-leave-it approach to contract construction, but as I said in the previous post on contracts, I think it is more beneficial to know about your consumers and take their input.
Speaking of EDA, I also learnt today that the Micro-Services strategy Udi and I came up with had already been "invented" several years ago. It is called SEDA (Staged Event Driven Architecture); there's a nice presentation explaining it here.
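The core SEDA idea - processing stages connected by event queues - can be sketched like this. It's a deliberately synchronous toy (real SEDA runs each stage on its own thread pool and uses queue depth for admission control); the Stage class and the parse/enrich pipeline are made up for the example:

```python
from queue import Queue

class Stage:
    """One SEDA-style stage: an event queue plus a handler that may
    emit results into the next stage's queue."""
    def __init__(self, handler, next_stage=None):
        self.queue = Queue()
        self.handler = handler
        self.next_stage = next_stage

    def drain(self):
        # Synchronous stand-in for the stage's worker threads.
        while not self.queue.empty():
            result = self.handler(self.queue.get())
            if self.next_stage is not None:
                self.next_stage.queue.put(result)

# A tiny parse -> enrich -> collect pipeline.
results = []
collect = Stage(lambda order: results.append(order))
enrich = Stage(lambda order: {**order, "total": order["qty"] * order["price"]},
               next_stage=collect)
parse = Stage(lambda line: dict(zip(("qty", "price"), map(int, line.split(",")))),
              next_stage=enrich)

parse.queue.put("3,5")
for stage in (parse, enrich, collect):
    stage.drain()
```

The queues between stages are what give SEDA its load-handling properties: each stage can be throttled, sized, or shed independently by looking at its own queue.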
Udi Dahan writes about "Contract First, Discussion Second?", saying that "a service's contract is a more 'take-it, or leave-it' kind" of thing. There are situations when this is true - for example, when Amazon decided to expose some of its functionality as services, they probably didn't negotiate it with most (if not all) of us. Similarly, whenever you want to consume a deployed version of a service you can either use it as is or move on.
However, services are rarely developed in a void. This means that when you set out to design the next iteration of a service (first or otherwise) there are usually several potential consumers out there (other internal systems, partners etc.) - and like it or not, you will be negotiating the service contract with them; after all, the whole idea of the service is to add some business value. If you disregard your consumers, it will be harder for them to actually make use of the (hopefully) wonderful functionality you will be providing.
This also means that it is better to negotiate the contract first (i.e. as one of the first steps of developing the next service version). Deciding on the contract upfront allows the other parties to get organized and make better use of the functionality that will be exposed through the service once it is deployed.
I suggest you be pragmatic when you set out to develop a service: meet with the potential consumers and try to agree on something that will be useful for them - or as the Beatles once said, "let it be, let it be, speaking words of WSDL, let it be"…
I've just found out (via Gianpaolo's blog) that Roger Wolter (former PM of Service Broker) has started blogging. He is going to focus on data in a service-oriented world. I had a chance to work with Roger for a short time, which was enough to notice that if anyone knows about data, it is him. I guess there is no surprise there, considering his past at Microsoft working on SQL Server Service Broker, SQL Express, the SQL XML datatype, the SOAP Toolkit, SQLXML and COM+.
His first post
(after the obligatory "hello world" post) is about Service Broker positioning (vs. MSMQ, Biztalk and WCF) - Subscribed
In the previous post I said "don't bubble exceptions out of your service" - Ebenezer Ikonne asks: "Well I wonder what the verbiage of the exception should be? If a null pointer occurred in the service, what message should I return back to the consumer of the service?"
First off, let's consider the meaning of bubbling the exception - what would a remote consumer, sitting on some other company's server, do with a "null pointer" exception?! The consumer doesn't have any control over the resources or life cycle (or anything else for that matter) of the service it is trying to consume. Also, if it depends on the internal problems of the service it consumes, it (the consumer) becomes much less autonomous.
So what's the other option? Well, as I mentioned in my previous post, it is best if the service can "pretend" nothing really happened: e.g. log the incoming message before doing anything, and then, if there's an exception, respond (if the contract requires a response by a deadline) with a "got your message, working on it, you'll get a confirmation message soon" sort of reaction. If the exception occurs before the incoming message is saved, then it is probably best to respond with "out of service, try again soon". Only if the edge itself is not up should you (as a consumer) finally get an exception (the protocol failed - the message you've sent did not arrive).
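Here's a minimal sketch of that edge behavior - log first, then process, and translate internal failures into contract-level responses. The ServiceEdge class and the response strings are illustrative, not a real framework:

```python
import logging

log = logging.getLogger("service-edge")

class ServiceEdge:
    """Edge of a service: persist the incoming message first, then process.
    Internal failures never reach the consumer as raw exceptions."""
    def __init__(self, process):
        self.journal = []     # stand-in for a durable message store
        self.process = process

    def receive(self, message):
        try:
            self.journal.append(message)   # log the message before doing anything
        except Exception:
            # Couldn't even save the message: the service is effectively down.
            return {"status": "out of service, try again soon"}
        try:
            return {"status": "done", "result": self.process(message)}
        except Exception:
            log.exception("internal failure handling %s", message)
            # A remote consumer can't act on a null-pointer-style error, so
            # acknowledge receipt and let the service recover/retry internally.
            return {"status": "got your message, working on it"}

edge = ServiceEdge(process=lambda msg: msg["amount"] * 2)
ok = edge.receive({"amount": 21})         # processed normally
pending = edge.receive({"wrong_key": 1})  # KeyError inside, but the consumer
                                          # just sees an acknowledgement
```

Because the message is journaled before processing, the "working on it" reply is honest - the service still holds everything it needs to complete the work later.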
By the way, I think that a somewhat similar principle is true for bubbling exceptions across layers in a layered architecture.