Evan H asked a question about distributed transactions and services in the MSDN architecture forum:
Are distributed transactions (i.e. WS-Transaction)
a violation of the "Autonomous" tenet of service orientation? Yes or
No and why? Kudos if you can address concurrency and scalability (in
an enterprise with multiple interacting services).
I answered this question back in April when I wrote a couple of posts
that explained why cross-service transactions are a bad idea: cross service transactions
and some more thoughts on cross service transactions.
Roger Sessions also agrees with this view (well, actually it seems he wrote about it well before I did :) ):
When the WS-Transaction specification was first proposed, back in 2002, I
wrote an article explaining why I thought the idea of allowing true
transactions to span services was a bad idea. I published the article
in The ObjectWatch Newsletter, #41: http://www.objectwatch.com/newsletters/issue_41.htm.
Nothing since then has changed my mind. Atomic transactions require
holding locks, and spanning transactions across services requires
allowing a foreign, untrusted service to determine how long you will
hold your very precious database locks. Bad idea. Just because IBM and
Microsoft agreed on something doesn't make it good!
The reason I am bringing this issue back is that Juval Lowy (who wrote the article that triggered my first post on the subject) has recorded an ARCast with Ron Jacobs
where he reiterated the idea that transactions are "categorically the
only viable programming model" and that you should strive to use them whenever
you can. Juval does admit you sometimes need to use Sagas
(which he calls "long running transactions" - you can see in my link
why I think that's the wrong name). He also agrees that you can use
a transactional transport and then only perform internal transactions from
each service to the transport (a pattern I call "Transactional
Service"). However, at the end of the day, he still thinks you should
use WS-AtomicTransactions whenever you can.
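For what it's worth, the "Transactional Service" pattern can be sketched in a few lines (illustrative Python/SQLite; the table and function names are mine, not Juval's): the service's state change and the outgoing message are committed in one local transaction against a durable "outbox" table, and a dispatcher later hands the message to the transport - so no lock ever spans two services.

```python
import sqlite3

# Illustrative sketch of the "Transactional Service" pattern: a service only
# performs *local* transactions covering its own state change plus a message
# written to a durable outbox. A dispatcher forwards outbox rows later.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT, status TEXT);
    CREATE TABLE outbox (id INTEGER PRIMARY KEY AUTOINCREMENT,
                         destination TEXT, body TEXT, sent INTEGER DEFAULT 0);
""")

def place_order(item):
    # One local transaction: business change + outgoing message, atomically.
    with conn:
        cur = conn.execute(
            "INSERT INTO orders (item, status) VALUES (?, 'accepted')", (item,))
        conn.execute(
            "INSERT INTO outbox (destination, body) VALUES (?, ?)",
            ("billing-service", f"order:{cur.lastrowid}"))
    return cur.lastrowid

def dispatch_outbox(send):
    # Runs separately; pushes unsent messages to the transport, marks them sent.
    rows = conn.execute(
        "SELECT id, destination, body FROM outbox WHERE sent = 0").fetchall()
    for msg_id, dest, body in rows:
        send(dest, body)
        conn.execute("UPDATE outbox SET sent = 1 WHERE id = ?", (msg_id,))
    conn.commit()

delivered = []
order_id = place_order("book")
dispatch_outbox(lambda dest, body: delivered.append((dest, body)))
print(delivered)  # [('billing-service', 'order:1')]
```

The transport (and the service on the other end) sees the message only after our local commit - which is exactly the point: nobody outside our boundary ever holds our locks.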
I agree that
transactional programming is important. I think it is the simplest
programming model (from the developer's side). I would probably never
write an interaction with a database that is not transactional, and I look
very favorably at initiatives for in-memory ACI (no Durability)
transactions such as the one Ralf talks about.
That is, until we get to distributed transactions...
First, we should note that transactions are not "the only viable" option. As Martin Fowler notes,
eBay seems to be doing fine without distributed transactions. Not only
that, they abandoned distributed transactions and went
"transactionless" because they needed one simple thing... scalability.
In most COM+ scenarios you have a single server or a few internal
servers where the distributed transaction happen - and even there you
should plan your transactions carefully if you want to get any kind of
decent performance. In SOA scenarios the situation is more complicated
as the distribution level is expected to be higher (even if you don't
involve services from other companies). More distribution means longer
times to complete transactions (especially if a participant can flow
the transaction and extend it). It also means increasing the chances of
failure (see Steve Jones' series of posts on five nines for SOA).
In my opinion, the more distributed components you have, the more you
want their interaction to be decoupled in time - i.e. the opposite of
what distributed transactions give you.
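A compensation-based flow (which is what a Saga boils down to) is one way to get that decoupling in time. Here is a purely illustrative sketch: each step commits locally and registers an undo action, and a mid-flow failure triggers the compensations in reverse order instead of holding locks across the whole interaction.

```python
# Illustrative sketch of compensation-based coordination (a Saga): each step
# is local, already-committed work; a later failure runs the registered
# compensations in reverse instead of holding locks across the whole flow.

def run_saga(steps):
    """steps: list of (do, undo) callables. Returns True if all steps ran."""
    done = []
    for do, undo in steps:
        try:
            do()                                 # local, committed work
            done.append(undo)
        except Exception:
            for compensate in reversed(done):    # business-level "rollback"
                compensate()
            return False
    return True

def ship():
    raise RuntimeError("shipper down")  # simulated mid-flow failure

log = []
ok = run_saga([
    (lambda: log.append("reserve stock"), lambda: log.append("release stock")),
    (lambda: log.append("charge card"),   lambda: log.append("refund card")),
    (ship,                                lambda: log.append("cancel shipment")),
])
print(ok, log)
# False ['reserve stock', 'charge card', 'refund card', 'release stock']
```

Note that the "rollback" here is a business decision (refund, release), not a database lock - which is why this model tolerates distribution and latency where atomic transactions don't.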
Juval also said he doesn't buy the denial-of-service
problem I mentioned (supporting a transaction means you allow
locks - if an external party doesn't commit, you retain the lock).
Juval said he assumes that a solution has both authentication and
authorization, so this shouldn't be an issue. For one, I have seen too
many projects where security was neglected or
quickly patched in at the last moment - so I would hardly assume
security. And even with security on, you increase your attack surface.
That's just half of it, though. Even if all your service consumers have
good intentions, you still don't know anything about their code. SOA
is not like the "good old days" where you owned the whole application
- which means you cannot trust their security to be ample. You also
don't know anything about their code quality. Services are likely (in
the general case) to be deployed on different machines, even if they
start out co-located. I think a service boundary should be treated as
a trust boundary, just like a tier boundary.
I strongly believe you should make minimal assumptions about what's on the
other side of a service's boundary - and distributed transactions are anything
but a minimal assumption.
SOA and distributed transactions do not go hand in hand - and it isn't just
autonomy that's at stake here. It is a problem for performance, for scalability,
and even for security. Period.
To finish this post, I would also highly recommend looking at Pat Helland's paper "Life Beyond Distributed Transactions: an Apostate's Opinion"
and a post he recently made called "SOA and Newton's Universe",
where he explains more eloquently than I ever could why SOA is not a good fit for distributed transactions.