Just got back from Chicago, where I attended Architecture & Design World
The event was great. It had many interesting talks and sessions, which I'll blog about in a few of the following posts (I'll also upload the slides from my sessions later this week). One session, however, was very disappointing for me: Ivar Jacobson's keynote on "Next Generation Process with Essential Modeling".
I was really looking forward to this keynote; after all, Ivar is one of the "Three Amigos", father of use cases and all. I also had the chance to attend a couple of his presentations several years ago, and back then his appearances were very interesting and convincing. Unfortunately, this session was nothing like those previous events. Ivar talked about his new methodology.
The basic idea behind the essential software process is (in my opinion)
correct. Instead of big bang prescribed processes - it is better to
tailor a process from a set of practices to get something that fits the
organization/project at hand. It might be a little unnerving to hear this from one of the people who brought upon us the RUP, but I guess it is good to see people who are constantly learning.
The problem was that while the main idea can be summed up in one line (see above), Ivar went on for the better part of an hour to explain how cool it was that his base set of practices comes on a set of cue cards that can be sorted and arranged. He also went on to explain the game board (or something to that effect) where you can lay different cards to describe both your current process and the set of practices you want to use for your future process.
If that wasn't enough, when he (briefly) showed us the cards that cover the architecture and architecture-description practices, the advice/guidance there was very mediocre and general.
I expected much more from someone of the stature of Ivar Jacobson.
On the up-side, this was only one keynote, and the other sessions I listened to were much better - as I already said, I'll blog about a few of them in upcoming posts.
Coming back from vacation, I saw that Jeff Atwood (Coding Horror) wrote a post on "rethinking design patterns" a few days ago. Jeff criticizes the GoF design patterns book. Jeff says his two main gripes with the book are:
- Design patterns are a form of complexity. As with all complexity, I'd rather see developers focus on simpler solutions before going straight to a complex recipe of design patterns.
- If you find yourself frequently writing a bunch of boilerplate design pattern code to deal with a "recurring design problem", that's not good engineering-- it's a sign that your language is fundamentally broken.
I agree that these can be problems with using design patterns in general and the GoF ones in particular. I would add that using design patterns just to be able to say you've used them is very wrong as well (this may sound obvious - but I have seen organizations where this was encouraged).
I don't think, however, that any of those problems are problems with the patterns themselves. This is not to say that the GoF book is perfect. Looking back at the GoF book, I think that it isn't free of problems. For instance, I don't think all the patterns stand the test of time. The most prominent example is the Singleton pattern, which I hardly ever recommend using these days. Singletons are problematic for testing and create tight coupling to a specific instance. There are better ways to create a mono-instance if that's what you need (I think others have posted about it in the past - but I can post about it separately if needed).
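As an illustration, here is a minimal sketch of my own (the names are hypothetical, not from any particular framework) of one such "better way": constructor injection, where the wiring code creates the single instance and consumers receive it instead of calling a global getInstance():

```java
// A Singleton would expose a static getInstance(); instead we define an
// interface and hand the single instance to whoever needs it.
interface Clock {
    long now();
}

// The "real" implementation - the wiring code creates exactly one of these.
class SystemClock implements Clock {
    public long now() { return System.currentTimeMillis(); }
}

// The consumer is handed its dependency, so it isn't coupled to one
// specific global instance - which is exactly what makes it testable.
class OrderService {
    private final Clock clock;
    OrderService(Clock clock) { this.clock = clock; }
    long timestampOrder() { return clock.now(); }
}

// In a test we can inject a fake clock - impossible when OrderService
// calls a hard-coded Singleton internally.
class FixedClock implements Clock {
    public long now() { return 42L; }
}
```

The "single instance" property lives in the wiring code (one SystemClock created at startup) rather than being baked into every consumer.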
Also, newer languages sometimes provide better ways to tackle the problems some of the design patterns solve - see for example a recent post by Alex Miller where he talks about an alternative to Template Method.
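To make that concrete, here is a toy sketch of one such alternative (my own example, not Alex's code): instead of subclassing an abstract class to supply the varying step of a Template Method, the step is passed in as an object - in pre-closure Java, via an anonymous class:

```java
// The varying step that a Template Method subclass would normally override.
interface Step {
    int apply(int input);
}

class Pipeline {
    // The fixed skeleton: pre-process, run the pluggable step, post-process.
    // No inheritance needed - the variable part arrives as a parameter.
    static int run(int input, Step step) {
        if (input < 0) input = 0;        // fixed pre-processing
        int result = step.apply(input);  // the "hook" supplied by the caller
        return Math.min(result, 100);    // fixed post-processing
    }

    // Usage: supply the variable part inline instead of subclassing.
    static int doubleOf(int n) {
        return run(n, new Step() {
            public int apply(int input) { return input * 2; }
        });
    }
}
```

Each variation becomes a small object (or, in newer languages, a closure) rather than another subclass in a hierarchy.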
I can even say that not all the patterns are that useful (Flyweight comes to mind as an example).
Nevertheless, I still think that the GoF book is one of the most important books in computer science. First, it is a seminal work which introduced pattern thinking into software development. Today we have literally hundreds of patterns on all subjects and technologies. I think this is a very good thing, since looking at a problem from a pattern perspective gives us more depth and understanding of both the problem and the solutions than other ways I've seen.
Even in itself, the GoF book is great, since many of the patterns are very valuable and can help us solve real problems. We just need to keep in mind that the sample implementation in the book is just that - a sample. There can be more than one way to code a pattern and still gain the benefits (these are "design" patterns, not "coding" patterns). A few weeks ago, someone asked a question about the Visitor pattern in one of the forums I monitor. The guy needed to add an additional parameter to the Visit method and asked if that wasn't a violation of the pattern. I told him that there is no such thing as "violating a design pattern". The patterns are a means to an end, not some coding codex we have to keep. I think that if we treat design patterns as pieces of knowledge rather than holy scripture, they can really help us avoid some stupid mistakes.
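For instance (a toy shapes example of my own, not the forum poster's code), a Visit method with an extra parameter still delivers the double dispatch that is the whole point of the pattern:

```java
// Visitor with an extra 'scale' parameter - the signature differs from the
// GoF sample code, but the double-dispatch structure is intact.
interface Shape {
    double accept(ShapeVisitor v, double scale);
}

interface ShapeVisitor {
    double visitCircle(Circle c, double scale);
    double visitSquare(Square s, double scale);
}

class Circle implements Shape {
    final double radius;
    Circle(double radius) { this.radius = radius; }
    public double accept(ShapeVisitor v, double scale) {
        return v.visitCircle(this, scale);  // dispatch on the concrete type
    }
}

class Square implements Shape {
    final double side;
    Square(double side) { this.side = side; }
    public double accept(ShapeVisitor v, double scale) {
        return v.visitSquare(this, scale);
    }
}

// A concrete visitor that actually uses the extra parameter.
class ScaledAreaVisitor implements ShapeVisitor {
    public double visitCircle(Circle c, double scale) {
        return Math.PI * c.radius * c.radius * scale;
    }
    public double visitSquare(Square s, double scale) {
        return s.side * s.side * scale;
    }
}
```

New operations still plug in as new visitors; the extra parameter changes the contract, not the pattern.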
Thus, at the end of the day, I still think the GoF book is required reading for any new developer. But hey, that's just my 2 cents :)
A couple of quick observations following the Events and temporal coupling post:
Events, current data and aggregated data all have time-to-live (TTL) aspects:
- An event's value usually diminishes over time until its TTL expires.
- Current data usually has a constant value while its TTL lasts (until a new value becomes the current data) - unless we are talking about versioned data, which is a component of, or a step in the direction of, aggregated data.
- Aggregated data has the longest TTL; it is interesting to note that its value increases over time.
Also, while the TTL of current data is determined by the producer, the TTLs of both events and aggregated data are determined by consumers.
Yeah, I know these are not earth-shattering observations, but I still think they are interesting.
It was more than 10 years ago that I got my first MVP award from Microsoft (I was probably the first Israeli MVP, because a few years later Microsoft Israel awarded what they said were the first Israeli MVP awards... :) )
Anyway, now, for the fourth time, I have been awarded again - this time in the Solution Architect category.
I've been working at Rafael for a total of 5 years now, and I guess my tenure here is nearing its end. The latest company-wide reorg has brought with it the end of the biometric product line (or maybe just its evolution into something I don't really like - it isn't completely clear yet, but since I don't like either option it doesn't really matter).
I am exploring several possibilities for joining one of the consulting companies here in Israel - none of the options has fully matured as of yet, but I believe that in the next month or so something will be finalized. Meanwhile, I am also open to other options (my CV is on the about page - hint, hint :) ).
In any event, between the book, DDJ, InfoQ and looking for a new job (not to mention my family), I sure have my hands full - I guess now I understand why they say "may you live in interesting times" is an old Chinese curse :)
Yesterday I attended an SOA governance presentation by Brent Carlson. The presentation was basically an updated version of an article he authored in 2006, "SOA Governance Best Practices - Architectural, Organizational and SDLC Implications".
As a tool vendor, Brent puts a lot of focus on the governance processes, which I don't completely agree with (I prefer Jim Coplien's organizational patterns approach - see my post from last week). I also think the reuse figures he cites (registration required) are a little optimistic for what I consider the right granularity for services.
He also made a few points I strongly agree with:
- Brent talked about the difference between the needs of a run-time service repository (e.g. UDDI or an ESB) and a development-time one. You need to address the services and their interactions during development, and you need to do that in a way that is easy for the development teams. For example, one thing you want to log is usage - who is using the services - since that will let you perform impact analysis when you have to make a change.
- Building an SOA for an organization is an iterative process, not a "big-bang" effort. This means you can't just do top-down design; you need to be pragmatic and also roll out working services.
The reason for this post, however, is the insight Brent gave regarding treating services as products rather than applications.
Treating services as products is important because even if you don't believe that the SOA initiative should be an iterative process, once the move is finished you will have quite a few services deployed in your organization. These services will integrate and interact with other services - some of them outside your organization. You will also want to capitalize on the flexibility claim that SOA makes and adapt your services to changing business needs.
The challenges you face - updating and upgrading functionality, anticipating consumers' needs, allowing consumers to get used to changes, etc. - are exactly the challenges that product management techniques and principles come to answer.
Treating services as products means a lot of things. Let's look at a few examples: For one, it means predictable release cycles - services, like products, get updated over time, and you want service users to be able to cope with these changes; predictable release cycles mean they can get organized in advance. Another aspect is the emphasis on backward compatibility, e.g. orderly deprecation of features and version management. One other thing is introducing a "product manager" - someone whose responsibility is to interact with customers and potential customers, understand their needs and build a release roadmap for the services.
You might be used to doing some of that with applications, but thinking about services as products makes all this more explicit, and that in itself is also important.
Yesterday I attended Jim Coplien's presentation on "Organizational Patterns - a Key for Agile Systems Development". Overall I think it was a very good presentation. Jim makes a few interesting claims, some of which are controversial in both the traditional and the agile camps:
- Process guidance (ISOs etc.) doesn't work - roles are more stable than processes; processes always change. He says that in order to make a change you need to make it at the organizational structure level; the processes will then follow and support that structure.
- TDD is evil - it is just a re-incarnation of bottom-up procedural design. It is better to follow "Design by Contract".
- He says XP is not a good methodology (he thinks Scrum is good).
Additionally, he talked about some of the organizational patterns he and Neil Harrison discovered while studying organizations for more than a decade. You can read the top ten patterns on his site.
Jim covered two patterns that are related to software architecture: Architect Controls Product and Architect Also Implements.
Architect Controls Product basically says that you should have an architect and that she should oversee that the project is moving in the right direction.
Architect Also Implements - this pattern says that in order for the architect to broaden her leadership without sacrificing depth and pragmatism, she must also participate in the implementation (beyond advising and communicating). Jim gave the example of the development of Borland's Quattro Pro for Windows in 1993, where the team's architect held a daily meeting (akin to Scrum stand-ups) for synchronization and would then go and code with the developers. The Quattro Pro team had 4 architects out of the 12 people on the team. If a third of the development team is architects, I'd say he is right. My experience with most organizations I see, however, is that you hardly have one architect per project (sometimes you only have one for several projects). In these cases I hardly see the architect writing production code as part of the team, since she would not have time to fulfill her architectural responsibilities. She must know how to code, though, and she must be able to prove her designs in code or be able to offer a candidate implementation if needed (I also wrote about that in the past - see "Should architects code?" part 1, part 2, part 3).
By the way, if you are located in Israel, Jim will be here for a couple of weeks and is giving a few courses, like Agile Architecture, Patterns of Agile Project Management, etc. You can find more information on Pacificsoft's site.
Back in January, I took part in an architect panel that Microsoft Israel organized. The panel was led by Ron Jacobs and it featured Udi Dahan, Assaf Jacoby, Coby Cohen, Dudu Benabou and myself. A few days ago Ron edited this recording and turned it into a podcast in his ARCast series.
The panel's focus was on lessons learned from mistakes made in past projects. Egomaniac as I may be :) -- even though you don't get to hear me much in the final edited version -- I think the podcast is worth listening to, as the panel raised some interesting points. You can download the podcast here (don't worry, it is in English even though it was recorded in Israel).
I am the first speaker after the introduction, in case you are wondering.
I'll be presenting a 90-minute class on SOA Patterns at the upcoming Architecture & Design World 2007, which will take place in Chicago on July 24-27.
If any of you happen to be there, I'd be very happy if you drop by and say hello :)
Back in January I opined that moving to web applications was not the optimal solution to the real problem we have/had with desktop applications, which was installation woes. What we got was a poor UI without installation problems, so we (the software industry) started to re-solve problems we had already solved when we moved from terminals to graphical UIs.
So now we have Rich Internet Applications (RIA) - using technologies like AJAX - but they suffer from other problems which, again, we've already been through.
Well, that was the topic of the post in January. Now I've stumbled upon an interesting/amusing twist - called Adobe Apollo.
Apollo lets you, yes you've guessed it, take your RIA applications and deploy them as desktop applications. You can now take your HTML, CSS and AJAX scripts, pack them up as a single file (AIR) and, lo and behold, deploy them on the desktop. You even get those nifty start menu and desktop shortcuts :)
The reason not to dismiss this as a complete waste of time is that what we actually see here is another example of a trend toward converging web and desktop UI architectures and programming models. I say "another" because, coming from the desktop direction, Microsoft is doing pretty much the same thing: WPF brings the web programming model, with its markup (XAML) and "code-behind" concepts, to the desktop, as well as pushing the same model to the browser with WPF/E.
The difference between Microsoft's and Adobe's solutions is that Adobe is coming from the web side and, as I said, Microsoft is coming from the desktop side - both companies are striding toward the same goal - and what we are left with is yet another technology war.
[based on a few posts from my DDJ blog]
Implementing a Business Intelligence (BI) solution on top of a Service Oriented Architecture (SOA) is not a simple feat. A recent survey by Ventana Research shows that "...only one-third of respondents reported they believe their internal IT personnel have the knowledge and skills to implement BI services.". There's a good reason for that, since there is an inherent impedance mismatch between BI and SOA which takes some effort to overcome. The purpose of this paper is to explain the problem as well as look at the possible solutions.
Service-Oriented Architecture is about autonomous, loosely coupled components. These traits give you lots of benefits, such as greater flexibility and agility, but they also mean that services have private data - data that you don't want to expose to the outside, as exposing it will decrease autonomy and increase coupling. This is why services only expose data and processes via contracts rather than exposing their internal structure.
That is all fine until you start to think about business intelligence. The cornerstone of any business intelligence initiative is gathering, collecting and consolidating data from all over the place. Once you have the data, you can use tools to analyze it, data mine it, slice, splice, aggregate, and whatnot. Traditionally, BI builds on ETL (Extract, Transform, Load), which goes directly to the databases of the involved sources.
And here lies the problem: On the one hand we have services that want to keep their data private, and on the other we have a datamart or warehouse that wants that data badly.
What are our options?
- If you go with traditional ETL, you introduce coupling into your service.
- If you only rely on contracts that were constructed for business processes, you may be missing out on important data.
- If you build a specific contract that exposes "all" the data, you are back at point-to-point integration -- and solving point-to-point integration is one of the reasons we want SOA in the first place.
The second option seems to be the most reasonable choice of the three -- but it also has several problems. One problem is that the BI needs to know about all the contracts. The second was already mentioned -- important data might be missing. The third problem is that the BI system needs to fetch data from the services, which means it may miss out on data in the intervals between requests. On the other hand, if the requests are too frequent, you can easily congest your network as well as cause a DoS on your own services.
Clearly we need a fourth option.
In my opinion, the best way to tackle BI in SOA is to add publication messages to the contract. By "publication messages", I mean that the service will publish its state, either periodically or per event, to anyone who is listening. This is a service communication pattern which I call "Inversion of Communications", since it reverses the request/reply communication style that is common in SOA.
To make the solution complete, you can add additional request/reply or request/reaction messages to allow consumers to retrieve initial snapshots. Following this approach, you get an event stream of the changes within the service in a manner that is not specific to the BI. In fact, having other services react to the event stream can increase the overall loose coupling in the system - for instance, by caching results of other services.
Why is this better than the other three approaches? For one, you can get a good picture of what happens within the service. Furthermore, the contract is not specific to the BI and can be used by other services to cache the service state (thus increasing their own autonomy), for reporting (you can see an early draft of the aggregated reporting pattern), and for BI purposes. By working against a steady stream of events, the BI platforms can analyze trends, keep history and get the complete picture they need.
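Here is a minimal sketch of the idea (the names - PublishingService, StateChangedEvent - are illustrative, not from any product): the service keeps its internals private but pushes state-change events to whoever subscribed, and a BI collector is just one more subscriber:

```java
import java.util.ArrayList;
import java.util.List;

// What the service publishes - part of its contract, not its internal schema.
class StateChangedEvent {
    final String entityId;
    final String newState;
    StateChangedEvent(String entityId, String newState) {
        this.entityId = entityId;
        this.newState = newState;
    }
}

interface Subscriber {
    void onEvent(StateChangedEvent e);
}

// The service owns its data; consumers never query its database directly.
class PublishingService {
    private final List<Subscriber> subscribers = new ArrayList<Subscriber>();

    void subscribe(Subscriber s) { subscribers.add(s); }

    // A business operation: apply the change internally, then publish it.
    void updateOrder(String orderId, String state) {
        // ... private state change happens here ...
        for (Subscriber s : subscribers) {
            s.onEvent(new StateChangedEvent(orderId, state));
        }
    }
}
```

A BI collector subscribes once and receives a steady stream of changes, instead of polling the service or reaching into its database.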
The approach above is sometimes referred to as "Event Driven Architecture" (EDA), and while I (and others) see EDA as another facet of SOA, not everyone agrees. Gartner, for instance, sees EDA as a separate paradigm and SOA as just request/reply, or client/server. Recently, however, they published a paper that calls the approach described here "Advanced SOA". I tend to agree more with the "Advanced SOA" definition and don't see a contradiction between EDA and the SOA definitions. We are still using the same components and the same relations, only adding an additional message exchange pattern to our toolbox.
A note on implementation: If you are implementing SOA over an ESB, this is rather easy, as most ESBs support publishing events out of the box. With the WS-* stack of protocols, you have the WS-BaseNotification, WS-BrokeredNotification and WS-Topics set of standards. If you are in the REST camp, then I guess you will need to implement publish/subscribe yourself.
Once you have event streams on the network, the BI components can grab that data, scrub it as much as they like and push it to their datamarts and data warehouses. However, event streams can also enable much more complex and interesting analysis of real-time events and real-time trend data, using complex event processing (CEP) tools to get real-time business activity monitoring (BAM).
You can also get this post as a presentation, downloadable from the papers section on my site or directly from here. (The download is about 3MB.)
Welcome to chain letters, blogosphere style. There's this online tag game going around; I've been watching it spiral around many of the blogs I read, and now Udi has dragged me into it as well :)
So here goes - here are 5 things you don't know about me:
1. It only took me 14 years to get my BA degree in Computer Science. I began studying at the Technion in 1990, quit after 2 years, and only bothered to graduate when I wanted to get a master's degree.
2. I was a Microsoft FoxPro MVP for 3 years in the 90s - for about half of that time I was working on completely different platforms and tools (J++ and C++ on Windows and then J2EE on Solaris).
3. I first met English in fourth grade (like most other Israeli kids at that time) - by the eighth grade I had read my first real book - Shogun. It took me 3-4 months to get through the book's 1200 pages, but I've been reading English ever since. In fact, I hardly read Hebrew anymore.
4. I learned to program on the ZX81 - I remember the joy the first time I fully used the 1K of memory it had, as well as the disappointment that followed when the instructor tried to add the 16K expansion, which caused the machine to reboot.
5. I used to be a hobbyist bartender. I still have more than 70 different bottles at home, with everything from grenadine to a 25-year-old Glenmorangie. I don't mix too many drinks these days; I mostly drink the Macallan.
I don't want to stay "it" for too long :), so on to tagging some other folks: Ohad Israeli, Andrew Johnston, Tad Anderson, Nancy Folsom and Ruth Malan. You are all it!
Roy Osherove recommended this site today - but he also urged me to write more frequently.
This is probably a good opportunity to explain how posts are divided between my 3 blogs.
First, there's the blog on Dr. Dobb's Journal. This blog is published in the "Architecture & Design" section of the DDJ portal. I blog there about 3 times a week. Jon (my editor @ DDJ) prefers a steady stream of posts over longer ones, which means that I break down large subjects (like OO principles, the fallacies of distributed computing and the currently running series on the architect's soft skills) into many parts.
The second blog is a new one on Microsoft Israel's blogs site. The aim of that blog is to bring architecture content in Hebrew to Israeli architects (as can be imagined, most of the technical content available is in English; I thought it was important to generate some content in Hebrew as well).
The last blog is this one. My current plan for this blog is as follows:
- Cross-posting selected posts from the DDJ site
- Posting complete articles made by editing and aggregating multi-part blog posts (again, such as the fallacies etc.)
- Pointers to presentations and articles I publish
- In the near future, I'll start posting bits of my upcoming SOA patterns book (I am currently writing chapter 3). I've already documented 8 patterns (of more than 50 patterns and about 30 anti-patterns). I plan to publish at least some of the patterns here for review (I am still crossing the t's and dotting the i's with my publisher, but I expect this to be finalized soon).
So Roy, does seven (7) posts in seven days (including 3 on the MS Israel site, 3 on DDJ and this one) qualify as posting often enough? :)
I've added a new section to the site, www.rgoarchitects.com/Papers, to allow easy access to all the papers, presentations and articles I've published (and will be publishing - e.g. I'll add a paper on architects' soft skills in a month or so).
Dr. Dobb's has begun a special (6-issue) e-zine called "Dr. Dobb's Requirements Development", and I am happy to say that Issue #1 includes the first part of an article by me on the subject of use case modeling.
The issue also includes articles by Joe Marasco (author of "The Software Development Edge"), Andrew Stellman & Jennifer Greene (authors of "Applied Software Project Management") and Karl E. Wiegers (author of "Software Requirements", 2nd edition).
You can get the e-zine by registering at www.ravenflow.com for the bi-monthly Requirements Development magazine.
Over the last few months I've posted a series of blog posts on DDJ that cover the basic Object Oriented principles (e.g. the Single Responsibility Principle, Don't Repeat Yourself, Inversion of Control, etc.).
I've assembled all the posts into a single whitepaper which you can get here.
Also you can download the same (plus a little more) material as a powerpoint presentation.
[Crosspost from my DDJ blog]
When talking about multi-tiered architectures, we need to remember that the tier boundary is significant. The tier boundary is where distribution happens and if you remember the "fallacies of distributed computing", you know not to take that lightly.
A tier is a physical boundary (versus an Edge in an SOA which is a logical boundary, for example) and the implications are numerous. For instance, you need to consider:
- Trust--who do you let in?
- Security--what do you send out?
- Performance--you need to serialize to pass the boundary, and remote data is expensive to fetch.
- Availability--what happens if you crash?
- Manageability--can anyone see what your state is? Help you recover?
- Temporal coupling--can you afford to make synchronous calls?
- and many similar questions.
Yet many times people think passing a tier is as simple as passing a logical layer. I should know. I made this stupid mistake more than 15 years ago in one of the first distributed systems I designed. I planned this beautiful separation of the UI controls from the business logic (I didn't know it was called "MVC" and that someone else had figured it out ages ago, so I was pretty proud of myself). When you clicked a button, you just used metadata to say that the BL should catch it. I had all this wonderful "infrastructure" that handled passing the call to its destination.
But then we wanted to take this n-layer application and put the BL in an "application server" which would handle multiple clients. Oh--now we need to move events over the wire, handle calls from multiple unrelated clients, pass a lot of data back and forth, and what about security... you can imagine the fiasco.
Thus, as Niels Bohr once said, "An expert is a person who has made all the mistakes which can be made in a very narrow field." But you don't have to make the same mistakes. Just remember that a tier is a natural boundary. You know what? You should probably even want to consider it the edge of a cliff at the end of your application--and be careful not to fall down.
[crosspost from my DDJ blog]
In a comment to my previous post on Architecture vs. Design, Yoni said:
It seems you are categorizing technical issues as architecture and logical issues as design. I think Martin Fowler's definition of "Making sure important things remain decoupled and easy to change" transverses both categories and is easier to follow.
I have a few things to say about this.
First, I don't categorize technical things as "architectural" and logical ones as "design." What I do say is that both "architecture" and "design" are types of design, where one focuses on the wider aspects of the solution and on quality attributes, while the other focuses on local and functional aspects.
I don't see how the definition Yoni brings up is a better way to distinguish between the two. Who is to say what is important and what is not? Isn't decoupling an important trait at all levels, including the so-called "detailed design" level (e.g. utilizing dependency injection at the class level will give you better testability)? Moreover, decoupling is important, but sometimes you need to trade it off in order to satisfy a higher-priority quality attribute (if you want to meet a project's quality, budget and schedule targets); see my definition of what software architecture is.
Another thing is that it doesn't matter that much where the line between architecture and design passes. The distinction between architecture and design is a semantic one that reminds us that the design of a system needs to be done at several levels of abstraction (provided the system is not too trivial). We need to abstract certain aspects of the system in order to be able to grasp the big picture. You cannot (well, I can't anyway) think about a 100 man-year project at the class level alone and understand how everything will work together. Again, architecture is there to remind us to focus on a level of abstraction that lets us deal with non-local decisions and make sure quality attributes are met; if we cross the line into design a little - no biggie.
[edited version of post I made on Dr. Dobbs Portal]
Back in April I provided a definition for "architecture" in one of my first posts on DDJ. I also promised I'd talk about the distinction between architecture and design. Well, that time is now.
When I try to think about it, I see two basic criteria to distinguish between architecture and design:
- Design deals with local decisions, where architecture is broader. For instance, you "design" the interfaces for your classes, but you "architect" the division into tiers.
- Design is mostly about the functional requirements, while architecture is mostly about quality attributes. You design how a specific workflow will fulfill a certain use case, but you architect the solution for the system's availability.
It is probably quite evident that this distinction only provides blurry borders between architecture and design; for example, when you have a multi-tier solution and you "architect" the UI and say it will implement the MVP pattern - can this be considered a local decision and thus design, or is this the overall decision (for the UI) and thus architecture?
The way I see it, the exact crossover point from architecture to design is not that important. The point of talking about two distinct activities in the development process is to maintain separation of concerns. You need to handle both to make sure a solution will actually work; whether you do a little design while architecting or a little architecture while designing really doesn't matter. Also, architects should be involved in both activities anyway...
Last week I published a 3-part article on O/R mapping on my blog @ Dr. Dobb's Portal. The paper describes the benefits and costs of using O/R mapping as well as recommends when O/R mapping should be used.
Here it is as a single whitepaper: Architecture Dilemmas - OR Mappin.pdf (228.78 KB)
Well, not right away - but I just read he will be leaving MS by 2008. I don't know whether it will be good or bad for Microsoft in the long run but whatever the outcome will be it will definitely be the end of an era.
I have amassed more than 30 patterns related to SOA (e.g. SOA Patterns - Decoupled Invocation and SOA Patterns - Aggregated Reporting, which I previously published here). I have patterns around security, availability, scalability, composition, adding a UI, etc. Some of the patterns are original (I think) and some are based on other people's work.
I am trying to decide whether or not it would be worthwhile putting all these patterns into a book. Writing a book is a very time-consuming task (or so I am told) - so I thought I'd run a quick poll among the readers of this blog to see how many of you would be interested in reading (and buying) this book if it gets published.
I know this is not a representative crowd - but it can give me a (very) rough idea on the interest in such a book.
Please send any comments (comments like "forget it, no one would ever want to read anything you write" are also ok) to email@example.com (or leave a comment here)
Thanks in advance - Arnon.
I gave a presentation on CMMI today.
The main points of the presentation are:
- CMMI focuses on process
- It builds on the premise that improving the process improves the quality of the product
- CMMI integrates several other CMMs, including the Software CMM
- CMMI is a framework that lets you define the process - you need to show you cover the CMMI process areas for certification
- Newer methodologies (Agile) focus on people rather than process
- However, since CMMI is a framework, you can map agile processes to CMMI
- You would want to do this (mapping) if you want to introduce agility into CMMI organizations
- Another reason to mix both approaches is that sometimes there's a need to use formal processes but you still want some agility for sub-projects
Scott Ambler has a new article comparing most of the leading development methodologies. He also tries to recommend which methodology fits which kind of project (e.g. for commercial off-the-shelf products use EUP/RUP, ISO 12207, TSP/PSP or a Data-Driven Approach).
The article serves as a nice overview of the available methods - however, Scott doesn't explain his reasoning on why he thinks a particular methodology fits (or doesn't fit) a certain type of application, which is a pity. Furthermore, I think Scott misses the point a little by neglecting organizational, cultural, and other people-related reasons for choosing a methodology. For example, if all your teams are versed in RUP, you would most likely "force-fit" it to your new COBOL project rather than choose a better-fitting methodology.
Also, I am not sure I agree with all his mappings - the most notable example is mapping XP to safety-critical projects. To get DO-178B certification (the certification required by the FAA for aviation software) you need to have the following documents (DO-178B has 5 levels, and not all documents are needed for all levels):
- Plan for Software Aspects of Certification (PSAC)
- Software Development Plan (SDP)
- Software Verification Plan (SVP)
- Software Configuration Management Plan (SCMP)
- Software Quality Assurance Plan (SQAP)
- Software Requirements Standards (SRS)
- Software Design Standards (SDS)
- Software Code Standards (SCS)
- Software Requirements Data (SRD)
- Software Design Description (SDD)
- Software Verification Cases and Procedures (SVCP)
- Software Life Cycle Environment Configuration Index (SECI)
- Software Configuration Index (SCI)
- Software Accomplishment Summary (SAS)
- Software Verification Results (SVR)
- Problem Reports
- Software Configuration Management Records
*list copied from LynxOS site
I don't think that this level of formality is a good fit for XP.
[crosspost from DDJ]
Reading the comments on my previous two posts on whether architects should code (here and here) as well as the comments on Johanna Rothman's posts (here, here and here) leads me to a few observations:
The first apparent thing is that the issue is a very loaded one. Some people believe it is essential for architects to code, while others (like me) believe that their time is better spent on other issues. (That said, it seems that a small majority of the commenters think architects should code as part of the development team--at least for feedback purposes if nothing else.)
There is a wide consensus (me included) that architects should know how to code and have extensive experience in coding. It is also agreed that architects should be involved in the project--that is, not just drop off the architecture and then disengage.
I still believe that when the project is big enough (that is, big enough to warrant more than one team working on it), the project is better served by the architect getting involved with all the teams rather than participating as a developer in one of them. If you are an architect and develop as part of the development team, you are (or should be, anyway) committed--meaning you need to deliver the piece of code under your responsibility at an acceptable quality level, just like the other developers. Which is exactly why you would be less likely to deliver on your responsibility for the total quality of the project. (I assume some of the differences in opinion can be attributed to disagreement on what software architecture is, at least when compared to design.)
I also think those who think architects must code see the architect as some sort of a lead developer again. I don't buy that. The architect's role is much broader than that (see also this post by Kevin Seal, which also discusses this issue). I see a holistic view of the architect role, which is making sure the product is deliverable. This may translate to the architect coding a module or two, but it can also translate to a lot of other things. Examples from my experience as an architect include preparing initial cost estimates, iteration planning, helping debug and test, solving installation problems, analyzing requirements, conducting design and code reviews, designing, and prototyping (yes, that's coding, but as I said in the previous posts, it isn't writing the production code and doesn't involve having to meet deadlines, etc.).
I also liked a comment by Graham Oakes on one of Johanna's posts:
My experience is that an architect is pulled between three poles--the product, the team and the client. The product pole pulls you towards managing the "conceptual integrity" of the design. The team pole pulls you towards mentoring people, helping them build skills, etc (which may mean consciously letting someone write code that you could do much better yourself). The client pole pulls you towards translating between the technical and the client domains (which is often where you get pulled into powerpoint). You need to trade these poles off differently on every project...
To sum up, the answer to "should architects code?" is, like so many things in life--it depends.
While I still hold my view on the current state of software factories, at least one company (EDS) is reporting (in an MSDN article) that they've built a software factory for their domain entities [found via Steve Cook]. Nevertheless, this is still a generic generator (i.e. not a factory for an ordering system or something similar, but rather something like a DSL for O/R mapping).
I also wonder why they generate unit tests for the generated code - assuming they properly tested their templates, the generated code should just work...
Roy Osherove blogs about some of the questions that were discussed in the architects' panel at the recent TechEd Israel.
I've been thinking a lot about some of these questions lately (and not just because I helped draft the questions for the panel :)), specifically about when and how to introduce agile methods. One problem, which Roy points out, is fixed-price/fixed-time projects (which, unfortunately, are pretty much the norm in Israel). Another problem is with organizations, such as the one I work for, which have CMMI level 3 (or higher) certification (not to mention ISO 9002), which makes it really hard to introduce agility.
I stumbled upon this presentation, which analyzes the CMMI compliance of agile methodologies (I've started trying to map SCRUM to CMMI 3 to get the SEPG (Software Engineering & Practices Group) off my back).
Another interesting approach I found is AgileTek's Agile+ methodology, which is a mix-and-match approach that claims to be the best of both worlds (I am not sure I am 100% convinced, but it is worth a look).
Lastly, you can look at this interesting presentation by Barry Boehm and Richard Turner, which talks about when to use which approach.
[Crossposted from my DDJ blog]
About the same time I wrote the post on whether architects should code, saying that architects should be able to prototype but shouldn't be part of the dev team (in the sense that the architect shouldn't get coding tasks that result in production code), Johanna Rothman wrote a blog post claiming that architects must code.
Two days ago she posted a more detailed explanation of her view. I agree with most of the points she made:
- Architects need to participate in the project; that is, not be some outsider who just drops her architecture on the team and leaves.
- The best way to test a design is to code and run it.
- It is beneficial for architects to know how to code.
- It is important that architects understand the implications of their decisions on the code and developers.
I don't see how architects taking coding tasks serves the greater good, versus monitoring the teams that code and making sure all aspects of the architecture actually fit the problem and work. Again, this may work on smaller projects, but probably not on larger ones.
You may also want to look at two related posts I made in the past:
SAF Architecture Evaluation: Evaluation in Code talks about some of the ways architecture can be validated in code.
SAF Deployment: What to do when the architecture seems stable? talks about the architect's involvement in the project when they think the architecture is "finished".
A couple of points regarding the analogy Rothman uses--architects who design bathrooms for hotels. Building architects are seldom a good analogy for software architects (I once used the analogy as well); there are far too many differences (maybe I'll blog about that sometime in the future).
This brings me to the second point: the analogy doesn't serve Rothman's point well, since building architects never actually participate in laying bricks or installing bathrooms. The fact that hotel bathrooms are not comfortable means that this quality was low on their priorities. In any event, to verify whether a bathroom is usable you don't have to install it--just use it. (If you do take the analogy, you don't have to code it--just stick around and see what's going on.)
[crosspost from Dr. Dobb's Portal]
Test Driven Development (TDD) is, in a nutshell, writing a unit test up front--making it fail, making it work, refactoring, and repeating until the product is finished. (If this is new to you, read more at testdriven.com.)
So with TDD you get a bunch of unit tests that are also proven as regression tests. That's pretty cool.
TDD also lets you work in small increments while maintaining the working code. That's even cooler.
And lastly, TDD has a very good influence on design:
- It encourages loose coupling. When you want to make something testable, you want to remove its dependencies so you can test it by itself.
- It makes you think about the interface of the unit under test--how is the interface going to look?
- It makes you think about how the unit under test will be used--that is, the behavior of what you are writing (or designing).
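The red/green/refactor loop described above can be sketched in a few lines of Python; the `Stack` class and its test are purely illustrative names of mine, not from any specific project:

```python
import unittest

class Stack:
    """Minimal implementation written just to make the test below pass."""
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

class StackTest(unittest.TestCase):
    # This test is written first: it fails ("red") until the Stack class
    # above is implemented ("green"); then you refactor and repeat.
    def test_pop_returns_last_pushed_item(self):
        stack = Stack()
        stack.push(1)
        stack.push(2)
        self.assertEqual(stack.pop(), 2)

# Run the test programmatically (instead of unittest.main(), which exits).
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.TestLoader().loadTestsFromTestCase(StackTest))
```

Each pass through the loop adds one small test and just enough code to satisfy it, which is what keeps the code working at every step.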
Sounds great to me. I think TDD is a great way to do the detailed design. You specify the results (interface + behavior), then implement that design. One thing I don't buy, though, is that TDD alone will produce an "emergent design" for the whole system. The way I see it, you have to do some design up front (assuming your system is not a trivial one), since TDD, being a coding technique, keeps you working at sea level.
There's also a fundamental matter of scale--it might be possible in theory to start that 100 man-year project as a single object and then refactor it in baby steps until you get the perfect system. I believe that if you don't work at a higher level of abstraction (vs. code), you will not be able to partition the system in a reasonable time. This was true when we moved from assembly code to higher-level languages, which enabled us to write much more complex software--and it is true today, as we need to answer the ever-changing requirements of modern enterprises.
To sum up, TDD is good for testing, and it is a good design methodology at the detailed design level. It can be used to drive the overall design on smaller projects--but on larger systems we need additional methods and tools to cope with the overall design and architecture.
Scott Bellware attacks Microsoft's cluelessness about modern development methodologies and tools, by talking about the three (in?)famous typecasts (the Mort, Elvis & Einstein personas) used by Microsoft to model developers.
I agree with a lot of the points Scott makes. I think that (unfortunately) there are individuals and organizations that are suited to the Mort-Elvis-Einstein approach - where people are not as smart or competent as Scott is and can't handle agility. Also, there are situations where agility cannot be practiced - e.g. clients insisting on fixed-price projects and waterfall-ish milestones where, against our better judgment, we were forced to do a lot of up-front planning (that had to be reworked later…), safety-critical systems, etc.
I much prefer the direction Scott took in an earlier post, where he talked about a missing persona - Hugo the Agilist. I think Microsoft will be making a grave mistake if they do not pay attention to the needs of the growing community of developers who prefer agile methods and practices.
Dr. Dobb's has recently launched a new portal site, where they want industry experts to post their views - well, I guess it is not limited to experts only, as they've also offered me to write there - I'll be writing the blog on software architecture and design.
I am going to post there on a daily basis (read: ~5 posts a week). Naturally, they are going to be shorter posts that will try to highlight and comment on (hopefully) interesting things related to architecture and design (my thoughts, other people's posts, news, etc.).
I am still trying to decide on the balance between this blog and the new one, but I guess most longer posts will go here (though I may cross-post them), and whitepapers and presentations will continue to be posted here. Also note that the new blog has a wider spectrum, as it also talks about design.
You will find my new blog, "If you build it…will they come", here.
One of the roles of the software architect is to act as a mentor/coach. Reviewing some of the designs in one of my projects' teams, it seemed the time was ripe for doing just that. Thus, last week I gave them a presentation on the basics of good OO design - which I thought might also be of interest to other people (you can download a copy here - 312KB).
The presentation starts with the 7 deadly sins of software design:
- Rigidity – make it hard to change
- Fragility – make it easy to break
- Immobility – make it hard to reuse
- Viscosity – make it hard to do the right thing
- Complexity – over-design
- Repetition – error prone
- Not doing any design at all
It is interesting to note that just yesterday I read an interesting piece on what makes good design (i.e. looking from the positive side) by James Shore (found via Sam Gentile).
The main part of the presentation demonstrates the 5 basic design principles (drafted by people like Robert C. Martin and Barbara Liskov):
- OCP (open-closed principle) - a class should be open for extension but closed for modification
- SRP (single responsibility principle) - a class should have a single responsibility
- ISP (interface segregation principle) - there should be separate interfaces for different clients
- LSP (Liskov substitution principle) - basically design by contract - a sub-class should fulfill the same expectations its superclass set
- DIP (dependency inversion principle) - classes should depend on abstractions: class consumers should depend on abstractions, and abstractions shouldn't depend on details
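As a small illustration of DIP (and the dependency injection it enables), here is a sketch in Python; the class and method names (`MessageSender`, `ReportService`, etc.) are mine, purely illustrative, and not from any particular framework:

```python
from abc import ABC, abstractmethod

# The abstraction both sides depend on (DIP): the high-level ReportService
# and the low-level concrete senders depend on MessageSender, not on each other.
class MessageSender(ABC):
    @abstractmethod
    def send(self, text: str) -> str: ...

class EmailSender(MessageSender):
    def send(self, text: str) -> str:
        return "email: " + text

class SmsSender(MessageSender):
    def send(self, text: str) -> str:
        return "sms: " + text

class ReportService:
    # The concrete sender is injected rather than constructed inside,
    # so this class is closed for modification but open for extension (OCP):
    # adding a new sender type requires no change here.
    def __init__(self, sender: MessageSender):
        self._sender = sender

    def publish(self, report: str) -> str:
        return self._sender.send(report)
```

Because `ReportService` only sees the abstraction, any `MessageSender` (including a stub in a unit test) can be swapped in, which is exactly the loose coupling these principles aim for.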
These principles are the basis for some of the techniques widely used today - a few examples include:
- Inversion of Control - builds on OCP
- Dependency Injection - a mechanism to allow DIP
- Contract First - building on LSP, DIP
At the end of the day, following these principles helps manage class dependencies and increases overall loose coupling and cohesion, thus increasing the overall quality of the design. It sometimes amazes me how much using just a few simple rules can improve the maintainability, flexibility and usefulness of designs.
I recently found out that MSF 4.0 is out - yep, it is still in two flavors… MSF 4 Agile (or officially "MSF for Agile Software Development Process Guidance") and MSF 4 CMMI (MSF for CMMI Process Improvement).
I consider MSF Agile a lightweight process; however, I prefer SCRUM as an agile project management process.
I find the CMMI version more interesting - both for cases where Agile is not a good fit (also see Agile vs. Plan driven) and for organizations that need to have CMMI certifications (such as the one I currently work for). MSF 4 CMMI covers level 3 pretty well and also has some guidance on moving to the next levels.
Definitely worth a look :)
Scott Bellware writes about Microsoft missing an Agilist persona (in addition to Mort, Elvis and Einstein).
I pretty much agree with Scott's views - MS's lack of understanding of the Agile crowd is evident in "fiascos" like the TDD article on MSDN, or even MSF Agile, which is a relatively light process but still very far from processes such as XP, SCRUM and the like.
Personas are a very interesting concept, usable as a communication aid for development teams - a means to help maintain user focus.
CHAOS Chronicles 3.0 by the Standish Group* (www.standishgroup.com) cites "user involvement" as the second most important success factor for development project success (after executive management support). Indeed, many agile methodologies also encourage high customer involvement.
In my professional career, I have had the chance to work for both product companies and solution companies. Why is that relevant, you ask? Well, when you work for a solution company you usually have tangible, real-life customers to work with. You can walk them through early usability prototypes, you can consult with them on problematic requirements, you can have them on site for instant feedback, etc.
Things are more problematic when you work on "shrink-wrapped" products - now your customers are much more abstract and elusive. Yes, you can still hold focus groups, etc., but you can't have that day-to-day interaction with end-users and customers - enter Personas.
I first heard about Personas several years ago, when I read Alan Cooper's "The Inmates are Running the Asylum". The book demonstrates some bad designs for software-based products and then introduces an approach (Personas...) to help avoid these problems (he elaborates more on the approach in "About Face").
Personas are basically a way to define archetypes of users. Unlike use case actors, which represent a role in the system, personas try to highlight the characteristics of real users. The idea is to come up with a representative model of a user and give it a full bio and characteristics, so as to help the development team understand motivations and relate to actual users. This modeling of absent users is the reason the technique is so important for product companies: if you don't have real users, come up with abstract ones representing them. Alan Cooper introduced Personas as a means to help designers in the initial phases of product design. Microsoft extended the use of Personas as a communication aid to the full range of the development team (designers, developers, testers, managers, marketers, etc.) - which brings even more benefits. I highly recommend reading the very interesting paper "Personas: Practice and Theory" by John Pruitt and Jonathan Grudin, which relates Microsoft's experience on the subject.
While Personas are very important for product companies, they can also be important for solution development, especially on larger projects where only a few members of the development team get a chance to interact with customers and users. If you cannot have the customer on site (or the customer representative doesn't give you the full picture of all the user types of the system), you can use persona-scenarios as a way to augment user stories (or, in other circumstances, as an alternative to actors and use cases).
* The Standish Group has been collecting metrics from thousands of projects since 1994. They analyze success and failure factors and publish them on a yearly basis. For example, while our ratios as an industry are getting better over the years - as of 2004, only about a third of projects achieved their goals both on budget and on time...
Take a look at Jeff Schneider's blog for some nice Lego illustrations of composite applications.
Forget those stupid agile methods and all that iterative junk - Waterfall 2006 is here: http://www.waterfall2006.com
or as the "report of the DEFENSE SCIENCE BOARD TASK FORCE ON DEFENSE SOFTWARE 2000" sums nicely:
"About 90% of the time, the [waterfall] process results in a late, over-budget, fragile, and expensive-to maintain software system. A typical result of following the waterfall model is that integration and testing consume too much time and effort relative to the other software development activities. Most waterfall projects, expend over 40% of their effort and schedule in integration and testing."
Oh well, maybe we should stick with what we know after all :)
(Thanks Grady Booch and David Ing blogs for the conference link)
I recently read Weblog Usability: The Top Ten Design Mistakes by Jakob Nielsen [found via Jeff Tash's IT Scout blog].
The first two mistakes Jakob discusses are a missing author photo and bio - since these are relatively easy to fix, I went ahead and mended the situation. I've added a photo in the sidebar and I've posted a short bio in a new "about me" section.
Avoiding the other eight mistakes requires more effort - but I think (read: hope) I have most of them covered.
I'll take this opportunity to make a quick note on the architecture posts. I see that there's a lot to say about architecture modeling (oh my, what a surprise). I want to cover SAF at a certain level before I get lost in the details - so if you found SAF interesting thus far, watch out for posts on Mapping, Evaluation and Deployment soon :)
I just read Paul Kimmel's very interesting article on "truisms" of software development, called "Un-Dynamics of Software Development, or, Don't Bite the Flip Bozo". The article is written in an amusing/cynical tone; however, a lot (if not all) of it is really true.
"Flipping the Bozo bit " by the way is "to make a mental note that a particular person is a bozo and everything they say in the future should be ignored or looked upon as the meanderings of a slightly annoying, occasionally amusing child or a drunken uncle"
Paul comments that the mean time for someone in the crowd to flip the bozo bit on you when you start speaking is 10 seconds.
The article also mentions stuff like
- When someone says the schedule is going to be missed, they are never lying.
- If a manager says "I am not technical," be prepared to spend a lot of time explaining things to them so they can make decisions they shouldn't be making.
- Managers hire experts and ignore them all the time.
[found via Mitch Barnett's blog on software industrialization - which makes for interesting reading by itself (focusing on Software Factories, DSLs, and software development in general). His company developed a Biztalk appliance, which you may want to check out if you are using Biztalk*]
* I have my views and some reservations regarding Biztalk - but I guess that's a subject for another post :)
Udi Dahan talks about the merits of agile methods vs. (heavy) process methodologies (countering a post by Joel Semeniuk on CMMI).
I can't say I totally agree with either (what else? :)) - there are cases where agile methods are a better fit, and there are cases where you'd be lost without a plan. While agile methods optimize for change (which is indeed a grim reality we all live with), plan-driven methods optimize for complexity. Barry Boehm and Richard Turner detail the various people-related issues that can make you choose one over the other. The diagram below (taken from that article) sums it up nicely.
It is always important to strike a balance between the level of process employed and the project at hand. That's why even the heavier processes (like RUP or MSF for CMMI) are tailorable. I guess a lot of organizations don't take the effort to tailor the processes to their needs, rely on the "out of the box" experience, which is not tuned to them, and thus suffer less than optimal results.
Another important issue is tool support - if you are going to employ a more plan-driven (heavy) process, you really want it to be supported by your tools, to help alleviate that "document-oriented development" feeling Udi mentioned in his post. This is where the upcoming release of MSF shines (especially vs. the previous release of MSF).
I just upgraded the blog to DasBlog 1.8 (RC); I also took the chance to change the theme of the site.
I guess this is a good opportunity to thank Scott Hanselman and Omar Shahine for all the time and effort they put into maintaining and upgrading this software.
I've added a new area on the site for the SPAMMED Architecture Framework (SAF)
There's nothing much there (yet...) except links to the blog entries on the SPAMMED process; however, I am going to add presentations, whitepapers, a workshop, etc. there in the future (some of these are already under development).
The presentation for the paper mentioned in the previous post can be downloaded from here.
First the disclaimer :)
I wrote the paper below about 2 years ago, summarizing my experience with requirements engineering using use cases. The problem was that it got to be too long for a magazine article and too short for a book, plus it needed a ton of editing.
Nevertheless, now that I've started blogging, and considering that I think it still has some very useful information for anyone trying to make use cases work in a mid-sized or large project - here it is for your viewing pleasure:
Methodology for building Use Cases for large systems.pdf (206.64 KB)