The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.
In the previous post about SAF I introduced the concept of quality attributes, and wrote that a "utility tree" approach is a very good way to identify, document, and prioritize them. The purpose of this post is to expand on this topic.
As I mentioned before, MSF 4 for CMMI Improvement makes use of LAAAM (developed by Microsoft's Jeromy Carriere) for assessing the architecture - which is also a good place to use it, but I'll talk about that when I get to the E(valuation) of SAF. LAAAM also builds on a "utility tree"; below are the sub-activities mentioned in the MSF beta bits:
ATAM (by SEI) - another architecture evaluation methodology - talks about a similar process with the addition of prioritization:
This post covers writing the scenarios, prioritizing them, and what's missing from both of these methods (since they are evaluation methods): ways to help us identify which quality attributes to use in the first place.
First, before we delve too deeply into the details, here is an example of what the end result might look like (taken from http://www.akqit.ch/w3/pdf/bosch_atam.pdf - I am trying to see what I can publicize from projects I've been involved with, but I guess that will have to wait for a later, separate post).
It is hard to explain exactly how you would go about eliciting the quality attributes and their refinements (I think the best way would be through a workshop - but that's hard to do over a blog :)). It does, however, involve the same techniques you would use to elicit any other requirement - partly by building on your past experience from similar systems, but mostly by working closely with your stakeholders:
To help with the elicitation, I'll try to give you some lists for the first two levels (attributes and refinements) that can serve as a repository or checklist when you are working with the stakeholders.
I already provided a relatively long list of quality attributes to draw from for level 1 of the tree (though the list is not exhaustive) in the previous post.
For level 2 of the tree (refinement), consider the following lists for the common quality attributes (most are taken from Applicability of General Scenarios to the Architecture Tradeoff Analysis Method):
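To make the two levels concrete, here is a minimal sketch of how a utility tree's attribute and refinement levels might be represented in code. The attribute and refinement names below are purely illustrative - your own tree would come out of the elicitation work with your stakeholders:

```python
# A minimal sketch of a utility tree's first two levels:
# quality attributes (level 1) mapped to their refinements (level 2).
# The names here are illustrative placeholders, not a prescribed tree.
utility_tree = {
    "Performance": ["Latency", "Throughput"],
    "Availability": ["Hardware failure", "Software failure"],
    "Modifiability": ["New features", "Technology changes"],
    "Security": ["Intrusion detection", "Data confidentiality"],
}

# Print the tree as an indented outline.
for attribute, refinements in utility_tree.items():
    print(attribute)
    for refinement in refinements:
        print(f"  - {refinement}")
```

The third level - the scenarios themselves - would hang off each refinement, which is what the rest of this post is about.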
The scenarios are the most important part of the utility tree. The main reason is that they help us understand which quality attributes are needed; more importantly, by tying the attributes to real instances in the system, the scenarios make these goals both concrete and measurable.
A couple of things are important to note about scenarios:
Scenarios are basically statements that have a context, a stimulus, and a response, and describe a situation in the system where the quality attribute manifests itself.
Context - under what circumstances
Stimulus - the trigger, in use-case lingo
Response - what the system does.
Let's look at a few examples to clarify this:
If we take one of these (e.g. "An intrusion is detected, and the system cannot lock the doors. The system activates the electromagnetic fence so that the intruder cannot escape"):
Stimulus - an intrusion is detected
Context - the system cannot lock the doors
Response - the system activates…
Or another one ("Half of the servers go down during normal operation without affecting overall system availability"):
Stimulus - half of the servers go down
Context - during normal operation
Response - without affecting overall system availability
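The context/stimulus/response breakdown above lends itself to a small data structure - here is a sketch, using the second example's values as the field contents (the `Scenario` class itself is my own illustration, not part of any of the methods discussed):

```python
from dataclasses import dataclass


@dataclass
class Scenario:
    """A quality-attribute scenario: context + stimulus + response."""
    stimulus: str  # the trigger
    context: str   # under what circumstances
    response: str  # what the system does (ideally measurable)


# The second example from the post, broken into its three parts.
availability_scenario = Scenario(
    stimulus="Half of the servers go down",
    context="during normal operation",
    response="overall system availability is not affected",
)
```

Capturing scenarios in a uniform structure like this makes it easier to review them with stakeholders and, later, to attach priority rankings to each one.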
The last step is prioritizing the scenarios. It is common to use two criteria (though you can use more):
The interesting scenarios (the ones to focus on) are those with high priority - (H,H), (H,M), and (M,H) - and these will be used as input for the modeling step of SAF.
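The filtering described above can be sketched in a few lines. The scenario names and their rankings below are invented for illustration, and the choice of which two criteria the (H/M/L) pairs stand for is an assumption - the method only requires that there be two rankings per scenario:

```python
# Each scenario gets two H/M/L rankings - e.g. one per criterion.
# Names and rankings here are made up for illustration.
scenarios = [
    ("Server failover without downtime", ("H", "H")),
    ("Add a new report type in one week", ("H", "M")),
    ("Port the UI to a new platform", ("M", "H")),
    ("Change the logging format", ("L", "L")),
]

# Only these ranking pairs feed the modeling step.
INTERESTING = {("H", "H"), ("H", "M"), ("M", "H")}

selected = [name for name, ranks in scenarios if ranks in INTERESTING]
```

In this sketch, only the first three scenarios survive the cut; the (L,L) one is set aside.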
I'll try to provide samples based on my own experience in a future post.