In the previous post about SAF I introduced the concept of quality attributes, and wrote that a "utility tree" approach is a very good way to identify, document and prioritize quality attributes. The purpose of this post is to expand on that.
As I mentioned before, MSF 4.0 for CMMI Process Improvement makes use of LAAAM (developed by Microsoft's Jeromy Carriere) for assessing the architecture - which is also a good place to use it, but I'll talk about that when I get to the E(valuation) of SAF. LAAAM also builds on a utility tree; below are the sub-activities mentioned in the MSF beta bits:
- Examine quality of service requirements and product requirements to determine the key drivers of quality and function in the application.
- Construct a utility tree that represents the overall quality of the application. The root node in the tree is labeled Utility.
- Subsequent nodes are typically labeled in standard quality terms such as modifiability, availability, security. The tree should represent the hierarchical nature of the qualities and provide a basis for prioritization.
- Each level in the tree is further refinement of the qualities. Ultimately the leaves of the tree become scenarios.
- For each leaf in the utility tree, write a scenario. The scenario is in the form of context, stimulus, and response. For example, "Under normal operation, perform a database transaction in fewer than 100 milliseconds."
- Open the assessment matrix template. Enter each scenario as a row in the assessment matrix.
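To make the tree structure concrete, here is a minimal sketch of how such a utility tree could be represented in code (Python; the `Node` class and the placement of the sample scenarios under specific refinements are my own illustration, not part of MSF or LAAAM):

```python
# A minimal utility tree: the root is "Utility", inner nodes are quality
# attributes and their refinements, and the leaves are scenarios.
class Node:
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []

    def leaves(self):
        # The leaves are the scenarios that become rows in the assessment matrix.
        if not self.children:
            return [self.label]
        return [leaf for child in self.children for leaf in child.leaves()]

utility = Node("Utility", [
    Node("Performance", [
        Node("Latency", [
            Node("Under normal operation, perform a database transaction "
                 "in fewer than 100 milliseconds"),
        ]),
    ]),
    Node("Availability", [
        Node("Degraded mode", [
            Node("Half of the servers go down during normal operation "
                 "without affecting overall system availability"),
        ]),
    ]),
])

for scenario in utility.leaves():
    print(scenario)
```

The `leaves()` call flattens the tree into exactly the scenario list that feeds the assessment matrix in the last sub-activity above.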
ATAM (SEI's Architecture Tradeoff Analysis Method - another architecture evaluation methodology) describes a similar process, with the addition of prioritization:
- Select the general, important quality attributes to be the high-level nodes
- E.g. performance, modifiability, security and availability.
- Refine them to more specific categories
- All leaves of the utility tree are “scenarios”.
- Prioritize scenarios
- Present the quality attribute goals in detail
This post is going to cover writing the scenarios, their prioritization and what's missing from both these methods (since they are evaluation methods) - ways to help us identify which quality attributes to use in the first place.
First, before we delve too deep into the details, here is an example of what the end result might look like (taken from http://www.akqit.ch/w3/pdf/bosch_atam.pdf - I am trying to see what I can publicize from projects I've been involved with, but I guess that will have to wait for a separate post).
It is hard to explain exactly how you would go about eliciting the quality attributes and their refinements (I think the best way to do that would be through a workshop - but it's hard to run one over a blog :)). It does, however, involve the same techniques you would use to elicit any other requirement - partly by building on your past experience from similar systems, but mostly by working closely with your stakeholders:
- Interviews - meeting with individual stakeholders to discuss their view of the system
- Brainstorming - meetings with multiple stakeholders, trying to come up with attributes and scenarios
- Reading written requirements (if available) - e.g. RFPs, use cases, project risk documents etc.
To help with the elicitation, I'll try to give you some lists for the first two levels (attributes and refinements) that can serve as a repository or checklist when you are working with the stakeholders.
I already provided a relatively long list of quality attributes to draw from for level 1 of the tree (though the list is not an exhaustive one) in the previous post.
For level 2 of the tree (refinement), consider the following lists for the common quality attributes (most are from SEI's Applicability of General Scenarios to the Architecture Tradeoff Analysis Method):
- Performance -
- miss rate
- data loss
- Availability -
- time period when the system must be available
- availability time
- time period in which the system can be in degraded mode
- repair time
- boot time
- Modifiability / Replaceability / Adaptability / Interoperability
- difficulty in terms of time
- cost/effort in terms of number of components affected
- Scalability / Efficiency -
- Resource X (CPU/Memory/…) usage on average per unit of time
- max usage of a resource
- availability of a resource over time
- Usability / Learnability / Understandability / Operability
- task time
- number of errors
- number of problems solved
- user satisfaction
- gain of user knowledge
- ratio of successful support requests to total requests
- amount of time/data lost
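If you want this checklist in a machine-readable form for a workshop, it is easy to mirror as a simple mapping (Python sketch; the dictionary reproduces only a subset of the lists above):

```python
# Quality attribute (level 1) -> candidate refinements/measures (level 2).
# A subset of the checklist above, for use during elicitation workshops.
refinement_checklist = {
    "Availability": [
        "time period when the system must be available",
        "repair time",
        "time period in which the system can be in degraded mode",
    ],
    "Modifiability": [
        "difficulty in terms of time",
        "cost/effort in terms of number of components affected",
    ],
    "Usability": [
        "task time",
        "number of errors",
        "user satisfaction",
    ],
}

for attribute, measures in refinement_checklist.items():
    print(f"{attribute}: {', '.join(measures)}")
```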
The scenarios are the most important part of the utility tree. The main reason is that the scenarios help us understand which quality attributes are needed; more importantly, by tying the attributes to real instances in the system, the scenarios make these goals both concrete and measurable.
A couple of things are important to note about scenarios:
- First and foremost - Scenarios should be as specific as possible.
- Scenarios should cover a range of:
- Anticipated uses of the system ("use case" scenarios) - what happens under normal use
- Anticipated changes to the system (growth scenarios) - where you expect the system to go and develop
- Unanticipated stresses to the system ("soap opera" or exploratory scenarios - pushing the envelope etc.)
Scenarios are basically statements that have a context, a stimulus and a response, and describe a situation in the system where the quality attribute manifests itself.
Context - under what circumstances
Stimulus - the trigger, in use-case lingo
Response - what the system does
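The context/stimulus/response structure can be captured directly in a data type. A small sketch (Python; the field names follow the definitions above, the class itself is my own illustration):

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    context: str   # under what circumstances
    stimulus: str  # the trigger, in use-case lingo
    response: str  # what the system does (ideally measurable)

# The database example from the text, split into its three parts.
db_latency = Scenario(
    context="Under normal operation",
    stimulus="perform a database transaction",
    response="in fewer than 100 milliseconds",
)
print(f"{db_latency.context}, {db_latency.stimulus} {db_latency.response}")
```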
Let's look at a few examples to try to clarify this:
- Under normal operation, perform a database transaction in under 100 milliseconds (Use case)
- Remote user requests a database report via the Web during peak period and receives it within 5 seconds (Use case).
- Add a new data server to reduce latency in scenario 1 to 2.5 seconds within 1 person-week. (Growth)
- An intrusion is detected, and the system cannot lock the doors. The system activates the electromagnetic fence so that the intruder cannot escape (Use Case)
- For a new release, integrate a new component implementation in three weeks. (Growth)
- Half of the servers go down during normal operation without affecting overall system availability (Soap opera)
- Under normal operations, queuing orders to a site which is down, system suspends within 10 minutes of first failed request and all resources are available while requests are suspended. Distribution to others is not impacted. (Use case)
- By adding hardware alone, increase the number of orders processed hourly by a factor of ten while keeping the worst-case response time below 2 seconds (Soap opera)
If we take one of these (e.g. "An intrusion is detected, and the system cannot lock the doors. The system activates the electromagnetic fence so that the intruder cannot escape"):
Stimulus - an intrusion is detected
Context - the system cannot lock the doors
Response - the system activates…
Or another one (Half of the servers go down during normal operation without affecting overall system availability):
Stimulus - half the servers go down
Context - during normal operation
Response - without affecting overall ...
The last step is prioritizing the scenarios. It is common to use two criteria (though you can use more):
- Importance to system success
- Risk/difficulty in achieving
The interesting scenarios (where you would focus) are the ones with high priority - (H,H), (H,M) and (M,H). These will be used as input for the modeling step of SAF.
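The two-criteria prioritization amounts to a simple filter. A sketch (Python; the H/M/L ratings assigned to each scenario below are illustrative, not from a real assessment):

```python
# Each scenario is rated H/M/L on the two criteria above:
# (importance to system success, risk/difficulty in achieving).
ratings = {
    "DB transaction < 100 ms under normal operation": ("H", "M"),
    "Web report within 5 seconds at peak": ("H", "H"),
    "Integrate a new component in three weeks": ("M", "H"),
    "10x hourly orders by adding hardware alone": ("M", "M"),
}

# Keep only the interesting combinations: (H,H), (H,M) and (M,H).
interesting = {("H", "H"), ("H", "M"), ("M", "H")}
focus = [name for name, pair in ratings.items() if pair in interesting]
print(focus)  # the (M,M) scenario drops out
```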
I'll try to provide samples based on my experience in one of the future posts.