Tuesday, November 30, 2010

Scrum Rules

Here is a list of the "rules" that any agile (Scrum) team needs to follow. These rules are categorized into Required Rules, Basic Rules, and Optional Rules.

Required Rules to Start – the “Agile Skeleton”:

  • Full-Time ScrumMaster Identified and Team Members Available to Do Work
  • Team Agrees to Demonstrate Working Software in No More Than 30 Days
  • Stakeholders Invited to Demonstration

Basic Rules of Scrum to Implement As Soon As Possible:
  • ScrumMaster Ensures “Required” and “Basic” Rules Followed
  • Full-Time Product Owner (with Expertise and Authority) Identified
  • Cross-Functional Team Including ScrumMaster and Product Owner
  • Team Size 7 +/-2, Maximum of 12
  • Product Owner Works With Team and All Other Stakeholders
  • Product Backlog Created and Managed by Product Owner
  • Daily Scrum Meeting with 3 Questions (Completed? Will Complete? Obstacles?)
  • Daily Scrum at Same Place and Time and Less Than 15 Minutes
  • All Team Members Required at Daily Scrum
  • Anyone Can Observe Daily Scrum, but Not Participate
  • Sprint Length No More Than 30 Days, and Consistently Same Length
  • Sprint Planning Meeting with Whole Team
  • First Part of Sprint Planning: Product Backlog Items Selected by Team
  • Second Part of Sprint Planning: Team Creates Sprint Backlog of Estimated Tasks
  • Sprint Backlog Tasks Added/Updated/Removed by Team
  • Sprint Burndown Chart
  • Retrospective Meeting with Whole Team for Process Improvements
  • Definition of “Done”
  • Commitment Velocity Calculated (from Sprint Backlog Estimates) – see the sketch after this list
  • Team Members Volunteer for Tasks, 1 Task at a Time Until Complete
  • Team can Seek Advice, Help, Info
  • ScrumMaster Tracking and Removing Obstacles
  • No Interruptions, Advice about or Reprioritization of Team's Work During Sprints
  • No “Break” Between Sprints
  • Sustainable Pace – Timebox Effort, Not Just Schedule
  • Quality is Not Negotiable – Defects Go on Top of Product Backlog
  • Sprint Planning and Review Meetings 1/20th Sprint Duration
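
As a rough illustration of the sprint burndown and commitment velocity rules above, here is a minimal sketch; the function names and sample numbers are my own, not part of any Scrum standard:

```python
# Minimal sketch (not from the Scrum Guide): commitment velocity is the
# total of the task estimates the team committed to in Sprint Planning,
# and a burndown chart plots remaining work per day of the sprint.

def commitment_velocity(sprint_backlog_estimates):
    """Total of the estimated tasks (here, in ideal hours) in the Sprint Backlog."""
    return sum(sprint_backlog_estimates)

def burndown(total_committed, remaining_by_day):
    """Return (day, remaining) pairs suitable for a burndown chart."""
    return list(enumerate([total_committed] + remaining_by_day))

estimates = [8, 5, 13, 3, 8, 5]   # hypothetical task estimates from Sprint Planning
committed = commitment_velocity(estimates)          # 42 ideal hours
remaining = [40, 34, 30, 27, 20, 16, 12, 8, 3, 0]   # hypothetical end-of-day totals
for day, hours_left in burndown(committed, remaining):
    print(f"day {day:2d}: {hours_left} hours remaining")
```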

Optional Rules of Scrum to Implement Depending on Context:
  • Team Room with All Needed Equipment and Supplies
  • Test Driven Work and Continuous Integration
  • User Stories as Product Backlog Items ("As a <role>, I can <action>, so that <benefit>")
  • Project/Release Burndown Chart
  • Planning Velocity Calculated (from Product Backlog Estimates)
  • Scrum of Scrums for Multiple Teams
  • Canceling the Sprint Early
  • Financial Modeling for Product Backlog
  • Sprint Backlog Tasks on Big Visible Chart on Wall
  • Backup Product Owner Identified
  • Team of Volunteers – Self-Selecting
  • Rotate the ScrumMaster Duties
~SA

Thursday, October 28, 2010

Affinity Estimating - A Better Way of Estimating User Stories in Agile

Affinity Estimating by Lowell Lindstrom:

Affinity Estimating is a technique many teams use to quickly and easily estimate (in Story Points) a large number of user stories. This is a great technique if you’re just starting a project and have a backlog that hasn’t been estimated yet.

In this technique, stories are read out to the whole team, and the team is then asked to arrange the stories horizontally on a wall in order of size, without talking.
Place the largest stories on the left and the smallest stories on the right. This only takes a few minutes. The team then gets a final opportunity to make adjustments to the ordering, again without talking.
Next, place some Fibonacci numbers (described by Mike Cohn in "Agile Estimating and Planning") above the row of stories, and group the user stories around the nearest number.
Using this technique, a team can estimate a good number of user stories in just a few minutes.
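
As a rough sketch of that final grouping step, here is one way it might look in code; the story names, sizes, and helper function are invented for illustration:

```python
# Illustrative sketch: the final step of Affinity Estimating snaps each
# story's relative size to the nearest Fibonacci story-point value.

FIBONACCI = [1, 2, 3, 5, 8, 13, 21]

def nearest_fib(relative_size):
    """Return the Fibonacci story-point value closest to a relative size."""
    return min(FIBONACCI, key=lambda f: abs(f - relative_size))

# Hypothetical relative sizes produced by the silent ordering on the wall.
stories = {"login page": 2.6, "search": 7, "reporting": 18, "logout": 1}
estimates = {name: nearest_fib(size) for name, size in stories.items()}
print(estimates)  # {'login page': 3, 'search': 8, 'reporting': 21, 'logout': 1}
```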

In agile, many teams use "Planning Poker", a widely prevalent, consensus-based estimating technique, but that way of estimating can take a lot of time. Affinity Estimating, by contrast, is very quick and transparent.

In Kane Mar's words: "I loved this (Affinity) estimating technique for a number of reasons: it's quick and easy; it feels very natural; and the entire decision-making process is made very visible. Finally, Affinity Estimating helps make estimating a positive experience rather than a confrontational one."

~SA

Thursday, September 30, 2010

Agile Modeling

Agile Modeling (AM) is a practice-based methodology for effective modeling of software-based systems. Where the Unified Modeling Language (UML) defines a subset of the modeling techniques that software professionals require, AM defines practices that enable developers to model in an efficient and effective manner. This paper provides a brief overview of AM's values, principles, and practices; defines what agile models are; and summarizes the scope of AM.

When is a Model Agile?
To understand AM you need to understand the difference between a model and an agile model. A model is an abstraction that describes one or more aspects of a problem or a potential solution addressing a problem. Traditionally, models are thought of as zero or more diagrams plus any corresponding documentation. However, non-visual artifacts such as collections of CRC cards, a textual description of one or more business rules, or a structured-English description of a business process are also considered to be models. An agile model is a model that is just barely good enough. But how do you know when a model is good enough? Agile models are good enough when they exhibit the following traits:
(1) They fulfill their purpose and no more.
(2) They are understandable.
(3) They are sufficiently accurate.
(4) They are sufficiently consistent.
(5) They are sufficiently detailed.
(6) They provide positive value.
(7) They are as simple as possible.

What Is(n’t) Agile Modeling?
I am a firm believer that when you are describing the scope of something, be it a system or, in the case of AM, a methodology, you should describe both what it is and what it isn't. The following points describe the scope of AM:
(1) AM is an attitude, not a prescriptive process.
(2) AM is a supplement to existing methods; it is not a complete software development methodology.
(3) AM is a way to work together effectively to meet the needs of project stakeholders.
(4) AM is effective and is about being effective.
(5) AM is something that works in practice; it isn't an academic theory.
(6) AM is not a silver bullet.
(7) AM is for the average developer, but it is not a replacement for competent people.
(8) AM is not an attack on documentation.
(9) AM is not an attack on CASE tools.
(10) AM is not for everyone.
(11) AM is complementary to the UML.

Ta,
~SA

Tuesday, August 10, 2010

Difference Between Agile Themes, Epics and User Stories

People often get confused about the difference between Agile Themes, Epics and User Stories.

Here's a simple explanation of what they are, and a diagram showing how they relate to one another.

Agile Themes

A Theme is a top-level objective that may span projects and products. Themes may be broken down into sub-themes, which are more likely to be product-specific. In its most granular form, a Theme may be an Epic.

Themes can be used at both Programme and Project Level to drive strategic alignment and communicate a clear direction.

Agile Epics

An Agile Epic is a group of related User Stories. You would be unlikely to introduce an Epic into a sprint without first breaking it down into its component User Stories, so as to reduce uncertainty.

Epics can also be used at both Programme and Project Level – read more about using Epic Boards to manage programmes and projects.

Agile User Stories

A User Story is an Independent, Negotiable, Valuable, Estimatable, Small, Testable requirement (the "INVEST" acronym). Despite being Independent, i.e. having no direct dependencies on other requirements, User Stories may be clustered into Epics when represented on a Product Roadmap.

User Stories are great for Development Teams and Product Managers as they are easy to understand, discuss and prioritise – they are more commonly used at Sprint-level. User Stories will often be broken down into Tasks during the Sprint Planning Process – that is unless the stories are small enough to consume on their own.
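
To make the hierarchy concrete, here is a minimal sketch of one way it could be modeled in code; the class and field names are my own and purely illustrative:

```python
# Illustrative sketch of the Theme -> Epic -> User Story -> Task hierarchy.
# All class and field names are invented for this example.
from dataclasses import dataclass, field

@dataclass
class Task:
    description: str            # created during Sprint Planning

@dataclass
class UserStory:
    title: str                  # e.g. "As a shopper, I can pay by card"
    tasks: list[Task] = field(default_factory=list)

@dataclass
class Epic:                     # a group of related User Stories
    name: str
    stories: list[UserStory] = field(default_factory=list)

@dataclass
class Theme:                    # top-level objective, may span products
    objective: str
    epics: list[Epic] = field(default_factory=list)

checkout = Epic("Checkout", [UserStory("As a shopper, I can pay by card")])
theme = Theme("Increase online revenue", [checkout])
```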

[Figure: The Hierarchy of Agile Requirement Formats - Themes, Epics, User Stories, Tasks]

Monday, July 19, 2010

Scrum Master vs Product Owner

The most common confusion among the roles in Agile/Scrum is between the Scrum Master and the Product Owner.

The Scrum Master is in charge of making sure the Scrum process is followed. The Product Owner (PO) is the person in charge of making sure the product succeeds in the direction the company wants to go.

The PO does this by setting the priorities of product backlog items and making sure all needed parties are there for both sprint planning and the sprint review (both are very important). If something occurs and either the scope explodes or decisions need to be made, the PO is the person who supplies the answers (whether they personally make the decision or not). I like to think of the Product Owner as the prioritizer and the proxy between IT and the business/customer. Another common term for that person is the "single wringable neck", because if the company isn't succeeding with that product, it's probably a prioritization issue...and who is in charge of that :-).

The Scrum Master makes sure the process gets followed and works through the adaptation of Scrum to your organization. By doing this, they help to create a high-performance team or teams. There are many "styles" for achieving this, but the goal is the same.

Ta,
~SA

Monday, June 21, 2010

Metrics Used In Testing

The Product Quality Measures -
1. Customer satisfaction index,
2. Delivered defect quantities,
3. Responsiveness (turnaround time) to users,
4. Product volatility,
5. Defect ratios,
6. Defect removal efficiency,
7. Complexity of delivered product,
8. Test coverage,
9. Cost of defects,
10. Costs of quality activities,
11. Re-work,
12. Reliability.

Each of these is described below, followed by metrics for evaluating application system testing.


1. Customer satisfaction index

This index is surveyed before and after product delivery (and on an ongoing periodic basis, using standard questionnaires). The following are analyzed:

* Number of system enhancement requests per year
* Number of maintenance fix requests per year
* User friendliness: call volume to customer service hotline
* User friendliness: training time per new user
* Number of product recalls or fix releases (software vendors)
* Number of production re-runs (in-house information systems groups)

2. Delivered defect quantities

These are normalized per function point (or per LOC) at product delivery (first 3 months or first year of operation) or ongoing (per year of operation), by level of severity and by category or cause, e.g. requirements defect, design defect, code defect, documentation/online help defect, defect introduced by fixes, etc.

3. Responsiveness (turnaround time) to users

* Turnaround time for defect fixes, by level of severity
* Time for minor vs. major enhancements; actual vs. planned elapsed time

4. Product volatility

* Ratio of maintenance fixes (to repair the system & bring it into compliance with specifications), vs. enhancement requests (requests by users to enhance or change functionality)

5. Defect ratios

* Defects found after product delivery per function point.
* Defects found after product delivery per LOC
* Ratio of pre-delivery defects to annual post-delivery defects
* Defects per function point of the system modifications

6. Defect removal efficiency

* Number of post-release defects (found by clients in field operation), categorized by level of severity
* Ratio of defects found internally prior to release (via inspections and testing), as a percentage of all defects
* All defects include defects found internally plus externally (by customers) in the first year after product delivery

7. Complexity of delivered product

* McCabe's cyclomatic complexity counts across the system
* Halstead’s measure
* Card's design complexity measures
* Predicted defects and maintenance costs, based on complexity measures

8. Test coverage

* Breadth of functional coverage
* Percentage of paths, branches or conditions that were actually tested
* Percentage by criticality level: perceived level of risk of paths
* The ratio of the number of detected faults to the number of predicted faults.

9. Cost of defects

* Business losses per defect that occurs during operation
* Business interruption costs; costs of work-arounds
* Lost sales and lost goodwill
* Litigation costs resulting from defects
* Annual maintenance cost (per function point)
* Annual operating cost (per function point)
* Measurable damage to your boss's career

10. Costs of quality activities

* Costs of reviews, inspections and preventive measures
* Costs of test planning and preparation
* Costs of test execution, defect tracking, version and change control
* Costs of diagnostics, debugging and fixing
* Costs of tools and tool support
* Costs of test case library maintenance
* Costs of testing & QA education associated with the product
* Costs of monitoring and oversight by the QA organization (if separate from the development and test organizations)

11. Re-work

* Re-work effort (hours, as a percentage of the original coding hours)
* Re-worked LOC (source lines of code, as a percentage of the total delivered LOC)
* Re-worked software components (as a percentage of the total delivered components)

12. Reliability

* Availability (percentage of time a system is available, versus the time the system is needed to be available)
* Mean time between failure (MTBF).
* Mean time to repair (MTTR)
* Reliability ratio (MTBF / MTTR)
* Number of product recalls or fix releases
* Number of production re-runs as a ratio of production runs

Metrics for Evaluating Application System Testing:

Metric = Formula

Test Coverage = Number of units (KLOC/FP) tested / total size of the system (KLOC = thousands of lines of code; FP = function points)

Number of tests per unit size = Number of test cases per KLOC/FP

Acceptance criteria tested = Acceptance criteria tested / total acceptance criteria

Defects per size = Defects detected / system size

Test cost (in %) = Cost of testing / total cost *100

Cost to locate defect = Cost of testing / the number of defects located

Achieving Budget = Actual cost of testing / Budgeted cost of testing

Defects detected in testing = Defects detected in testing / total system defects

Defects detected in production = Defects detected in production/system size

Quality of Testing = No of defects found during Testing/(No of defects found during testing + No of acceptance defects found after delivery) *100

Effectiveness of testing to business = Loss due to problems / total resources processed by the system.

System complaints = Number of third party complaints / number of transactions processed

Scale of Ten = Assessment of testing by giving rating in scale of 1 to 10

Source Code Analysis = Number of source code statements changed / total number of tests.

Effort Productivity:

Test Planning Productivity = No. of test cases designed / Actual effort for design and documentation

Test Execution Productivity = No. of test cycles executed / Actual effort for testing
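
To make a few of these formulas concrete, here is a minimal sketch with invented sample figures:

```python
# Illustrative calculations for a few of the metrics above,
# using invented sample figures.

defects_in_testing = 45
acceptance_defects_after_delivery = 5
cost_of_testing = 20_000.0
total_project_cost = 100_000.0

# Quality of testing = defects found during testing /
#   (defects found during testing + acceptance defects found after delivery) * 100
quality_of_testing = defects_in_testing / (
    defects_in_testing + acceptance_defects_after_delivery) * 100

# Test cost (%) = cost of testing / total cost * 100
test_cost_pct = cost_of_testing / total_project_cost * 100

# Cost to locate a defect = cost of testing / number of defects located
cost_per_defect = cost_of_testing / defects_in_testing

print(f"quality of testing: {quality_of_testing:.1f}%")   # 90.0%
print(f"test cost: {test_cost_pct:.1f}%")                 # 20.0%
print(f"cost to locate a defect: {cost_per_defect:.2f}")  # 444.44
```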

Wednesday, May 26, 2010

QA dilemmas in Agile environment

I have heard many times that practicing agile raises some dilemmas when trying to maintain QA standards.
A few examples are mentioned below:

1. Little documentation, and lots of changes imposed through direct verbal discussion between the product manager and developers, make it hard for QA to write their test documents and keep them updated.

2. The "team" needs to deliver the product on time, yet lots of tasks are completed simultaneously at the last moment, with little consideration for the amount of time QA needs to give a green light. At this stage QA is alone, and the "team" is busy with future tasks.

Reading the above points, the first thing that comes to my mind is that this is a waterfall project disguised as agile. A number of things stand out that are not in alignment with an agile project:

• Little documentation – Agile does not mandate little documentation. It says to use the type and amount of documentation you need, but no more. If the documents to which you are referring are critical to the completion of the iteration, then the time necessary needs to be accounted for in your iteration planning. The idea, though, is that documents should capture the results of a discussion; the discussion is where the team does most of its learning, with the documents kept as a reference when necessary. The team should ask themselves what value the documents are adding and whether they are worth the time they take. Can the detail in the documents be reduced? One thing a team I am working with discovered was that their test cases were WAY too detailed: they contained every single step a tester needed to go through. When asked why they needed to detail every step for a system they all knew very well, the testers said, "we don't need this detail, we are only doing it because our process said we had to". The test cases were shrunk to about 20% of their original size.
• The last point brings up a number of items that sound very waterfall-like:
o QA left alone and “Team” is busy with future tasks – There is one team on an agile project. Until the iteration is done no one on the team should be working on future tasks. One of the main focuses of agile projects is to get one thing completely done before moving on to the next. What you described sounds like a classic waterfall approach. Why aren’t the other members of the team assisting with the testing to complete the iteration?
o Tasks are completed at the last moment – agile projects are feature focused rather than task focused. If everything being developed is not being tested until the end of the iteration, then you are following a sequential process, and it is highly unlikely that you will realize the benefits associated with agile. You ideally want to be testing features throughout the entire iteration. This requires planning and good selection of items from the backlog to ensure that you will have a flow of feature completion throughout the iteration.
o QA giving the green light – Why is QA giving the green light? There should be a definition of done that defines the criteria for a backlog item being done. The definition of done should be used during planning to make sure the tasks necessary to cover all the criteria are accounted for.
• Getting teams to work in unison rather than in a sequential manner is hard, but it is critical to achieving the benefits of an agile approach.

~SA

Wednesday, April 21, 2010

User Stories Vs. Tasks

User stories are one of the primary development artifacts for Scrum and Extreme Programming (XP) project teams. A user story is a very high-level definition of a requirement, containing just enough information so that the developers can produce a reasonable estimate of the effort to implement it.

A Task is a simple description of how to do some bit of work towards completing an item in the Work Item List. However, there are some important things to remember when using Tasks.

Normally, an item from the Work Item List is broken down into multiple Tasks. These Tasks are all the things that need to be done or built to get the item into a deliverable state.

In general, the process of creating a bundle of Tasks for a given Work Item is a design or analysis process. It is a problem-solving process. The Tasks themselves represent the solution: the building blocks of the structure of the Work Item. Tasks do not normally represent the problem solving or analysis process.

Writing stories should be a separate activity from the tasking of the stories themselves. The story should be about what specifically you want your product to do and clearly explain the value of this feature. Together with the scrum team you should clearly define the acceptance criteria of the story. Only after these are defined should the story be tasked out by the contributors who will be doing the work. They get to decide how to best get it done.

The process of breaking down a user story is important because it helps me think about how I'm going to build the functionality. Many people disaggregate a user story into tasks and then estimate them (usually in ideal time) because they're smaller units of work and can be estimated with less inaccuracy. Then they total the task estimates to obtain a better indication of how long it will take to complete the user story. Tracking each task's remaining time feels like micro-management, so I don't do it anymore. I'm only interested in tracking the number of running tested features. I want to know how many user stories are passing all their FIT tests.
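
As a tiny sketch of that totaling step, with invented story names and numbers:

```python
# Invented example: total the task estimates (in ideal hours) for each
# user story, and count "running tested features" (stories passing all tests).

stories = {
    "As a user, I can reset my password": {
        "tasks": {"design form": 2, "write service": 4, "write tests": 3},
        "all_tests_passing": True,
    },
    "As an admin, I can export reports": {
        "tasks": {"build query": 5, "format CSV": 3},
        "all_tests_passing": False,
    },
}

for title, story in stories.items():
    total = sum(story["tasks"].values())
    print(f"{title}: {total} ideal hours")

running_tested = sum(s["all_tests_passing"] for s in stories.values())
print(f"running tested features: {running_tested}")  # 1
```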

~SA

Monday, April 12, 2010

Agile estimation – Story Points vs. Hours

The most common debate (or rather, amusement) among software engineers on an agile team is the claim that hours (or days), not story points, are the better way to estimate user stories. I disagree.

Developers are, in general, more aware of the potential complexities that they can run into in the process of implementing a story. Despite this cognizance, it is often hard to predict exactly what these complexities might be. Obviously, if a developer knew the exact nature of the issues he would run into, he could account for those and predict exactly how much time the work would take. Since this knowledge can never be complete, no developer can determine the exact amount of time needed.

Further, depending on the specific process being used in a given team, it is possible that the developer(s) who estimated a given story is not the one who ends up actually doing the implementation. Different developers have different skill levels and differing amounts of domain-knowledge. This also contributes to the variance in the actual time taken for implementation.

Finally, there are things that developers have no control over – the so called external factors – source-control going down or behaving erratically, having to attend meetings, or having to help another team or another developer on something, and so forth. Someone with critical knowledge could be out on vacation or a key developer could fall sick.

Let's represent the actual time taken by a developer to complete a story with the following formula:
A = function(C, U, E, S, K)
where
A = actual time
C = an idea of complexity (how difficult something is, and how many tasks there may be),
U = unpleasant technical surprises,
E = external factors,
S = implementing developer skill level,
K = available domain knowledge

Clearly, the only thing that can be “estimated” here is C – an idea of complexity (also based on the “understanding” of the team – whatever that is, and however it can be quantified). Let’s say that we measure this with complexity points (or story points). If we assume that everything else will get averaged out over the duration of the project (which it usually does), then it all comes together quite nicely. This is why story point estimation works.
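
To illustrate the "averaging out" claim, here is a small simulation sketch; the distributions and constants are entirely invented:

```python
# Invented simulation: actual time = complexity (story points) multiplied by
# noise standing in for U, E, S, and K. Per-story cost per point is erratic,
# but the project-level average stabilizes, which is what velocity relies on.
import random

random.seed(42)

HOURS_PER_POINT = 3.0   # hypothetical underlying cost of one point of complexity

def actual_hours(story_points):
    noise = random.uniform(0.5, 2.0)  # surprises, interruptions, skill, knowledge
    return story_points * HOURS_PER_POINT * noise

points = [random.choice([1, 2, 3, 5, 8]) for _ in range(100)]
hours = [actual_hours(p) for p in points]

ratios = [h / p for h, p in zip(hours, points)]
print(min(ratios), max(ratios))   # per-story: anywhere from ~1.5 to ~6.0
print(sum(hours) / sum(points))   # overall: close to 3.0 * 1.25 = 3.75
```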

On the other hand, trying to estimate using hours is like trying to guess the effect of the other four (and possibly more) factors. Trying to estimate in hours is like trying to estimate both the effort needed due to complexity AND the velocity of the people working on the story.

Finally, if I hear someone say that they have a capacity of 200 real hours, that they're signing up for 300 hours of work, and that they end up completing 250, then in my mind the 250 hours of work that got done (which can't be squeezed into 200 real hours) might as well be 250 story points. In fact, you can scale the numbers up by a factor of 10 so that the team still had 200 real hours (time doesn't change) but signed up for 3000 points and got 2500 done; it will not change a thing. Story points are story points. The team can call them "ideal hours" or "estimate hours" or whatever they like. As long as they're not real hours, they're just a rose by another name...

~SA

Tuesday, April 6, 2010

Instant Gratification To My Dear Pa

It's true what everyone says. When you lose someone who was so close to you, and so involved in your life, the pain never really goes away. I lost my father last month, and I still feel like he is on a perpetual vacation. He was a very nice person: not an alcoholic, not abusive, and not a bad father. He was actually the best father he could have been. He loved me so much, and I loved him too.

There is not a day that I do not think of him or wonder what he might think of me if he were still here; but by taking the best of his qualities and learning lessons from his life, I like to think that I have kept the flame of his spirit dancing within the halls of my own being. In that sense, he has never left, and he will continue to accompany me during the remainder of my own limited time on this planet.

Thanks Pa for giving me life & everything in this life. I'll miss you always.

Tuesday, February 23, 2010

Business Process Interoperability (BPI) & Business Process Testing (BPT)

BUSINESS PROCESS INTEROPERABILITY (BPI):

Business process interoperability (BPI) is a property referring to the ability of diverse business processes to work together, that is, to "inter-operate".

It is a state that exists when a business process can meet a specific objective automatically, utilizing only essential human labor. Typically, BPI is present when a process conforms to standards that enable it to achieve its objective regardless of the ownership, location, make, version or design of the computer systems used.

BUSINESS PROCESS TESTING (BPT):

Business process testing is the testing of the full business process, from the start of a transaction (which might be a telephone call) through to its completion (which might be the receipt of payment for an invoice after goods have been shipped).

Testing the full business process ensures that the system is really going to work. There are many examples of excellent software systems being implemented, where the supporting processes don’t work and undermine the effectiveness of the software (for example, e-commerce sites work perfectly, but the delivery companies don’t deliver when expected and ruin the customer experience).

Where the implementation of the system requires a lot of interaction between the business and the system(s) then a dress rehearsal is highly recommended.

~SA

Friday, January 29, 2010

Agile Testing Book

Agile Testing: A Practical Guide for Testers and Agile Teams is one of the best books by agile testing gurus Lisa Crispin and Janet Gregory. If you are interested in learning about agile testing, this is a "must read" book.

~SA