Saturday, April 2, 2011

Automation - Magic Robot

Do you have any test automation for your project? If so, does your management think it's a magic robot that can do pretty much anything, all on its own?

We have been asked to automate our project. It is expected to test everything, catch all defects, and need no manual oversight time. We try to explain that creating and maintaining test scripts takes real effort, and that the project will have to scope in time for this every release.

A fellow blogger, in the post Automation: Oh! What A Lovely Burden!, calls automation exactly that - a lovely burden.

What are some automation myths, and how can we manage them?
  • Automation will test everything!
    • Automating every single test is not a good investment. Some features of the product might never be used, or used very little; spending time automating them may not give a good return on investment.
    • The more complex a test, the harder it is to automate and the harder it is to maintain. If it takes an hour to test manually during regression, then just test it manually.
  • Automation will catch all defects!
    • Again, this is a misconception. If something changes in areas that are already automated, the automation will catch it. If the change is in features that automation does not cover, then we won't catch it via automation. For example, an automated test might check a box and continue with the test. A defect occurs only when the box is checked and then unchecked. That defect was caught by a customer, because check-then-uncheck is not a standard step that customers follow. It was a one-off scenario.
    • Automation will also not catch defects in new features, since they have not been automated yet.
  • Once automation is complete, no more time will be needed to maintain those tests!
    • Any change in a future release can impact automated tests, so time has to be spent maintaining these scripts. It won't take as much time as creating new tests, but it is still time that testers have to spend, and it has to be planned into the project plan.
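The checkbox scenario above can be sketched in a few lines of Python. This is purely a hypothetical illustration - the Checkbox class and the test names are invented, not from any real suite - but it shows how a scripted test stays green while a one-off path ships a defect:

```python
# Hypothetical sketch of the check-then-uncheck defect described above.
# The Checkbox class and function names are invented for illustration.

class Checkbox:
    """A widget whose 'unsaved changes' flag has a bug."""

    def __init__(self):
        self.checked = False
        self.dirty = False  # does the form think it has unsaved changes?

    def check(self):
        self.checked = True
        self.dirty = True

    def uncheck(self):
        self.checked = False
        # Bug: dirty is never reset, so after the user reverts the
        # change, the form still reports unsaved changes.

def automated_regression_test():
    """The scripted path: check the box and move on. It passes."""
    box = Checkbox()
    box.check()
    assert box.checked

def customer_scenario():
    """The one-off path the script never covers: check, then uncheck."""
    box = Checkbox()
    box.check()
    box.uncheck()
    return box.dirty  # still True - the defect the script can't see
```

The automated test passes while the customer-facing defect ships: coverage, not automation itself, decides what gets caught.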
Yes, automation is great, but it's not a robot and it won't do everything we would like it to do. It makes the manual testing effort easier and frees up time to explore and test features that are new or need our attention.

Friday, April 1, 2011

SOP - It's Really About Quality

Most dictionaries define Standard Operating Procedure as
  • Established procedure to be followed in carrying out a given operation or in a given situation.
  • A specific procedure or set of procedures so established.
What does SOP mean?

An SOP is a written document detailing the steps or activities for a certain process. An SOP can be created for any existing or new process, and the document helps standardize that process. The goal is to do the job the same way every time we do it.

Why create SOP?
  • It details the activities that need to be performed and so there is a common understanding of the process among the people involved.
  • Someone new to the position will perform the same task the same way as someone who has been in the job.
  • It ensures the process is performed the same way on a continuous basis.
How to create SOP?
  • Start with the team that is involved in the process. Include the people who will be performing the job, to gain insight and details that might otherwise get missed.
  • Document current state of the process in the sequence it occurs.
  • Document terminologies and define them so there is no ambiguity.
  • Review the document with the team and get sign off.
  • Maintain the document and review on a continuous basis.
  • Establish a system for distribution and sign off when changes occur.
Bottom line: SOPs are an integral part of creating quality systems. They provide the information needed to perform a job consistently and properly. To get good-quality output, we have to have inputs that are predictable. For example, I asked 10 non-testing people at work "What is regression testing?"

Each one had a different understanding. One person said it's "100% testing of everything". Another individual said "it's automated testing". We have a Software Quality Control Handbook that defines our testing terminology, but that is a document we use internally. We have also explained regression testing to some extent in our Test Plan, which is reviewed before every release. So why is there still a lack of understanding?

Well, we haven't spent the time with the team to go over the process steps, and we didn't define terms in the context of the process steps within the testing group. Through a Lean Six Sigma project I am currently working on, I am hoping to define the testing process and create an SOP that will make our lives a lot easier than they are today.

This will help set the right expectations for our testing processes, so we can deliver a product that is tested and meets the expectations of the project team. Right now they expect us to test everything and catch all defects. Sure, we would love to do that - and then we would never have a release for any of our products.

Wednesday, March 30, 2011

The Test Story - My First Online Publication

I am really excited because one of my goals for 2011 has come true. In the post "Welcome 2011" I talked about wanting to get published.
My white paper has been published in STP's "Test & QA Report". You can read a summary at their site and also download the whole white paper.


Link to article - click here. One goal down, four more to go.

Thursday, March 24, 2011

Ask or Answer and Get Paid

I am really excited about the latest badge rewards that IT Knowledge Exchange is offering. ITKE gives you points for every question you ask, every answer you provide, and for approving answers. The details on earning points can be found here.

Here is my previous blog on ITKE.

This is a summary view of the points
  • Ask a Question: 5 Knowledge Points
  • Answering a Question: 15 Knowledge Points
  • Discussing a Question: 10 Knowledge Points
  • Accepting an Answer: 10 Knowledge Points - approve an answer a fellow member has given to your question

The more you exchange knowledge, the more you earn. Their new reward system pays going forward and also retroactively, so if you have been active, look for emails from them. If you have not been active, this is the time to really look at how you can participate. More information can be found at

Earning Badges Pays Off - Literally!

Earning badges pays off - Today!

From here on out, prizes will be as follows:
  • Bronze Member Badge: Sticker and ITKnowledgeExchange t-shirt
  • Silver Member Badge: $25 Amazon.com Gift Card
  • Gold Member Badge: $50 Amazon.com Gift Card
  • Platinum Member Badge: $100 Amazon.com Gift Card
  • Nerd Cog: $10 Amazon.com Gift Card
  • Genius Cog: $25 Amazon.com Gift Card
  • Brainiac Cog: $50 Amazon.com Gift Card
  • Certified Nerd Cog: $10 Amazon.com Gift Card
  • Certified Genius Cog: $25 Amazon.com Gift Card
  • Certified Brainiac Cog: $50 Amazon.com Gift Card

If you have not checked out IT Knowledge Exchange, now is the time to start getting active in asking or answering questions.

Sunday, March 13, 2011

Mistake Proofing - Poka Yoke

"Your ability to mistake-proof a process is only limited by your own lack of imagination." - Shigeo Shingo


Last week I was at week 2 of Lean Six Sigma training and learned a concept that really made me think about my job differently. In testing we at times look for bugs, and we also do risk-based activities: we look at the risks that can occur and how we can test or plan for them during testing.
The other side of a defect is mistake-proofing it in such a way that, if the defect does occur, we prevent or detect it. It's a backup for a backup - Poka Yoke.

Poka Yoke is Japanese for mistake proofing. It is the creation of devices that either prevent special causes that result in defects or inexpensively inspect each item produced to determine whether it is acceptable or defective.

When this topic was introduced in class I thought, oh, this is hard - I didn't understand it. Then the instructor gave examples. Automobile air bags - yes, this is poka yoke: if the customer does have an accident, the car helps reduce the impact. Other good examples are auto door locks, seat belt warnings, etc.

What is really happening in these cases is that a tester is being shipped with every product. He or she warns the customer when a defect is occurring or about to occur. In some cases the customer can override it. One example is a spell-check error: you can still choose to override it and use a different spelling than what is being recommended. When closing a Word file, a message asks whether you want to save the file before closing, and it's up to the customer or end user to choose one option or the other.
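The spell check and save prompt are "soft" poka yokes: the mistake is detected, but the user may override. A minimal sketch in Python - the word list and function name are invented for illustration, not a real API:

```python
# Illustrative sketch of an overridable (detect-and-warn) poka yoke.
# KNOWN_WORDS and spell_check are invented names, not a real library.

KNOWN_WORDS = {"testing", "quality", "regression"}

def spell_check(word, user_confirms_override):
    """Accept known words; warn on unknown ones, but let the user
    keep the spelling if they explicitly confirm the override."""
    if word.lower() in KNOWN_WORDS:
        return word  # no mistake detected
    if user_confirms_override(word):
        return word  # user overrides the warning, as in a spell checker
    raise ValueError("unrecognized word rejected: " + word)
```

A "hard" poka yoke - like a key that only fits the keyhole one way - would simply omit the override branch and refuse the input outright.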

After the class was completed we were given an assignment: come up with as many poka yokes as we can see around us. We all went home thinking this is hard - we don't see as many of these mistake-proofing devices as we think. The next morning, the 11 class participants came up with 200 poka yokes between us.

It's really everywhere... My favorites from the class:
  1. Parking garages have car clearance limits and a height check bar before cars enter
  2. Dryers and washers switch off when their doors are open
  3. Garage doors do not close if an object obstructs their closing path
  4. Most online applications use a drop-down for the state field instead of free text
  5. You have to enter your email address twice when signing up online
  6. ABS (anti-lock braking system)
  7. Bathroom sinks have a little hole near the top to prevent overflows
  8. Irons switch off automatically
  9. Auto sensor lights/flushes
  10. Keys enter the keyhole only a certain way
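The enter-your-email-twice item translates directly into software. A minimal sketch, assuming a simple sign-up form (the function name is mine, not from any real framework): asking for the address twice keeps a one-field typo from slipping through.

```python
# Minimal software poka yoke: confirm-your-email validation.
# The function name is illustrative, not from a real framework.

def emails_match(email, email_again):
    """Return True only when both entries agree (ignoring case and
    surrounding whitespace), so a typo in one field is caught."""
    return email.strip().lower() == email_again.strip().lower()
```

A mismatch does not tell us which field holds the typo - it only forces the user to look again before the mistake becomes a defect, which is exactly the point.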
I love this concept and will be thinking about how to use it in our day-to-day activities, be it how I write test plans, write test cases, or do my testing. After all, I have to think of our customers every day and add value for them.

Friday, February 18, 2011

Pilot Projects Process

We recently rolled out Team Foundation Server Work Item Tracking at work to a pilot project. During the discussions we were asked what we wanted from the pilot project, and this made me think about how pilot projects are created and what the expectations are.
To start with, there always has to be a business statement or hypothesis. For TFS it was:
  1. There will be more direct visibility into code changes through the work items related to the project.
  2. We will eliminate various tracking systems (ClearQuest, the SR System, worksheets, the Quality Center Defect module, etc.) and save time by no longer doing duplicate entries.
One of our biggest goals was to consolidate the different tracking systems. We had more than 5 tools, and each project also had its own little system, like a worksheet or a SharePoint issue log. We wanted to make sure our tool would fit all these needs.

We then put a plan together for this project to make sure we had a timeline, and also to set expectations for the pilot team.
  1. Project X will start using TFS work items to track bugs, change requests and tasks.
  2. The pilot will last 4 weeks after which a decision will be made on the roll out process for other projects within the organization.
  3. Support will be provided during these 4 weeks for questions regarding process, tool, and technology.
  4. Feedback will be gathered via emails, surveys and interviews.

We then had to tell the team how the feedback would be gathered.
Feedback will be gathered during these 4 weeks around the following areas:
  1. Training
  2. User guide and documentation
  3. Use of tool
  4. Technical adaptability
  5. Advantages and disadvantages
  6. Gaps in process within the tool
Our final section of the plan was capturing and reporting findings.
  1. We will validate our hypothesis and present our reports to management.
  2. The data gathered during the pilot will be presented at the end of the 4 weeks with our findings.
  3. If changes are proposed to the workflow or tool, they will be presented to the steering committee for approval.
So now we are busy gathering the feedback and data for reporting on our pilot project status.

Thursday, February 3, 2011

Updates and TFS WIT - Pilot Project

My projects at work have been keeping me away from blogging. The project I was helping with during January is finally wrapped up, and I have caught up with the other project work that was on the back burner.
The thing I am most excited about right now is the implementation and roll out of TFS Visual Studio Work Item Tracking (WIT). Yes, we are going to pilot our new tracking system. Our goal is that TFS will eventually be a one-stop shop for all our software development activities (code, builds, work items, requirements, test cases).
Right now we are only moving our work item tracking (defects and change requests) to this system. We already have our code in TFS. Eventually we will use it for requirements and test management as well. We are not there yet, but moving WIT is one step closer.

This project is special to me because it has a lot of "firsts" for me. It is my first project where:
  1. I am leading the documentation process. I have a great team helping me put all the pieces together.
  2. We are doing a pilot for a process roll out. I will get a chance to learn pilot project processes, including but not limited to gathering feedback, training and supporting users, gathering metrics, etc.
  3. I will be training teams outside of our core business unit. It's a great opportunity for me to learn about our other business units and their current and future processes.
I can't wait for the first pilot project to kick off. I will be back with more on the pilot project implementation and process.