
Thursday, April 7, 2011

NVA - Overproduction or Overengineering

Overproduction or overengineering is considered a non-value add and a waste. When we add unnecessary steps to a process, or make it more complicated than it needs to be, we create more chances of introducing defects into our work.


Photo Courtesy: Stephen w morris
A few examples are:
  1. Having too many slides in a presentation. You lose the audience and are unable to get the right message across.
  2. Too many and too-long reports - are you sending reports that take too long to create and that no one really looks at? Then maybe it's time to stop sending them.
  3. Creating too many copies of a report - are you customizing a report so much that you end up with multiple copies? Then look at how you can combine them. The more copies you have, the more places you have to update when something changes, and the higher the likelihood of creating defects.
  4. Rewriting steps of a test case - are you writing the same steps several times in several places? If yes, then create a test that can be called often. This will save a lot of maintenance time.
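The callable-test idea can be sketched in code as well. A hypothetical pytest-style example (the `login` helper and the dictionary standing in for the application are illustrations, not a real framework): the shared steps live in one place, so a change to the login flow is a one-place maintenance job.

```python
# Hypothetical sketch: shared steps live in one helper instead of
# being rewritten inside every test case.

def login(app, user="tester", password="secret"):
    """Common steps, written once and called by many tests."""
    app["user"] = user
    app["authenticated"] = password == "secret"
    return app

def test_view_report():
    app = login({})                 # reuse the shared steps
    assert app["authenticated"]

def test_export_report():
    app = login({}, user="lead")    # same steps, different data
    assert app["user"] == "lead"
```

If the login flow changes, only the helper changes; the tests that call it stay untouched.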

Tuesday, April 5, 2011

NVA - Defect or Rework

In my previous post I talked about customer value add and about non-value add (NVA).
D.O.W.N.T.I.M.E is the acronym I used to talk about waste. The first item in DOWNTIME is Defects or rework.

Customers don't pay us for rework. They don't expect defects and when they do find defects they are unhappy. To increase customer satisfaction we have to focus on preventing defects.

What can we do to reduce defects and rework?

Photo Courtesy: Michael Jastremski via openphoto

It's "self certification".

If I can certify my own work, I can reduce the number of defects that escape. If business analysts make sure their requirements are accurate, software engineers make sure their code is unit tested, and software testers make sure they have good coverage during testing, then fewer defects will escape to the customer.

To reduce waste and create more value for the customer, let's start looking at our work and see how we can reduce rework and defects. We can do this one step at a time.
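For the software engineering link in that chain, self certification can be as small as a unit test that runs before the code ever reaches a tester. A minimal sketch, assuming a made-up `discount` function (run it with `python -m unittest`):

```python
import unittest

def discount(price, percent):
    """Apply a percentage discount - the kind of small function
    an engineer can self-certify with a unit test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestDiscount(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(discount(100.0, 25), 75.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            discount(100.0, 150)
```

A defect caught here never becomes rework for the tester or the customer.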

Saturday, April 2, 2011

Automation - Magic Robot

Do you have any test automation for your project? If so, does your management think it's a magic robot that can do pretty much anything, and on its own at that?

We have been asked to automate our project. It has to test everything, catch all defects and need no manual oversight time. We try to explain that creating and maintaining test scripts takes real effort, and the project will have to scope in time for this every release.

A fellow blogger in the post Automation: Oh! What A Lovely Burden! talks about automation being a lovely burden.

What are some automation myths and how to manage them?
  • Automation will test everything!
    • Automating every single test is not a good investment. Some features of the product might never be used, or used very little. Spending time automating them may not give a good return on investment.
    • The more complex a test, the harder it is to automate and the harder it is to maintain. If it takes an hour to test manually during regression, then just test it manually.
  • Automation will catch all defects!
    • Again, this is a misconception. If something changes in areas that are already automated, the automation will catch it. If it's in features automation does not cover, then we won't catch it via automation. For example, an automation test might check a box and continue with the test. A defect occurs when the box is checked and then unchecked. This defect was caught by a customer because it's not a standard step that customers follow. It was a one-off scenario.
    • Automation will not catch defects in new features since they have not been automated yet.
  • Once automation is complete no more time will be needed for maintaining those tests.
    • Any change in future releases can impact automated tests. Time has to be spent maintaining these scripts. It won't take as much time as creating new tests, but it's still time testers have to spend, and it has to be planned into the project plan.
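The checkbox story can be sketched in code. In this hypothetical Python sketch (the `Checkbox` class is made up for illustration), the defect lives on the uncheck path, which the automated test never exercises - so the suite passes while the bug ships:

```python
# Hypothetical sketch of the check/uncheck example: automation only
# catches defects on the paths it actually walks.

class Checkbox:
    def __init__(self):
        self.checked = False
        self.label = "Enable alerts"

    def check(self):
        self.checked = True

    def uncheck(self):
        self.checked = False
        self.label = None          # the defect: unchecking wipes the label

def test_checkbox_can_be_checked():
    box = Checkbox()
    box.check()                    # automation checks the box and moves on
    assert box.checked             # passes, so the defect goes unnoticed
```

The customer's check-then-uncheck path would reveal the missing label, but no automated test walks it - which is exactly the coverage myth.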
Yes, automation is great, but it's not a robot and it won't do everything we would like it to do. It makes the manual testing effort easier and frees up time to explore and test features that are new or need our attention.

Friday, April 1, 2011

SOP - It's Really About Quality

Most dictionaries define Standard Operating Procedure as
  • Established procedure to be followed in carrying out a given operation or in a given situation.
  • A specific procedure or set of procedures so established.
What does SOP mean?

An SOP is a written document detailing the steps or activities for a certain process. An SOP can be created for any existing or new process. The document helps standardize the process. The goal is to do the job the same way every time we do it.

Why create SOP?
  • It details the activities that need to be performed and so there is a common understanding of the process among the people involved.
  • Someone new to the position will perform the same task the same way as someone who has been in the job.
  • It ensures the process is performed the same way on a continuous basis.
How to create SOP?
  • Start with the team that is involved in the process. Include the people who will be performing the job to gain insight and details that might otherwise get missed.
  • Document current state of the process in the sequence it occurs.
  • Document terminologies and define them so there is no ambiguity.
  • Review the document with the team and get sign off.
  • Maintain the document and review on a continuous basis.
  • Establish a system for distribution and sign off when changes occur.
Bottom line: SOPs are an integral part of creating quality systems. They provide the information to perform a job consistently and properly. To get good quality output we have to have inputs that are predictable. For example, I asked 10 non-testing people at work "What is regression testing?"

Each one had a different understanding. One person said it's "100% testing of everything." Another said "it's automated testing." We have a Software Quality Control Handbook that defines our testing terminology, but that is a document we use internally. We have also explained regression testing to some extent in our Test Plan, and the test plan is reviewed before every release. So why is there still a lack of understanding?

Well, we haven't spent the time with the team to go over the process steps. We didn't define terms in the context of the process steps within the testing group. With the Lean Six Sigma project I am currently working on, I am hoping to define the testing process and create an SOP that will make our lives a lot easier than they are today.

This will help set the right expectations for our testing process, and we can deliver a product that is tested and meets the expectations of the project team. Right now they expect us to test everything and catch all defects. Sure, we would love to do that, but then we would never have a release for any of our products.

Wednesday, March 30, 2011

The Test Story - My First Online Publication

I am really excited because one of my goals for 2011 has come true. In the post "Welcome 2011" I talked about wanting to get published.
My white paper has been published in the STP "Test & QA Report." You can read a summary at their site and also download the whole white paper.


Link to article - click here. One goal down, four more to go.

Sunday, March 13, 2011

Mistake Proofing - Poka Yoke

"Your ability to mistake-proof a process is only limited by your own lack of imagination." - Shigeo Shingo


Last week I was in week 2 of Lean Six Sigma training and learned a concept that really made me think of my job differently. In testing we sometimes look for bugs, and we also do risk-based activities: we look at the risks that can occur and how we can test or plan for them during testing.
The other side of a defect is how to mistake-proof the process so that if the defect does occur, we prevent or detect it. It's a backup for the backup - Poka Yoke.

Poka Yoke is Japanese for mistake proofing. It is the creation of devices that either prevent special causes that result in defects or inexpensively inspect each item produced to determine whether it is acceptable or defective.

When this topic was introduced in class I thought, oh, this is hard. I didn't understand it. Then the instructor gave examples. Automobile air bags - yes, this is poka yoke. If the customer does have an accident, the car helps reduce the impact. Other good examples are auto door locks, seat belt warnings, etc.

What is really happening in these cases is that a tester is being shipped with every product. He or she warns the customer when a defect is occurring or about to occur. In some cases the customer can override it. One example is a spell-check error: you can still choose to override it and use a different spelling than the one being recommended. When closing a Word file, a message asks whether you want to save the file before closing, and it's up to the customer or end user to choose one option or the other.

After the class we were given an assignment to come up with as many poka yokes as we could see around us. We all went home thinking this is hard - we don't see as many of these mistake-proofing devices as we think. The next morning, the 11 class participants came up with 200 poka yokes between us.

It's really everywhere... My favorites from the class:
  1. Garages have car clearance limits and a height check before cars go in
  2. Dryers and washers switch off when the doors are open
  3. Garage doors do not close if an object obstructs their closing path
  4. Most online applications use a drop-down for the state instead of free text
  5. You have to enter email addresses twice when signing up online
  6. ABS (anti-lock braking system)
  7. Bathroom sinks have a little hole at the top to prevent overflows
  8. Irons switch off automatically
  9. Auto-sensor lights/flushes
  10. Keys enter the keyhole only a certain way
I love this concept and will be thinking of how to use it in our day-to-day activities, be it how I write test plans, write test cases or do my testing. After all, I have to think of our customers every day and add value for them.
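The same thinking carries straight into the software we test. As a sketch of item 5 from the list (entering your email address twice), with made-up function and message names:

```python
def confirm_email(email, email_again):
    """Poka yoke: force the user to type the address twice so a
    one-off typo is caught before the form is ever submitted."""
    if email != email_again:
        raise ValueError("Email addresses do not match")
    if "@" not in email:
        raise ValueError("Not a valid email address")
    return email

# The mistake is detected at entry time, not after a failed delivery.
assert confirm_email("me@example.com", "me@example.com") == "me@example.com"
```

The device is cheap to build, runs on every input, and either prevents the defect or detects it immediately - exactly the Poka Yoke idea.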

Thursday, February 3, 2011

Updates and TFS WIT - Pilot Project

My projects at work have been keeping me away from blogging. The project I was helping with during the month of January is finally wrapped up, and I have caught up with the other project work that was on the back burner.
The thing I am most excited about right now is the implementation and rollout of TFS Visual Studio Work Item Tracking (WIT). Yes, we are going to pilot our new tracking system. Our goal with TFS is that it will eventually be a one-stop shop for all our software development activities (code, builds, work items, requirements, test cases).
Right now we are only moving our work item tracking (defects and change requests) to this system. We already have our code in TFS. Eventually we will use it for requirements and test management. We are not there yet, but moving WIT is one step closer.

This project is special to me because it has a lot of "firsts" for me. It is my first project where:
  1. I am leading the documentation process. I have a great team helping me put all the pieces together.
  2. We are doing a pilot for a process rollout. I will get a chance to learn pilot project processes, including but not limited to gathering feedback, training and supporting users, gathering metrics, etc.
  3. I will be training teams outside of our core business unit. It's a great opportunity for me to learn about our other business units and their current and future processes.
I can't wait for the first pilot project to kick off. I will be back with more on the pilot project implementation and process.

Tuesday, January 11, 2011

Rumblings From Testing Project

This past week and weekend I had to help out with a project that I used to lead two years ago. Since I handed it over there have been several changes: quite a few test leads came and went, the project manager role changed hands several times, and the project changed direction several times.

This product is part of our legacy system and a project that I loved working on. Right now, due to a lot of compliance changes, it's mandatory that they hit the release date. There is no wiggle room. They asked me if I could help, and I said sure, why not.

They assigned a couple of change requests to me and sent me the documents I would need to successfully test the changes. I came in thinking it was going to be easy peasy. Guess I was wrong.

The product was the same. Yes, it is like riding a bike: as soon as I logged into the application everything came back to me. The caveat, though, was that the processes for the project had changed. From being a successfully self-running motor vehicle, it had become a car that needed pushing all the time. I am not going to get into the details of the actual work.

This is a mini vent of things I learned from testing this product:
  • When working under pressure (especially when the focus is to hit the date), no matter what we want to think, the chances of making mistakes are higher. No one can be blamed. There is less time to review your own work, whatever your role. Requirements may be missed, code changes often impact areas that developers didn't have time to review, testing scenarios are missed, and chances are these mistakes won't be caught until the product goes to the customer. No one does this on purpose; circumstances force these situations. No, these are not excuses I am making - I am talking about human beings who have to work overtime, spending weekends and weeknights to get things done.
  • How do we handle this?
    • Teams need to take a break. Occasionally, even if we are behind, ask the team to take some time to do other stuff. Yes, we are losing time, but when people come back from the break they will be recharged.
    • Patience is an asset. Everyone is busy and being patient and polite will take you a long way.
    • Get someone outside of the team to help with testing. Support line, product managers, project managers, etc
      • Someone who can help without ramp up time.
      • They will be able to look at it with a new set of eyes and help with finding issues and ask questions that others might overlook.
    • Split the product into feature areas and get all or a few testers to test one area fully, one at a time. Wait for a complete round of testing in that area, log all issues, and then fix them at the same time so a second round of testing can wrap it up.
      • This will help gather all issues within a few hours.
      • Also coders can fix all issues at the same time instead of touching the code at various times.
  • Scope, date and quality - the three pieces of the release equation. A product can only hold two of them fixed at a time. So if the date is fixed and more issues are found, then either the date has to be moved or the scope has to be reduced (we cannot fix all found issues). The product team has to decide which item will provide the wiggle room. Can we move the date? Can the product release with known issues?
  • The whole team fails if the product does not release, or releases with poor quality, no matter who within the team was on schedule or completed their tasks on time. Project failure equals team failure.
It is easy to talk about all this now that the work is wrapping up well, and kudos to everyone on this project for having done everything they could. The true test of teamwork comes when there is a crisis.

Monday, January 3, 2011

Monday, December 27, 2010

Musings - Sherlock Holmes

A friend of mine sent me a link to PBS Masterpiece Mysteries with links to the new Sherlock Holmes series - Sherlock Holmes meets 21st-century CSI. The show is brilliant and I loved it.


When I was watching the show it reminded me of testing. Holmes's strengths are attention to detail and observation (investigation). He observes and lets his observations tell him things - things (requirements or bugs in our world) that others might easily miss, either because they were not looking for them or didn't know to look for them.

It’s the same with testing - we are observing and looking for things that are out of place (bugs) and things that should be there (requirements). When we find bugs, we should build the story. Build it: where could it have originated? What did we do to get here? What values can cause the system to break? What else can cause the system to break?

We can help the rest of the team (software engineers or business analysts) if we went a few steps further than just logging issues. We could give them as much information as possible. All we have to do is look for it, observe it and document it. We have the information (we saw the bug first, have the test cases, know the requirements, etc) and we have the system where the bug was found, let’s now go and build the story if possible by asking the questions and trying to answer them.

With the New Year right around the corner, let's try to be better investigators in 2011. Let's find the bugs before the customers do, and let's show our team we are Sherlock Holmes, the best software detective in town.

Happy New Year Friends.

Sunday, December 5, 2010

My Take: HPQC 11/ALM 11

Recently I had a chance to attend seminars for TFS Test Manager and HP Quality Center 11. It's interesting to see HPQC jumping on the bandwagon of having the complete software development life cycle in one tool. They were known for test and defect management only. Their biggest threat is TFS Test Manager, which includes project, requirements, builds, test environment management and test management all in one tool.


With QC 11/ALM 11, HP is trying to catch up to the competition. From just managing the testing pieces, they are moving into application life cycle management. This is what HPQC users have been asking them to do for a long time.

As a tester I am excited about
  1. Record and playback - yes, in QC you will finally be able to record what you are testing in real time.
  2. Dashboard that will show the scope of a release and the requirements, test cases and defects associated with each release. We used to have to run reports outside of QC to gather this information.
  3. Rich text editor - which will save a lot of time for those of us who write notes outside QC and then attach them.
  4. Test case reusability - we can reuse test cases for various configurations and assign them to multiple builds and environments.
  5. Exploratory testing and mirror testing - we can now create test scripts from recordings of our exploratory test sessions, and we can test multiple test environments at the same time using mirror test cases.

Friday, December 3, 2010

Friday's Musings - Reduce Redundancy

Where do we typically see redundancy? Test cases and test plans.

We have to try to reduce this redundancy for the following reasons:
  • Wasted time - if we are repeating the same step or steps in more than one test case, we are losing time that could be used for actual testing or other activities.
  • Painful updates - if one of the steps that is common to several test cases changes, we have to manually go through all the tests to change them.
What should we do?
  • Create callable tests or test templates when you know a set of steps will be common to quite a few test cases.
  • Create a matrix to make sure you are not over-testing. You only need tests to cover requirements, and if you can combine requirements in a test case, go ahead and do it.
  • Use automation where possible to take some of the redundancy out of the way.
  • For the test plan, if there are sections that you repeat release after release, think about pulling them out into a handbook or a master test plan where you can maintain them in one place for all releases. Have change control on the document to indicate changes when applicable.
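The matrix idea doesn't need a special tool; even a small script can flag coverage gaps and over-testing. A hypothetical sketch with made-up requirement and test-case IDs:

```python
# Hypothetical requirements-to-tests matrix: flag requirements with
# no test (a gap) and requirements with many tests (possible over-testing).

matrix = {
    "REQ-1": ["TC-01", "TC-02"],
    "REQ-2": [],                                    # gap: no coverage
    "REQ-3": ["TC-03", "TC-04", "TC-05", "TC-06"],  # possible over-testing
}

gaps = [req for req, tests in matrix.items() if not tests]
heavy = [req for req, tests in matrix.items() if len(tests) > 3]

print("Uncovered requirements:", gaps)   # ['REQ-2']
print("Possibly over-tested:", heavy)    # ['REQ-3']
```

The threshold of three tests per requirement is an arbitrary illustration; the point is that the matrix makes both kinds of waste visible at a glance.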

Thursday, November 25, 2010

Friday's Musings - Heads up you guys: sending bugs your way

As testers, when we find bugs we are trained to log them. There are industry standards out there on how to write a good bug report and what details need to be included. What we forget after we log a defect is: now what?

What I always do is call or email my software engineers. No, not to show off or toot my own horn, but to give them a "heads up". I call them if it's the right time (a few are offshore, so I wait until it's a good time for them) and tell them this is what I found, this is what I was doing, and this is what happened (especially if it's a high or critical issue).

Why do this?
  1. The heads up gives them a chance to react before the bug triage/defect review meeting. You are giving them an opportunity to research it and be ready when questions are asked. This way your team is already a step ahead in answering questions like "How critical is it?" or "How easy is it to fix?"
  2. When we give them a heads up immediately, they may ask you to look for additional information, and you will still have the system available to help them troubleshoot. If it's days later, and builds have been deployed or test data has changed, you may not be able to help them replicate defects as easily as you would want to.
  3. You are being a team player. You are not catching them by surprise in review meetings, and they will return the favor by sharing information and helping you too when needed.
  4. It gives you an opportunity to learn. Several of these calls to my developers have led to more questions and then into troubleshooting sessions. They ask for details and information, and this has helped me take notes on the things they look for in bugs. I use these tips to write better bug reports, so the next time I call to report a bug I already have answers to a few more of their questions.
  5. You are saving time. Instead of waiting until the defect gets to the review meeting, is assigned for analysis, and then waiting for another meeting to decide whether the bug has to be fixed, you are giving the team the chance to react in the first meeting itself, with information from the software engineers.
I am tagging this under Change Leader. Why? Because you can start with small changes and lead the team to do better. I see these small changes as steps toward building better teams and, as a result, better products for our customers, internal and external.

Do you have tips on saving time, building teams or helping a testing team? Please share.

Tuesday, November 23, 2010

2 Things To Make Test Leading Job Easy

As test leads we have several responsibilities: testing, managing the project, test planning, etc. You can do these two things daily to make your life more manageable and easier.

Keep a Test Journal
Clean your data everyday 

Read more at 
 http://itknowledgeexchange.techtarget.com/software-testing-big-picture/2-things-to-make-test-leading-job-easy/

Monday, November 22, 2010

My Take: Test Manager (TFS Visual Studio 2010)

I was at the Microsoft event The Full Testing Experience - Quality Assurance with Visual Studio 2010. I am pretty excited about this tool and the demo I saw. There are some cool features that I wouldn't mind using to test my application. The one thing that had been preventing me from using this tool for so long (even though I have access to it) was that the Test Manager piece didn't support Silverlight 4, and our application was on that technology.
The latest update for Test Manager now has this support, so yes, I am looking forward to using it to test.

Some pieces that I am really excited about
  1. Manual test cases - video recording. Really, if I can have a witness to all my testing, I won't have to waste time battling, answering questions, writing ever clearer notes, etc. I will have it all in my recordings.
  2. Fast forwarding - I can fast forward my test cases. Super cool.
  3. Lab Management - if I can get in hours the same test environment that takes 6-8 weeks to build, I will spend far more time testing than fighting paperwork to get the lab machines.
  4. Using manual test cases to create automated test cases - that just makes my life a lot easier. Automation and manual testing won't live in two different tools/technologies; they will build on each other.
  5. Dashboard - requirements, test cases, execution, defects: when everything is in the same system, my metrics will mean so much more in real time. I won't have to massage my data to correlate the different elements.
I will be playing with this tool for the next several months. I will add more information as I learn more. But this is one technology I am excited about.


For more information visit
http://www.teamsystemcafe.net/Resources.aspx
http://msdn.microsoft.com/en-us/events/bb944781.aspx

Sunday, November 14, 2010

My Take: Weekend Testing Americas - Session 1

I was part of the first Weekend Testing Americas session on Saturday, Nov 13, 2010, from 2-4 pm CST. Michael Larsen was the session coordinator; he moderated the session and presented the objective and mission.

You can find more information @ WTA01 - Let's Dance

There were 21 people from all over the world, including India, Canada, Israel, etc. It was nice to see so many people from different locations, with different experience and different testing ideas, come together to work on one goal - making Weekend Testing Americas a success.

The mission was to test StepMania 4 (an open source clone of Dance Dance Revolution) with a partner. We had to choose a partner within the first few minutes and then test for an hour. We regrouped after that to discuss what went well and what didn't.

I learned a couple of things:
  1. Don't forget the mission. I jumped straight into trying to test the application; the goal was to work with your partner.
  2. Testing is learning - Michael Bolton brought up a good point about how learning really is part of testing, not separate from it.
  3. Rapid Reporter - a cool tool for taking test notes, especially during exploratory testing. Thanks to Shmuel Gershon, I now have a tool I can use every day.
  4. Skype has a share option where you can share your screen. Another great tool that I will be using a lot. I used to use MSN Messenger sharing, but it has its glitches, and Live Meeting takes time to set up and is one-way.
  5. There are some good testers out there who love to share knowledge and are good mentors. I met a very enthusiastic group during this weekend testing session.

Wednesday, November 3, 2010