Tuesday, June 28, 2016

We want to have QA in our company! ookkkk

A friend of mine who is new to the IT industry called me today with a question.
“We’re a new startup and we want to set up QA in our company. What are the things we need to know?”

Oops, I did give an answer, but I worried that I had confused the poor guy. Then I did the obvious thing and Googled to see if someone else had written something that would sum up what I wanted to say. There was just too much information, but I couldn’t find what I wanted.
So here I go, writing my own answer.
---------------------------------------------------------------------------------------------------------
First, be open to the idea that there is no clear-cut, standardized process for how the QA team/department has to function. It varies from company to company, from project to project, and even over time.

As a small start-up you can let it grow organically, based on what needs to be tested and the expectations of your customers. Start small with Excel and a few open-source tools.
You MUST have a defect/bug tracking system. I’ve used Jira (commercial), Bugzilla (open source) and even an in-house developed tool.

Jira is not only for defects; it covers the whole agile cycle. You can capture requirements, assign tasks to everyone on the team (including, but not limited to, bugs), link it with other systems, and it gives you a ton of features.

Test case writing starts from the beginning. I’ve seen QA teams use Excel sheets to capture test cases. They break down the requirements into individual use cases. Test cases have to be precise and detailed: you need to specifically mention the input data, the steps to follow, and the expected outcome. Doing this usually helps clear up any issues with the requirements as well. (In Jira, we can write the requirements in the above format, so there is no separate requirement document or test case document. There are “stories” entered in Jira; developers implement them, and QAs test and enter bugs against these stories. It’s a very agile approach, which we tried out recently.)
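To make the “input data plus expected outcome” idea concrete, here is a minimal sketch of test cases captured as structured data and run in a loop. Everything here is made up for illustration: `discount_price()` is a hypothetical function under test, and the field names mirror the columns you might keep in an Excel sheet.

```python
def discount_price(price, percent):
    """Hypothetical function under test: apply a percentage discount."""
    return round(price * (100 - percent) / 100, 2)

# Each test case spells out its id, the input data and the expected outcome,
# just as a row in an Excel test-case sheet would.
TEST_CASES = [
    {"id": "TC-01", "input": (100.0, 10), "expected": 90.0},
    {"id": "TC-02", "input": (50.0, 0),   "expected": 50.0},
    {"id": "TC-03", "input": (80.0, 25),  "expected": 60.0},
]

def run_test_cases():
    """Execute every case and record PASS/FAIL per test-case id."""
    results = {}
    for case in TEST_CASES:
        actual = discount_price(*case["input"])
        results[case["id"]] = "PASS" if actual == case["expected"] else "FAIL"
    return results

if __name__ == "__main__":
    print(run_test_cases())
```

The same structure translates directly into a Jira story’s acceptance criteria or a pytest parametrized test later on.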

Based on the product you are testing, you can do various types of testing. I’d say this is primarily determined by the requirements. For example, a product can have functional requirements as well as performance requirements, in which case the QA team is expected to do performance testing as well. And there can be multiple products/modules that work together as a system, so you have to do integration testing to ensure they work together as expected.
A smoke test can be used as an acceptance test prior to introducing a build of software to the main testing process.
User acceptance testing (UAT) is the process of verifying that a solution works for the user. Software vendors often refer to this as “beta testing”. It is also known as customer acceptance testing (CAT).
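The smoke-test idea above can be sketched as a handful of quick checks that must all pass before a build is accepted into the main testing process. The checks here are hypothetical stand-ins; in practice each one would hit a real endpoint or service.

```python
def check_app_starts():
    # Placeholder: e.g. the process came up and stayed up.
    return True

def check_homepage_loads():
    # Placeholder: e.g. GET / returns HTTP 200.
    return True

def check_login_works():
    # Placeholder: e.g. a known test user can log in.
    return True

SMOKE_CHECKS = [check_app_starts, check_homepage_loads, check_login_works]

def smoke_test():
    """Accept the build only if every basic check passes."""
    return all(check() for check in SMOKE_CHECKS)

if __name__ == "__main__":
    print("build accepted" if smoke_test() else "build rejected")
```

If any one check fails, the build goes back to the developers instead of wasting the testing team’s time.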
The testing process can have various stages. For a large project there can be separate testing teams in charge of these stages.

I found the following list on Wikipedia:
Functional testing
Integration testing
System testing
Usability testing
Performance testing
Load testing
Installation testing
Regression testing
Stress testing
Acceptance testing
Beta testing
Volume testing
Recovery testing

How do we conduct tests? Again, there is no one simple answer; it depends on what we need to test. There can be manual testing, where you test like an actual user by manually performing each and every step. Or you can automate tests using a tool or a programming language. When you automate a test suite, there is a significant up-front investment of effort, but it pays off in the long run.
Here are some tools I’ve come across. JMeter is a load-testing tool used mainly for web applications. SoapUI can be used to test web services (SOAP and REST APIs). Selenium is a popular open-source set of tools for automating browsers, and it serves many purposes. In fact, there is an overwhelming number of tools catering to various testing requirements.

Continuous integration/build management is another thing to consider. Jenkins is what we currently use. It can be configured to build and deploy the product and run an automated test suite against it. The process repeats whenever a new release arrives.
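As a rough illustration, a declarative Jenkins pipeline for that build–deploy–test loop might look like the following. The stage names and shell commands are placeholders, not a real project’s configuration.

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh './build.sh'           // placeholder build command
            }
        }
        stage('Deploy') {
            steps {
                sh './deploy.sh staging'  // placeholder deploy command
            }
        }
        stage('Test') {
            steps {
                sh './run_tests.sh'       // placeholder automated test suite
            }
        }
    }
}
```

Jenkins can then trigger this pipeline on every commit or release, so a failing test suite blocks a bad build automatically.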

You can’t ignore metrics and reports. You may start small with a few Excel-generated statistics and graphs, and once you get the hang of it, move to a more sophisticated approach. A monitor on the wall showing a dashboard of the current status, updated in real time, is not uncommon.
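The kind of starter metrics you’d otherwise compute in Excel are trivial to script as well. The data below is entirely made up for illustration; pass rate and open-defect count are two of the simplest numbers worth tracking.

```python
# Raw results as they might come out of a test run and a bug tracker.
test_results = ["pass", "pass", "fail", "pass", "fail", "pass"]
defects = [
    {"id": "BUG-1", "status": "open"},
    {"id": "BUG-2", "status": "closed"},
    {"id": "BUG-3", "status": "open"},
]

# Pass rate: passed tests as a percentage of all tests executed.
pass_rate = 100 * test_results.count("pass") / len(test_results)

# Open-defect count: bugs not yet closed.
open_defects = sum(1 for d in defects if d["status"] == "open")

if __name__ == "__main__":
    print(f"pass rate: {pass_rate:.1f}%, open defects: {open_defects}")
```

Trend these two numbers per build and you already have the beginnings of a dashboard.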
Another point to note is that in today’s agile world, where TDD (Test-Driven Development) is popular, developers are expected to be more and more involved in testing. Ideally, in TDD, developers write the tests before they start implementing. In some extremely agile projects, test suites are jointly contributed by both the dev and QA teams, and fewer QA personnel may be required.
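A tiny TDD-flavoured sketch of “test first, implement second”: the test below is written against a hypothetical `slugify()` function before any implementation exists, and then the simplest implementation that makes it pass follows.

```python
# Step 1: the test, written first. At this point slugify() does not exist,
# so running the test would fail -- that failure is the starting point of TDD.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  QA in Startups  ") == "qa-in-startups"

# Step 2: the simplest implementation that makes the test pass.
def slugify(text):
    """Lowercase the text and join the words with hyphens."""
    return "-".join(text.strip().lower().split())

if __name__ == "__main__":
    test_slugify()
    print("all tests pass")
```

The red–green rhythm is the point: the failing test defines “done” before a line of production code is written.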

Hope this is a good starting point.
Finally, know that I’ve been a dev, not a QA. So do get the opinion of a QA :)