Software QA and Testing Frequently-Asked-Questions Part 2


What makes a good Software Test engineer?
A good test engineer has a 'test to break' attitude, an ability to take the point of view of the customer, a strong desire for quality, and attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical (developers) and non-technical (customers, management) people is also important. Previous software development experience can be helpful: it provides a deeper understanding of the software development process, gives the tester an appreciation for the developers' point of view, and reduces the learning curve in automated test programming. Judgement skills are needed to assess high-risk or critical areas of an application on which to focus testing efforts when time is limited.


What makes a good Software QA engineer?
The same qualities a good tester has are useful for a QA engineer. Additionally, they must be able to understand the entire software development process and how it can fit into the business approach and goals of the organization. Communication skills and the ability to understand various sides of issues are important. In organizations in the early stages of implementing QA processes, patience and diplomacy are especially needed. An ability to find problems as well as to see 'what's missing' is important for inspections and reviews.


What makes a good QA or Test manager?
A good QA, test, or QA/Test (combined) manager should:


What's the role of documentation in QA?
Generally, the larger the team or organization, the more useful it is to stress documentation, in order to manage and communicate more efficiently. (Note that documentation may be electronic, not necessarily in printable form, and may be embedded in code comments or embodied in well-written test cases, user stories, etc.) QA practices may be documented to enhance their repeatability. Specifications, designs, business rules, configurations, code changes, test plans, test cases, bug reports, user manuals, etc. may be documented in some form. Ideally there is a system for easily finding and obtaining information, and for determining which document contains a particular piece of information. Change management for documentation can be used where appropriate. For agile software projects, keep in mind that one of the agile values is "Working software over comprehensive documentation", which does not mean 'no' documentation. Agile projects tend to stress the short-term view of project needs; documentation often becomes more important in a project's long-term context.


What's the big deal about 'requirements'?
Depending on the project, it may or may not be a 'big deal'. For agile projects, requirements are expected to change and evolve, and detailed documented requirements may not be needed; however, some requirements, in the form of user stories or something similar, are useful. For non-agile types of projects, detailed documented requirements are usually needed. (Note that requirements documentation can be electronic, not necessarily in the form of printable documents, and may be embedded in code comments or embodied in well-written test cases, wikis, user stories, etc.) Requirements are the details describing an application's externally-perceived functionality and properties. Requirements are ideally clear, complete, reasonably detailed, cohesive, attainable, and testable. A non-testable requirement would be, for example, 'user-friendly' (too subjective). A more testable requirement would be something like 'the user must enter their previously-assigned password to access the application'. Determining and organizing requirements details in a useful and efficient way can be a difficult effort; different methods and software tools are available depending on the particular project. Many books are available that describe various approaches to this task. (See the Softwareqatest.com Bookstore section's 'Software Requirements Engineering' category for books on Software Requirements.)
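For illustration, the testable password requirement above can be expressed directly as an automated check. The following is a minimal sketch in Python; the authenticate() function is a hypothetical stand-in for an application's real login logic, and all names and data are invented:

# Minimal sketch: turning the testable requirement "the user must enter
# their previously-assigned password to access the application" into an
# automated check. authenticate() is hypothetical; the data is invented.

def authenticate(username: str, password: str) -> bool:
    assigned_passwords = {"alice": "s3cret"}  # illustrative fixture data
    return assigned_passwords.get(username) == password

def test_correct_password_grants_access():
    assert authenticate("alice", "s3cret") is True

def test_wrong_password_denies_access():
    assert authenticate("alice", "guess") is False

if __name__ == "__main__":
    test_correct_password_grants_access()
    test_wrong_password_denies_access()
    print("Both requirement checks passed.")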

Care should be taken to involve ALL of a project's significant 'customers' in the requirements process. 'Customers' could be in-house personnel or outside personnel, and could include end-users, customer acceptance testers, customer contract officers, customer management, future software maintenance engineers, salespeople, etc. Anyone who could later derail the success of the project if their expectations aren't met should be included if possible.

Organizations vary considerably in their handling of requirements specifications. Often the requirements are spelled out in a document with statements such as 'The product shall...'. 'Design' specifications should not be confused with 'requirements'. It can be helpful to have design specifications traceable back to the requirements.

In some organizations requirements may end up in high-level project plans, functional specification documents, design documents, or other documents at various levels of detail. No matter what they are called, some type of documentation with requirements, user stories, and related information will be useful to testers in order to properly plan and execute tests (manual or automated). Without such documentation, there will be no clear-cut way to determine if software is performing correctly.

If testable requirements are not available or are only partially available, useful testing can still be performed. In this situation test results may be more oriented to providing information about the state of the software and risk levels, rather than providing pass/fail results. A relevant approach in this situation is 'exploratory testing'. Many software projects have a mix of documented testable requirements, poorly documented requirements, undocumented requirements, and changing requirements. In such projects a mix of scripted and exploratory testing approaches may be useful. (See the Softwareqatest.com 'Other Resources' page in the 'General Software QA and Testing Resources' section for articles on exploratory testing, and in the 'Agile and XP Testing Resources' section for articles on agile software development and testing.)

'Agile' approaches use methods requiring close interaction and cooperation between programmers and stakeholders/customers/end-users to iteratively develop requirements, user stories, etc. In the XP 'test first' approach, developers create automated unit test code before the application code, and these automated unit tests can essentially embody the requirements.
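A minimal sketch of the 'test first' idea, using Python's standard unittest module: the tests are written first to embody a requirement (the discount rule here is invented for illustration), and then just enough application code is written to make them pass:

import unittest

# Step 1, written first: the unit tests embody an invented requirement,
# "orders of $100 or more receive a 10% discount". They fail until the
# application code below exists.
class DiscountTest(unittest.TestCase):
    def test_discount_applies_at_threshold(self):
        self.assertAlmostEqual(apply_discount(100.0), 90.0)

    def test_no_discount_below_threshold(self):
        self.assertAlmostEqual(apply_discount(99.0), 99.0)

# Step 2, written second: just enough application code to pass the tests.
def apply_discount(total: float) -> float:
    return total * 0.9 if total >= 100.0 else total

if __name__ == "__main__":
    unittest.main()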


What steps are needed to develop and run software tests?
The following are some of the steps to consider: (Note: these apply to an overall testing approach or a manual testing approach; for more information on automated testing, see the Softwareqatest.com LFAQ page.)


What's a 'test plan'?
A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful, but not so detailed that no one outside the test group will read it. The following are some of the items that might be included in a test plan, depending on the particular project:
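As a rough illustration only, a plan's skeleton might be sketched as a simple data structure; the section names below are typical examples of such items, not a prescribed standard, and the contents are invented:

# Illustrative sketch only: typical test plan sections represented as a
# simple dictionary. The section names are common examples, not a standard.
test_plan = {
    "title and identifier": "Example Payments v2.1 system test plan",
    "objectives and scope": "what will and will not be tested, and why",
    "test approach": "levels and types of testing (unit, integration, ...)",
    "environment": "hardware, software, network, and test data needs",
    "responsibilities": "who designs, runs, and reviews which tests",
    "schedule and milestones": "phases tied to the overall project plan",
    "entry and exit criteria": "when testing can start and when it can stop",
    "risks and contingencies": "what could derail testing, and fallbacks",
    "deliverables": "test cases, reports, bug summaries, sign-offs",
}

for section, purpose in test_plan.items():
    print(f"{section:24} - {purpose}")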

(See the Softwareqatest.com Bookstore section's 'Software Testing' and 'Software QA' categories for useful books with more information.)


What's a 'test case'?
A test case describes an input, action, or event and an expected response, to determine if a feature of a software application is working correctly. A test case may contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results. The level of detail may vary significantly depending on the organization and project context. Note that organizations vary considerably in their handling of test cases; many utilize less-detailed 'test scenarios' that allow for simpler and more adaptable/maintainable test documentation.
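A minimal sketch of these particulars as a data structure, with field names following the list above; the record's contents are invented for illustration:

from dataclasses import dataclass, field

@dataclass
class TestCase:
    # Field names follow the particulars listed in the answer above.
    identifier: str
    name: str
    objective: str
    conditions_setup: str
    input_data: dict
    steps: list = field(default_factory=list)
    expected_result: str = ""

# An invented example record:
login_case = TestCase(
    identifier="TC-042",
    name="Login with valid credentials",
    objective="Verify a registered user can log in",
    conditions_setup="User 'alice' exists with a known password",
    input_data={"username": "alice", "password": "s3cret"},
    steps=["Open login page", "Enter credentials", "Click 'Log in'"],
    expected_result="User lands on the dashboard page",
)
print(login_case.identifier, "-", login_case.name)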

Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it's useful to prepare test cases early in the development cycle if possible.


What should be done after a bug is found?
The bug needs to be communicated and assigned to developers who can fix it. After the problem is resolved, fixes should be re-tested, and a determination made as to whether regression testing is needed to check that the fixes didn't create problems elsewhere. If a problem-tracking system is in place, it should encapsulate these processes. A variety of commercial problem-tracking/management software tools are available (see the 'Tools' section for web resources with listings of such tools). The following are items to consider in the tracking process:

A reporting or tracking process should enable notification of appropriate personnel at various stages. For instance, testers need to know when retesting is needed, developers need to know when bugs are found and how to get the needed information, and reporting/summary capabilities are needed for managers.
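A minimal sketch of the kind of record and status flow a tracking system might maintain, including a hook for notifying the appropriate people at each stage; the statuses, names, and notify() function are hypothetical, not drawn from any particular tool:

from dataclasses import dataclass

# Typical lifecycle stages; real tracking tools vary and are configurable.
STATUSES = ["open", "assigned", "fixed", "retest", "closed", "reopened"]

@dataclass
class BugReport:
    identifier: str
    summary: str
    severity: str            # e.g. critical / major / minor
    assignee: str = ""
    status: str = "open"

    def move_to(self, new_status: str) -> None:
        if new_status not in STATUSES:
            raise ValueError(f"unknown status: {new_status}")
        self.status = new_status
        notify(self)

def notify(bug: BugReport) -> None:
    # Stand-in for real email/chat integration: testers hear about
    # 'retest', developers about 'assigned', managers get summaries.
    audience = {"assigned": "developer", "retest": "tester"}.get(
        bug.status, "watchers")
    print(f"[{bug.identifier}] now '{bug.status}' -> notify {audience}")

bug = BugReport("BUG-101", "Crash on empty password", "critical")
bug.move_to("assigned")
bug.move_to("fixed")
bug.move_to("retest")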


What is 'configuration management'?
Configuration management covers the processes used to control, coordinate, and track: code, requirements, documentation, problems, change requests, designs, tools/compilers/libraries/patches, changes made to them, and who makes the changes. (See the 'Tools' section for web resources with listings of configuration management tools. Also see the Softwareqatest.com Bookstore section's 'Configuration Management' category for useful books with more information.)


What if the software is so buggy it can't really be tested at all?
The best bet in this situation is for the testers to go through the process of reporting whatever bugs or blocking-type problems initially show up, with the focus on critical bugs. Since this type of problem can significantly affect schedules and indicates deeper problems in the software development process (such as insufficient unit testing, insufficient integration testing, poor design, or improper build or release procedures), managers should be notified and provided with some documentation as evidence of the problem.


How can it be known when to stop testing?
This can be difficult to determine. Most modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are:

Also see 'Who should decide when software is ready to be released?' in the LFAQ section.


What if there isn't enough time for thorough testing?
Use risk analysis, along with discussion with project stakeholders, to determine where testing should be focused.
Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. This requires judgement skills, common sense, and experience. (If warranted, formal methods are also available.) Considerations can include:
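To make the risk-analysis idea concrete, here is a minimal sketch that scores each area of an application by estimated likelihood of failure and impact if it fails, then ranks areas by the product of the two; all area names and scores are invented:

# Illustrative risk-based prioritization: risk = likelihood x impact,
# each scored 1 (low) to 5 (high). All areas and scores are invented.
areas = {
    "payment processing": (4, 5),   # (likelihood of failure, impact)
    "login and sessions": (3, 5),
    "report generation":  (3, 2),
    "user preferences":   (2, 1),
}

ranked = sorted(areas.items(), key=lambda kv: kv[1][0] * kv[1][1],
                reverse=True)
for name, (likelihood, impact) in ranked:
    print(f"{name:20} risk score = {likelihood * impact}")
# When time runs short, test the highest-scoring areas first.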


What if the project isn't big enough to justify extensive testing?
Consider the impact of project errors, not the size of the project. However, if extensive testing is still not justified, risk analysis is again needed and the same considerations as described previously in 'What if there isn't enough time for thorough testing?' apply. The tester might then do ad hoc or exploratory testing, or write up a limited test plan based on the risk analysis.


How does a client/server environment affect testing?
Most current software being tested involves multi-tier client/server applications which can be highly complex due to the multiple dependencies among clients, data communications, hardware, and servers. Thus testing requirements can be extensive. When time is limited (as it usually is) the focus should be on integration and system testing. Additionally, load/stress/performance testing may be useful in determining client/server application limitations and capabilities. There are commercial and open source tools to assist with such testing. (See the 'Tools' section for web resources with listings that include these kinds of test tools.)
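To illustrate the basic mechanics of load testing, here is a minimal sketch using only the Python standard library: it fires a batch of concurrent requests at a server and reports response times. The URL is a placeholder, and real load/stress tools do far more:

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/health"  # placeholder for the server under test

def timed_request(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

if __name__ == "__main__":
    # Simulate 50 requests arriving from 10 concurrent "clients".
    with ThreadPoolExecutor(max_workers=10) as pool:
        latencies = sorted(pool.map(timed_request, range(50)))
    print(f"median: {latencies[len(latencies) // 2]:.3f}s  "
          f"max: {latencies[-1]:.3f}s")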


How can Web sites be tested?
Web sites are essentially client/server applications - with web servers and 'browser' clients. Consideration should be given to the interactions between HTML pages, web services, encrypted communications, Internet connections, firewalls, applications that run in web pages (such as JavaScript, Flash, and other plug-in applications), the wide variety of applications that could run on the server side, etc. Additionally, there are a wide variety of servers and browsers, mobile platforms, various versions of each, small but sometimes significant differences between them, variations in connection speeds, rapidly changing technologies, and multiple standards and protocols. The end result is that testing for web sites can become a major ongoing effort. Other considerations might include:
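As one small, concrete example covering just one such consideration (broken links), here is a sketch of a basic link checker built with only the Python standard library; the start URL is a placeholder for a site under test:

import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

START = "http://localhost:8080/"  # placeholder for the site under test

class LinkCollector(HTMLParser):
    # Gathers the href targets of <a> tags from one fetched page.
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(START, value))

with urllib.request.urlopen(START, timeout=10) as resp:
    collector = LinkCollector()
    collector.feed(resp.read().decode("utf-8", errors="replace"))

for link in collector.links:
    try:
        with urllib.request.urlopen(link, timeout=10) as resp:
            print(resp.status, link)
    except Exception as exc:
        print("BROKEN", link, exc)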

Some sources of web site security information include the Usenet newsgroup 'comp.security.announce' and links concerning web site security in the 'Other Resources' section.

Hundreds of web site test tools are available and more than 570 of them are listed in the 'Web Test Tools' section.


How is testing affected by object-oriented designs?
Well-engineered object-oriented design can make it easier to trace from code to internal design to functional design to requirements. While there will be little effect on black box testing (where an understanding of the internal design of the application is unnecessary), white-box testing can be oriented to the application's objects, methods, etc. If the application is well designed, this can simplify test design and test automation design.
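To illustrate what 'oriented to the application's objects and methods' can mean in practice, here is a minimal sketch in which unit tests target a class's methods directly; the Stack class is an invented example, not from any particular application:

import unittest

class Stack:
    # Invented example class under test.
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

class StackWhiteBoxTest(unittest.TestCase):
    # White-box: one or more tests per method, written with knowledge of
    # the internal design (e.g. that pop() on empty raises IndexError).
    def test_push_then_pop_returns_item(self):
        s = Stack()
        s.push(42)
        self.assertEqual(s.pop(), 42)

    def test_pop_on_empty_raises(self):
        with self.assertRaises(IndexError):
            Stack().pop()

if __name__ == "__main__":
    unittest.main()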


What is Extreme Programming and what's it got to do with testing?
Extreme Programming (XP) is a software development approach for small teams on risk-prone projects with unstable requirements. It was created by Kent Beck, who described the approach in his book 'Extreme Programming Explained' (see the Softwareqatest.com Books page). Testing ('extreme testing') is a core aspect of Extreme Programming. Programmers are expected to write unit and functional test code first - before writing the application code. Test code is under source control along with the rest of the code. Customers are expected to be an integral part of the project team and to help develop scenarios for acceptance/black box testing. Acceptance tests are preferably automated, and are modified and rerun for each of the frequent development iterations. QA and test personnel are also required to be an integral part of the project team. Detailed requirements documentation is not used, and frequent re-scheduling, re-estimating, and re-prioritizing are expected. For more info on XP and other 'agile' software development approaches (Scrum, Crystal, etc.) see the resource listings in the 'Agile and XP Testing Resources' section.
