Manual Testing Interview Questions & Answers-PART6

Uploaded by rakeshlr1996

Manual Testing Interview Questions -6

Q: How do you introduce a new software QA process?


A: It depends on the size of the organization and the risks
involved. For large organizations with high-risk projects, serious
management buy-in and a formalized QA process are required.
For medium-sized organizations with lower-risk projects,
management and organizational buy-in is still needed, but a
slower, step-by-step rollout works better. Generally speaking, QA
processes should be balanced against productivity, in order to
keep bureaucracy from getting out of hand. For smaller groups
or projects, an ad hoc process is more appropriate. Much
depends on team leads and managers; feedback to developers
and good communication among customers, managers,
developers, test engineers and testers are essential. Regardless
of the size of the company, the greatest value for effort lies in
managing the requirements process, where the goal is
requirements that are clear, complete and testable.

Q: What is the role of documentation in QA?


A: Documentation plays a critical role in QA. QA practices
should be documented so that they are repeatable.
Specifications, designs, business rules, inspection reports,
configurations, code changes, test plans, test cases, bug reports
and user manuals should all be documented. Ideally, there
should be a system for easily finding and obtaining documents
and for determining which document contains a particular
piece of information. Use documentation change management,
if possible.

Q: What makes a good test engineer?


A: Good test engineers have a "test to break" attitude. They
take the point of view of the customer, have a strong desire for
quality and pay attention to detail. Tact and diplomacy help
maintain a cooperative relationship with developers, as does
the ability to communicate with both technical and
non-technical people. Previous software development
experience is also helpful: it provides a deeper understanding
of the software development process, gives the test engineer
an appreciation for the developers' point of view and reduces
the learning curve in automated test tool programming.

Q: What is a test plan?


A: A software project test plan is a document that describes the
objectives, scope, approach and focus of a software testing
effort. The process of preparing a test plan is a useful way to
think through the efforts needed to validate the acceptability of
a software product. The completed document will help people
outside the test group understand the why and how of product
validation. It should be thorough enough to be useful, but not
so thorough that no one outside the test group will be able to
read it.

Q: What is a test case?


A: A test case is a document that describes an input, action, or
event and its expected result, in order to determine if a feature
of an application is working correctly. A test case should contain
particulars such as:
• Test case identifier;
• Test case name;
• Objective;
• Test conditions/setup;
• Input data requirements/steps, and
• Expected results.
Please note, the process of developing test cases can help find
problems in the requirements or design of an application, since
it requires you to completely think through the operation of the
application. For this reason, it is useful to prepare test cases
early in the development cycle, if possible.
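The fields listed above can be captured in a simple record; a minimal sketch in Python (the class and field names mirror the list above, not any particular test-management tool):

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """A minimal test case record mirroring the fields listed above."""
    identifier: str             # test case identifier
    name: str                   # test case name
    objective: str              # what the test is meant to verify
    setup: str                  # test conditions / setup
    steps: list = field(default_factory=list)  # input data / steps
    expected_result: str = ""   # expected results

# Example: a hypothetical login test case
tc = TestCase(
    identifier="TC-001",
    name="Valid login",
    objective="Verify a registered user can log in",
    setup="User 'alice' exists with a known password",
    steps=["Open login page", "Enter credentials", "Click Login"],
    expected_result="User is redirected to the dashboard",
)
print(tc.identifier, "-", tc.name)
```

Keeping every case in a uniform structure like this makes it easy to review coverage and to spot cases whose objective or expected result was never filled in.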

Q: What should be done after a bug is found?


A: When a bug is found, it needs to be communicated and
assigned to developers who can fix it. After the problem is
resolved, the fix should be re-tested. Additionally,
determinations should be made regarding requirements,
software, hardware, safety impact, etc., and regression testing
run to check that the fix didn't create other problems
elsewhere. If a problem-tracking system is in place, it should
capture these determinations.
A variety of commercial, problem-tracking/management
software tools are available. These tools, with the detailed input
of software test engineers, will give the team complete
information so developers can understand the bug, get an idea
of its severity, reproduce it and fix it.
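The workflow described above (report, assign, fix, re-test, close or reopen) can be sketched as a tiny state machine; the states and transitions here are illustrative, not taken from any specific tracking tool:

```python
# Minimal bug-lifecycle sketch: new -> assigned -> fixed -> closed/reopened
ALLOWED = {
    "new": {"assigned"},
    "assigned": {"fixed"},
    "fixed": {"closed", "reopened"},  # closed only after a passing re-test
    "reopened": {"assigned"},
}

class Bug:
    def __init__(self, summary, severity):
        self.summary = summary
        self.severity = severity      # helps developers gauge impact
        self.state = "new"

    def move(self, new_state):
        # Reject transitions the workflow does not allow
        if new_state not in ALLOWED.get(self.state, set()):
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.state = new_state

bug = Bug("Crash on empty input", severity="critical")
bug.move("assigned")
bug.move("fixed")
bug.move("closed")   # re-test passed, fix verified
print(bug.state)     # prints "closed"
```

Commercial trackers enforce richer workflows than this, but the core idea is the same: a fix is not done until the re-test moves the bug out of "fixed".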

Q: What is configuration management?


A: Configuration management (CM) covers the tools and
processes used to control, coordinate and track code,
requirements, documentation, problems, change requests,
designs, tools, compilers, libraries, patches, changes made to
them and who makes the changes.

Q: What if the software is so buggy it can't be tested at all?


A: In this situation the best bet is to have test engineers go
through the process of reporting whatever bugs or problems
initially show up, with the focus being on critical bugs.

Since this type of problem can severely affect schedules and
indicates deeper problems in the software development
process, such as insufficient unit testing, insufficient integration
testing, poor design, improper build or release procedures,
managers should be notified and provided with some
documentation as evidence of the problem.

Q: What if there isn't enough time for thorough testing?


A: Since it's rarely possible to test every possible aspect of an
application, every possible combination of events, every
dependency, or everything that could go wrong, risk analysis is
appropriate for most software development projects. Use risk
analysis to determine where testing should be focused. This
requires judgment skills, common sense and experience. The
checklist should include answers to the following questions:
• Which functionality is most important to the project's
intended purpose?
• Which functionality is most visible to the user?
• Which functionality has the largest safety impact?
• Which functionality has the largest financial impact on
users?
• Which aspects of the application are most important to the
customer?
• Which aspects of the application can be tested early in the
development cycle?
• Which parts of the code are most complex and thus most
subject to errors?
• Which parts of the application were developed in rush or
panic mode?
• Which aspects of similar/related previous projects caused
problems?
• Which aspects of similar/related previous projects had large
maintenance expenses?
• Which parts of the requirements and design are unclear or
poorly thought out?
• What do the developers think are the highest-risk aspects of
the application?
• What kinds of problems would cause the worst publicity?
• What kinds of problems would cause the most customer
service complaints?
• What kinds of tests could easily cover multiple
functionalities?
• Which tests will have the best high-risk-coverage to time-
required ratio?
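One lightweight way to act on such a checklist is to score each feature against a few risk factors and test the highest scorers first; a hypothetical sketch (feature names, factors and weights are all made up for illustration):

```python
# Hypothetical risk scores per feature: each factor rated 1 (low) to 5 (high),
# loosely following the checklist above (importance, visibility, complexity,
# whether the code was developed in rush mode).
features = {
    "payment":   {"importance": 5, "visibility": 4, "complexity": 5, "rushed": 3},
    "reporting": {"importance": 3, "visibility": 2, "complexity": 4, "rushed": 1},
    "help_page": {"importance": 1, "visibility": 3, "complexity": 1, "rushed": 1},
}

def risk_score(factors):
    # Equal weights here; a real analysis would weight factors by project context.
    return sum(factors.values())

# Test the riskiest features first
priority = sorted(features, key=lambda f: risk_score(features[f]), reverse=True)
print(priority)  # payment scores highest, so it is tested first
```

The numbers matter less than the discipline: making the ranking explicit forces the team to agree on where the limited testing time goes.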

Q: What if the project isn't big enough to justify extensive
testing?

A: Consider the impact of project errors, not the size of the
project. However, if extensive testing is still not justified, risk
analysis is again needed and the considerations listed under
"What if there isn't enough time for thorough testing?" do
apply. The test engineer then should do "ad hoc" testing, or
write up a limited test plan based on the risk analysis.

Q: What can be done if requirements are changing
continuously?

A: Work with management early on to understand how
requirements might change, so that alternate test plans and
strategies can be worked out in advance. It is helpful if the
application's initial design allows for some adaptability, so that
later changes do not require redoing the application from
scratch. Additionally, try to...
• Ensure the code is well commented and well documented;
this makes changes easier for the developers.
• Use rapid prototyping whenever possible; this will help
customers feel sure of their requirements and minimize
changes.
• In the project's initial schedule, allow some extra time
commensurate with probable changes.
• Move new requirements to a 'Phase 2' version of the
application and use the original requirements for the 'Phase 1'
version.
• Negotiate to allow only easily implemented new requirements
into the project.
• Ensure customers and management understand scheduling
impacts, inherent risks and costs of significant requirements
changes. Then let management or the customers decide if the
changes are warranted; after all, that's their job.
• Balance the effort put into setting up automated testing
with the expected effort required to redo them to deal with
changes.
• Design some flexibility into automated test scripts;
• Focus initial automated testing on application aspects that
are most likely to remain unchanged;
• Devote appropriate effort to risk analysis of changes, in
order to minimize regression-testing needs;
• Design some flexibility into test cases; this is not easily done;
the best bet is to minimize the detail in the test cases, or set up
only higher-level generic-type test plans;
• Focus less on detailed test plans and test cases and more on
ad hoc testing, with an understanding of the added risk this
entails.
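Designing flexibility into automated scripts usually means separating test data from test logic, so a requirement change touches only the data. A minimal data-driven sketch (the password rule here is a made-up stand-in for whatever the requirement currently says):

```python
# Data-driven test: the cases live in data, the logic in one function.
# If the requirement changes (say, the minimum length moves from 8 to 10),
# only MIN_LEN and the expected values change -- not the test code.
MIN_LEN = 8

def password_accepted(pw):
    # Current (hypothetical) rule: minimum length, plus at least one digit
    return len(pw) >= MIN_LEN and any(c.isdigit() for c in pw)

cases = [
    ("short1", False),        # too short
    ("longenough1", True),    # meets both rules
    ("nodigitshere", False),  # long enough but missing a digit
]

for pw, expected in cases:
    actual = password_accepted(pw)
    assert actual == expected, f"{pw!r}: expected {expected}, got {actual}"
print("all cases passed")
```

Tools like pytest's parametrized tests formalize this pattern, but even a plain table of (input, expected) pairs keeps requirement churn out of the test logic.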

Q: How do you know when to stop testing?


A: This can be difficult to determine. Many modern software
applications are so complex and run in such an interdependent
environment, that complete testing can never be done.
Common factors in deciding when to stop are...
• Deadlines, e.g. release deadlines, testing deadlines;
• Test cases completed with certain percentage passed;
• Test budget has been depleted;
• Coverage of code, functionality, or requirements reaches a
specified point;
• Bug rate falls below a certain level; or
• Beta or alpha testing period ends.
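These factors can be combined into an explicit exit-criteria check agreed in the test plan; a sketch with made-up thresholds:

```python
# Hypothetical exit criteria; the actual thresholds would come from the test plan.
def ready_to_stop(pass_rate, coverage, open_critical_bugs, budget_left):
    return (
        pass_rate >= 0.95             # required fraction of test cases passing
        and coverage >= 0.80          # required code/requirement coverage
        and open_critical_bugs == 0   # no critical bugs outstanding
    ) or budget_left <= 0             # or the test budget has been depleted

print(ready_to_stop(pass_rate=0.97, coverage=0.85,
                    open_critical_bugs=0, budget_left=5000))  # True
```

Writing the criteria down as a check like this makes the stop decision auditable instead of a judgment call made under deadline pressure.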

Q: What if the application has functionality that wasn't in the
requirements?

A: It may take serious effort to determine whether an
application has significant unexpected or hidden functionality,
which would indicate deeper problems in the software
development process.
If the functionality isn't necessary to the purpose of the
application, it should be removed, as it may have unknown
impacts or dependencies that were not taken into account by
the designer or the customer.
If not removed, design information will be needed to determine
added testing needs or regression testing needs. Management
should be made aware of any significant added risks as a result
of the unexpected functionality. If the functionality affects only
minor areas, such as small improvements in the user interface,
it may not be a significant risk.
