Showing posts with label Testing Methodologies. Show all posts

June 14, 2023

Test Levels in Software Testing

Great job on starting a new lesson! After reading this lesson, click the Next 👉 button at the bottom right to continue to the next lesson.

What are test levels in software testing? Test levels represent distinct stages of software testing that target specific aspects of your software. The primary test levels include unit testing, integration testing, system testing, and user acceptance testing. Each test level serves a special purpose in evaluating the quality of your software throughout various phases of the software development process.

Test levels examples (the 4 test levels)

  • Unit testing: It is testing the individual components of the software. For example, testing a login method or a calculate method in a class.
  • Integration testing: It is testing the control and data flow between different components or modules of the software. For example, testing if the methods in the User class and the Home class work together correctly. At a minimum, integration testing is done on two components. When that passes, more components are added incrementally and integration testing is repeated.
  • System testing: It is testing the entire software as a single system. For example, testing the functionality, performance, security, and usability of an e-commerce website, and the website's integration with other services.
  • User acceptance testing: It is testing the software against the business requirements. For example, testing if the e-commerce website meets the needs and satisfaction of the customers, sellers, customer service representatives and other stakeholders.
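As a minimal sketch of the first two levels, the hypothetical User and Home classes below stand in for the example above; the names and logic are illustrative, not from any real project:

```python
# Hypothetical classes standing in for the User/Home example above.
class User:
    def __init__(self, name):
        self.name = name

    def is_valid(self):
        # A unit-testable rule: a user is valid if the name is non-blank.
        return bool(self.name.strip())

class Home:
    def greeting_for(self, user):
        # Integration point: Home's output depends on User.is_valid().
        return f"Welcome, {user.name}!" if user.is_valid() else "Please log in"

# Unit tests: User is tested in isolation.
assert User("alice").is_valid() is True
assert User("   ").is_valid() is False

# Integration test: control and data flow between User and Home.
assert Home().greeting_for(User("alice")) == "Welcome, alice!"
assert Home().greeting_for(User("   ")) == "Please log in"
```

Running the file raises no AssertionError when both levels pass; in practice these checks would live in a framework such as JUnit or PyTest.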

Tips for test levels

  • Plan and design the tests for each test level according to the scope and objectives of that test level.
  • Use the standard tools in your project to automate your testing process.
  • Use the approach given in your test plan to execute tests and report the test results.
  • Communicate with the developers and other stakeholders during each test level.

FAQ (interview questions and answers)

  1. What do you think is the main difference between system testing and user acceptance testing?
    System testing is done by testers to validate that the software meets the requirements, technical specifications and quality standards. User acceptance testing is done by the users to validate if the software meets their needs and expectations. System testing is more detailed than user acceptance testing. System testing is done before user acceptance testing.
  2. What is the primary purpose of integration testing?
    Test if the different components and modules of a software communicate and work together as expected and without any errors.
  3. What types of unit testing do you know?
    Component testing, Module testing, Mocking and Stubbing, White-box and Black-box testing, Test-Driven Development (TDD) and Behavior-Driven Development (BDD), and Coverage testing
  4. What are some challenges of user acceptance testing?
    Defining realistic user requirements, ensuring active user participation, having clear communication, and managing user feedback and expectations in a dynamic environment.
Remember to just comment if you have any doubts or queries.

June 06, 2023

Test Design Techniques in Software Testing

Great job on starting a new lesson! After reading this lesson, click the Next button at the bottom right to continue to the next lesson.

What are test design techniques in software testing?

Test design techniques are standard methods to design test cases to test the requirements and functionality of your software product or service. Test design techniques help you to discover more defects in the software because they focus on the error-prone areas in the software, or provide more test coverage.

What are the techniques used for designing test cases?

Test design techniques examples are

  1. Equivalence partitioning: This technique divides the input or output values into partitions (groups) that are expected to behave similarly. For example, if an input field accepts numbers from 1 to 100, you can divide the test inputs into three groups: -infinity to 0, 1 to 100, and 101 to infinity. You can then test with one value from each group. View first video tutorial below for more examples.
  2. Boundary value analysis: This technique tests the values at the boundaries of an input or output range. For example, if an input field accepts numbers from 1 to 100, you can test with values 0, 1, 100, and 101. View first tutorial below for more examples.
  3. Decision table testing: This technique uses a table to show the combinations of inputs and outputs for a system. For example, if a system has two binary inputs (A and B) and one output C, you can create a table with three columns (A, B, and C) and four rows (all combinations of A and B). Each row represents a test case with different values for A and B and the expected value for C. View second video tutorial below for more examples.
  4. State transition testing: This technique tests how a system changes its state based on inputs and events. For example, if a system has three states (S1, S2, and S3) and two inputs (X and Y), you can create a state diagram that shows how the system transitions from one state to another based on X and Y. View third video tutorial below for more examples.
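The first two techniques can be sketched directly in code. Assuming a hypothetical `accepts` validator for the 1-to-100 input field described above:

```python
def accepts(value):
    """Hypothetical validator for an input field that accepts 1 to 100."""
    return 1 <= value <= 100

# Equivalence partitioning: one representative value per partition.
assert accepts(-5) is False   # partition: -infinity to 0
assert accepts(50) is True    # partition: 1 to 100
assert accepts(150) is False  # partition: 101 to infinity

# Boundary value analysis: values at the edges of the valid range.
for value, expected in [(0, False), (1, True), (100, True), (101, False)]:
    assert accepts(value) is expected
```

Together the two techniques cover the range with only seven test values instead of testing every possible number.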

Test design guidelines

  1. Understand the requirements and specifications of the software before test design.
  2. Use multiple test design techniques, for example first Equivalence Partitioning and then Boundary Value Analysis, to increase the coverage and quality of your test cases.
  3. Use tools and frameworks that support test design techniques and automate test case generation. View fourth video tutorial below.
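Decision table testing (technique 3 above) maps naturally onto a data-driven test, the form that test frameworks and generation tools automate well. A sketch, with a hypothetical `grant_access` rule as the system under test:

```python
def grant_access(has_account, password_ok):
    """Hypothetical system: output C is true only when inputs A and B are true."""
    return has_account and password_ok

# Decision table: all four combinations of the two binary inputs,
# with the expected output for each row (one row = one test case).
decision_table = [
    # A (has_account), B (password_ok), expected C
    (False, False, False),
    (False, True,  False),
    (True,  False, False),
    (True,  True,  True),
]

for a, b, expected in decision_table:
    assert grant_access(a, b) == expected
```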

FAQ (interview questions and answers)

  1. What is the difference between test plan and test design?
    A test plan is a document that describes the scope, objectives, strategy, resources, schedule, risks, and deliverables of a testing project. Test design is the process of creating the test cases that will be executed.
  2. What is the effect of using test design techniques?
    They help design effective tests that may discover more defects in the software because test design techniques focus on error-prone areas of the software.
  3. What are some challenges of using test design techniques?
    We need to have a good knowledge of the test design techniques and their applicability.
I wrote the original lesson on May 11, 2023. Remember to just comment if you have any doubts or queries.




Test Planning in Software Testing

Great job on starting a new lesson! After reading this lesson, click the Next button at the bottom right to continue to the next lesson.

What is test planning in software testing?

Test planning is the process of defining the scope, objectives, strategy, resources, schedule, and deliverables for testing your software product or service. Test planning helps you align your testing activities with the project goals, quality standards, and regulations, while making the best use of available resources. Also, in test planning, you estimate the effort, cost, and the duration required for testing.

Test plan sections

A test plan typically contains the following sections:

  1. Scope and test objectives: The scope and purpose of software testing, the features and functions to be tested and not tested, and the quality criteria to be achieved.
  2. Test strategy: The high-level testing approach and methodology for software testing, the testing types and testing levels, the test techniques and tools, and the test environment.
  3. Test resources: The roles and responsibilities of the software testers, their skills, and any training needed by them.
  4. Test schedule: The timeline(s) and milestones for software testing activities, the dependencies and risks, mitigation plans and contingency plans.
  5. Test deliverables: The outputs of the testing process, such as test cases, test data, test results, defect reports, status reports, etc.

Test planning is a critical step in software testing because it helps to:

  • Guide the testing process and ensure its alignment to project goals and quality standards.
  • Avoid testing out-of-scope functionalities.
  • Communicate the testing process to the stakeholders.
  • Track and control the testing process, progress and quality.
  • Reuse the test plan for future enhancements or similar projects in the organization.

Test Plan Example

A banking website allows customers to perform various transactions online, such as checking account balance, transferring money, paying bills, etc. The test plan for this website may include:

  1. Test objective: To verify that the website functions correctly and securely according to the requirements.
  2. Test strategy: Do functional testing (including manual exploratory testing and automated regression testing), security testing, performance testing, usability testing, and compatibility testing. Use Selenium WebDriver, JMeter, and OWASP tools.
  3. Test resources: One test lead and four test engineers. Provide training on the banking domain, the website features, and the test tools. Use laptops with Windows 11, the Chrome browser, an Internet connection, etc.
  4. Test schedule: Follow a lifecycle with four phases: test planning (2 weeks), test design (4 weeks), test execution (6 weeks), and test closure (2 weeks). Identify dependencies on the development team, business team, security team, and end-users. Identify risks such as delays in development, changes in requirements, resource issues, and technical issues.
  5. Test deliverables: Test plan document, test cases, test data, test results, defect reports, status reports and test summary report.

How do you do test planning?

  1. First, understand and review all your software requirements and specifications.
  2. Estimate the effort, duration, cost, and resources required for software testing based on your data, assumptions and constraints. 
  3. Identify the dependencies, risks, and their mitigation and contingency plans.
  4. Use the standard format and terminology of your organization for your test plan document.
  5. Avoid using lengthy paragraphs and unnecessary details. Use lists and tables to structure the plan.
  6. Get feedback from the stakeholders in the test planning process.
  7. Review your test plan regularly and update it when changes occur.

FAQ (interview questions and answers)

  1. In your experience, what are the benefits of test planning?
    Test planning helps us ensure that the testing activities are aligned with the project goals, quality standards, and regulations, while making the best use of available resources. The test plan is also an input for estimating the effort, cost, and duration required for testing.
  2. Are there different types of test plans?
    Yes. There may be a master test plan for the whole project, a phase test plan for a single phase, and a specific test plan (e.g., a performance test plan) for a specific type of testing.
  3. What are the main components of a test plan?
    Scope, test objectives, test strategy, test resources, test schedule, test deliverables, and risk management.
  4. What is the difference between a test plan and a test case?
    A test plan is a document that describes the what, when, how, and who of software testing. A test plan refers to multiple test cases. A test case is a set of steps that specifies what to test, how to test it, what inputs to use, and the expected result.
I wrote the original lesson on May 11, 2023. Remember to just comment if you have any doubts or queries.


Testing Process in Software Testing - Methodologies - Testing Process Steps

Great job on starting a new lesson! After reading this lesson, click the Next button at the bottom right to continue to the next lesson.

What is testing process in software testing?

The testing process guides the software testing of your product or service with the help of methodologies. It helps you find out efficiently if your software meets the requirements, works as expected, and fails safely. The testing process and its methodologies include Test Planning, Test Design, Test Execution, Test Automation, and Defect Management. Software testing is also guided by Requirements Management, Release Management, and Communication Management.

Testing process steps and methodologies examples

  • Requirements Management: A business analyst gathers and documents the functional and non-functional requirements of the software system from the stakeholders. He then reviews and prioritizes the requirements and communicates them to the development and testing teams. He uses tools like Jira, Trello or Confluence to manage and track the requirements.
  • Test Planning: A test manager creates a test plan document that defines the scope, objectives, strategy, resources, schedule, and risks of the software testing in the project. She identifies the test environment, tools and deliverables for the software testing. She shares the test plan to get reviews from the stakeholders, finalizes it and gets their approval.
  • Test Design: A test engineer analyzes the requirements and designs tests that cover all the practical scenarios of the software system. He also writes automated testing scripts that automate the execution of the test cases using tools like Selenium, Appium, or Postman. He also collects or designs test data that is required for testing.
  • Test Execution: A test engineer executes the test cases and automated test scripts on the test environment and tools. He compares the actual results with the expected results and records the result of each test case. He also reports any defects or issues that he finds during testing to the development team using a defect management system.
  • Test Automation: A test engineer uses tools like Selenium WebDriver, Postman or JMeter to automate the testing of web applications, APIs or performance respectively. He writes test scripts using programming languages like Java, Python, or JavaScript and runs them using frameworks like TestNG, PyTest or Mocha. He also integrates the test scripts with tools like Jenkins, Git and Docker so that they run automatically in a continuous integration pipeline.
  • Defect Management: A test engineer reports any defects or issues that he finds during testing to the development team using a defect tracking system like Bugzilla, Mantis or Zephyr. He also assigns a priority, severity and status to each defect and tracks its resolution. He retests the defects in the test environment after they are fixed.
  • Communication Management: A project manager communicates with all the stakeholders involved in the software development lifecycle, such as clients, developers, testers, and end-users. She also updates them on the progress, risks and issues of the project. She uses tools like Slack, Zoom or Microsoft Teams to communicate. The test engineer communicates with the development team regarding the defects.
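The Test Execution step above, comparing actual results with expected results and recording each outcome, can be sketched like this (the `login` function and the test cases are hypothetical):

```python
def login(username, password):
    # Hypothetical function under test.
    return username == "admin" and password == "secret"

test_cases = [
    # (test case name, inputs, expected result)
    ("valid credentials", ("admin", "secret"), True),
    ("wrong password",    ("admin", "oops"),   False),
    ("unknown user",      ("guest", "secret"), False),
]

# Execute each case, compare actual with expected, record the outcome.
results = []
for name, args, expected in test_cases:
    actual = login(*args)
    results.append((name, "PASS" if actual == expected else "FAIL"))

assert all(status == "PASS" for _, status in results)
```

A real test framework does this bookkeeping for you; any FAIL entry would then become a defect report for the development team.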

Tips for Testing Process and Methodologies

  • Select or design your testing process and methodologies before starting development. This will help you to align your testing goals with your business objectives and avoid rework later.
  • Choose your testing tools wisely based on your project needs, budget, and skills.
  • Follow best practices and standards for writing test cases, automated test scripts and test data. This helps you achieve consistency, readability and reuse of your test artifacts.
  • Perform testing at all test levels, from unit testing to system testing. This will help you detect defects earlier.
  • To test compatibility and scalability, execute tests in different test environments, such as local, remote, and cloud-based environments.
  • To get feedback quickly and incorporate changes easily, execute tests iteratively and incrementally throughout the software development lifecycle.
  • Perform testing collaboratively with your development team using Agile methodologies like Scrum or Kanban. This will help you to improve test process quality.

FAQ (interview questions and answers)

  1. What do you think are the main components of a test plan document?
    Test scope, test objectives, test strategy, test resources, test schedule, test risks, test deliverables and test environment.
  2. What are the types of test design techniques that you use?
    Equivalence partitioning, boundary value analysis, decision tables and state transition testing
  3. What are the phases of test execution?
    Test preparation: setting up the test environment, tools and data
    Test execution: running the test cases and test scripts on the software system
    Result analysis: comparison of the actual results with the expected results and recording the outcome of each test case execution
    Defect reporting: reporting any defects or issues that are found during test execution to the development team using the defect management process
  4. What are the factors that you should consider before reporting a defect?
    Reproducibility of the defect, severity and priority of the defect, impact and frequency of the defect, expected and actual results of the defect and evidence and details of the defect.
Remember to just comment if you have any doubts or queries.



May 18, 2023

User Acceptance Testing (Test Level)

Great job on starting a new lesson! After reading this lesson, click the Next 👉 button at the bottom right to continue to the next lesson.

User acceptance testing is a test level in software testing that is performed by the end users or the customers, or even independent testers to validate and accept the software system before deploying it in the production environment. User acceptance testing is done after unit, integration and system testing are done. User acceptance testing evaluates the functionality and quality of your software against business requirements, in real-world conditions.

User acceptance testing (UAT) examples

  • Alpha testing: It is a type of UAT that is done by the internal employees or testers of the software company at their own site.
  • Beta testing: It is a type of UAT that is done by the external users or customers at their own site. Beta testing is done after alpha testing.
  • Contract acceptance testing: It is a type of UAT that is ordered by the client to test if the software meets the contract or agreement specifications.
  • Regulation acceptance testing: It is a type of UAT that is done to test if the software complies with the applicable legal or regulatory standards.
  • Operational acceptance testing: It is a type of UAT that is done to test if the software can operate smoothly in the intended environment.

Tips for user acceptance testing

  • Define clear and measurable acceptance criteria for your software, before UAT begins.
  • Select skilled users who can represent end users.
  • Design realistic and relevant test scenarios, test cases, and test data. Include end-to-end (E2E) test cases.
  • Use appropriate methods for UAT, such as kick-off, triage, surveys, feedback forms, interviews, etc.
  • Report defects and issues and analyze them promptly.

FAQ (interview questions and answers)

  1. What is the difference between system testing and user acceptance testing?
    System testing is done by a team of testers who are independent of the development team to test your software as an entire system. UAT is done by the end users or the customers, or even independent testers, to validate and accept the software system before deploying it in the production environment. System testing is more detailed than UAT. UAT follows successful system testing.
  2. What are the benefits of UAT to your client?
    It increases customer confidence, because it is done by the end users who execute realistic tests in an environment that is exactly like or very similar to the production environment.
  3. What challenges do you face during UAT?
    It requires time and resources. It depends on the availability, skill level, and cooperation of users. Coordination with a large number of users is challenging.
  4. How do you measure the progress of UAT (user acceptance testing)?
    By measuring number of test cases executed, number of defects found and fixed, number of users involved and satisfied, number of requirements met and verified, etc.
Remember to just comment if you have any doubts or queries.


May 17, 2023

System Testing (Test Level)

Great job on starting a new lesson! After reading this lesson, click the Next 👉 button at the bottom right to continue to the next lesson.

System testing is a test level in software testing that evaluates the overall functionality and other quality attributes of your integrated software. It tests if the system meets the specified requirements and design. In system testing, you find out if the system is ready for user acceptance testing by the end-users. System testing is performed after integration testing and before user acceptance testing.

System testing examples

  • Functional testing: It tests the functionalities and features of the software. For example, testing the login functionality of a web application, or testing the website reports.
  • Performance testing: It tests the speed, scalability, stability and reliability of the software. For example, simulating multiple users on a mobile application and measuring its response time, throughput, and resource utilization.
  • Security testing: It tests the security of the software against various threats and attacks. For example, testing that the input fields do not accept code, or testing that the payment transactions are encrypted and secure.
  • Usability testing: It tests the user-friendliness and ease of use of the software. For example, testing that the user interface is intuitive and the work flow is logical.
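Real performance testing uses tools such as JMeter, but the core idea of the performance example above, measuring response time against a budget, can be sketched in a few lines (the operation and the 1-second budget are illustrative):

```python
import time

def fetch_report():
    """Hypothetical operation whose response time we measure."""
    time.sleep(0.01)  # stand-in for real work
    return "report"

# Measure one response time and fail the test if it exceeds the budget.
start = time.perf_counter()
result = fetch_report()
elapsed = time.perf_counter() - start

assert result == "report"
assert elapsed < 1.0, f"response took {elapsed:.3f}s, budget is 1.0s"
```

Load-testing tools repeat this measurement for many simulated users and report throughput and resource utilization as well.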

Tips for System Testing

  • Create tests that cover all the functional and non-functional requirements, technical specifications and quality standards.
  • Use the test environment that is exactly or very similar to the real-time production environment.
  • Use the standard tools in your project for test automation, user simulation, monitoring, etc.
  • After defect fixes or other changes to the test environment, perform regression testing to check for unwanted effects of the changes.
  • Report defects promptly.

FAQ (interview questions and answers)

  1. What is the difference between system testing and integration testing?
    System testing tests the functionality and other qualities (such as performance and security) of your entire software as a single system, while integration testing tests the interactions between individual components or modules of your software. System testing follows successful integration testing.
  2. What are some of the types of system testing?
    Functional testing, performance testing, security testing, compatibility testing, and usability testing.
  3. What is the primary purpose of system testing?
    Evaluate the integrated system against the specified requirements and design.
  4. Who owns system testing?
    The testers, who are not developers. These testers should have a good knowledge of the system requirements and design specifications, and the tools for test automation and monitoring the system.
Remember to just comment if you have any doubts or queries.

May 16, 2023

Integration Testing (Test Level)

Great job on starting a new lesson! After reading this lesson, click the Next 👉 button at the bottom right to continue to the next lesson.

Integration testing is a test level in software testing where individual units or components of your software system are tested together as a group. It focuses on testing the control and data flow between individual units or components of the software. For example, testing if the methods in the User class and the Home class work together correctly. Integration testing is typically performed after unit testing and before system testing.

Integration testing examples

  • Testing the interface between a login module and a user profile module. For example, testing if the login module validates the user credentials and allows the validated user to perform edits in the user profile page.
  • Testing the data exchange between a front-end application and a back-end database. For example, testing a form to submit data that can then be queried in the database.
  • Testing the interaction between a payment gateway and a bank API. For example, testing a token payment sent from the payment gateway to the bank.
  • Testing the functionality of a web service that integrates multiple microservices.

    Tips for integration testing

    • Use your test plan to review the scope, strategy, tools and schedule of integration testing.
    • Use test drivers to run the integration test.
    • Use mocking and stubbing to simulate the behavior of missing or incomplete components.
    • Use incremental testing approaches, such as top-down, bottom-up, or sandwich, to integrate and test components gradually.
    • Use tools and frameworks, such as JUnit, TestNG, or Selenium, to organize and run your integration tests.
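The stubbing tip above can be sketched as follows: a hypothetical PaymentService is integration-tested against a stub that stands in for a bank API that is not yet available (all names are illustrative):

```python
class BankApiStub:
    """Stub: returns a canned response instead of calling the real bank."""
    def charge(self, amount):
        return {"status": "approved", "amount": amount}

class PaymentService:
    def __init__(self, bank_api):
        self.bank_api = bank_api  # real client in production, stub in tests

    def pay(self, amount):
        response = self.bank_api.charge(amount)
        return response["status"] == "approved"

# Test driver: exercises PaymentService integrated with the stub.
service = PaymentService(BankApiStub())
assert service.pay(100) is True
```

When the real bank API component becomes available, the stub is swapped out and the same test is repeated against the real integration.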

    FAQ (interview questions and answers)

    1. What is the main difference between integration testing and system testing?
      Integration testing tests the interactions, such as interfaces, control flow, data flow and exception flow, and dependencies, between individual components or modules of your software, while system testing tests the functionality and performance of the entire software system as a single system.
    2. What are test drivers and test stubs?
      Test drivers call the component under test and pass input data to it, while test stubs are called by the component under test and return canned data to it. Test drivers and test stubs simulate the behavior of missing, incomplete or defective components during integration testing.
    3. Which one do you prefer out of incremental testing and big-bang testing?
      Incremental testing is an integration testing approach where components or modules are integrated and tested gradually, while big-bang testing is the approach where all components or modules are integrated and tested at once. Incremental testing has advantages over big-bang testing: it helps to find and fix defects earlier in the development process, it reduces the complexity of integration testing, and it allows integration testing of some components in parallel with development of other components. The final preference should also consider project size, complexity, and development methodology.
    4. What are some challenges of integration testing?
      It requires coordination among different teams or developers. Setting up the integration test environment may be challenging. Integration may depend on the availability and quality of external components or services. Integration testing may involve complex scenarios and data flows, and it may require more effort than unit testing. Maintaining documentation (integration test cases, defect reports, etc.) may require additional effort.
    Remember to just comment if you have any doubts or queries.

    Unit Testing (Test Level)

    Great job on starting a new lesson! After reading this lesson, click the Next 👉 button at the bottom right to continue to the next lesson.

    Unit testing is a test level in software testing that focuses on an individual unit or component of your software, for example, testing a login method or a calculate method in a class. The purpose of unit testing is to validate that each unit of the software meets the requirements and works as expected. Unit testing is typically performed early, before the code is integrated and the software is tested as a single system.

    Unit testing examples

    • Component testing: It is testing a component that provides a specific functionality, such as a button, a menu, or a form. For example, testing if a button changes color when clicked.
    • Module testing: It is testing a module that consists of multiple components that work together to provide a feature, such as a login module, a payment module, or a registration module. For example, testing if the login module validates the user credentials and redirects the validated user to the home page.
    • Mocking and stubbing: It is using mock objects or stubs to simulate the behavior of external dependencies, such as databases, APIs, or services, typically when these external dependencies are not yet available. For example, using a mock database to test a module that queries data from the database.
    • White-box testing: It is testing the internal structure and logic of the code, such as branches, loops, conditions, and statements. For example, testing if a function returns the correct output for different input values.
    • Black-box testing: It is testing the code while ignoring its internal structure and logic. For example, testing if a method throws an exception for invalid input.
    • Test-driven development (TDD): It involves writing unit tests before writing the code, and then writing the code to make the tests pass. For example, writing a test case for a function that calculates the factorial of a number, and then writing the function to pass the test.
    • Behavior-driven development (BDD): It involves writing unit tests in a natural language that describes the expected behavior of the code. For example, writing and running a test case for a method that checks if a user input is a question using the format: Given an input string, when I check if it is a question, then I should get true or false.
    • Coverage testing: It is measuring how much of the code is covered by the unit tests, such as statements, branches, functions, or lines. For example, using a tool to measure the percentage of code coverage by the unit tests.
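The TDD example above (a factorial function) looks like this in practice: the test is written first and fails until just enough code is written to make it pass. A minimal sketch:

```python
# Step 1 (TDD): write the test first.
def test_factorial():
    assert factorial(0) == 1
    assert factorial(1) == 1
    assert factorial(5) == 120

# Step 2: write just enough code to make the test pass.
def factorial(n):
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

test_factorial()  # passes once the implementation is in place
```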

    Tips for unit testing

    • Use the appropriate unit testing framework, such as JUnit, TestNG, or NUnit, to automate and organize your unit tests.
    • Give clear and descriptive test names and add comments to document your tests.
    • Use mock objects or stubs to isolate the unit under test from external dependencies.
    • Cover both positive and negative test cases, including edge cases and error conditions.
    • Run your unit tests frequently and fix any failures as soon as possible.
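Tip 3 above, isolating the unit under test with a mock, can be sketched with Python's unittest.mock; the UserService class and its database are hypothetical:

```python
from unittest.mock import Mock

class UserService:
    def __init__(self, db):
        self.db = db  # external dependency, mocked in the test below

    def display_name(self, user_id):
        row = self.db.find_user(user_id)
        return row["name"].title() if row else "Guest"

# The mock replaces the real database; we script its return value.
db = Mock()
db.find_user.return_value = {"name": "alice smith"}

service = UserService(db)
assert service.display_name(42) == "Alice Smith"
db.find_user.assert_called_once_with(42)  # the dependency was used as expected
```

Because the database is mocked, the test runs fast and fails only when the unit's own logic is wrong, not when the database is down.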

    FAQ (interview questions and answers)

    1. What is the main difference between unit testing and integration testing?
      Unit testing tests individual units or components in isolation, while integration testing tests multiple units or components working together.
    2. What is test-driven development (TDD)?
      It is a software development methodology that involves writing unit tests before writing the code, and then writing only the code to make those unit tests pass.
    3. What are the benefits of unit testing to you?
      It helps to find bugs early in the development cycle, and the unit tests provide documentation for the code.
    4. What are some challenges of unit testing?
      It requires effort to write and maintain good unit tests, it may not detect all the defects or cover all the scenarios, it may introduce false positives or negatives, and it may be limited because of the test data and tools.
    Remember to just comment if you have any doubts or queries.



    May 12, 2023

    Defect Management in Software Testing

    Great job on starting a new lesson! After reading this lesson, click the Next button at the bottom right to continue to the next lesson.

    Defect Management is the process of creating, categorizing, analyzing, resolving and tracking defect reports. It helps you organize and manage defect reports and report on their current status.

    Defect Management Example

    • The tester finds that the login button on their website does not work. He reports the defect with a unique auto-generated ID, summary, description, steps to reproduce, test data used, severity, priority, and screenshots.
    • The developer analyzes and fixes the defect, and marks it as resolved. She also provides the version of the software where the defect was fixed.
    • The tester verifies that the defect is fixed by retesting the login button on the website. He marks the defect as closed if it is fixed, or reopens it if it is not.
    • The test manager analyzes the defect data and generates reports on defect trends, root causes, and quality metrics.
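The defect report fields from the example above can be sketched as a simple Python data structure; the field names and status values are illustrative, not those of any particular defect tracking tool:

```python
from dataclasses import dataclass, field
from itertools import count

_ids = count(1)  # stands in for the tool's auto-generated defect IDs

@dataclass
class DefectReport:
    summary: str
    steps_to_reproduce: list
    severity: str   # e.g. Critical / Major / Minor / Trivial
    priority: str   # e.g. High / Medium / Low
    status: str = "New"
    defect_id: int = field(default_factory=lambda: next(_ids))

# The tester reports the defect...
d = DefectReport(
    summary="Login button does not work",
    steps_to_reproduce=["Open the home page",
                        "Enter valid credentials",
                        "Click the login button"],
    severity="Critical",
    priority="High",
)
d.status = "Resolved"  # ...the developer fixes it...
d.status = "Closed"    # ...and the tester verifies the fix.
```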

    Tips for Defect Management

    • Use the standard defect tracking tool to record and track defects throughout the software development lifecycle.
    • Use the standard defect report template to provide clear and consistent information about defects.
    • Prioritize defects based on their impact and urgency. Fix and re-test high-severity and high-priority defects first.
    • Communicate effectively with the developers to resolve defects quickly and efficiently.
    • Review the defect reports to learn how to prevent them from recurring in the future.

    FAQ (interview questions and answers)

    1. What is the difference between a bug and a defect?
      A bug is a deviation from the expected behavior of the software, while a defect is the underlying error or flaw in the code that causes that deviation.
    2. What are the common types of defects in software testing?
      Functional defects, performance defects, usability defects, security defects, compatibility defects, and user interface defects.
    3. In your experience, what are the common causes of bugs?
      Incomplete, ambiguous, or conflicting requirements; design flaws; coding errors; incorrect test data; human errors; and test environment issues.
    4. What are the benefits of defect management in software testing?
      Better organization, management, and reporting on the defect reports leading to team collaboration, and software quality visibility.
    Please comment below if you have any doubts or queries.


    Test Automation

    Great job on starting a new lesson! After reading this lesson, click the Next button at the bottom right to continue to the next lesson.

    Test automation uses software tools to create, execute, manage, and report software tests, replacing many manual tasks. It is often used for regression tests, which are repetitive by nature, and for both functional and non-functional testing.

    Test Automation Examples

    • A tester uses a tool to automate the testing of a web application’s user interface. He records the actions of clicking on buttons, entering test data, and validating outputs. He then replays the recorded script to test the application on different browsers and devices.
    • A tester uses a tool to automate the testing of a software's compatibility with different operating systems, browsers, and browser versions. He creates the test automation scripts in a programming language. He then executes the test scripts and analyzes the results.
    • A tester uses a tool to automate the testing of a mobile application’s performance. He sets up the parameters of load, concurrency, and duration. He then runs the test and monitors the response time, throughput, and resource utilization of the system.
    • A tester uses a tool to automate the testing of an e-commerce application’s security. He creates test cases to check for vulnerabilities such as SQL injection, cross-site scripting, and broken authentication. He then executes the test cases and reports the findings.
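The performance example above can be sketched with Python's standard library alone. The send_request function is a stub standing in for a real HTTP call, and the load parameters are made up:

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor

def send_request(_):
    """Stub for a real HTTP request; the sleep simulates server processing time."""
    started = time.perf_counter()
    time.sleep(random.uniform(0.01, 0.05))
    return time.perf_counter() - started

CONCURRENT_USERS = 10   # load parameters, as in the example above
REQUESTS_PER_USER = 5

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    response_times = list(pool.map(send_request,
                                   range(CONCURRENT_USERS * REQUESTS_PER_USER)))

avg = sum(response_times) / len(response_times)
print(f"{len(response_times)} requests, average response time {avg * 1000:.1f} ms")
```

A real load test tool would replace the stub with actual requests and also track throughput and resource utilization.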

    Tips for Test Automation

    • Evaluate and choose a suitable test automation tool that meets your software requirements, skill level, and budget.
    • Define a test automation strategy (in your test plan) that aligns with your scope, testing objectives, and resources.
    • Design and develop reusable, maintainable, and modular test scripts that follow automation coding standards and automation design patterns.
    • Execute test automation regularly and continuously as part of your software test life cycle.
    • Analyze test automation results and metrics to identify defects in test automation, and improvement opportunities.

    FAQ (interview questions and answers)

    1. In your experience, what are the benefits of test automation?
      It saves time, money, and resources. It increases test coverage, accuracy, and reliability. It supports Agile and DevOps methodologies.
    2. What are the challenges of test automation?
      It requires an initial investment and effort. It may not be able to test all aspects of software functionality. There may be technical limitations. It may need frequent maintenance and updates.
    3. What are the skills required for test automation?
      Coding skills, testing skills, tool skills, domain knowledge, analytical skills, problem-solving skills, communication skills, etc.
    Please comment below if you have any doubts or queries.

    May 11, 2023

    Test Execution in Software Testing

    Great job on starting a new lesson! After reading this lesson, click the Next button at the bottom right to continue to the next lesson.

    Test execution means running test cases on software to check whether the actual results match the expected results. It is an important phase of the software testing life cycle (STLC). It helps to assess the quality attributes of your software product or service.

    Test Execution Examples

    • A tester executes a test case to verify the login functionality of a web application. He enters valid credentials and clicks on the login button. He validates that he is redirected to the home page and his username is displayed correctly.
    • A tester executes a test case to check the performance of a mobile application. She uses a tool to simulate multiple users accessing the application at the same time. She measures the response time, throughput, and resource utilization of the application under different load conditions.
    • A tester executes a test case to validate the security of an e-commerce application. He tries to inject code into the input fields and observes if the application prevents it or not. He also tests that the payment transactions are encrypted and secure.
    • A tester executes a test case to verify the compatibility of a software application with different browsers, operating systems, and devices. She runs the application on various platforms and checks if it functions properly and consistently.

    Tips for Test Execution

    • Review the test plan before starting test execution. It defines the scope, objectives, approach, and resources for testing.
    • Use the test management tool to organize, execute, and track test cases. It helps to generate test reports and test metrics.
    • Follow the defect management process (in test plan) for logging and reporting defects. Use the standard defect tracking tool to assign, prioritize, re-test and close defects.
    • Execute test cases based on their priority (higher priority test cases first), dependencies, and risk level.
    • Execute regression test cases after fixing defects or making changes in your software.
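The execution-order tip above (higher priority first, then risk) can be sketched as a simple sort; the test case IDs, priorities, and risk ranks are illustrative:

```python
# Hypothetical test cases with a priority (1 = highest) and a risk level.
test_cases = [
    {"id": "TC-101", "priority": 2, "risk": "medium"},
    {"id": "TC-102", "priority": 1, "risk": "high"},
    {"id": "TC-103", "priority": 3, "risk": "low"},
    {"id": "TC-104", "priority": 1, "risk": "medium"},
]

RISK_RANK = {"high": 0, "medium": 1, "low": 2}

# Higher-priority test cases run first; ties are broken by risk level.
execution_order = sorted(test_cases,
                         key=lambda tc: (tc["priority"], RISK_RANK[tc["risk"]]))
print([tc["id"] for tc in execution_order])
# → ['TC-102', 'TC-104', 'TC-101', 'TC-103']
```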

    FAQ (interview questions and answers)

    1. What are the prerequisites for test execution?
      Test environment, software tester, test cases, test data, and test tools
    2. What types of test execution have you performed?
      Manual testing and automated testing
    3. What are the outcomes of test execution?
      Test results, test logs, defect reports (bug reports), test metrics, status report (during test execution) and test summary report (after test execution is complete).
    4. What are the factors that affect test execution?
      Complexity of the software application, detail in the test cases, skill and experience of the software tester, and schedule.
    Please comment below if you have any doubts or queries.


    October 17, 2018

    New live webinar - 8 mistakes companies make when transitioning to CI and CD

    You can view this webinar to learn the common mistakes to avoid when transitioning to continuous integration and continuous delivery (CI and CD). Three QA practitioners share these mistakes and tell you exactly how to resolve them.


    March 31, 2014

    What is Traceability Matrix?

    In software testing, an important document is called the Traceability Matrix (TM) or Requirements Traceability Matrix (RTM). This is the document that connects the requirements to the test cases. The mapping of requirements to test cases is many-to-many. This means that one requirement may be tested by one or more test cases. Conversely, one test case may address one or more requirements.

    If you don't understand the RTM, view the video, Requirements Traceability Matrix that explains the RTM with an example.
    Next, let us see some useful points about the Requirement Traceability Matrix.
    1. A well-designed TM has the Req Ids and Test Case Ids. However, it should not have any text from the requirements or test cases because it is just a mapping. The TM could also contain module/ component/ sub-system Ids against each Req Id (see point no. 9).

    2. A TM can be as simple as Req Ids on one axis and Test Case Ids on the other axis. For example, a TM implemented in MS Excel could have Req Ids in a single column (vertically) and Test Case Ids in multiple columns (horizontally). A symbol could mark which requirement maps to which test case.

    3. The TM should be created as early as possible in the project. It becomes tedious to create if there are already numerous requirements and test cases.

    4. The TM should be updated for every requirement change: whenever a new requirement is added, or an existing requirement is changed or deleted.

    5. The TM should be updated when a new test case is written. This update could be the final step of completing the test case. If an existing test case is updated or enhanced, the TM should be reviewed for accuracy. The TM should be updated if any test case is retired.

    6. One should be careful with workflow changes because they can impact multiple requirements and therefore multiple test cases.

    7. It is simpler to update the TM if the requirements and test cases are modular and contain no repetitions.

    8. The TM is only a document, which can become corrupted, especially if multiple people write to it in an uncontrolled way. Therefore, the TM should be stored in a revision control system with locking and backup/ restore features.

    9. If the TM contains module/ component/ sub-system Ids, it becomes simpler to identify the impacted modules whenever a requirement changes.

    10. Some project management or test management software provides automatic generation of the TM based on the requirements and test cases stored in the system. It is even possible to run queries against the TM because all its information lives in a database.
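The many-to-many mapping and the queries mentioned above can be sketched in Python with an ordinary dictionary; the requirement and test case IDs are illustrative:

```python
# Many-to-many mapping of requirement IDs to test case IDs.
rtm = {
    "REQ-1": ["TC-1", "TC-2"],  # one requirement, several test cases
    "REQ-2": ["TC-2"],          # TC-2 also addresses REQ-2
    "REQ-3": [],                # a coverage gap: no test case yet
}

# Forward query: which test cases cover REQ-1?
print(rtm["REQ-1"])  # → ['TC-1', 'TC-2']

# Reverse query: which requirements does TC-2 address?
covered_by_tc2 = [req for req, tcs in rtm.items() if "TC-2" in tcs]
print(covered_by_tc2)  # → ['REQ-1', 'REQ-2']

# Coverage check: requirements with no test cases at all.
gaps = [req for req, tcs in rtm.items() if not tcs]
print(gaps)  # → ['REQ-3']
```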

    Happy testing!

    March 19, 2014

    Example Test Strategy | Test Plan


    Test strategy is the plan (which may exist at any level, such as project, program, department or organization level) that describes how the test objectives will be met effectively with the available resources. If you have a test strategy, it is easier to focus effort on the most important test activities at any time. Moreover, a test strategy gives the project stakeholders clarity on the test approach. First, view my Test Strategy video. Then read on.
    Many readers have asked me for an example software test strategy document. I requested Varsha, a senior member of the Software Testing Space community, to create an example test strategy for a hypothetical agile project. First, view the video, Example Agile Test Strategy, Agile Test Plan. Then read on.
    Below is the resulting sample test strategy document. The sections contain detailed information, and additional guidelines are given in italics. I hope that this sample test strategy document helps you create a really effective test strategy for your own project. - Inder P Singh

    Example Test Strategy

    Introduction to Agile
    Agile is an iterative and incremental (evolutionary) approach to software development, performed in a highly collaborative manner by self-organizing teams within a control framework. High-quality, adaptive software is developed by small teams using the principles of continuous design improvement and testing, based on rapid feedback and change. Agile is people-centric: development and testing are performed in an integrated way, self-organizing teams encourage role interchangeability, the customer plays a critical role, and the project life cycle is guided by product features.

    How Agile is different from Waterfall model
    1. Greater collaboration
    2. Shorter work cycle and constant feedback
    3. Need to embrace change
    4. Greater flexibility
    5. Greater discipline
    6. The goal should be quality and not just speed
    7. Greater stakeholder accountability
    8. Greater range of skills
    9. Go faster and do more
    10. Courage
    11. Confidence in design

    Purpose of this document
    The purpose of this Test Strategy is to create a shared understanding of the overall targets, approach, tools and timing of test activities. Our objective is to achieve higher quality and shorter lead times with minimum overhead, frequent deliveries, close teamwork within the team and with the customer, continuous integration, short feedback loops and frequent changes of the design. The test strategy guides us past the common obstacles with a clear view of how to evaluate the system. Testing starts with the exploration of the requirements and what the customer really wants, by elaborating on the user stories from different perspectives. Testing becomes a continuous and integrated process in which all parties in the project are involved.
    Copyright © Software Testing Space

    Guiding standards
    • Shared Responsibility: Everyone in the team is responsible for quality.
    • Data Management: Production data must be analyzed before being used for testing.
    • Test Management: Test cases, code, documents and data must be treated with the same importance as the production system.
    • Test Automation: Attempt to automate all types of testing (unit, functional, regression, performance, security) as far as feasible.

    Requirements strategy
    1. Always implement highest priority work items first (Each new work item is prioritized by Product Owner and added to the stack).
    2. Work items may be reprioritized at any time or work items may be removed at any time.
    3. A module in greater detail should have higher priority than a module in lesser detail.

    Quality and Test Objectives
    • Accuracy (Must Have): Features and functions work as proposed (i.e. as per requirements). Measure and target: 100% completion of agreed features, with open
      - Severity 1 defects = 0
      - Severity 2 defects = 0
      - Severity 3 defects < 5
      - Severity 4 defects < 10
    • Integrity (Must Have): Ability to prevent unauthorized access, prevent information loss, protect from virus infection, and protect the privacy of data entered. Measure and target:
      - All access is via HTTPS (over a secured connection).
      - User passwords and session tokens are encrypted.
    • Maintainability (Must Have): Ease of adding features, correcting defects or releasing changes to the system. Measure and target:
      - Code duplication < 5%
      - Code complexity < 8
      - Unit test coverage > 80%
      - Method length < 20 lines
    • Availability (Should Have): Percentage of planned up-time that the system is required to operate. Measure and target: the system is available 99.99% of the time, measured through system logs.
    • Interoperability (Must Have): Ease with which the system can exchange information with other systems. Measure and target: the user interface renders and functions properly on the following browser versions (and later):
      1. IE 9.0
      2. Firefox 18.0
      3. Safari 5.0
      4. Chrome 11.0
    • Performance (Should Have): Responsiveness of the system under a given load and the ability to scale to meet growing demand. Measure and target:
      1. Apdex score > 0.9
      2. Response time < 200 ms
      3. Throughput > 100 per minute

    Test Scope (both business processes and the technical solution)
    In Scope
    Identify what is included in testing for this particular project. Consider what is new and what has been changed or corrected for this product release.
    • (Automated) Unit testing
    • Code analysis (static and dynamic)
    • Integration testing
    • (Automated) Feature and functional testing
    • Data conversion testing
    • System testing
    • (Automated) Security testing
    • Environment testing
    • (Automated) Performance and Availability testing
    • (Automated) Regression testing
    • Acceptance testing
    Out of Scope
    Identify what is excluded in testing for this particular project.

    Testing Types
    (Remove tools that will not be used.)
    • Unit testing: Testing that verifies the implementation of software elements in isolation. Example tools: xUnit test tools (NUnit, JUnit), mocking tools.
    • Code analysis (static and dynamic): Walkthrough and code analysis. Example tools: static analysis - Java: Checkstyle, FindBugs, Jtest, AgileJ Structure Views; .NET: FxCop, StyleCop, CodeRush; dynamic analysis - Avalanche, DynInst, BoundsChecker.
    • Integration testing: Testing in which software elements, hardware elements, or both are combined and tested until the entire system has been integrated. Example tools: VectorCAST C/C++.
    • Functional and feature testing: Testing an integrated hardware and software system to verify that the system meets the required functionality, with:
      - 100% requirements coverage
      - 100% coverage of the main flows
      - 100% of the highest risks covered
      - Operational scenarios tested
      - Operational manuals tested
      - All failures reported
      Example tools: UFT, Selenium WebDriver, Watir, Canoo WebTest, SoapUI Pro.
    • System testing: Testing the whole system with end-to-end flows. Example tools: Selenium, QTP, TestComplete.
    • Security testing: Verify secure access, transmission and password/session security. Example tools: BFB Tester, CROSS, Flawfinder, Wireshark, WebScarab, Wapiti, x5s, Exploit Me, WebSecurify, N-Stalker.
    • Environment testing: Testing on each supported platform/browser. Example tools: GASP, QEMU, KVM, Xen, PS tools.
    • Performance and availability testing: Load, scalability and endurance tests. Example tools: LoadRunner, JMeter, AgileLoad, WAPT, LoadUI.
    • Data conversion testing: Performed to verify the correctness of automated or manual conversions and/or loads of data in preparation for implementing the new system. Example tools: DTM, QuerySurge, PICT, Slacker.
    • Regression testing: Testing all the prior features and re-testing previously closed bugs. Example tools: QTP, Selenium WebDriver.
    • Acceptance testing: Testing based on acceptance criteria to enable the customer to determine whether or not to accept the system. Example tools: Selenium, Watir, iMacros, Agile Acceptance Test Tool.

    Test Design strategy
    1. Specification based / Black box techniques (Equivalence classes, Boundary value analysis, Decision tables, State Transitions and Use case testing)
    2. Structure based / white box techniques (Statement coverage, Decision coverage, Condition coverage and Multi condition coverage)
    3. Experience based techniques (Error guessing and Exploratory testing)
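As a small illustration of one black-box technique listed above, boundary value analysis for a numeric range can be sketched as follows; the age field and its 18-60 limits are a made-up requirement:

```python
def boundary_values(minimum, maximum):
    """Values just below, on, and just above each boundary of a numeric range."""
    return [minimum - 1, minimum, minimum + 1,
            maximum - 1, maximum, maximum + 1]

# Hypothetical requirement: an age field accepts 18 to 60 inclusive.
print(boundary_values(18, 60))  # → [17, 18, 19, 59, 60, 61]
```

The two values outside the range should be rejected by the software; the four inside (or on) the boundaries should be accepted.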

    Test Environments strategy
    • Development: This environment is local and specific to each developer/tester machine. It is based on the version/branch of source code being developed. Integration points are typically impersonated. Data setup: data and configuration are populated through setup scripts. Usage: unit, functional and acceptance tests; test tools (e.g. xUnit test tools such as NUnit and JUnit, mocking tools); source code management for version control.
    • Integration: This environment supports continuous integration of code changes and execution of unit, functional and acceptance tests. Additionally, static code analysis is completed in this environment. Data setup: data and configuration are populated through setup scripts. Usage: unit, functional and acceptance tests; static code analysis; continuous integration tools (e.g. CruiseControl).
    • Staging: This environment supports exploratory testing. Data setup: populated with post-analysis obfuscated production data. Usage: exploratory testing.
    • Production: Live environment. Data setup: new instances will contain standard project reference data; existing instances will have current data migrated into the environment. Usage: production verification testing.

    Test Execution strategy
    We will keep in mind the following points:
    1. Agile testing must be iterative.
    2. Testers cannot rely on having a complete specification.
    3. Testers should be flexible.
    4. They need to be independent and independently empowered in order to be effective.
    5. Be generalizing specialists.
    6. Be prepared to work closely with developers.
    7. Focus on value-added activities.
    8. Be flexible.
    9. Focus on what to test, not how to test.
    10. Testers should be embedded in the agile team.
    11. Be flexible to contribute in any way they can.
    12. Have a wide range of skills with one or more specialties.
    13. Keep feedback cycles short.
    14. Focus on sufficient and straightforward situations.
    15. Focus on exploratory testing.
    16. Specify the meaning of "Done", i.e. when activities/tasks performed during system development can be considered complete.
    17. Define when to continue or stop testing before delivering the system to the customer. Specify which evaluation criteria are to be used (e.g. time, coverage, and quality) and how they will be used.
    Additionally, use this section to describe the steps for executing tests in preparation for deployment/release/upgrade of the software. Key execution steps could include:
    1. Steps to build the system
    2. Steps to execute automated tests
    3. Steps to populate the environment with reference data
    4. Steps to generate test reports/code metrics


    Test Data Management strategy
    Use this section to describe the approach for identifying and managing test data. Consider the following guidelines:
    1. System and user acceptance tests – a subset of production data should be used to initialize the test environment.
    2. Performance and availability test – full size production files should be used to test the performance and volume aspects of the test.


    Test Automation strategy
    Adopt a planned approach to developing test automation, and increase the quality of the test automation code. Select the test cases for automation based on the following factors:
    • Risk
    • How long it takes to run the test manually
    • The cost of automating the test
    • How easy the test case is to automate
    • How many times the test is expected to run in the project
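The selection factors above can be combined into a rough score; the weights and the two candidate suites below are illustrative assumptions, not a standard formula:

```python
def automation_score(risk, manual_minutes, cost_hours, ease, expected_runs):
    """Crude benefit/cost ratio; the weights are arbitrary assumptions."""
    benefit = risk * 2 + manual_minutes * expected_runs / 60 + ease
    return benefit / cost_hours

candidates = {
    "smoke suite": automation_score(risk=5, manual_minutes=30,
                                    cost_hours=8, ease=4, expected_runs=100),
    "one-off migration check": automation_score(risk=2, manual_minutes=60,
                                                cost_hours=16, ease=2,
                                                expected_runs=2),
}

# A frequently repeated, risky suite scores far higher than a one-off check.
best = max(candidates, key=candidates.get)
print(best)  # → smoke suite
```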
    Test Management
    The test plan, test scenarios, test cases and bug reports should be in the same system, such as Bugzilla or Jira. Any agile tool can be used where user stories, the test plan, test scenarios, test cases and bug reports can be stored in the same place.

    Risks and Assumptions
    Risks and assumptions raised in the daily stand-up meeting (in front of all team members and the scrum master) should be logged and addressed immediately.

    Defect Management strategy
    Ideally, defects are only raised and recorded when they are not going to be fixed immediately. In this case, the conditions under which they occur and their severity need to be accurately recorded so that the defect can be easily reproduced and then fixed.

    Defect Classification
    • Critical: Defect causes critical loss of business functionality or a complete loss of service.
    • Major: Defect causes major impact to business functionality and there is no interim workaround available.
    • Minor: Defect causes minor impact to business functionality and there is an interim workaround available.
    • Trivial: Defect is cosmetic only and usability is not impacted.

    Defect Lifecycle
    • Identify Defect: Ensure the defect can be reproduced. Raise it in the defect tracking system.
    • Prioritize Defect: Based on severity, the defect is prioritized in the team backlog.
    • Analyze Defect: Based on analysis, determine the acceptance criteria and implementation details.
    • Resolve Defect: Implement changes and/or remediate failing tests.
    • Verify Resolution: Execute tests to verify that the defect is resolved and no regression is seen.
    • Close Defect: Close in the defect tracking system.
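The lifecycle steps above can be sketched as a small state machine that rejects any transition not in the table; the state names and the reopen transition are illustrative:

```python
# Allowed transitions between lifecycle states; anything else is rejected.
TRANSITIONS = {
    "Identified": {"Prioritized"},
    "Prioritized": {"Analyzed"},
    "Analyzed": {"Resolved"},
    "Resolved": {"Verified", "Analyzed"},  # verification may send it back
    "Verified": {"Closed"},
    "Closed": set(),
}

def advance(state, next_state):
    if next_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state} -> {next_state}")
    return next_state

state = "Identified"
for step in ["Prioritized", "Analyzed", "Resolved", "Verified", "Closed"]:
    state = advance(state, step)
print(state)  # → Closed
```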

    Specify the shared defect tracking system.
    Note: This example test strategy has been contributed by Varsha Tomar. Varsha has 9 years' experience in both manual and automated software testing. Currently, she works with Vinculum Solutions as a Senior Test Lead. Her interests include software testing, test automation, training, testing methodologies and exploring testing tools.

    Please put any questions that you have in the comments.

    January 14, 2014

    Software Defect Prevention

    As software testers, we focus on defect detection. In this post, let us see different ways to prevent defects in software. Preventing defects is preferred to detecting defects and removing them. The following guidelines may take us out of our comfort zone. However, they help us connect better with other team members (developers, designers and analysts).
    1. Keep the requirements as simple as possible.
    2. Review the requirements for clarity, completeness, no conflicts and testability.
    3. Use design standards.
    4. Review the software design.
    5. Keep the software code as simple as possible.
    6. Use coding standards.
    7. Unit test the code.
    8. Review the code.
    9. Refactor code to make it simple to understand.
    10. Always communicate well within the team about requirements, design and code.

    September 27, 2013

    No Time to Test

    There was a thought-provoking discussion in the Software Testing Space LinkedIn group this month. The problem posed by Amit (many thanks for raising an important practical problem) was thus.

    Problem: There is only one QA engineer for a large team of developers. The rate at which the developers build new features is far greater than the rate at which the QA engineer can test them. How would one ensure that high quality is maintained in the application?

    Here are the solutions to this common problem that were proposed by the expert group members:
    1. Involve the developers in the team for some testing.
    2. At first, test the high business priority features and then the high-risk features. Perform a regression test whenever time permits.
    3. Prioritize all features to be tested. Test the new features and important bug fixes first.
    4. Raise the problem to the team highlighting the limited test coverage due to lack of time. Make sure that the team understands and accepts the risk.
    5. Use test automation to automate sanity test and regression test cases. Build the test automation framework so that developers can easily build automated test scripts.
    6. Merge test cases or write workflow-based test cases that allow more coverage and take less effort to write and execute.
    7. Use Requirement and Risk based testing approach by defining the testing scope based on priority, impact and timelines.
    Overall, the members agreed that the best solutions were nos. 1, 4 and 5. Michael neatly summarized the solution. Many thanks, Michael.

    Solution: Ensure that the team is aware of the limitations of time and resources for testing new features, bug fixes and regression testing. The team needs to understand the risk due to these limitations. Always do your best. Ask for and accept help from any team member with your testing tasks.

    April 11, 2011

    How to do end to end exhaustive testing?

    Exhaustively testing a software application (except maybe a very simple program a few lines long) may well be an impossible task due to the large number of:

    1. All possible inputs
    2. All possible input validations
    3. All possible logic paths within the application
    4. All possible outputs
    5. All possible sequences of operations
    6. All possible sequences of workflows
    7. All possible speeds of execution
    And all the above applies to just a single user
    8. All combinations of types of users
    9. All possible number of users
    10. All possible lengths of time each user may operate the application
    And so on (we have not even touched the types of test environments on which the tests could be run).
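A quick calculation shows why the list above makes exhaustive input testing infeasible; the field counts below are made up but modest:

```python
# A form with 8 independent fields, each with only 10 distinct values,
# already has 10**8 input combinations.
fields, values_per_field = 8, 10
combinations = values_per_field ** fields
print(combinations)  # → 100000000

# At one test per second, non-stop, running all of them would take:
years = combinations / (60 * 60 * 24 * 365)
print(f"about {years:.1f} years")  # → about 3.2 years
```

Real applications have far more fields and values, which is why techniques such as equivalence partitioning and boundary value analysis are used instead.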

    However, it is possible to exhaustively execute your test suite using the following tips:

    1. Your test suite should have test cases covering each documented requirement. Here my assumption is that each requirement is documented clearly.
    2. The test cases should be specific, concise and efficient. Each test case should have clear and unambiguous steps and expected results.
    3. The configuration data, input test data and output test data should be clearly specified.
    4. You should have a clean and stable test environment in which to execute your test suite.
    5. In a perfectly working application, it should be possible to execute each test case in the suite.
    6. Each confirmed bug (found during testing or found by the client) should result in a new test case being written or an existing test case being updated.
    7. Important: You should not assume the correctness and completeness of your test suite by yourself. Review of the test suite by peers, business people, managers, clients and users may provide valuable inputs to correct it.
    8. Discipline in maintaining your test suite and executing it would go a long way in preventing bugs leaked to the clients/ users of your application.

    November 07, 2010

    Planning and execution problem solved

    I like doing my testing tasks really well. In fact, I prefer not starting a task to doing it in a sloppy way. Now, this created a problem whenever I was working on a project. There were many tasks to perform every day. In the past, performing each task used to take a lot out of me. First, I used to understand the task and then estimate its priority (to schedule it accordingly). Then, I planned the approach to perform the task and identified each sub-task that needed to be done. I performed each sub-task and tracked my progress frequently. After completing each sub-task, I reviewed it. Then, I reviewed the entire original task, first with respect to each sub-task and then with respect to the original objectives. Finally, I communicated my task completion to the relevant people.

    Why did I end up expending a huge mental and physical effort in performing each task? Other than my desire to really shine at the task, I found that I repeated a lot of planning that I had done in the past on similar tasks. Attempting to excel at my task is welcome. It gives me the satisfaction that I am not doing the task mechanically. However, re-thinking everything is definitely overkill. Not required in a majority of tasks. Definitely not desirable in every iteration of the task.

    I needed to solve this problem. That is, I had to balance my wish to do really well with my need for speedy execution. Here is how I created my solution to this problem. First, I identified a couple of non-trivial tasks that I had to repeat. In my case, I selected 1) review of test cases and 2) creation of the monthly test results. Later on, I realized that this solution can be applied to any other repeated non-trivial tasks, such as 1) design test cases, 2) execute test cases and 3) log bug reports. Second, I made available to myself an ample time slot when I would work on nothing else but the selected task. Third, I started working on the task. But there was an important difference from the prior executions this time. I documented my approach, making notes as I went along the task. For example, starting the test cases review, I listed each item that I had to plan, e.g. whether to do a high-level, detailed, or hybrid review; the sequence in which to review the test cases; and the level of detail in my review comments. While actually performing the reviews, I listed each sub-task as I remembered it, e.g. check that each requirement in scope of the test case is covered; check that each design is covered in the test case; check for incompleteness in steps, expected results, or test data; check for duplication within a test case or among two or more test cases.

    Finally, I ended up with a nice long list of sub-tasks (with comments) that I needed to perform to do justice to my original task. Some sub-tasks were linked to understanding the task, some to planning, several to execution and the others to the review itself. Now, every time I faced this task, I did not go about it in my earlier way. I got my list out, looked at the sub-tasks and started executing them. Whenever I remembered a missing sub-task, I simply inserted it at the appropriate place in the list. This practical list was really my own task execution procedure.
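    Such a reusable sub-task list can be sketched as a tiny data structure. The following is only an illustration of the idea, not anything from the original post; the class, method names and sub-task texts are all hypothetical.

```python
class Procedure:
    """A personal task execution procedure: an ordered list of sub-tasks."""

    def __init__(self, name, subtasks=None):
        self.name = name
        self.subtasks = list(subtasks or [])

    def insert_subtask(self, index, subtask):
        # A newly remembered sub-task is inserted at the right place,
        # not appended blindly to the end.
        self.subtasks.insert(index, subtask)

    def relevant_subtasks(self, is_relevant=lambda s: True):
        # Skip, but never delete, sub-tasks irrelevant to this iteration.
        return [s for s in self.subtasks if is_relevant(s)]


review = Procedure("Review of test cases", [
    "Check each in-scope requirement is covered",
    "Check each design is covered in the test case",
    "Check steps, expected results and test data for incompleteness",
    "Check for duplication within and across test cases",
])
review.insert_subtask(0, "Decide on a high-level, detailed or hybrid review")

# This iteration has no design documents, so design checks are skipped
# (the sub-task stays in the list for future iterations).
todo = review.relevant_subtasks(lambda s: "design" not in s)
```

    The key design point mirrors the precautions below: irrelevant sub-tasks are filtered out per iteration, never removed from the master list.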

    How does this help you? If you are like me and want to perform due diligence on your important tasks, your own procedure will help you do just that. You will no longer need to rely on your memory alone. The second benefit is that it will reduce the planning effort required. You will just look at each sub-task in your procedure, decide if it is relevant to the current situation and, if so, perform it. The third benefit is that your procedure will keep you focused and on track. If you enhance your procedure with every iteration of your task, you get the fourth benefit, which is improved execution ability. You will get the fifth benefit when you share your procedure with your team members. They will improve their own execution ability by benefiting from your procedure. In turn, they are likely to give you feedback that will help you enhance your procedure even more. Your procedure will also be a good means of transferring execution knowledge to new team members.

    If you are excited about creating your own procedures in order to excel at tasks, you should take a few precautions:
    1. Do not get over-ambitious and try to create procedures for every task that you perform. If you do, your work will slow down to a crawl and you may be overwhelmed quickly.
    2. Write procedures only for regular AND non-trivial tasks. Do not spend hours writing a procedure for a task that you will perform once in six months (the project situation may be quite different by then). Do not write a procedure with obvious steps, e.g. you already know that you have to fill in each relevant field in the bug report form before you submit a bug report.
    3. Once you have a procedure ready, do not delete a sub-task even if you find that it is not required in the current iteration. That sub-task may very well turn out to be important in a subsequent iteration.

    If you work in a process-oriented organization, you will find that documented procedures are available only for the major software testing tasks. In other organizations, you will find that you only have high-level industry guidelines and standards to follow. In both cases, you will find your own procedures very helpful. How else do you think high performers are able to produce substantial results in half the time that you take?

    October 17, 2010

    Test Strategy - How to define and implement it?

    On October 14, I attended a web talk by Alan Page, along with several others. The topic of Alan's session was Test Strategy. I would like to list the points that I saw and heard Alan make before adding my own observations:
    1. Consider the context before creating your test strategy. It is useful to consider your own situation in terms of your team's composition, their current skills, their desired skills and other goals. For example, it may be okay to communicate the test strategy verbally within a small team of, say, up to 20 people. However, when you have a large team, it becomes useful to document the test strategy and distribute it so that everyone is on the same page.
    2. After considering your context, the next step is fact-finding and assessment. This helps you answer questions like: how is testing done at present, how would it be different in the future, would other parameters change, and how could the team change to meet the future requirements?
    3. A useful way of clarifying your thoughts is to map your facts to goals. What is your current state (fact) and what is your desired state (goal)?
    4. The journey from your Current state to Desired state may not be a straight jump but a series of steps. However, each step should aid the transition away from the Current state and towards the Desired state.
    5. Once the strategy is in place, just take the desired actions. Track and review the progress and adjust course if required.
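    The core of points 3 to 5 can be sketched as a small data structure: a fact (Current state) maps to a goal (Desired state) through a series of intermediate steps, and progress is tracked against that step list. This is a hypothetical sketch; the states and step names below are made-up examples, not from Alan's talk.

```python
# Map the Current state (fact) to the Desired state (goal) via steps.
strategy = {
    "current": "manual regression run once per release",
    "desired": "automated regression run nightly",
    "steps": [
        "Automate the smoke tests",
        "Automate the top-priority regression cases",
        "Schedule the automated suite to run nightly",
    ],
}


def next_step(strategy, completed):
    """Return the first pending step, or None once the Desired state is reached."""
    for step in strategy["steps"]:
        if step not in completed:
            return step
    return None


# Track and review progress, adjusting course as steps complete.
pending = next_step(strategy, completed={"Automate the smoke tests"})
```

    Each step moves the team away from the Current state and towards the Desired state; when no steps remain pending, it may be time to assess whether the Desired state has been institutionalized.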
    It was a clear and well thought-out presentation. You can view the talk here. It should take you about 30 minutes to listen to it. Now for my questions and comments.
    1. Each action (even the tiniest one) taken in an organization should contribute to the organization's objectives positively. How does the test strategist ensure that each step outlined in the test strategy maps to the organization's objectives and ultimately to its vision? A test strategist should be keenly aware of their organization's business objectives. Further, the test strategist should be aware of other factors such as the current customer experience, competition and the direction the industry is moving.
    2. Implementing a test strategy in a sizeable team is no mean task. Other than piloting actions and showing supporting data to other team members, what are the ways to smooth the implementation of a test strategy? It may require sessions to explain the test strategy to each team member, arranging and executing any training they may need, and providing the supporting processes and tools that help the team take action to move to the Desired state. Explaining what is in it for them, recognizing good performers and championing the test strategy may also help attain buy-in from the team members.
    3. How does the test strategist know that they have arrived and it is time for the next strategy? By ascertaining whether the Desired state has been institutionalized (data consistently points to the Desired state, team members discuss the Desired state as the Current state, and team members have become a little complacent).

    August 31, 2010

    Why must you know your product's competitors?

    Why does a customer purchase (a license for) your product? More likely than not, the primary reason boils down to one of the following:
    1. Getting something new e.g. increased productivity or increased efficiency or increased resources
    2. Overcoming a risk e.g. miscommunication or failing to meet statutory requirements

    However, your product may not be the only solution available in the market to satisfy the customer's primary requirement. If your product is not well-known, it has to compete with the leading products in its category. If your product is the market leader, it may have to compete with products catering to specific niche markets. Even if yours is a one-of-a-kind product, there may be a proven manual system that it has to compete with.

    When the customer evaluates or first uses your product, it is no leap of imagination to think that s/he would be actively comparing it with its competitors. If you test software, you can ill afford to ignore your product's competitors. Software testing should include not only testing against your organization's or customer's requirements, but also testing how the product functions with respect to its competitors.

    Knowing the product's competitors is not the prerogative of product managers alone. As software testers, we pretend to be customers using the product. Therefore, just like customers, we should be aware of the alternative products. Only then will we come to know how our product functions on its own and how it functions with respect to its peers.