Software Testing Framework: Tools

The document discusses various software development models and tools for testing automation. It provides details on: 1) Popular software development models like the waterfall model, prototyping model, and rapid application development model. 2) Tools for test automation like QTP, WinRunner, JMeter, LoadRunner, and Selenium. 3) Software testing frameworks like Bugzilla, Testopia, and Test Director for defect tracking and test case development.


Tools

Windows Apps: QTP, WinRunner (Mercury)
Web-based Apps: JMeter, LoadRunner, Microsoft Application Center Test
Open Source Tools: BugZilla (defect tracking), Watir, Sahi, Selenium
Performance Testing: LoadRunner, Badboy
Test Case Development: Testopia, Test Director
Build Automation: CruiseControl.NET, NAnt
Automated Code Inspection: FxCop

Software Testing Framework

Test Automation Focus

Popular Software Development Models

The following are some basic, popular models that are adopted by many software development firms:

A. System Development Life Cycle (SDLC) Model
B. Prototyping Model
C. Rapid Application Development Model
D. Component Assembly Model

A. System Development Life Cycle (SDLC) Model

This is also known as the Classic Life Cycle Model, the Linear Sequential Model, or the Waterfall Method. This model has the following activities.

1. System/Information Engineering and Modeling

As software is always part of a larger system (or business), work begins by establishing the requirements for all system elements and then allocating some subset of these requirements to software. This system view is essential when the software must interface with other elements such as hardware, people and other resources. The system is the basic and very critical requirement for the existence of software in any entity, so if the system is not in place, it should be engineered and put in place. In some cases, to extract the maximum output, the system should be re-engineered and spruced up. Once the ideal system is engineered or tuned, the development team studies the software requirements for the system.

2. Software Requirement Analysis

This process is also known as a feasibility study. In this phase, the development team visits the customer and studies their system. They investigate the need for possible software automation in the given system.

By the end of the feasibility study, the team furnishes a document that holds the specific recommendations for the candidate system. It also includes personnel assignments, costs, the project schedule, target dates, etc. The requirement gathering process is then intensified and focused specifically on software. To understand the nature of the program(s) to be built, the system engineer or "analyst" must understand the information domain for the software, as well as the required function, behavior, performance and interfacing. The essential purpose of this phase is to find the need and to define the problem that needs to be solved.

3. System Analysis and Design

In this phase, the software development process, the software's overall structure and its nuances are defined. In terms of client/server technology, the number of tiers needed for the package architecture, the database design, the data structure design and so on are all defined in this phase. A software development model is thus created. Analysis and design are very crucial in the whole development cycle; any glitch in the design phase could be very expensive to solve at a later stage of software development, so much care is taken during this phase. The logical system of the product is developed in this phase.

4. Code Generation

The design must be translated into a machine-readable form. The code generation step performs this task. If the design is done in a detailed manner, code generation can be accomplished without much complication. Programming tools like compilers, interpreters and debuggers are used to generate the code. Different high-level programming languages like C, C++, Pascal and Java are used for coding; the right programming language is chosen with respect to the type of application.

5. Testing

Once the code is generated, software program testing begins. Different testing methodologies are available to unravel the bugs that were committed during the previous phases. Various testing tools and methodologies are already available, and some companies build their own testing tools, tailor-made for their own development operations.

6. Maintenance

The software will definitely undergo change once it is delivered to the customer. There can be many reasons for this change, for example unexpected input values into the system. In addition, changes in the system could directly affect the software's operation. The software should be developed to accommodate changes that could happen during the post-implementation period.

B. Prototyping Model

This is a cyclic version of the linear model. In this model, once the requirement analysis is done and the design for a prototype is made, the development process gets started. Once the prototype is created, it is given to the customer for evaluation. The customer tests the package and gives feedback to the developer, who refines the product according to the customer's exact expectations. After a finite number of iterations, the final software package is given to the customer. In this methodology, the software evolves as a result of periodic shuttling of information between the customer and the developer. This is the most popular development model in the contemporary IT industry. Most successful software products have been developed using this model, as it is very difficult (even for a whiz kid!) to comprehend all the requirements of a customer in one shot.
There are many variations of this model, skewed with respect to the project management styles of different companies. New versions of a software product evolve as a result of prototyping.

C. Rapid Application Development (RAD) Model

The RAD model is a linear sequential software development process that emphasizes an extremely short development cycle. It is a "high speed" adaptation of the linear sequential model in which rapid development is achieved by using a component-based construction approach. Used primarily for information systems applications, the RAD approach encompasses the following phases:

1. Business modeling

The information flow among business functions is modeled in a way that answers the following questions: What information drives the business process? What information is generated? Who generates it? Where does the information go? Who processes it?

2. Data modeling

The information flow defined as part of the business modeling phase is refined into a set of data objects that are needed to support the business. The characteristics (called attributes) of each object are identified and the relationships between these objects are defined.

3. Process modeling

The data objects defined in the data modeling phase are transformed to achieve the information flow necessary to implement a business function. Processing descriptions are created for adding, modifying, deleting, or retrieving a data object.

4. Application generation

The RAD model assumes the use of RAD tools like VB, VC++, Delphi, etc., rather than creating software using conventional third-generation programming languages. The RAD model works to reuse existing program components (when possible) or create reusable components (when necessary). In all cases, automated tools are used to facilitate construction of the software.

5. Testing and turnover

Since the RAD process emphasizes reuse, many of the program components have already been tested. This minimizes the testing and development time.

D. Component Assembly Model

Object technologies provide the technical framework for a component-based process model for software engineering. The object-oriented paradigm emphasizes the creation of classes that encapsulate both data and the algorithms used to manipulate the data. If properly designed and implemented, object-oriented classes are reusable across different applications and computer-based system architectures. The Component Assembly Model leads to software reusability: the integration/assembly of already existing software components accelerates the development process. Nowadays many component libraries are available on the Internet, and if the right components are chosen, the integration aspect is made much simpler.

THE SDLC WATERFALL

Small to medium database software projects are generally broken down into six stages: Project Planning, Requirements Definition, Design, Development, Integration & Test, and Installation & Acceptance. The following are the stages in the waterfall model:

System Requirements: This is the initial stage of the project, where end-user requirements are gathered and documented.
System Design: In this stage detailed requirements, screen layouts, business rules, process diagrams, pseudo code and other documentation are prepared. This is the first step in the technical phase.
Implementation: Based on the design documents, the actual code is written here.
Integration and Testing: All the pieces are brought together and tested. Bugs are removed in this phase.
Acceptance, Installation and Deployment: This is the final stage, where the software is put into production and runs the actual business.
Maintenance: This is the least glamorous phase, and it runs forever. Code changes, corrections, additions, etc. are done in this phase.

(B) How is a project management plan document normally organized?

The PMP document forms the bible of a project. It normally has these sections:
* Project summary
* Project organization hierarchy
* WBS / activity list to be performed, with schedule
* Work product identification (in short, who will do what)
* Project schedule (Gantt chart or PERT chart)
* Estimated cost and completion
* Project requirements
* Risk identification
* Configuration management section
* Quality section
* Action item status

(B) What is the difference between SITP and UTP in testing?

UTP (Unit Test Plan) testing is done at the smallest unit level, in stand-alone mode.
For example, suppose you have a Customer module and an Invoicing module. You test the Customer and Invoice modules independently. But later, when you want to test customer and invoice together as one set, you integrate them and test them; that is SITP (System Integration Test Plan). A UTP can be executed using a unit testing framework such as NUnit. Unit testing is normally done by developers, while system testing is normally done by the testing department in integration mode. A minimal unit test sketch is shown below.
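To make this concrete, here is a minimal sketch of a UTP-level test written with JUnit 4 (the Java counterpart of the NUnit framework mentioned above). The Invoice class is a hypothetical stand-in for the real module; the point is only that the module is exercised stand-alone, with no Customer module involved.

import org.junit.Assert;
import org.junit.Test;

// Hypothetical class under test: a minimal Invoice with a line-item total.
class Invoice {
    private double total = 0.0;

    void addLineItem(double amount) {
        if (amount < 0) {
            throw new IllegalArgumentException("amount must be non-negative");
        }
        total += amount;
    }

    double getTotal() {
        return total;
    }
}

// UTP level: the Invoice module is tested in stand-alone mode,
// with no Customer module or database involved.
public class InvoiceTest {

    @Test
    public void totalSumsAllLineItems() {
        Invoice invoice = new Invoice();
        invoice.addLineItem(100.0);
        invoice.addLineItem(50.0);
        Assert.assertEquals(150.0, invoice.getTotal(), 0.001);
    }

    @Test(expected = IllegalArgumentException.class)
    public void negativeAmountIsRejected() {
        Invoice invoice = new Invoice();
        invoice.addLineItem(-10.0);
    }
}

An SITP-level test would instead wire the real Customer and Invoice modules together and exercise them through their integration points.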

(I) What are the metrics followed in project management? Twist: what metrics will you look at in order to see whether the project is moving successfully? Most metric sets deal with a variation of these attributes and are chosen to help project managers gain insight into their product (size, software quality, rework), process (rework, software quality) and project (effort, schedule). A broader classification is given below.

Project Management Metrics
* Milestone metrics: number of milestones; number of proved requirements per milestone; controlling level metrics
* Risk metrics: probability of resource availability; probability of requirements validity; risk indicators (long schedules, inadequate cost estimating, excessive paperwork, error-prone modules, canceled projects, excessive schedule pressure, low quality, cost overruns, creeping user requirements, excessive time to market, unused or unusable software, unanticipated acceptance criteria, hidden errors); application risk metrics
* Workflow metrics: walkthrough metrics; traceability metrics; variance metrics
* Controlling metrics: size of control elements; structure of control elements; documentation level; tool application level
* Management database metrics: data quality metrics; management data complexity; data handling level (performance metrics); visualization level; safety and security metrics

Quality Management Metrics
* Customer satisfaction metrics: characteristics size metrics; characteristics structure metrics; empirical evaluation metrics; data presentation metrics
* Review metrics: number of reviews in the process; review level metrics; review dependence metrics; review structure metrics; review resources metrics
* Productivity metrics: actual vs. planned metrics; performance metrics; productivity vs. quality metrics
* Efficiency metrics: time behavior metrics; resources behavior metrics; actual vs. planned metrics
* Quality assurance metrics: quality evaluation metrics; error prevention metrics; measurement level; data analysis metrics

Configuration Management Metrics
* Change control metrics: size of change; dependencies of changes; change interval metrics; revisions metrics
* Version control metrics: number of versions; number of versions per customer; version differences metrics; releases metrics (version architecture); data handling level

Unit Testing

Starting from the bottom, the first test level is "Unit Testing". It involves checking that each feature specified in the "Component Design" has been implemented in the component. In theory an independent tester should do this, but in practice the developer usually does it, as developers are the only people who understand how a component works. The problem with a component is that it performs only a small part of the functionality of a system, and it relies on co-operating with other parts of the system, which may not have been built yet. To overcome this, the developer builds (or uses) special software that tricks the component into believing it is working in a fully functional system. A sketch of this idea follows below.
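As an illustration of that "special software", here is a minimal sketch of a stub. The PaymentGateway and OrderProcessor types are hypothetical: the stub stands in for a collaborator that has not been built yet, so the component under test believes it is running inside a fully functional system.

// The component under test depends on this collaborator,
// which may not have been built yet.
interface PaymentGateway {
    boolean charge(String accountId, double amount);
}

// Stub: a fake collaborator that always succeeds, so the
// component can be tested in isolation.
class PaymentGatewayStub implements PaymentGateway {
    @Override
    public boolean charge(String accountId, double amount) {
        return true;
    }
}

// The component under test, wired to the stub instead of the real gateway.
class OrderProcessor {
    private final PaymentGateway gateway;

    OrderProcessor(PaymentGateway gateway) {
        this.gateway = gateway;
    }

    boolean placeOrder(String accountId, double amount) {
        return gateway.charge(accountId, amount);
    }
}

public class StubDemo {
    public static void main(String[] args) {
        OrderProcessor processor = new OrderProcessor(new PaymentGatewayStub());
        System.out.println(processor.placeOrder("ACC-1", 99.0)); // true, via the stub
    }
}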

Integration Testing

As the components are constructed and tested, they are then linked together to check whether they work with each other. It is common for two components that have each passed all their tests to produce, once connected, one new component full of faults. These tests can be done by specialists, or by the developers. Integration Testing is not focused on what the components are doing, but on how they communicate with each other, as specified in the "System Design". The "System Design" defines the relationships between components. The tests are organized to check all the interfaces, until all the components have been built and interfaced to each other, producing the whole system.

System Testing

Once the entire system has been built, it has to be tested against the "System Specification" to check whether it delivers the features required. It is still developer focused, although specialist developers known as system testers are normally employed to do it. In essence, System Testing is not about checking the individual parts of the design, but about checking the system as a whole; in effect, the system is one giant component. System testing can involve a number of specialist types of test to see whether all the functional and non-functional requirements have been met. In addition to the functional requirements, these may include the following types of testing for the non-functional requirements:

Performance - Are the performance criteria met?
Volume - Can large volumes of information be handled?
Stress - Can peak volumes of information be handled?
Documentation - Is the documentation usable for the system?
Robustness - Does the system remain stable under adverse circumstances?

Acceptance Testing

Acceptance Testing checks the system against the "Requirements". It is similar to system testing in that the whole system is checked, but the important difference is the change in focus:

Systems testing checks that the system that was specified has been delivered; Acceptance Testing checks that the system delivers what was requested. The customer, and not the developer, should always do acceptance testing: the customer knows what is required from the system to achieve value in the business and is the only person qualified to make that judgment. This testing answers the question of whether the software is delivered as defined by the customer. It is like getting a green flag from the customer that the software is up to expectations and ready to be used.

[TD is a test management tool. WR is a testing tool with which we can test the functionality of an application; it can be used for functionality and regression testing. LR is a performance testing tool; we can do load, stress and volume testing with it, by which we can know the performance of the application. The advantages of TD are that it becomes easy to manage the testing process, tests can be run on the application by launching WR or QTP unattended, tests can be scheduled, and bug tracking becomes easy.]
[Test Director is an automation-related tool, but primarily a test management tool, whereas WinRunner is a functional testing tool and LoadRunner is a performance testing tool. The main advantage of Test Director is that you can connect to other tools from it and test your application. Test Director is also a Mercury Interactive product.]

(A) What is CMMI? It is a collection of instructions an organization can follow with the purpose of gaining better control over its software development process.

(A) What are the five levels in CMMI? According to the SEI, the five levels are:
Level 1 - Initial
Level 2 - Repeatable
Level 3 - Defined
Level 4 - Managed
Level 5 - Optimizing

Testing methods where the user is not required:

Functional Testing: The software is tested against its functional requirements. The tests are written in order to check whether the application behaves as expected.
Stress Testing: The application is tested against heavy load, such as complex numerical values, a large number of inputs, a large number of queries, etc., to check how much stress/load the application can withstand.
Load Testing: The application is tested against heavy loads or inputs, such as the testing of web sites, in order to find out at what point the web site/application fails or at what point its performance degrades.
Ad-hoc Testing: This type of testing is done without any formal test plan or test case creation. Ad-hoc testing helps in deciding the scope and duration of the other kinds of testing, and it also helps testers learn the application before starting any other testing.
Exploratory Testing: This testing is similar to ad-hoc testing and is done in order to learn/explore the application.
Usability Testing: This is also called testing for user-friendliness. It is done when the user interface of the application is an important consideration and needs to be tailored to a specific type of user.
Smoke Testing: This is also called sanity testing and is done to check whether the application is ready for further major testing and works properly, without failing, up to the least expected level.
Recovery Testing: Recovery testing is done to check how fast and how well the application can recover from any type of crash, hardware failure, etc. The type or extent of recovery is specified in the requirement specifications.
Volume Testing: Volume testing tests the efficiency of the application. A huge amount of data is processed through the application under test in order to check the extreme limitations of the system.
Regression Testing: Regression testing is any type of software testing that seeks to uncover software errors by partially retesting a modified program. The intent is to provide a general assurance that no additional errors were introduced in the process of fixing other problems.

Testing where the user plays a role/is required:

User Acceptance Testing: The software is handed over to the user in order to find out whether it meets the user's expectations and works as expected.
Alpha Testing: The users are invited to the development center, where they use the application and the developers note every particular input or action carried out by the user. Any abnormal behavior of the system is noted and rectified by the developers.
Beta Testing: The software is distributed as a beta version to the users, and the users test the application at their sites. As the users explore the software, any exceptions/defects that occur are reported to the developers.
Automated Testing: Manual testing is a time-consuming process. Automation testing involves automating a manual process: test automation is the process of writing a computer program, in the form of scripts, to do testing that would otherwise need to be done manually. Some popular automation tools are WinRunner, Quick Test Professional (QTP), LoadRunner, SilkTest and Rational Robot. The automation tools category also includes test management tools such as TestDirector and many others.

Software Testing Artifacts

The software testing process can produce various artifacts, such as:

Test Plan: A test specification is called a test plan. A test plan is documented so that it can be used to verify and ensure that a product or system meets its design specification.
Traceability Matrix: A table that correlates requirements or design documents to test documents. It verifies that the test results are correct and is also used to change tests when the source documents are changed.
Test Case: Test cases and software testing strategies are used to check the functionality of the individual components that are integrated to give the resultant product. These test cases are developed with the objective of judging the application for its capabilities or features.

Test Data: When multiple sets of values or data are used to test the same functionality of a particular feature in a test case, the test values and changeable environmental components are collected in separate files and stored as test data.
Test Script: A test script is the combination of a test case, a test procedure and test data.
Test Suite: A test suite is a collection of test cases; a minimal sketch follows below.
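For illustration, in JUnit 4 a test suite simply names the test case classes it collects. CustomerTest and InvoiceTest here are assumed, hypothetical test classes (InvoiceTest matching the unit test sketch given earlier), so this fragment compiles only alongside them.

import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// A test suite: a collection of test cases run as one unit.
@RunWith(Suite.class)
@Suite.SuiteClasses({ CustomerTest.class, InvoiceTest.class })
public class RegressionSuite {
}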

Software Testing Process

The software testing process is carried out in the following sequence, in order to find faults in the software system:

1. Create Test Plan
2. Design Test Case
3. Write Test Case
4. Review Test Case
5. Execute Test Case
6. Examine Test Results
7. Perform Post-mortem Reviews
8. Budget after Experience

Here is a sample test case for you:

Software Test Case for a Login Page

Purpose: The user should be able to go to the Home page.

Pre-requisites:
1. The software should be compatible with the operating system.
2. The login page should appear.
3. User Id and Password textboxes should be available, with appropriate labels.
4. Submit and Cancel buttons, with appropriate captions, should be available.

Test Data: The required list of variables and their values should be available, e.g. User Id: {valid UserId, invalid UserId, empty}; Password: {valid, invalid, empty}.

Each test case below lists its serial number, test case id and name, steps/action, and expected results.

1. TC1 - Checking user interface requirements.
Steps/Action: The user views the page to check whether it includes UserId and Password textboxes with appropriate labels, and expects that Submit and Cancel buttons are available with appropriate captions.
Expected Results: The screen displays the user interface requirements according to the user.

2. TC2 - The textbox for UserId should: i) allow only alphabetic characters {a-z, A-Z}; ii) not allow special characters like {'$','#','!','~','*',...}; iii) not allow numeric characters like {0-9}.
Steps/Action: i) The user types numbers into the textbox. ii) The user types alphabetic data into the textbox.
Expected Results: i) An error message is displayed for numeric data. ii) The text is accepted when the user enters alphabetic data into the textbox.

3. TC3 - Checking functionality of the Password textbox: i) the textbox for Password should accept more than six characters; ii) data should be displayed in encrypted format.
Steps/Action: i) The user enters only two characters in the password textbox. ii) The user enters more than six characters in the password textbox. iii) The user checks whether the data is displayed in encrypted format.
Expected Results: i) An error message is displayed when the user enters fewer than six characters in the password textbox. ii) The system accepts the data when the user enters more than six characters in the password textbox. iii) The system accepts the data in encrypted format, else it displays an error message.

4. TC4 - Checking functionality of the 'SUBMIT' button.
Steps/Action: i) The user checks whether the 'SUBMIT' button is enabled or disabled. ii) The user clicks on the 'SUBMIT' button and expects to view the 'Home' page of the application.
Expected Results: i) The system displays the 'SUBMIT' button as enabled. ii) The system is redirected to the 'Home' page of the application as soon as the user clicks on the 'SUBMIT' button.

5. TC5 - Checking functionality of the 'CANCEL' button.
Steps/Action: i) The user checks whether the 'CANCEL' button is enabled or disabled. ii) The user checks whether the textboxes for UserId and Password are reset to blank by clicking on the 'CANCEL' button.
Expected Results: i) The system displays the 'CANCEL' button as enabled. ii) The system clears the data in the UserId and Password textboxes when the user clicks on the 'CANCEL' button.
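A test case like this can also be automated with one of the web testing tools listed at the top of this document. Below is a hedged sketch of TC4 using Selenium WebDriver in Java; the URL and element ids are hypothetical and would have to match the real login page, and a ChromeDriver binary is assumed to be available.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginPageTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            // Hypothetical URL and element ids; adjust to the application under test.
            driver.get("http://localhost:8080/login");

            WebElement userId = driver.findElement(By.id("userId"));
            WebElement password = driver.findElement(By.id("password"));
            WebElement submit = driver.findElement(By.id("submit"));

            // TC4 step i: the SUBMIT button should be enabled.
            if (!submit.isEnabled()) {
                throw new AssertionError("SUBMIT button should be enabled");
            }

            // TC4 step ii: log in and expect to land on the Home page.
            userId.sendKeys("validUser");
            password.sendKeys("validPassword");
            submit.click();

            if (!driver.getTitle().contains("Home")) {
                throw new AssertionError("Expected to be redirected to the Home page");
            }
        } finally {
            driver.quit();
        }
    }
}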

What is a Bug Life Cycle?

The duration or time span between the first time a bug is found (status: New) and the time it is closed successfully (status: Closed), rejected, postponed or deferred is called the Bug/Error Life Cycle. (Right from the first time a bug is detected till the point when it is fixed and closed, it is assigned various statuses: New, Open, Postpone, Pending Retest, Retest, Pending Reject, Reject, Deferred, and Closed. For more information about the statuses used during a bug life cycle, refer to the article "Software Testing - Bug & Statuses Used During A Bug Life Cycle".)

There are seven different life cycles that a bug can pass through:

Cycle I:
1) A tester finds a bug and reports it to the Test Lead.
2) The Test Lead verifies whether the bug is valid.
3) The Test Lead finds that the bug is not valid, and the bug is Rejected.

Cycle II:
1) A tester finds a bug and reports it to the Test Lead.
2) The Test Lead verifies whether the bug is valid.
3) The bug is valid and is reported to the development team with status New.
4) The development leader and team verify whether it is a valid bug. The bug is invalid and is marked with the status Pending Reject before being passed back to the testing team.
5) After getting a satisfactory reply from the development side, the test leader marks the bug as Rejected.

Cycle III:
1) A tester finds a bug and reports it to the Test Lead.
2) The Test Lead verifies whether the bug is valid.
3) The bug is valid and is reported to the development team with status New.
4) The development leader and team verify whether it is a valid bug. The bug is valid, and the development leader assigns a developer to it, marking the status as Assigned.
5) The developer solves the problem, marks the bug as Fixed, and passes it back to the development leader.
6) The development leader changes the status of the bug to Pending Retest and passes it on to the testing team for retest.
7) The test leader changes the status of the bug to Retest and passes it to a tester for retest.
8) The tester retests the bug and it is working fine, so the tester closes the bug and marks it as Closed.

Cycle IV:
Steps 1) through 7) are the same as in Cycle III.
8) The tester retests the bug and the same problem persists, so after confirmation from the test leader the tester reopens the bug, marks it with the Reopen status, and passes it back to the development team for fixing.

Cycle V:
1) A tester finds a bug and reports it to the Test Lead.
2) The Test Lead verifies whether the bug is valid.
3) The bug is valid and is reported to the development team with status New.
4) The developer tries to verify the bug but fails to replicate the scenario that existed at the time of testing, and asks the testing team for help.
5) The tester also fails to regenerate the scenario in which the bug was found, so the developer rejects the bug, marking it Rejected.

Cycle VI:
1) After confirmation that the data or certain functionality is unavailable, the solution and retest of the bug are postponed for an indefinite time, and the bug is marked as Postponed.

Cycle VII:
1) If the bug is not important and can be, or needs to be, postponed, then it is given the status Deferred.
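Since the cycles above are paths through a single state machine, the statuses and their allowed transitions can be modeled directly. The sketch below is a simplified model of Cycles I through IV; a real tracker such as Bugzilla or Test Director defines its own workflow.

import java.util.EnumSet;
import java.util.Map;

// Bug statuses from the life cycle described above.
enum BugStatus {
    NEW, ASSIGNED, FIXED, PENDING_RETEST, RETEST, REOPEN,
    PENDING_REJECT, REJECTED, POSTPONED, DEFERRED, CLOSED
}

public class BugLifeCycle {
    // Allowed transitions (simplified).
    private static final Map<BugStatus, EnumSet<BugStatus>> TRANSITIONS = Map.of(
        BugStatus.NEW, EnumSet.of(BugStatus.ASSIGNED, BugStatus.PENDING_REJECT,
                                  BugStatus.POSTPONED, BugStatus.DEFERRED),
        BugStatus.ASSIGNED, EnumSet.of(BugStatus.FIXED),
        BugStatus.FIXED, EnumSet.of(BugStatus.PENDING_RETEST),
        BugStatus.PENDING_RETEST, EnumSet.of(BugStatus.RETEST),
        BugStatus.RETEST, EnumSet.of(BugStatus.CLOSED, BugStatus.REOPEN),
        BugStatus.REOPEN, EnumSet.of(BugStatus.ASSIGNED),
        BugStatus.PENDING_REJECT, EnumSet.of(BugStatus.REJECTED)
    );

    static boolean canMove(BugStatus from, BugStatus to) {
        return TRANSITIONS.getOrDefault(from, EnumSet.noneOf(BugStatus.class)).contains(to);
    }

    public static void main(String[] args) {
        System.out.println(canMove(BugStatus.NEW, BugStatus.ASSIGNED));  // true  (Cycle III)
        System.out.println(canMove(BugStatus.RETEST, BugStatus.REOPEN)); // true  (Cycle IV)
        System.out.println(canMove(BugStatus.CLOSED, BugStatus.FIXED))// false: a closed bug stays closed here
        ;
    }
}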

Measuring Software

There arises a need to measure software both while it is under development and after the system is ready for use. Though it is difficult to measure such an abstract attribute, it is essential to do so: elements that cannot be measured are difficult to control. There are some important uses of measuring the software:

Software metrics help in avoiding pitfalls such as cost overruns, in identifying where a problem has arisen, and in clarifying goals. They answer questions such as: What is the estimate for each process activity? How good is the quality of the code that has been developed? How can the code under development be improved? Metrics also help in judging the quality of the software, in cost and effort estimation, in data collection, and in productivity and performance evaluation.

Some of the common software metrics are:

* Code coverage
* Cyclomatic complexity
* Cohesion
* Coupling
* Function point analysis
* Execution time
* Source lines of code
* Bugs per line of code
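To make one of these metrics concrete: cyclomatic complexity counts the linearly independent paths through a piece of code, which for structured code works out to the number of decision points plus one. A small, hypothetical example:

// Cyclomatic complexity = number of decision points + 1.
// This method has three decision points (one loop and two if's),
// so its cyclomatic complexity is 4.
public class ComplexityExample {
    static int classify(int[] values) {
        int positives = 0;
        for (int v : values) {            // decision point 1
            if (v > 0) {                  // decision point 2
                positives++;
            }
        }
        if (positives == values.length) { // decision point 3
            return 1; // all values positive
        }
        return 0;
    }
}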

In short, the measurement of software is for understanding, controlling and improving the software system. Software is subject to change with respect to changing environmental conditions, varying user requirements, and configuration and compatibility issues. This gives rise to the development of newer and updated versions of the software. But there should be some means of getting back to older versions easily and working on them efficiently, and testers play a vital role in this. Here is where change management comes into the picture.

What is a Test Case?

A test case is a set of conditions or variables and inputs that are developed for a particular goal or objective to be achieved on a certain application, to judge its capabilities or features. It might take more than one test case to determine the true functionality of the application being tested. Every requirement or objective to be achieved needs at least one test case. Some software development methodologies, like the Rational Unified Process (RUP), recommend creating at least two test cases for each requirement or objective: one for testing from a positive perspective and the other from a negative perspective.

Test Case Structure

A formal written test case comprises three parts (sketched as a data structure below):

1. Information: general information about the test case. It incorporates the identifier, the test case creator, the test case version, the name of the test case, the purpose or a brief description, and the test case dependencies.

2. Activity: the actual test case activities. It contains information about the test case environment, activities to be done at test case initialization, activities to be done after the test case is performed, step-by-step actions to be taken while testing, and the input data to be supplied for testing.

3. Results: the outcomes of a performed test case. Results data consist of information about the expected results and the actual results.
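For illustration, the three-part structure maps naturally onto a simple data type. A minimal sketch using Java records, with hypothetical field choices:

import java.util.List;

// Hypothetical types mirroring the three parts described above.
record Information(String id, String creator, String version, String name,
                   String purpose, List<String> dependencies) {}

record Activity(String environment, List<String> setupSteps,
                List<String> testSteps, List<String> inputData) {}

record Results(String expected, String actual) {}

record TestCase(Information information, Activity activity, Results results) {}

public class TestCaseExample {
    public static void main(String[] args) {
        TestCase tc = new TestCase(
            new Information("TC1", "tester", "1.0", "Login UI check",
                            "Verify login page layout", List.of()),
            new Activity("Windows / Chrome", List.of("Open login page"),
                         List.of("Inspect textboxes and buttons"), List.of()),
            new Results("Labels and buttons displayed correctly", "")
        );
        System.out.println(tc.information().name()); // accessor generated by the record
    }
}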

Designing Test Cases

Test cases should be designed and written by someone who understands the function or technology being tested. A test case should include the following information:

* Purpose of the test
* Software and hardware requirements (if any)
* Specific setup or configuration requirements
* A description of how to perform the test(s)
* Expected results or success criteria for the test

Designing test cases can be time consuming within a testing schedule, but they are worth the time because they can avoid unnecessary retesting or debugging, or at least reduce it. Organizations can take the test case approach in their own context and according to their own perspectives: some follow a general step-wise approach, while others opt for a more detailed and complex approach. It is very important to decide between the two extremes and judge what would work best for you. Designing proper test cases is vital for your software testing plans, as a lot of bugs, ambiguities, inconsistencies and slip-ups can be caught in time, and it also saves time on continuous debugging and re-testing.

A spreadsheet was created in order to estimate and calculate the duration of tests and the testing costs. It was based on the following formulas: Testing working days = (Development working days) / 3. Testing engineers = (Development engineers) / 2. Testing costs = Testing working days * Testing engineers * (daily cost per person). As this process only plays with numbers, it was not necessary to record anywhere how the estimation was obtained. Estimation here means the testing time and resources required for testing. A worked example of these formulas is shown below.
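A worked example of these rule-of-thumb formulas, with hypothetical inputs (60 development working days, 6 development engineers, and a daily cost of $400 per person):

// Worked example of the estimation formulas above; all inputs are hypothetical.
public class TestEstimation {
    public static void main(String[] args) {
        double devDays = 60;
        double devEngineers = 6;
        double dailyCostPerPerson = 400.0;

        double testDays = devDays / 3.0;           // 60 / 3 = 20 testing working days
        double testEngineers = devEngineers / 2.0; // 6 / 2 = 3 testing engineers
        double testCost = testDays * testEngineers * dailyCostPerPerson; // 20 * 3 * 400 = $24,000

        System.out.printf("days=%.0f engineers=%.0f cost=$%.0f%n",
                          testDays, testEngineers, testCost);
    }
}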

There are different kinds of approaches to estimating the test effort:
* Percentage-of-Development Approach
* Implicit Risk Context Approach
* Metrics-Based Approach
* Test Work Breakdown Approach
* Iterative Approach

Quality assurance, or QA for short, refers to a program for the systematic monitoring and evaluation of the various aspects of a project, service, or facility to ensure that standards of quality are being met. Testing is the process of examining an application to ensure it fulfills the requirements for which it was designed and meets quality expectations. More importantly, testing ensures the application meets customer expectations. Unit testing describes the process of testing modular sections of an application; integration testing covers testing the combination of two or more sections of an application; regression testing describes the process of retesting an application following implementation changes.

Testing Cycle

Requirements analysis: Testing should begin in the requirements phase of the software development life cycle. During the design phase, testers work with developers in determining what aspects of a design are testable and with what parameters those tests work.
Test planning: Test strategy, test plan and testbed creation. Since many activities will be carried out during testing, a plan is needed.
Test development: Test procedures, test scenarios, test cases, test datasets and test scripts to use in testing the software.
Test execution: Testers execute the software based on the plans and test documents, then report any errors found to the development team.
Test reporting: Once testing is completed, testers generate metrics and make final reports on their test effort and on whether or not the software tested is ready for release.
Test result analysis: Also called defect analysis, this is done by the development team, usually along with the client, in order to decide which defects should be treated: fixed, rejected (i.e. the software is found to be working properly) or deferred to be dealt with later.
Defect retesting: Once a defect has been dealt with by the development team, it is retested by the testing team. Also known as resolution testing.
Regression testing: It is common to have a small test program built from a subset of tests, run for each integration of new, modified, or fixed software, in order to ensure that the latest delivery has not broken anything and that the software product as a whole is still working correctly.
Test closure: Once the test meets the exit criteria, activities such as capturing the key outputs, lessons learned, results, logs and documents related to the project are archived and used as a reference for future projects.

Sanity testing checks whether the build is stable enough to test. Regression testing is the re-execution of test cases on a modified build.
