Wednesday, June 22, 2011

Automation Testing - Software Testing Tools

Automation testing is testing software with the help of a tool: automation software testing involves using a tool to write and execute test cases/scripts. Computers (tools) are fast, reliable, capable of multitasking, and they do not require coffee breaks. But the point of concern is that "tools cannot think." Your automated test cases/scripts are only as good as your manual test cases. Tools convert manual test cases into executable scripts, nothing else.
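For example, a manual test case such as "open the home page and verify its title" might be converted into an executable script with a tool like Selenium WebDriver. The sketch below assumes the Selenium Java bindings, a hypothetical URL and a hypothetical expected title; it is an illustration only, not a script from any real project.

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class HomePageTitleTest {
    public static void main(String[] args) {
        // Hypothetical application URL and expected title, used only for illustration.
        String url = "http://www.example.com";
        String expectedTitle = "Example Domain";

        WebDriver driver = new FirefoxDriver();     // launch a browser session
        try {
            driver.get(url);                        // step 1: open the page
            String actualTitle = driver.getTitle(); // step 2: read the page title
            if (expectedTitle.equals(actualTitle)) {
                System.out.println("PASS: title matches");
            } else {
                System.out.println("FAIL: expected '" + expectedTitle
                        + "' but found '" + actualTitle + "'");
            }
        } finally {
            driver.quit();                          // always close the browser
        }
    }
}
```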

There are various commercial and non-commercial (open source) tools available in the market today. Here is a list of automation tools commonly found in the testing domain.

                                      

Functional test automation: Quick Test Professional, Rational Functional Tester, WebAii Design Campus, Silk Test, Selenium

Performance/load testing: JMeter, LoadRunner, WebLoad, Visual Studio Team System, Rational Performance Tester

Test management: TestLink, Quality Center, Rational Test Manager

Defect tracking: Bugzilla, Rational ClearQuest, Mantis







General Guidelines for ISTQB Exam Preparation: Pattern and Syllabus

  1. 40 Questions
  2. 65% pass mark
  3. 75 minutes
  4. Multiple choice questions
  5. Only one correct answer
  6. No negative marking
  7. Paper/Pen based exam. Darken the appropriate circle
  8. Not online
  9. All questions from syllabus
  10. Read the syllabus very carefully
  11. Question breakup
    1. 50% K1 (remember, recall type)
    2. 30% K2 (Compare, contrast type)
    3. 20% K3 (Analyze, apply type. Numerical type as well)
    4. See syllabus for details on K levels
Syllabus Breakup for ISTQB Preparation Material
The principles of testing
Terminology; why testing is necessary; fundamental test process; psychology of testing; re-testing and regression testing; expected results; prioritization.
Testing throughout the life-cycle
Models for testing; economics of testing; high level test planning; acceptance testing; integration testing in the large; functional and non-functional system testing; integration testing in the small; component testing; maintenance testing.
Static testing
Reviews and the test process; types of review; static analysis.
Test design techniques
Dynamic testing techniques; black-box and white-box testing techniques; error guessing, boundary value analysis (BVA), etc.
Test management
Organization; configuration management; test estimation, monitoring and control; incident management; standards for testing.
Tool support for testing
Types of CAST tool (Computer-Aided Software Testing); tool selection and implementation.
Number of questions expected from each chapter:
  • The principles of testing: 7
  • Testing throughout the life-cycle: 6
  • Static testing: 3
  • Test design techniques: 12
  • Test management: 8
  • Tool support for testing: 4

Monday, June 20, 2011

The STLC Process in Real Time

Software Testing Life Cycle:
Software testing life cycle or STLC refers to a comprehensive group of testing-related actions, specifying the details of every action along with the best time to perform it. There cannot be a single standardized testing process across organizations; however, every organization involved in the software development business defines and follows some sort of testing life cycle.
STLC by and large comprises the following seven sequential phases:

1) Planning of Tests
2) Analysis of Tests
3) Designing of Tests
4) Construction & Verification of Tests
5) Execution of Testing Cycles
6) Performance Testing & Documentation
7) Actions after Implementation
Every company follows its own software testing life cycle to suit its own requirements, culture & available resources. The software testing life cycle can’t be viewed in isolation; rather, it interacts with every phase of the Software Development Life Cycle (SDLC). The prime focus of the software testing life cycle is on managing & controlling all software testing activities. Testing may be manual or automated using some tool.
1) Planning of Tests:
In this phase a senior person such as the project manager plans & identifies all the areas where testing effort needs to be applied, while operating within the boundaries of constraints like resources & budget. Unless judicious planning is done in the beginning, the result can be catastrophic, with the emergence of a poor quality product that leaves the ultimate customer dissatisfied. Planning is not limited to the initial phase; rather, it is a continuous exercise extending till the end.
During the planning stage, the team of senior level persons comes out with an outline of Testing Plan at High Level. The High Level Test Plan comprehensively describes the following:
  • Scope of Testing : Defining the areas to be tested, identification of features to be covered during testing 
  • Identification of Approaches for Testing: Identification of approaches including types of testing
  • Defining Risks: Identification of different types of risks involved with the decided plan
  • Identification of resources: Identification of resources like manpower, materials & machines which need to be deployed during testing
  • Time schedule: A schedule for performing the planned testing, aimed at delivering the end product as per the commitment made to the customer.

    Involvement of software testers begins in the planning phase of the software development life cycle. During the design phase, testers work with developers in determining what aspects of a design are testable and with what parameters those tests will work.
2) Analysis of Tests:
Based upon the High Level Test Plan Document, further details covering the following are worked out:
  • Identification of Types of Testing to be performed during various stages of Software Development Life Cycle.
  • Identification of extent to which automation needs to be done. 
  • Identification of the time at which automation is to be carried out. 
  • Identification of documentation required for automated testing
The software project can’t be successful unless there is frequent interaction among the various teams involved in coding & testing, with the active involvement of the Project Managers, Business Analysts or even the customer. Any deficiencies in the decided test plans come to the surface during such meetings of cross-functional teams. This provides an opportunity to rethink & refine the strategies decided for testing.
Based upon the customer requirements a detailed matrix for functional validation is prepared to cover the following areas:
  • Ensure that each & every business requirement is covered by at least one test case.
  • Identification of the test cases best suited to the automated testing
  • Identification of the areas to be covered for performance testing and stress testing
  • Carry out detailed review of documentation covering areas like Customer Requirements, Product Features & Specifications and Functional Design etc.
3) Designing of Tests:
This phase involves the following:
  • Further polishing of various Test Cases, Test Plans
  • Revision & finalization of Matrix for Functional Validation.
  • Finalization of risk assessment methodologies. 
  • In case automation is to be adopted, identification of test cases suitable for automation.
  • Creation of scripts for Test cases decided for automation. 
  • Preparation of test data.
  • Establishing Unit testing Standards including defining acceptance criteria 
  • Revision & finalization of testing environment.
4) Construction & Verification of Tests:
This phase involves the following:
  • Finalization of test plans and test cases
  • Completion of script creation for test cases decided for automation.
  • Completion of test plans for Performance testing & Stress testing.
  • Providing technical support to the code developers in their effort directed towards unit testing.
  • Bug logging in bug repository & preparation of detailed bug report. 
  • Performing Integration testing followed by reporting of defects detected if any.
5) Execution of Testing Cycles:
This phase involves the following:
  • Completion of test cycles by executing all the test cases until a predefined stage is reached or no more errors are detected.
  • This is an iterative process involving execution of Test Cases, Detection of Bugs, Bug Reporting, Modification of test cases if felt necessary, Fixing of bugs by the developers & finally repeating the testing cycles.
6) Performance Testing & Documentation:
This phase involves the following:
  • Execution of test cases pertaining to performance testing & stress testing.
  • Revision & finalization of test documentation 
  • Performing Acceptance testing, load testing followed by recovery testing
  • Verification of the software application by simulating conditions of actual usage.
7) Actions after Implementation:
This phase involves the following:
  • Evaluation of the entire process of testing. 
  • Documentation of TGR (Things Gone Right) & TGW (Things Gone Wrong) reports. Identification of approaches to be followed in the event of occurrence of similar defects & problems in the future.
  • Creation of comprehensive plans with a view to refine the process of Testing.
  • Identification & fixing of newly discovered errors on a continuous basis.



Sunday, June 19, 2011

The Interview Guide for Testers

Abstract


Software Testing is a discipline that requires varied skills. Interviewing software testers for recruitment is not the same as interviewing for other software engineering disciplines. This paper aims at uncovering the essential elements that interviewers and interviewees need to be aware of during software testing interviews. In my earlier paper, I discussed the essential skills that a tester needs to possess: Understanding, Listening, Observation, Test Planning, Test Designing, Test Execution, Defect Reporting and Analysis, and Test Automation. From the interviewer's perspective, this paper discusses how to evaluate software testers in an interview. From the interviewee's perspective, it discusses the assertive skills that a tester needs in order to get through the interview.


Evaluation of Understanding and Listening Skills
The first and foremost activity of Software Testing is to understand the system requirements of the software to be tested. The key references for these system requirements in most projects are the formal software requirements specification documents, software functional specification documents or the use case documents. In order to evaluate the tester’s skill in understanding these formal documents, a sample formal requirement specification document may be provided to the interviewee. The interviewer may request the interviewee to read/understand the requirements and explain the same. The interviewee shall read/understand the requirements and explain them to the interviewer without any discrepancies or ambiguity. If provided with any ambiguous requirements, the interviewee shall identify them and seek clarification from the interviewer. If provided with missing requirements, the interviewee shall provide justification as to why he/she feels that a few requirements are missing and get them defined by the interviewer. If the interviewee finds any difficulty in understanding the requirements, he/she may get them clarified by the interviewer. The interviewer shall welcome the assertive communication skills of the interviewee.


Software Testing cannot be performed based on “assumed requirements” and all the requirements shall be explicitly defined (except for implicit requirements, which cannot be defined). The interviewee may emphasize this fundamental software testing standpoint during the interview.


Requirement Example: Let us take the case of the classical triangle software to be tested. The triangle software requires 3 positive integer inputs, which are the lengths of the 3 sides of the triangle (say A, B and C). The software evaluates the following logical expression: A+B>C && B+C>A && C+A>B. If this logical expression evaluates to True, the software displays the status as green in the system console, indicating that the inputs are valid lengths of the sides of a triangle; otherwise it indicates that the inputs are invalid.
The interviewer may provide the interviewee with a document explaining the above mentioned requirement, with a few modifications to suit his/her need. To evaluate the understanding/listening skills of the interviewee, the logical expression may not be mentioned in the document. When the interviewee is asked to read and understand the requirement, he/she shall raise this as a missing-requirement issue to the interviewer. The interviewee shall not assume anything about how the software validates the inputs and explain a new set of expressions such as C^2=A^2+B^2 (which applies only to right-angled triangles) to the interviewer. "Defining Requirements" is not always a goal of Software Testing; the interviewee may emphasize this fundamental software testing standpoint during the interview. If the interviewee does not understand any of the sections in the requirement document, or if the requirements are ambiguous, he/she shall get them clarified by the interviewer. The interviewer shall welcome those clarifications and shall not discourage the interviewee with respect to the clarifications sought. When the interviewee is asked to explain the software functionality, he/she shall explain it without any discrepancies.
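To make the triangle requirement concrete, here is a minimal sketch of the validation logic described above; the class and method names are illustrative and not part of any requirement document:

```java
public class TriangleValidator {

    /** Returns true when A, B and C can be the side lengths of a triangle. */
    public static boolean isValidTriangle(int a, int b, int c) {
        // All sides must be positive integers.
        if (a <= 0 || b <= 0 || c <= 0) {
            return false;
        }
        // The logical expression from the requirement: A+B>C && B+C>A && C+A>B.
        return (a + b > c) && (b + c > a) && (c + a > b);
    }

    public static void main(String[] args) {
        // "green" corresponds to valid inputs in the requirement.
        System.out.println(isValidTriangle(3, 4, 5)); // true  -> green
        System.out.println(isValidTriangle(1, 2, 3)); // false -> invalid inputs
    }
}
```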


Test Planning and Designing Skills
Test Planning is an activity that focuses on establishing the pathway for all the software testing activities. The software tester shall keep his/her knowledge abreast of the latest industry trends in executing software testing projects. The interviewer may request the interviewee to explain current industry hot topics/news and have a small talk about the same. The interviewer should not concentrate exclusively on asking straightforward questions such as “What is Regression Testing?”, “What is Defect Severity?”, “What is Functional Testing?”, “What is UAT?”, etc. These questions may be good for a university question paper, but not on their own during recruitment interviews. The interviews shall focus on the application side: the small talk/discussion shall focus more on the “Why?”, “Where?” and “How?” of these terminologies along with the “What?”. The interviewee shall be able to correlate the testing methodologies and techniques, devise the testing approach for the software under test and explain the same to the interviewer. For the triangle sample, the interviewee shall be able to devise the testing strategy and explain it to the interviewer. The interviewer may request the interviewee to explain the testing metrics that he/she needs to capture in order to report the status and progress of the testing activities to the project management team.


The interviewer shall not request the interviewee to explain the test plan/approach/design techniques used in his/her current project, as this would result in the dissemination of confidential information of the interviewee’s current organization to the public.


To evaluate test designing skills, the interviewer may provide a sample requirement like the triangle sample and request the interviewee to design a few tests for the requirement and explain the techniques used for designing them. For the triangle sample, the interviewee could use equivalence class partitioning, the decision table technique and the multiple condition coverage technique for designing the tests. The interviewee shall be capable of explaining all these techniques and the tests to the interviewer. The interviewer may request the interviewee to write down a few test cases for the sample requirement provided. The interviewee shall write the test cases clearly: the test steps shall have enough detail, down to the level of key presses and mouse clicks, and the expected results shall be unambiguous.
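For the triangle sample, here is a minimal JUnit 4 sketch of how equivalence class partitioning and boundary value analysis might translate into concrete tests. The test data are assumptions chosen for illustration, and the code exercises the TriangleValidator sketch shown earlier in this post:

```java
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class TriangleDesignTechniqueTest {

    @Test
    public void validEquivalenceClass() {
        // Representative value from the "valid triangle" partition.
        assertTrue(TriangleValidator.isValidTriangle(3, 4, 5));
    }

    @Test
    public void invalidEquivalenceClassNonPositiveSide() {
        // Representative value from the "non-positive side" partition.
        assertFalse(TriangleValidator.isValidTriangle(0, 4, 5));
    }

    @Test
    public void boundaryWhereSumEqualsThirdSide() {
        // Boundary value analysis: A + B == C sits on the edge of the condition A + B > C.
        assertFalse(TriangleValidator.isValidTriangle(1, 2, 3));
    }

    @Test
    public void boundaryJustInsideTheCondition() {
        // One unit inside the boundary: A + B == C + 1.
        assertTrue(TriangleValidator.isValidTriangle(2, 2, 3));
    }
}
```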


Test Execution and Defect Reporting Skills
To evaluate the test execution and defect reporting skills, the interviewer may request the interviewee to explain what he/she considers to be the ideal process for test execution and defect reporting. As mentioned previously, the interviewer shall not request the interviewee to explain what he/she is currently following in his/her current project. The interviewer may initiate a small talk/debate regarding the defect life cycle, defect attributes, defect management tools and test reporting tools. The interviewer may request the interviewee to write down a sample defect, providing the defect details. The interviewee shall write the defect report clearly and unambiguously, with detailed steps to reproduce and the expected and actual results.


Test Automation Skills
Test Automation is a wonderful phenomenon by which the testing cost is drastically reduced. On the other side, if there is no proper planning for automation script creation & maintenance, there is a risk that the automation suite may get outdated and become unusable. This renders ROI = 0. Testers shall be aware of this business risk and identify the automation candidates accordingly. The interviewer may provide a set of test cases that includes automation candidates (simple & complex) and non-automation candidates, and ask the interviewee to identify the automation candidates. The interviewee may ask for clarifications, if any, and shall be able to identify the automation candidates successfully. The interviewer may request the interviewee to explain the various automation metrics that he/she needs to capture in order to report the automation ROI and other figures of interest to the project management team.
To evaluate the automation tool knowledge, the interviewer may initiate a small talk or discussion based on the respective automation tool. As mentioned previously, the questions shall not focus on the automation activities in the interviewee’s current project.


Conclusion
The interviewer shall use several sample requirements and concentrate on evaluating the interviewee’s application of software testing principles and techniques. This paper is purely a guideline and covers the basic things to focus on; interviewers are requested to amend this guideline with their own approaches for conducting interviews. Interviewees shall answer aptly and boldly, bold enough to challenge the interviewer in case of any wrong question put forth, since the interviewer may well expect the question to be put back to him/her in such a case.


Manual Testing Real-time FAQ

What's difference between client/server and Web Application?
          Client/server is any application architecture in which one server application and one or many client applications are involved, like your mail server and MS Outlook Express.
          A web application is one that is hosted on a web server and accessed over the internet or an intranet.
What is Mutation testing?
          Testing in which we deliberately introduce some known bugs (mutants) into the code and re-run the existing test cases.
          If the test cases fail against the mutated code, they are effective; if they all still pass, the test cases are incomplete.
                      Note: mutation testing is mainly used for testing the completeness of the Test Cases (see the sketch below).
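As an illustration of the idea (the class and the discount rule below are hypothetical, not from any real project), a mutant is created by deliberately changing one small piece of logic; a complete test suite should contain at least one test case that fails against the mutant:

```java
public class DiscountCalculator {

    // Original code: orders of 100 items or more get a 10% discount.
    public static double discountedPrice(double price, int quantity) {
        if (quantity >= 100) {
            return price * 0.90;
        }
        return price;
    }

    // Mutant: the relational operator is deliberately changed from >= to >.
    public static double discountedPriceMutant(double price, int quantity) {
        if (quantity > 100) {
            return price * 0.90;
        }
        return price;
    }

    public static void main(String[] args) {
        // At the boundary quantity = 100 the original and the mutant disagree,
        // so a boundary test case would detect ("kill") the mutant. If no test
        // case fails, the boundary quantity = 100 was never tested.
        System.out.println(discountedPrice(200.0, 100));       // 180.0
        System.out.println(discountedPriceMutant(200.0, 100)); // 200.0
    }
}
```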
What are the browser compatibility test elements?
          Client side scripting
          Frames
          Images
          Links
          Browser-specific HTML tags
          Cookies
What is Memory Leak?
          While running an application, if the created variables or objects are not released after their usage, they accumulate as garbage in runtime memory; this is called a memory leak.
          Memory leaks lead to poor performance or application crashes (see the sketch below).
          Memory leaks can be identified using tools such as BoundsChecker or Rational Purify.
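A minimal Java sketch of the pattern described above, assuming a hypothetical cache that keeps references to objects it no longer needs; because the references are never removed, the garbage collector cannot reclaim the memory:

```java
import java.util.ArrayList;
import java.util.List;

public class LeakyCache {

    // Objects added here are never removed, so the garbage collector can never
    // reclaim them: memory usage grows on every request until the application
    // slows down or crashes with an OutOfMemoryError.
    private static final List<byte[]> CACHE = new ArrayList<>();

    static void handleRequest() {
        byte[] buffer = new byte[1024 * 1024]; // 1 MB of working data
        CACHE.add(buffer);                     // leak: reference is kept forever
        // ... use the buffer, but never call CACHE.remove(buffer)
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10_000; i++) {
            handleRequest();                   // eventually exhausts heap memory
        }
    }
}
```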
What is DLL?
          A Dynamic Link Library (DLL) maintains code as methods/functions which perform some actions.
          A DLL cannot run on its own; it needs to be called by another running EXE or DLL.
What is kickoff meeting?
          An initial Project start up meeting from which the formal test activities are initiated.
What are the drawbacks in manual testing compared to Automation?
          Following are the drawbacks of manual testing,
          Time consuming
          Less reliable
          Requires more human resources
          Inconsistent results
How do you conduct KT sessions?
          In my company, different KT (knowledge transfer) sessions are conducted after requirement gathering and SRS preparation;
          the development team and the testing team receive KT sessions with respect to the client environment, domain, etc.
          Most KT sessions are coordinated by the management team, which includes the PM, PL, TL (dev team) and Test Lead.
Explain the testing process (STLC phases)?
          Test initiation
          Test planning
          Test case design
          Test execution
          Defect Reporting and defect Tracking
          Test closure.
What is Test Strategy?
          The Test Strategy is the plan for how you are going to approach testing. It describes, at a high level, how the project will be tested.
What is cyclomatic complexity and function point?
          Using cyclomatic complexity we can measure the complexity of the code: the number of independent paths through its control flow, which indicates how many test cases are needed to cover it (see the sketch below).
          Using function point analysis we can measure the size and complexity of the functionality described in the requirements.
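As a quick illustration on a hypothetical method (not from any real project): one common convention is to count each decision keyword (if, while, for, case) and each boolean operator (&&, ||) and add 1; the result is the number of independent paths, and roughly the minimum number of test cases needed to cover them:

```java
public class ShippingRules {

    // Decision points: two 'if' conditions plus one '&&' operator = 3.
    // Cyclomatic complexity = 3 + 1 = 4, so at least four independent paths
    // (and roughly four test cases) are needed to cover this method.
    public static double shippingFee(double orderTotal, boolean expressDelivery) {
        double fee = 5.0;
        if (orderTotal > 50.0 && !expressDelivery) {
            fee = 0.0;            // free standard shipping for large orders
        }
        if (expressDelivery) {
            fee = fee + 10.0;     // express surcharge
        }
        return fee;
    }
}
```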
How do you plan the testing of an application when no requirements or insufficient requirements are given?
          We plan exploratory testing, and we should start creating some of the documents during testing, which can be helpful for future testing.
          Note: most of the exploratory-test bugs may turn out not to be real bugs.
How do you plan for testing?
          The test execution plan should be based on the build delivered by the development team for testing.
          It consists of preparation of test sets, setting up the test environment, identification of the resources and their respective tasks, and the related templates for reporting status.
What is the Test Environment setup?
          The test environment is the working environment where tests will be executed.
          The testing team may have its own test environment; once the testing team certifies the system in the test environment, it is migrated to the production environment. A typical test environment consists of:
          Operating System
          Browsers
          Web Server and App Server
          Database (Oracle/DB2)
What are the testing metrics?
          There can be different levels of metrics:
  1. Test Metrics:
  2. Defect Metrics:
  3. Resource Metrics:
 Can you provide status as closed for Duplicate and Rejected bugs?
          Duplicate and Rejected bugs should not be closed, because closing such bugs makes it difficult to identify and track the actual bugs.
How many test cases do you write in one day?
          On an average of 25 test cases a day
How many defects do you find in one day?
           It depends on the application and the number of test cases executed in a day.
What is a release note?
          A release note is given to the client along with the build when it is ready for deployment. Release notes consist of the deployment details and the known limitations of the build; if there are any critical errors that are still not resolved when the build is released, those are also mentioned in the release notes.
What is Test Data?
          Test data is the data we need to supply to test cases at execution time; it consists of the data the customer expects.
          We generally prepare test data in an Excel sheet, based on the LLD and FRS/SRS.
What are the severity levels you give for a defect?
          We follow four levels of severity for a defect,
          Critical, High, Medium and low (cosmetic)
What are test scenarios?
          Test Scenarios are the end-to-end test cases prepared to test a realistic business flow with specific pre-conditions, test data values and expected behavior.
How do you prepare test scenarios?
          The Test Scenarios are derived from Requirements documents/Use Case Documents based on the following factors
          Identify the multiple flows associated with a given event or a business scenario.
          Identify the multiple conditions associated with a given event.
          Identify the multiple data values required.
How are you tracking defects in your project?
          Using a defect tracking tool (Quality Center).
To whom do you report defects?
          A defect is generally reported to the test lead, and the lead will in turn assign it to the development team; at times, the defect is assigned directly to the developer by the test engineer.
What is difference between severity and priority of a defect?
          Defect Severity represents how badly the functionality has failed and how serious an impact it has on the remaining system functionality and on the customer's business.
          Defect Priority represents the importance of the defect and the urgency of its resolution by the developers.
          The severity and priority levels are initially provided by the testers. The severity level should not be changed, but the priority level can be changed by the Test Lead/developers from one build to another, based on the defects to be resolved.
What are the statuses for a Test Case?
          1. No Run
          2. Not Completed
          3. Passed
          4. Failed
What are the statuses for a defect?
          Open:
          Pending/deferred:
          Fixed:
          Rejected:
          Reopen:
          Closed:
Who will prepare a Test Summary Report?

FAQs on Manual Testing - In Real Time


1. What is software testing?
Testing is the process of evaluating a system or system component by manual or automated means to verify that it satisfies specified requirements.
 2. Importance of software testing?
  1. To deliver a bug free application to the customer (Error free superior product)
  2. To deliver more reliable S/w to customer (Quality Assurance to the client)
  3. To reduce the maintenance cost to customer
  4. Finding defects early, which helps to reduce the cost of fixing them (cut-down cost)
  5. When defects are removed quality improves.
3. Why do we need specialist testers?
  1. Developers will not do systematic and complete testing.
  2. A developer assumes a few things to be working fine, as they are the owners of the work.
4. What is software Quality?
  1. Quality from the customer's viewpoint is that the S/w should be fit for use; from the producer's point of view, the S/w should meet the customer requirements.
  2. Factors affecting quality:
     • Time
     • Cost/budget
     • Reliability
5. Explain Quality management process?
Quality management is the process of preventing defects in the S/w process to ensure that there are no defects in the final product. The whole quality process is divided into two parts:
1)      Quality Assurance (QA)
2)      Quality Control (QC)
6. What is Quality Assurance?
  1. It measures the quality of processes used to create a quality product.
  2. It is an activity that is based on process where we measure each process, identify any weakness and suggest improvement. 
7. What is Quality Control?
  1. It is an activity that is based on product where we measure the product, identify any weakness and improve the product.
  2. QC is oriented towards detection of defects.
8. Explain objective of S/w tester?
  1. The goal of a tester is to find bugs.
  2. Find bugs as early as possible
  3. Make sure those bugs get fixed.
09. Explain Testing Limitations
  1. We can only test against system requirements
  2. May not detect errors in the requirements
  3. Incomplete requirements may lead to inadequate or incorrect testing.
  4. Exhaustive (total) testing is impossible in the present scenario.
  5. Time and budget constraints normally require very careful planning of the testing effort.
  6. Test results are used to make business decisions for release dates.
10. Why Testing CANNOT Ensure Quality? Testing in itself cannot ensure the quality of software. All testing can do is give you a certain level of assurance (confidence) in the software. On its own, the only thing that testing proves is that under specific controlled conditions, the software functioned as expected by the test cases executed.
11. What is SDLC?
SDLC: It is a process of developing a software project or product to fulfill the customer requirements within the specified cost and time.
15. Common problems in SDLC?
  1. Poor Requirements:
  2. Unrealistic schedule:
  3. Inadequate testing:
  4. Miscommunications:
16. Explain some of SDLC Models?
Based on the requirements and the needs of the customer there can be a specific model adopted in order to implement a S/w application
SDLC Models are:
  1. Waterfall Model
  2. Incremental Model
  3. Prototype Model
  4. Rapid Application Development (RAD) Model
  5. Spiral Model
  6. V-Model
17. What is Agile Testing model?
  1. It is a new-generation SDLC model.
  2. Agile testing involves testing from the customer perspective as early as possible.
  3. Testing early and often as code becomes available and stable enough from module/unit level testing.
18. Explain V-model? Advantages of V-model?
  1. V-model stands for Verification & Validation. It is a suitable model for large-scale companies to maintain the testing process. This model defines a co-existence relationship between the development process and the testing process.
  2. The difference between other models and this model is that it has provided testing the same weight-age as other S/w development activities.
19. Explain Verification?
  1. In this process we say “Are we building the product right”?
  2. It is performed by reviewing the SRS Document, Design Document, Code to find any mistakes.
  3. It is considered “Static” testing.
  4. Reviews, Walkthrough and Inspection are the examples of Verification Techniques.
20. Explain Verification Techniques?
a. Peer Review: An informal meeting where the author provides the document to another person to identify any mistakes.
b. Walkthrough: A semi-informal meeting where the participants come to the meeting and the author gives a presentation. In this case the author himself is the presenter explaining the project requirement.
It is a planned meeting, characterized by a team of two or more people and led by the author.
The objective is to familiarize the other participants with the material and find any mistakes.
c. Inspection:
          A formal meeting, characterized by individual preparation prior to the meeting.
          The meeting is led by the Moderator, who ensures that the rules are followed and the review is effective. The Inspector reviews the document being presented.
          The Presenter (reader) is someone other than the author.
          The Recorder records the defects identified in the meeting.
21. Explain Validation process?
          In this process we ask, “Are we building the right product?”
          It is the process of confirming whether the software meets the customer requirements.
          It is performed by executing the application to find any defects.
          It is considered “DYNAMIC” testing, i.e. testing with execution of the system.
          Unit testing, Integration testing, system testing and Acceptance testing are the examples of validation techniques.

22. What is Coding, Testing, Debugging and Bug fixing?
          Coding is writing the program as per the design; testing is executing the program to find defects; debugging is locating the exact cause of a reported defect in the code; and bug fixing is modifying the code to remove the defect.
23. What is White Box Testing?
  1. It is based on knowledge of the internal logic of the application’s code.
  2. Tests are based on coverage of code: statements, branches, paths, conditions & loops (see the sketch below).
  3. It is also called Clear Box, Glass Box, Open Box or Structural Testing.
Disadvantages of WBT:
  1. Testers should be skilled enough to understand the code and its internal structure.
  2. If the code is lengthy, it is impossible to go through each statement to find the hidden errors.
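A minimal sketch of the white-box idea, assuming JUnit 4 and a hypothetical method under test: the test cases are derived from the code's own branches, so that both the true and false outcomes of every condition are executed:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class BranchCoverageExample {

    // Hypothetical method under test.
    static String classifyAge(int age) {
        if (age < 0) {
            return "invalid";
        }
        if (age < 18) {
            return "minor";
        }
        return "adult";
    }

    // One test per branch, so every statement and branch in the code is executed.
    @Test public void negativeAgeBranch() { assertEquals("invalid", classifyAge(-1)); }
    @Test public void minorBranch()       { assertEquals("minor",   classifyAge(17)); }
    @Test public void adultBranch()       { assertEquals("adult",   classifyAge(18)); }
}
```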
24. What is Black Box Testing?
  1. Black box testing is not based on any knowledge of the internal design or code.
  2. Tests are based on requirements & functionality.
25. What is Gray box testing?
  1. This is just a combination of both black box and white box testing. The tester should have knowledge of both the internals and externals of the function.
  2. Tester should have good knowledge of white box testing & complete knowledge of black box testing
  3. Grey box testing is especially important with web & internet applications, because the internet is built around loosely integrated components that connect via relatively well-defined interfaces.
26. Explain levels of tests?
  1. Unit testing
  2.   Module testing
  3. Integration testing
  4. System testing
  5. User acceptance testing
27. Explain Unit Testing?
  1. It is code-oriented testing.
  2. Individual components are tested to ensure that they operate correctly. Each component is tested independently, without other system components (a small sketch follows below).
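A small sketch of what testing a component "independently" can look like, assuming JUnit 4 and Java 8; the class and method names are illustrative. The price calculator is exercised on its own, with a hard-coded fake standing in for its real tax-rate dependency, so no other system component is involved:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PriceCalculatorUnitTest {

    // The component's dependency, expressed as an interface so that a fake
    // implementation can stand in for the real one during unit testing.
    interface TaxRateProvider {
        double rateFor(String country);
    }

    // Component under test.
    static class PriceCalculator {
        private final TaxRateProvider taxRates;
        PriceCalculator(TaxRateProvider taxRates) { this.taxRates = taxRates; }

        double grossPrice(double netPrice, String country) {
            return netPrice * (1.0 + taxRates.rateFor(country));
        }
    }

    @Test
    public void grossPriceAddsTax() {
        // Fake dependency: always returns 20%, so no database or service is needed.
        PriceCalculator calculator = new PriceCalculator(country -> 0.20);
        assertEquals(120.0, calculator.grossPrice(100.0, "UK"), 0.0001);
    }
}
```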
28. Explain Module Testing?
A module is a collection of dependent components such as an object class, an abstract data type or some looser collection of procedures and functions. A module encapsulates related components, so it can be tested without other system modules.
29. Explain Integration testing?
  1. It is also called sub-system testing.
  2. It is Design oriented.
This phase involves testing collections of modules which have been integrated into sub-systems. Sub-systems may be independently designed and implemented. The most common problems which arise in large software systems are sub-system interface mismatches. The sub-system test process should therefore concentrate on the detection of interface errors by rigorously exercising these interfaces.
30. Explain system testing?
The sub-systems are integrated to make up the entire system. The testing process is concerned with finding errors that result from unanticipated interactions between sub-systems and system components. It is also concerned with validating that the system meets its functional and non-functional requirements.