Monday, August 15, 2011

Difference between Priority and Severity


1. Priority is associated with the schedule for resolution: out of many open issues, which one should be addressed first, by order of importance or urgency.
   Severity is associated with a quality benchmark or adherence to a standard: it reflects the seriousness of the deviation from expected quality.
2. Priority is largely related to the business or marketing aspect; it is a pointer to the importance of the bug.
   Severity is related to the technical aspect of the product; it reflects how bad the bug is for the system.
3. Priority refers to how soon the bug should be fixed.
   Severity refers to the seriousness of the bug's effect on the functionality of the product; the greater the effect on functionality, the higher the severity assigned to the bug.
4. Priority to fix a bug is decided in consultation with the client.
   Severity is decided by the Quality Assurance engineer, as per the risk assessment for the customer.
5. Product fixes are scheduled based on project priorities.
   Product fixes are scoped based on bug severity.
1) Generally speaking, a "High Severity" bug also carries a "High Priority" tag. However, this is not a hard-and-fast rule; there can be many exceptions depending on the nature of the application and its release schedule.
2) High Priority & Low Severity: A spelling mistake in the name of the company on the home page of the company's web site is certainly a high-priority issue. But it can be given a low severity, because it does not affect the functionality of the web site or application.
3) High Severity & Low Priority: A system crash encountered in an obscure scenario, which the client is unlikely to hit, has high severity. Despite its major effect on the functionality of the product, the project manager may give it a low priority, since many other bugs are likely to take precedence over it simply because they are more visible to the client.
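As an illustrative sketch (the class and field names are hypothetical, not taken from any specific bug tracker), the two exception cases above can be modelled in Python:

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    LOW = 1
    HIGH = 2

class Priority(Enum):
    LOW = 1
    HIGH = 2

@dataclass
class Bug:
    summary: str
    severity: Severity   # technical impact: how bad is it for the system?
    priority: Priority   # fix urgency: how soon must it be fixed?

# High priority, low severity: cosmetic but highly visible to the client
typo = Bug("Company name misspelled on the home page", Severity.LOW, Priority.HIGH)

# High severity, low priority: crash in a scenario the client rarely hits
crash = Bug("Crash in a rarely-used export scenario", Severity.HIGH, Priority.LOW)

assert typo.priority is Priority.HIGH and typo.severity is Severity.LOW
assert crash.severity is Severity.HIGH and crash.priority is Priority.LOW
```

The point of keeping the two fields separate is exactly the one made above: neither value can be derived from the other.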

Manual Testing - Real-Time Interview Questions


      1. Differentiate between QA and QC?
QA (Quality Assurance) is process oriented: it is involved in the entire software development process and is prevention oriented. QC (Quality Control) is product oriented: it works to examine the quality of the product and is detection oriented.
      2. What is a bug?
A computer bug is an error, flaw, mistake, failure, or fault in a computer program that prevents it from working correctly or produces an incorrect result.
      3. What is a test case?
A test case is a set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement.
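This definition maps naturally onto executable test code. A minimal sketch in Python's `unittest`, where the account balance and the `withdraw` method are hypothetical stand-ins for the application under test:

```python
import unittest

class WithdrawalTestCase(unittest.TestCase):
    """A test case = preconditions + inputs + expected results + postconditions."""

    def setUp(self):
        # Execution precondition: an account with a known balance
        self.balance = 100

    def withdraw(self, amount):
        # Hypothetical function under test (stand-in for the real application)
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        return self.balance

    def test_valid_withdrawal(self):
        # Input value 40 -> expected result 60
        self.assertEqual(self.withdraw(40), 60)
        # Execution postcondition: the balance was reduced
        self.assertEqual(self.balance, 60)

    def test_overdraft_rejected(self):
        # Verifying compliance with a specific (assumed) requirement
        with self.assertRaises(ValueError):
            self.withdraw(500)
```

Each of the four parts of the definition appears explicitly: `setUp` is the precondition, the argument to `withdraw` is the input, and the assertions encode the expected result and postcondition.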
      4. What is the purpose of the test plan in your project?
The test plan document is prepared by the test lead. It contains sections such as introduction, objectives, test strategy, scope, test items, program modules, user procedures, features to be tested, features not to be tested, approach, pass/fail criteria, testing process, test deliverables, testing tasks, responsibilities, resources, schedule, environmental requirements, risks and contingencies, change management procedures and plan approvals. All of these help a test manager understand the testing to be done and what should be followed for that particular project.
      5. What is the relationship between tester and developer?
The developer writes all the necessary code in the application and sends the build to the tester. The tester supplies inputs and checks whether the required output is produced. A developer works on the inside interface, whereas the tester works on the outside interface.
      6. When does testing start in a project?
Testing does not start only after coding. Before releasing a build, the developers perform unit testing. After the build is released, the testers perform a smoke test, which is the first test done by the testing team.
      7. If a bug has high severity it is usually treated as high priority, so why is priority given by test engineers/project managers and severity given by testers?
High-severity bugs affect the end users; testers test an application from the user's point of view, so they assign severity. High priority is given to bugs that affect production; project managers assign priority from the production point of view.
      8. What is the difference between functional testing and regression testing?
Functional testing tests the functionality/behaviour of each functional component of the application, e.g. a minimize button, a transfer button, links, etc.; we check what each component does in the application. Regression testing tests the behaviour of the unchanged areas of the application when there is a change in the build, i.e. we check whether a changed requirement has altered the behaviour of the unchanged areas. The impacted area may be the whole application or some part of it.
      10. Do you know about integration testing? How do you integrate different modules?
Integration testing means testing an application to verify the data flow between modules. For example, when testing a bank application, the account balance screen may show $100 as the available balance while the database shows $120; such mismatches are what integration testing catches. The key point is that integration is done by the developers, while integration testing is done by the testers.
      11. Do you know about configuration management tools? What is the purpose of maintaining all the documents in a configuration management tool?
A configuration management tool is focused primarily on maintaining the history of file changes, since documents are subject to change. For example, consider the test case document: initially you draft it and place it in a version control tool (e.g. Visual SourceSafe). Then you send it for peer review; reviewers provide comments, and the updated document is saved in VSS again. As the document undergoes further changes, the full change history is maintained in version control. This helps in referring to previous versions of a document; only one person can work on a document at a time (by checking it out); and the tool keeps track of who made each change, with time and date. Generally the test plan, test cases and automation design documents are placed in VSS, with proper access rights so that documents do not get deleted or modified accidentally.
      12. How do you test a database? Explain the procedure.
Database testing is done purely based on the requirements. You may generalize a few features, but they won't be complete. In general we look at:
1. Data correctness (defaults)
2. Data storage/retrieval
3. Database connectivity (across multiple platforms)
4. Database indexing
5. Data integrity
6. Data security
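A minimal sketch of checks 1, 2 and 5 using Python's built-in `sqlite3` module (the `accounts` schema is invented for illustration; a real database test would run against the project's actual schema):

```python
import sqlite3

# In-memory database stands in for the real one under test.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE accounts (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL UNIQUE,       -- data integrity: no duplicate names
    balance REAL DEFAULT 0.0         -- data correctness: default value
)""")

# 1. Data correctness: the column default should be applied
conn.execute("INSERT INTO accounts (name) VALUES ('alice')")
balance = conn.execute("SELECT balance FROM accounts WHERE name='alice'").fetchone()[0]
assert balance == 0.0

# 2. Storage/retrieval: a written value must be read back unchanged
conn.execute("UPDATE accounts SET balance=120.0 WHERE name='alice'")
assert conn.execute("SELECT balance FROM accounts WHERE name='alice'").fetchone()[0] == 120.0

# 5. Integrity: a duplicate name must be rejected by the constraint
try:
    conn.execute("INSERT INTO accounts (name) VALUES ('alice')")
    raise AssertionError("duplicate insert should have failed")
except sqlite3.IntegrityError:
    pass
```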
      13. Suppose a link on a Yahoo Shopping page leads to some other company's website. How do you test for problems in linking from one site to another?
1) Check whether the mouse cursor turns into a hand icon over the link.
2) Check whether the link is highlighted when the cursor is placed on it.
3) Check whether the linked site opens at all.
4) If it opens, check whether it opens in another window or in the same window as the link (to check the user-friendliness of the link).
5) Check how fast the website opens.
6) Check whether the correct site opens according to the link.
7) Check whether all the items on the linked site open.
8) Check whether all other sublinks open.
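Checks like 4) and 6) can be partially automated. A small sketch using Python's standard `html.parser` to collect each link's target URL and whether it is marked to open in a new window (`target="_blank"`); the page snippet is invented for illustration:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects (href, target) pairs so each link can then be checked."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            attrs = dict(attrs)
            if "href" in attrs:
                # target="_blank" means the link opens in another window
                self.links.append((attrs["href"], attrs.get("target")))

page = '<a href="https://partner.example.com" target="_blank">Shop</a>'
parser = LinkExtractor()
parser.feed(page)
assert parser.links == [("https://partner.example.com", "_blank")]
```

Fetching each collected URL and timing the response would then cover checks 3) and 5), but that part is omitted here since it needs network access.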
      14. What are the contents of an FRS?
FRS stands for Functional Requirement Specification: F for the functional behaviours, R for the requirements (outputs) of the system being defined, and S for the specification (how, what, when and where the system behaves). It is a document that contains the functional behaviour of the system or a feature. This document is also known as the EBS (External Behaviour Specification) or the EFS (External Function Specification).
      15. What is meant by priority and severity?
Priority means the importance of the defect with respect to the customer's requirement. Severity means the seriousness of the defect with respect to the functionality.
      16. Who assigns priority and severity, and on what basis?
Severity:
1. Assigned by the test engineer.
2. States how badly the deviation is affecting the other modules of the build or release.
Priority:
1. Assigned by the project manager or lead, in consultation with the client.
2. States how soon the bug has to be fixed in the main code, so that it passes the basic requirement.
For example, suppose the code should generate values for certain valid input conditions. Priority would be assigned based on conditions such as: (a) it does not accept any value at all; (b) it accepts values but the output is in a non-defined format (say, Unicode characters display correctly but saving the changes returns a garbage value from the server).
      17. Give an example of a high severity, low priority defect?
A system crash encountered in a rarely used scenario is high severity (it brings the system down) but may be low priority, since users seldom hit it. Note that the reverse case is different: a misspelled title, e.g. "ICICC" spelled as the title for a project of the organization ICICI, gives a negative impression and is high priority, but low severity.
      18. What is the basis for test case review?
The main bases for test case review are:
1. Testing-techniques-oriented review
2. Requirements-oriented review
3. Defects-oriented review
      19. What are the contents of an SRS document?
An SRS (Software Requirements Specification) document contains the software requirements, of which the functional requirements specification forms a part.
      20. What is the difference between web application testing and client-server testing?
Testing an application on an intranet (without a browser) is an example of client-server testing. The company firewalls keep the server closed to the outside world, so outside people cannot access the application and only a limited number of people use it. Testing an application on the internet (using a browser) is called web testing; such an application is accessible to numerous users around the world. For a web application, beyond the testing described above, many other kinds of testing may be needed depending on the type of application: for a secured application like a banking site we add security testing, for an e-commerce application we add usability testing, and so on.
      21. Explain your web application architecture?
A web application is tested in three tiers:
1. Web tier testing: browser compatibility
2. Middle tier testing: functionality, security
3. Database tier testing: database integrity, contents
      22. Suppose the product/application has to be delivered to the client at 5:00 PM, and at 3:00 PM you or a team member catches a high-severity defect, but the client cannot wait and you must deliver at 5:00 PM exactly. What procedure do you follow?
When defects are found at the last minute before a delivery or release date, there are these options:
1. Explain the situation to the client and ask for some more time to fix the bug.
2. If the client is not ready to give more time, analyse the impact of the defect, try to find workarounds, and mention these issues in the release notes as known issues, known limitations or known bugs. Here a workaround means a remedial process that can be followed to overcome the effect of the defect.
3. Normally such known issues or known limitations (defects) are fixed in the next version or release of the software.

Benefit analysis - Manual versus Automated Testing


Problems with Manual Testing: Some of the problems with manual testing are:

  1. Less Reliable: Manual testing is not fully reliable, as there is no yardstick to confirm that the actual and expected results were really compared; we rely on the tester's word.
  2. High Risk: A manual testing process is subject to a high risk of oversights and mistakes. People get tired, may be temporarily inattentive, may have too many tasks at hand, or may be insufficiently trained. Hence mistakes unintentionally happen in entering data, setting parameters, execution and comparison.
  3. Incomplete Coverage: Testing is quite complex when we have a mix of multiple platforms, operating systems, servers, clients, channels, business processes, etc. Testing is non-exhaustive, and full manual regression testing is impractical.
  4. Time Consuming: Limited test resources make manual testing simply too time consuming. According to one study, 90% of all IT projects are delivered late due to manual testing.
  5. Facts and Fiction: The fiction is that full manual testing is done; the fact is that only some manual testing is done, depending upon feasibility.
  6. It is worth noting that manual testing is also used to document tests, create testing-related guides based on data queries, provide structures to help run tests on a temporary basis, and measure test results.
  7. Manual testing is considered costly and time-consuming; hence we use automated testing to cut down time and cost.
Benefits of Automated Testing: On the contrary, automated testing has many benefits.
Automated testing is the process of automating the manual testing process. We use automated testing to substitute for or supplement manual testing with a comprehensive suite of testing tools. Automated testing tools assist software testers in evaluating the quality of the software by automating the mechanical aspects of the software-testing task. The benefits of automation are better software quality, shorter time to market, repeatability of testing procedures and reduced cost of testing. Some more benefits of test automation are listed below.

  1. Automated execution of test cases is faster than manual execution. This saves time. This time can also be utilized to develop additional test cases, thereby improving the coverage of testing.
  2. Test automation can free test engineers from mundane tasks and make them focus on more creative tasks.
  3. Automated tests can be more reliable: manually running tests leads to boredom, fatigue and more chances of human error, while automated testing overcomes these shortcomings.
  4. Automation helps in immediate testing, as it need not wait for the availability of test engineers.
  5. Automation = less person dependence.
  6. Test cases for certain types of testing such as reliability testing, stress testing, load and performance testing cannot be executed without automation. For example, if we want to study the behavior of a system with millions of users logged in, there is no way one can perform these tests without using automated tools.
  7. Manual testing requires the presence of test engineers, but automated tests can be made to run round the clock in a 24 x 7 environment; automated testing thus provides round-the-clock coverage.
  8. Tests, once automated, take comparatively far fewer resources to execute. A manual test suite requiring 10 persons to execute it over 31 days, i.e. 31 x 10 = 310 man-days, may take just 10 man-days to execute if automated: a ratio of 1:31.
  9. Automation produces a repository of different tests, which helps us to train test engineers to increase their knowledge.
  10. Automation does not end with developing programs for the test cases. It includes many other activities, such as selecting the right product build, generating the right test data, analyzing results, and so on. Automation should have scripts that produce test data to maximize coverage of permutations and combinations of inputs, with expected outputs for result comparison; these are called test data generators.
It is important for automation to relinquish the control back to test engineers in situations where further sets of actions to be taken are not known.
As the objective of testing is to catch defects early, the automated tests can be given to developers so that they can execute them as part of unit testing.
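A test data generator of the kind mentioned in point 10 can be as simple as enumerating input permutations. A sketch using `itertools.product` (parameter names and values are hypothetical examples):

```python
import itertools

# Enumerate every combination of input parameters to maximize coverage
# of permutations and combinations, as a test data generator would.
browsers = ["chrome", "firefox"]
locales = ["en", "de", "ja"]
user_types = ["guest", "member"]

test_matrix = list(itertools.product(browsers, locales, user_types))
assert len(test_matrix) == 2 * 3 * 2  # 12 generated test configurations

for browser, locale, user_type in test_matrix:
    pass  # each tuple would drive one automated test run
```

In practice the generated tuples would be fed to the automation scripts together with the expected output for each combination.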
Drawbacks of Automated Testing: Despite its many benefits, the pace of test automation is slow.
Some of its disadvantages are as under:

  1. Developing an average automated test suite normally costs 3-5 times as much as a complete manual test cycle.
  2. Automation can be cumbersome: who would automate, who would train, who would maintain? These issues complicate the matter.
  3. In many organizations, test automation is not even a topic of discussion.
  4. In some organizations there is practically no awareness, or only limited awareness, of test automation.
  5. Automation is not a high-priority item for management; it does not make much difference to many organizations.
  6. Automation requires additional trained staff, who may not be available. (On the other hand, automation frees testing professionals to concentrate on their real profession of creating tests and test cases, rather than the mechanical job of test execution.)
Conclusion:
Test automation is a partial solution, not a complete one. One does not go in for automation because it is easy; it is a painful and resource-consuming exercise, but once done it has numerous benefits.

How to choose between Automated Testing & Manual Testing?


1) Based upon the frequency of use of Test Cases:
Automating a test case requires almost 3-4 times the effort of executing it manually once. To draw benefit from the significant investment in an automation tool, we should be able to execute the automation script many times, at least 7-8 times. It is not worthwhile to adopt an automation tool for a short-term product that can easily be managed by manual means. For products involving many rounds of testing, the use of an automation tool is justified.
2) Time comparison - Automated versus Manual Testing:
Generally an automation script runs much faster than manual execution. However, post-execution activities are considerably time consuming: after running the automation script, the test results must be analysed and the actions causing failures at defined checkpoints identified. In manual execution of the test script, no separate time is needed for analysis of the results, since the actions causing failure at checkpoints are already known to the tester.
Hence automated testing is viable only if the combined time spent on running the automation script and doing the post-automation analysis of results is significantly less than the time spent on manual execution. Well-developed automation scripts, however, do not need constant monitoring and can run without manual intervention; in such cases automation can be greatly productive and can cut down the running time.
When a large number of regression issues arise after fixing bugs, automation is the best alternative; manual testing can be extremely time consuming in such cases.
3) Reusability of Automation Scripts:
Automated testing is viable when the automation scripts are reusable in the future as well. If considerable effort is expected to be needed to upgrade the automation scripts, one may have to think again before automating. The return on investment in the automation tool is maximized when the automation scripts can be reused with small modifications.
4) Stability of the Product under Test:

Automation scripts are not advisable for a software product that is not yet adequately stable. Frequent changes to automation scripts are not desirable, so unless the product acquires enough stability during its development cycle, there is little point in automation, unless we are operating in an agile environment.
5) Adaptability of test cases for automation:

The statement that "all test cases can easily be automated" is not true. Many times we end up with test cases that are not worth automating, and there is no point in wasting automation effort on them. For a complicated product with a tightly integrated bunch of applications, re-running the automation script every time a test case stops can become a pain in the neck; in such cases the tester may prefer to run the script manually and save considerable time compared to running the automated script.
6) Exploitation of automation tool:
If we go in for automated testing of a product, we must draw full benefit from the test tool to maximize the return on its investment. The automation tool should be deployed for the less complicated, time-consuming and repetitive tasks. This will help the test engineers concentrate their time and energy on other significantly important tasks; there is no point in automating highly complex test cases that can easily be executed manually.
The automation tool should be deployed to test the breadth of the application under test, while manual test engineers can handle the in-depth testing more efficiently.
Manual testing cannot be totally eliminated. Automation scripts perform actions exactly as they are coded, without any deviation, which is both an advantage and a disadvantage: for even a slight deviation the script needs to be changed, and running the same automated script again and again will not detect more bugs. To detect more bugs, we need to deviate a little from the scripted flow, and such exploration is better accomplished by manual testing.
When a test case is well designed and made to execute end to end automatically through the tool, it verifies all predefined checkpoints without any manual intervention.
7) User simulation over large web application:
For simulation of several virtual users interacting with a large web application, automated load-testing tools such as LoadRunner can easily be deployed to establish the load-bearing capability of the application. Such load testing by manual means is extremely difficult.
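As a toy illustration of the idea (not a substitute for a real tool such as LoadRunner), concurrent virtual users can be simulated with a thread pool; `user_session` is a hypothetical stand-in for one user's transaction:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def user_session(user_id):
    # Stand-in for one virtual user's transaction against the application
    time.sleep(0.01)
    return user_id

# Simulate 50 concurrent virtual users (a real tool scales to thousands
# and also records response times, throughput and error rates).
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(user_session, range(50)))
elapsed = time.perf_counter() - start

assert len(results) == 50
# Running concurrently takes far less wall-clock time than 50 x 0.01 s serially
```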


Introduction to automated testing:


"Automated Testing" means automating the manual testing process currently in use. The prime requirement is the presence of a formalized manual testing process in the organization.
Automation refers to the use of strategies and tools which augment or reduce the need for manual or human involvement in unskilled, repetitive or redundant tasks.
The automation process includes the creation of detailed test cases, including predictable expected results, derived from business functional specifications and other design documentation.
It also requires a standalone test environment, including a test database that is restorable to a known constant, so that the test cases can be repeated each time modifications are made to the application.
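A sketch of such a restorable test environment using an in-memory SQLite database, rebuilt to a known constant in `setUp` before every test (the schema and data are illustrative):

```python
import sqlite3
import unittest

class RestorableDbTest(unittest.TestCase):
    def setUp(self):
        # Restore the test database to a known constant before every test,
        # so test cases can be repeated after each application modification.
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, qty INTEGER)")
        self.db.execute("INSERT INTO orders VALUES (1, 5)")

    def tearDown(self):
        self.db.close()

    def test_baseline_row_count(self):
        count = self.db.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
        self.assertEqual(count, 1)
```

A real suite would restore a full database snapshot rather than rebuild the schema inline, but the principle (a known constant state per test) is the same.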
When to use the Automation tools:
Great effort is required in firstly automating the test cases and maintaining them thereafter. Hence some sort of cost-benefit analysis is quite helpful before investing money and putting efforts into it.
Automation is suited to following types of testing with specific intentions:
1) Functional Testing – on operations which perform as per the expectations.
2) Regression Testing – on the behavior of the system which has not been changed.
3) Exception or Negative Testing – thereby forcing error conditions in the system.
4) Stress Testing – to determine the absolute capacities of the application and operational infrastructure.
5) Performance Testing – to provide assurance that the performance of the system will be adequate for both batch runs and online transactions in relation to business projections and requirements.
6) Load Testing – to determine the points at which the capacity and performance of the system become degraded to the point that hardware or software upgrades would be required.
Benefits of Automated Testing:
1) Reliable: Tests perform precisely the same operations each time they are run, thereby eliminating human error.
2) Repeatable: You can test how the software reacts under repeated execution of the same operations.
3) Programmable: You can program sophisticated tests that bring out hidden information from the application.
4) Comprehensive: You can build a suite of tests that covers every feature in your application.
5) Reusable: You can reuse tests on different versions of an application, even if the user interface changes.
6) Better Quality Software: Because you can run more tests in less time with fewer resources.
7) Fast: Automated Tools run tests significantly faster than human users.
8) Economical: As the number of resources for regression test are reduced.
These benefits can be realized only by choosing the right tools for the job and targeting the right areas of the organization for deployment. The areas where automation fits best must be chosen.
 Areas where Automation can be attempted first:
1. Highly redundant tasks or scenarios
2. Repetitive tasks that are boring or tend to cause human error
3. Well-developed and well-understood use cases or scenarios first
4. Relatively stable areas of the application, rather than volatile ones
Suggested guidelines for Automated Software Testers so as to draw maximum benefits from automation:
1) Concise: As simple as possible and no simpler.
2) Self-Checking: Test reports its own results; needs no human interpretation.
3) Repeatable: Test can be run many times in a row without human intervention.
4) Robust: Test produces same result now and forever. Tests are not affected by changes in the external environment.
5) Sufficient: Tests verify all the requirements of the software being tested.
6) Necessary: Everything in each test contributes to the specification of desired behavior.
7) Clear: Every statement is easy to understand.
8) Efficient: Tests run in a reasonable amount of time.
9) Specific: Each test failure points to a specific piece of broken functionality; unit test failures provide "defect triangulation".
10) Independent: Each test can be run by itself or in a suite with an arbitrary set of other tests in any order.
11) Maintainable: Tests should be easy to understand and modify and extend.
12) Traceable: To and from the code it tests and to and from the requirements. 
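Several of these guidelines (self-checking, repeatable, independent, specific) are visible in even a tiny test suite. A sketch in Python's `unittest`, with a hypothetical pricing function as the code under test:

```python
import unittest

def price(qty, unit=10):
    # Hypothetical function under test: 10% bulk discount from 100 units
    total = qty * unit
    return total * 0.9 if qty >= 100 else total

class PricingTests(unittest.TestCase):
    """Each test is self-checking (it asserts its own result, needing no
    human interpretation), repeatable (no external state), independent
    (any order works) and specific (a failure points at one behaviour)."""

    def test_regular_price(self):
        self.assertEqual(price(5), 50)

    def test_bulk_discount(self):
        self.assertEqual(price(100), 900)
```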
Disadvantages of Automation Testing:
Though the automation testing has many advantages, it has its own disadvantages too. Some of the disadvantages are:
1) Proficiency is required to write the automation test scripts.
2) Debugging the test script is a major issue; an error in the test script itself can have serious consequences.
3) Test maintenance is costly for record-and-playback methods: even a minor change in the GUI means the test script has to be re-recorded or replaced by a new one.
4) Maintenance of test data files is difficult if the test script covers many screens.
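The maintenance burden in points 3) and 4) is commonly reduced by keeping test data outside the script, so a change updates one data table rather than the whole script. A data-driven sketch (the login function and cases are invented for illustration):

```python
# Test data lives in one table (in practice often a CSV or spreadsheet),
# separate from the test logic that consumes it.
CASES = [
    # (username, password, expected_ok)
    ("alice", "secret", True),
    ("alice", "wrong",  False),
    ("",      "secret", False),
]

def login(user, password):
    # Hypothetical stand-in for the real application call
    return user == "alice" and password == "secret"

for user, pwd, expected in CASES:
    assert login(user, pwd) == expected, (user, pwd)
```

Adding a new scenario then means adding one row of data, not re-recording a script.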


Understand the Best Practices in Test Automation


Why Automated Testing is Needed?: Today, rigorous application testing is a critical part of virtually all software development projects. As more organizations develop mission-critical systems to support their business activities, the need is greatly increased for testing methods that support business objectives. It is necessary to ensure that these systems are reliable, built according to specification and able to support business processes. Many internal and external factors are forcing organizations to ensure a high level of software quality and reliability.
Why Automate the Software Testing Process?:
In the past, most software tests were performed using manual methods. This required a large staff of test personnel to perform expensive and time-consuming manual test procedures. Owing to the size and complexity of today’s advanced software applications, manual testing is no longer a viable option for most testing situations.
How to do Testing more Efficiently:
By definition, testing is a repetitive activity. The methods employed to carry out testing (manual or automated) remain repetitious throughout the development life cycle. Automation of testing processes allows machines to complete the tedious, repetitive work while human personnel perform other tasks. Automation eliminates the "think time" or "read time" necessary for the manual interpretation of when or where to click the mouse. An automated test executes the next operation in the test hierarchy at machine speed, allowing tests to be completed many times faster than the fastest individual could. Automated tests also perform load/stress testing very effectively.
Can Software Testing Costs be brought Down:
The cost of performing manual testing is prohibitive when compared to automated methods. The reason is that computers can execute instructions many times faster and with fewer errors than individuals. Many automated testing tools can replicate the activity of a large number of users (and their associated transactions) using a single computer. Hence, load / stress testing using automated methods requires only a fraction of the computer hardware that would be necessary to complete a manual test.
Replication of Testing over a Variety of Platforms: Automation allows the testing organization to perform consistent and repeatable tests. When applications need to be deployed across different hardware or software platforms, standard or benchmark tests can be created and repeated on target platforms to ensure that new platforms operate consistently.
Greater Application Coverage: The productivity gains delivered by automated testing allow and encourage organizations to test more often and more completely. Greater application test coverage also reduces the risk of exposing users to malfunctioning or non-compliant software.
Effective Reporting of Test Results: Full-featured automated testing systems also produce convenient test reporting and analysis. These reports provide a standardized measure of test status and results, thus allowing more accurate interpretation of testing outcomes. Manual methods require the user to self-document test procedures and test results.
Clear Understanding of Testing Process:
The introduction of automated testing into the business environment involves far more than buying and installing an automated testing tool.
Typical Software Testing Steps: Most software testing projects can be divided into the following general steps:
Test Planning: This step determines which tests to run and when to run them.
Test Design: This step determines how the tests should be built and defines the target level of quality.
Test Environment Preparation: Technical environment is established during this step.
Test Construction: At this step, test scripts are generated and test cases are developed.
Test Execution: This step is where the test scripts are executed according to the test plans.
Test evaluation: After the test is executed, the test results are compared to the expected results and evaluations can be made about the quality of an application.
Finding out the Tests best suitable for Automation: Most, but not all, types of tests can be automated. Certain types of tests like user comprehension tests test that run only once and tests that require constant human intervention are usually not worth the investment incurred to automate. The following are examples of criteria that can be used to identify tests that are prime candidates for automation.
High path frequency
– Automated testing can be used to verify the performance of application paths that are used with a high degree of frequency when the software is running in full production. Examples include: creating customer records.
Critical Business Processes
– Mission-critical processes are prime candidates for automated testing. Examples include: financial month-end closings, production planning, sales order entry and other core activities. Any application with a high –degree of risk associated with a failure is a good candidate for test automation.
Repetitive Testing
– If a testing procedure can be reused many times, it is also a prime candidate for automation
Applications with a Long Life Span
– If an application is planned to be in production for a long period of time, the greater the benefits are from automation.
Task Automation and Test Set-Up: In performing software testing, there are many tasks that need to be performed before or after the actual test. For example, if a test needs to be executed to create sales orders against current inventory, goods need to be in inventory. The tasks associated with placing items in inventory can be automated so that the test can run repeatedly. Additionally, highly repetitive tasks not associated with testing can be automated utilizing the same approach.
Who is ideally suited for doing Testing: There is no clear consensus in the testing community about which group within an organization should be responsible for performing the testing function. It depends on the situation prevailing in the organization.


Thursday, August 04, 2011

Open Source Project Management Defect Tracking Tools


Trac - Open source enhanced wiki and issue tracking system by Edgewall Software.
Own description: "The Trac Project"
Bugzilla - Open Source Bug Tracking System.
Own description: "Home :: Bugzilla :: bugzilla.org"
GNATS - GNU defect tracking tool.
Own description: "GNATS - GNU Project - Free Software Foundation (FSF)"
Mantis - Open source bug tracking system.
Own description: "MantisBT is a popular free web-based bug tracking system. It is written in PHP works with MySQL, MS SQL, and PostgreSQL databases. MantisBT has been installed on Windows, Linux, Mac OS, OS/2, and others. It is released under the terms of the GNU Genera..."
Redmine - Open source project management and bug tracking tool written in Ruby.
Own description: "Redmine"
Flyspray - Open source bug tracking tool.
Own description: "start - Flyspray - The Bug Killer!"
JTrac - Open source issue-tracking application.
Own description: "JTrac is a generic issue-tracking web-application that can be easily customized by adding custom fields and drop-downs. Features include customizable workflow, field level permissions, e-mail integration, file attachments and a detailed history view."
Request Tracker - Open source issue tracking system written in Perl by Best Practical.
Own description: "RT: Request Tracker - Best Practical"
Scarab - Open source issue tracking tool.
Own description: "CollabNet, Facilitating Collaborative Software Development"
BugTracker.NET-  Open source bug tracking tool based on ASP.NET, C#, and Microsoft SQL Server.
Own description: "BugTracker.NET Home - Free Bug Tracking"
Ditz - Open source distributed issue tracker designed to work with distributed version control systems written in Ruby.
Own description: "Ditz"
Eventum - Open source bug tracking system developed by the MySQL Technical Support team.
Own description: "MySQL :: Eventum Issue / Bug Tracking System"
PhpBugTracker- Open source bug tracking tool.
Own description: "phpBugTracker - open source issue tracking software"
Project Open - Open source project management and issue tracking tool.
Own description: "]project-open[ is an Open Source Project Management/ERP tool"
Bugs - The Bug Genie - Open source defect tracking tool based on PHP/MySQL.
Own description: "Get The Bug Genie at SourceForge.net. Fast, secure and free downloads from the largest Open Source applications and software directory"
WebIssues Open source issue tracking and team collaboration tool written in PHP by Michal Mecinski.
Own description: "WebIssues | Issue tracking and team collaboration system"
AVS - Advanced Versioning System - Open source configuration management system embedding a bug tracking tool.
Own description: "software configuration management with integrated bug tracking engine, 100% java client deployed through JNLP"
MyTracker - Open source defect tracking and collaboration system.
Own description: "myTracker Home page"

Tuesday, August 02, 2011

Bad idea to automate testing work in early development cycle (Unless it is agile environment)

Why Automation testing?
1) You have some new releases and bug fixes in working module. So how will you ensure that the new bug fixes have not introduced any new bug in previous working functionality? You need to test the previous functionality also. So will you test manually all the module functionality every time you have some bug fixes or new functionality addition? Well you might do it manually but then you are not doing testing effectively. Effective in terms of company cost, resources, Time etc. Here comes need of Automation.
- So automate your testing procedure when you have lot of regression work.
2) You are testing a web application where there might be thousands of users interacting with your application simultaneously. How will you test such a web application? How will you create those many users manually and simultaneously? Well very difficult task if done manually.
- Automate your load testing work for creating virtual users to check load capacity of your application.
3) You are testing application where code is changing frequently. You have almost same GUI but functional changes are more so testing rework is more.
- Automate your testing work when your GUI is almost frozen but you have lot of frequently functional changes.
What are the Risks associated in Automation Testing?
There are some distinct situations where you can think of automating your testing work. I have covered some risks of automation testing here. If you have taken decision of automation or are going to take sooner then think of following scenarios first.
1) Do you have skilled resources?
For automation you need to have persons having some programming knowledge. Think of your resources. Do they have sufficient programming knowledge for automation testing? If not do they have technical capabilities or programming background that they can easily adapt to the new technologies? Are you going to invest money to build a good automation team? If your answer is yes then only think to automate your work.
2) Initial cost for Automation is very high:
I agree that manual testing has too much cost associated to hire skilled manual testers. And if you are thinking automation will be the solution for you, Think twice. Automation cost is too high for initial setup i.e. cost associated to automation tool purchase, training and maintenance of test scripts is very high.
There are many unsatisfied customers regretting on their decision to automate their work. If you are spending too much and getting merely some good looking testing tools and some basic automation scripts then what is the use of automation?
3) Do not think to automate your UI if it is not fixed:
Beware before automating user interface. If user interface is changing extensively, cost associated with script maintenance will be very high. Basic UI automation is sufficient in such cases.
4) Is your application is stable enough to automate further testing work?
It would be bad idea to automate testing work in early development cycle (Unless it is agile environment). Script maintenance cost will be very high in such cases.
5) Are you thinking of 100% automation?
Please stop dreaming. You cannot 100% automate your testing work. Certainly you have areas like performance testing, regression testing, load/stress testing where you can have chance of reaching near to 100% automation. Areas like User interface, documentation, installation, compatibility and recovery where testing must be done manually.
6) Do not automate tests that run once:
Identify application areas and test cases that might be running once and not included in regression. Avoid automating such modules or test cases.
7) Will your automation suite be having long lifetime?
Every automation script suite should have enough life time that its building cost should be definitely less than that of manual execution cost. This is bit difficult to analyze the effective cost of each automation script suite. Approximately your automation suite should be used or run at least 15 to 20 times for separate builds (General assumption. depends on specific application complexity) to have good ROI.
Here is the conclusion:
Automation testing is the best way to accomplish most of the testing goals and effective use of resources and time. But you should be cautious before choosing the automation tool. Be sure to have skilled staff before deciding to automate your testing work
Instead of relying 100% on either manual or automation use the best combination of manual and automation testing. This is the best solution (I think) for every project. Automation suite will not find all the bugs and cannot be a replacement for real testers. Ad-hoc testing is also necessary in many cases.