TESTING NOTES

INTRODUCTION

Software testing is not an activity to take up only when the product is ready. Effective software testing begins with a proper plan, starting from the user requirements stage itself. Software testability is the ease with which a computer program can be tested, and metrics can be used to measure the testability of a product. The requirements for effective software testing are given in the following sub-sections.


Software Testing - Testing Requirements

1) Operability: Usable for a specific purpose.

2) Observability: Capable of being seen or noticed.

3) Controllability: Capable of being controlled.

4) Decomposability: Capable of being partitioned.

5) Simplicity: The quality of being simple or uncompounded.

6) Stability: The quality of being firm and steadfast; enduring and free from change or variation.

7) Understandability: The quality of comprehensible language or thought.






Manual testing

Manual testing is still the most popular method of software application testing, and to make it efficient we at OsB always start with careful planning of the test process and test cases. Our specialists mainly use the V model described in BS 7925 (the Standard for Software Component Testing promoted by the British Computer Society) to test the application through its entire development lifecycle.
We offer the services of a team of professional software testers who work according to the newest international standards and methodologies. We perform all types of tests, from document testing to system testing and user acceptance testing.
The V model
The V model is a framework that describes the software development life cycle activities from requirements specification to maintenance. It illustrates how testing activities can be integrated into each phase of the software development life cycle.


Types of tests
Unit testing: the most 'micro' scale of testing; tests particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code.
Integration testing: testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc.
System testing: black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.
Acceptance testing: determining if software is satisfactory to an end-user or customer.
Usability testing: testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers are usually not appropriate as usability testers.
End-to-end testing: similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Smoke (sanity) testing: typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort.
Regression testing: re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
Performance testing: term often used interchangeably with 'stress' and 'load' testing. Ideally, 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.
Stress testing: term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.

V-model
The V Model, while admittedly obscure, gives equal weight to testing rather than treating it as an afterthought.

Initially defined by the late Paul Rook in the late 1980s, the V was included in the U.K.'s National Computing Centre publications in the 1990s with the aim of improving the efficiency and effectiveness of software development. It's accepted in Europe and the U.K. as a superior alternative to the waterfall model; yet in the U.S., the V Model is often mistaken for the waterfall.

The V shows the typical sequence of development activities on the left-hand (downhill) side and the corresponding sequence of test execution activities on the right-hand (uphill) side.





In fact, the V Model emerged in reaction to some waterfall models that showed testing as a single phase following the traditional development phases of requirements analysis, high-level design, detailed design and coding. The waterfall model did considerable damage by supporting the common impression that testing is merely a brief detour after most of the mileage has been gained by mainline development activities. Many managers still believe this, even though testing usually takes up half of the project time.


Several testing strategies are available and lead to the following generic characteristics:

1) Testing begins at the unit level and works "outward" toward the integration of the entire system.
2) Different testing techniques are appropriate at different points of the S/W development cycle.
Testing is divided into five phases as follows:
a) Unit Testing
b) Integration Testing
c) Regression Testing
d) System Testing
e) Acceptance Testing

The context of Unit and Integration testing changes significantly in the Object Oriented (OO) projects. Class Integration testing based on sequence diagrams, state-transition diagrams, class specifications and collaboration diagrams forms the unit and Integration testing phase for OO projects. For Web Applications, Class integration testing identifies the integration of classes to implement certain functionality.


The meaning of system testing and acceptance testing, however, remains the same in the OO and Web-based application context. The test case design for system and acceptance testing nevertheless needs to handle the OO-specific intricacies.


Relation Between Development and Testing Phases



Testing is planned right from the URD stage of the SDLC. The following table indicates the planning of testing at respective stages. For projects of tailored SDLC, the testing activities are also tailored according to the requirements and applicability.


The "V" Diagram indicating this relationship is as follows






DRE = A / (A + B), where A is the number of defects found by the testing team and B is the number of defects found by the customer during maintenance.
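The DRE ratio can be computed directly; this minimal sketch (the function name and figures are illustrative) shows the arithmetic:

```python
def defect_removal_efficiency(a, b):
    """DRE = A / (A + B), where A = defects found by the testing team
    and B = defects found by the customer during maintenance."""
    if a + b == 0:
        raise ValueError("no defects recorded")
    return a / (a + b)

# e.g. 90 defects caught before release, 10 reported by the customer:
print(defect_removal_efficiency(90, 10))  # 0.9
```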
Refinement of the V model
To decrease cost and time complexity in the development process, small-scale and medium-scale companies follow a refined form of the V model.



Software Testing Phases
1. Unit Testing

As per the "V" diagram of the SDLC, testing begins with unit testing. Unit testing makes heavy use of White Box testing techniques, exercising specific paths in a unit's control structure to ensure complete coverage and maximum error detection.

Unit testing focuses verification effort on the smallest unit of software design - the unit. The units are identified at the detailed design phase of the software development life cycle, and unit testing can be conducted in parallel for multiple units. Five aspects are tested under unit testing considerations:
  • The module interface is tested to ensure that information properly flows into and out of the program unit under test.
  • The local data structure is examined to ensure that data stored temporarily maintains its integrity during all steps in an algorithm's execution.

  • Boundary conditions are tested to ensure that the module operates properly at boundaries established to limit or restrict processing.

  • All independent paths (basis paths) through the control structure are exercised to ensure that all statements in a module have been executed at least once.

  • And finally, all error-handling paths are tested.
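The aspects above translate directly into test cases. A minimal sketch using Python's unittest, with a made-up unit (clamp_percentage) standing in for the module under test; note the boundary-condition cases at and just beyond each limit:

```python
import unittest

def clamp_percentage(value):
    """Hypothetical unit under test: restricts a score to the 0..100 range."""
    if value < 0:
        return 0
    if value > 100:
        return 100
    return value

class ClampPercentageTests(unittest.TestCase):
    def test_lower_boundary(self):
        # boundary conditions: at the limit and just outside it
        self.assertEqual(clamp_percentage(0), 0)
        self.assertEqual(clamp_percentage(-1), 0)

    def test_upper_boundary(self):
        self.assertEqual(clamp_percentage(100), 100)
        self.assertEqual(clamp_percentage(101), 100)

    def test_nominal_path(self):
        # independent path in which neither guard fires
        self.assertEqual(clamp_percentage(50), 50)

# run with: python -m unittest <this_file>
```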





Unit Test Coverage Goals:
Path Coverage:

The path coverage technique verifies whether each of the possible paths in each of the functions has executed properly. A path is a set of branches of possible flow. Since a loop introduces an unbounded number of paths, the path coverage technique employs tests that consider only a limited number of looping possibilities.



Statement Coverage

The statement coverage technique requires that every statement in the program be invoked at least once. It verifies coverage of statements rather than of decision executions or Boolean expressions. The advantage is that this measure can be applied directly to object code and does not require processing of the source code.



Decision (Logic/Branch) Coverage

The decision coverage test technique seeks to identify the percentage of all possible decision outcomes that have been considered by a suite of test procedures. It requires that every point of entry & exit in the software program be invoked at least once. It also requires that all possible conditions for a decision in the program be exercised at least once.



Condition Coverage

This technique seeks to verify the true or false outcome of each Boolean sub-expression. It employs tests that measure the sub-expressions independently.


Multiple-Condition Coverage:

This technique covers the different combinations of interrelated conditions within a decision.
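The difference between these coverage criteria is easiest to see on one compound decision. A small sketch (the function and figures are invented) showing which inputs satisfy decision coverage versus condition coverage:

```python
def eligible_for_discount(is_member, total):
    """One decision with two sub-conditions."""
    if is_member or total > 100:
        return round(total * 0.9, 2)
    return total

# Decision (branch) coverage: the whole condition must be both True and False:
#   (True, 50)  -> decision True;  (False, 50) -> decision False
# Condition coverage: each sub-expression must be True and False at least once:
#   is_member:   True in (True, 50),   False in (False, 150)
#   total > 100: True in (False, 150), False in (True, 50)
# Multiple-condition coverage would require all four combinations.
assert eligible_for_discount(True, 50) == 45.0
assert eligible_for_discount(False, 150) == 135.0
assert eligible_for_discount(False, 50) == 50
```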
Unit Testing (COM/DCOM Technology):
The integral parts covered under unit testing will be:
    • the Active Server Page (ASP) that invokes the ATL component (which in turn can use C++ classes)
    • the actual component
    • the interaction of the component with the persistent store or database, and the database tables
The driver for the unit testing of a unit belonging to a particular component or subsystem depends on the component alone. Wherever a user interface is available, the UI called from a web browser will initiate the testing process. If a UI is not available, then appropriate drivers (code in C++, as an example) will be developed for testing.


Unit testing would also include testing inter-unit functionality within a component. This will consist of two different units belonging to same component interacting with each other. The functionality of such units will be tested with separate unit test(s).
Each unit of functionality will be tested for the following considerations:
Type: Type validation that takes into account things such as a field expecting alphanumeric characters should not allow user input of anything other than that.
Presence: This validation ensures that all mandatory fields are present; they should also be mandated by the database by making the column NOT NULL (this can be verified from the low-level design document).
Size: This validation ensures that a float or variable-character string input from the user does not exceed the size allowed by the database for the respective column.
Validation: This is for any other business validation that should be applied to a specific field, or for a field that is dependent on another field (e.g., a range validation: body temperature should not exceed 106 degrees Fahrenheit), duplicate checks, etc.
GUI based: In case the unit is UI based, GUI related consistency check like font sizes, background color, window sizes, message & error boxes will be checked.
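The type / presence / size / validation checks above can be prototyped as a single helper; everything here (names, messages, ranges) is illustrative rather than taken from any real codebase:

```python
import re

def validate_field(value, *, required=True, max_len=None, pattern=None, in_range=None):
    """Return a list of validation errors for a single input field."""
    errors = []
    if value is None or value == "":
        if required:
            errors.append("presence: mandatory field is empty")  # Presence check
        return errors
    if max_len is not None and len(str(value)) > max_len:
        errors.append(f"size: exceeds {max_len} characters")      # Size check
    if pattern is not None and not re.fullmatch(pattern, str(value)):
        errors.append("type: unexpected characters or format")    # Type check
    if in_range is not None:                                      # Business validation
        low, high = in_range
        if not (low <= float(value) <= high):
            errors.append(f"validation: out of range {low}-{high}")
    return errors

print(validate_field("98.6", in_range=(90, 106)))        # no errors
print(validate_field("", required=True))                 # presence error
print(validate_field("abc!", pattern=r"[A-Za-z0-9]+"))   # type error
```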

2. Integration Testing



After unit testing, modules shall be assembled or integrated to form the complete software package as indicated by the high level design. Integration testing is a systematic technique for verifying the software structure and sequence of execution while conducting tests to uncover errors associated with interfacing.
Black-box test case design techniques are the most prevalent during integration, although a limited amount of white-box testing may be used to ensure coverage of major control paths. Integration testing is sub-divided as follows:
i) Top-Down Integration Testing: Top-Down integration is an incremental approach to construction of program structure. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module (main program). Modules subordinate to the main control module are incorporated into the structure in either a depth-first or breadth-first manner.
ii) Bottom-Up Integration Testing: Bottom-up integration testing, as its name implies, begins construction and testing with atomic modules (i.e., modules at the lowest level in the program structure). Since modules are integrated from the bottom up, processing required for modules subordinate to a given level is always available and the need for stubs is eliminated.
iii) Integration Testing for OO projects:
Thread-Based Testing: Thread-based testing follows an execution thread through objects to ensure that classes collaborate correctly.
In thread-based testing:
    • the set of classes required to respond to one input or event for the system is identified;
    • each thread is integrated and tested individually;
    • regression tests are applied to ensure that no side effects occur.

Use-Based Testing

Use-based testing evaluates the system in layers. The common practice is to employ the use cases to drive the validation process.
In use-based testing:
    • Initially, independent classes (i.e., classes that use very few other classes) are integrated and tested.
    • Next, the dependent classes that use the independent classes are integrated and tested, one layer at a time.

This sequence is repeated, adding and testing the next layer of dependent classes, until the entire system is tested.
Integration Testing for Web applications:
Collaboration diagrams, screens and report layouts are matched to OOAD and associated class integration test case report is generated.
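Top-down integration relies on stubs for subordinate modules that are not yet integrated (bottom-up instead relies on drivers for modules above). A minimal sketch with hypothetical module names:

```python
def tax_service_stub(amount):
    """Stub for the not-yet-integrated subordinate module: canned answer."""
    return 0.0

def real_tax_service(amount):
    """The real subordinate module, integrated later."""
    return round(amount * 0.08, 2)

def checkout(amount, tax_service):
    """Main control module under test; its subordinate is injected."""
    return amount + tax_service(amount)

# Top-down: test the main control module first, against the stub...
assert checkout(100, tax_service_stub) == 100
# ...then swap in the real module as integration moves down the hierarchy.
assert checkout(100, real_tax_service) == 108.0
```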

3. Regression Testing



Each time a new module is added as part of integration testing, new data flow paths may be established, new I/O may occur, and new control logic may be invoked. These changes may cause problems with functions that previously worked flawlessly. In the context of integration test strategy, regression testing is the re-execution of some subset of tests that have already been conducted to ensure that changes have not propagated unintended side effects.
Regression testing may be conducted manually, by re-executing a subset of all test cases. The regression test suite (the subset of tests to be executed) contains three different classes of test cases:
    • A representative sample of tests that will exercise all software functions.
    • Additional tests that focus on software functions that are likely to be affected by the change.
    • Tests that focus on the software components that have been changed.

As integration testing proceeds the number of regression tests can grow quite large. Therefore, the regression test suite shall be designed to include only those tests that address one or more classes of errors in each of the major program functions. It is impractical and inefficient to re-execute every test for every program function once a change has occurred.
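Selecting the regression subset can be as simple as mapping program components to the tests that exercise them; the component and test names below are invented for illustration:

```python
# Hypothetical map from program components to the tests that exercise them.
TEST_MAP = {
    "login":   ["test_login_ok", "test_login_bad_password"],
    "billing": ["test_invoice_total", "test_invoice_rounding"],
    "reports": ["test_monthly_report"],
}
# Representative sample that exercises all major software functions.
SMOKE_SAMPLE = ["test_login_ok", "test_invoice_total", "test_monthly_report"]

def regression_suite(changed_components):
    """Re-run the representative sample plus every test focused on a change."""
    selected = set(SMOKE_SAMPLE)
    for component in changed_components:
        selected.update(TEST_MAP.get(component, []))
    return sorted(selected)

print(regression_suite(["billing"]))
```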

4. System Testing



After the software has been integrated (constructed), sets of high order tests shall be conducted. System testing verifies that all elements mesh properly and the overall system function/performance is achieved.
The purpose of system testing is to fully exercise the computer-based system. The aim is to verify all system elements and validate conformance against the SRS. System testing is categorized into the types listed below; the type(s) of testing shall be chosen depending on the customer / system requirements.
The different types of tests that come under system testing are listed below:
    • Compatibility / Conversion Testing: In cases where the software developed is a plug-in into an existing system, the compatibility of the developed software with the existing system has to be tested. Likewise, the conversion procedures from the existing system to the new software are to be tested.

    • Configuration Testing:Configuration testing includes either or both of the following:
      • testing the software with the different possible hardware configurations
      • testing each possible configuration of the software



If the software itself can be configured (e.g., components of the program can be omitted or placed in separate processors), each possible configuration of the software should be tested.
If the software supports a variety of hardware configurations (e.g., different types of I/O devices, communication lines, memory sizes), then the software should be tested with each type of hardware device and with the minimum and maximum configuration.
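Enumerating the hardware/software combinations for configuration testing is mechanical; a sketch using the standard library (the configuration axes are examples, not a real test plan):

```python
import itertools

# Example configuration axes; a real matrix would come from the test plan.
operating_systems = ["windows", "linux"]
databases = ["oracle", "postgres"]
memory_profiles = ["minimum", "maximum"]

configurations = list(itertools.product(operating_systems, databases, memory_profiles))
# 2 x 2 x 2 = 8 configurations; the same functional suite runs against each one.
for config in configurations:
    pass  # run_suite(config) would go here in a real harness
print(len(configurations))  # 8
```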

    • Documentation Testing: Documentation testing is concerned with the accuracy of the user documentation. This involves
      i) Review of the user documentation for accuracy and clarity
      ii)Testing the examples illustrated in the user documentation by preparing test cases on the basis of these examples and testing the system

    • Facility Testing:Facility Testing is the determination of whether each facility (or functionality) mentioned in SRS is actually implemented. The objective is to ensure that all the functional requirements as documented in the SRS are accomplished.

    • Installability Testing:Certain software systems will have complicated procedures for installing the system. For instance, the system generation (sysgen) process in IBM Mainframes. The testing of these installation procedures is part of System Testing.
      Proper Packaging of application, configuration of various third party software and database parameters settings are some issues important for easy installation.

    • Performance Testing: Performance testing is designed to test the run-time performance of software within the context of an integrated system. Performance testing occurs throughout all phases of testing. Even at the unit level, the performance of an individual module is assessed as white-box tests are conducted. However, performance testing is complete only when all system elements are fully integrated and the true performance of the system is ascertained as per the customer requirements.

    • Performance Testing for Web Applications: The most realistic strategy for rolling out a Web application is to do so in phases. Performance testing must be an integral part of designing, building, and maintaining Web applications. Automated testing tools play a critical role in measuring, predicting, and controlling application performance. There is a paragraph on automated tools available for testing Web applications at the end of this document.
      In the most basic terms, the final goal for any Web application set for high-volume use is for users to consistently have
      i) continuous availability
      ii) consistent response times
      even during peak usage times. Performance testing has five manageable phases:
      i) architecture validation
      ii) performance benchmarking
      iii) performance regression
      iv) performance tuning and acceptance
      v) continuous performance monitoring, which is necessary to control performance and manage growth.

    • Procedure Testing:If the software forms a part of a large and not completely automated system, the interfaces of the developed software with the other components in the larger system shall be tested. These may include procedures to be followed by
      i) The human operator
      ii) Database administrator
      iii) Terminal user
      These procedures are to be tested as part of System testing.

    • Recovery Testing:Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed. If recovery is automatic (performed by the system itself), re-initialisation, check pointing mechanisms, data recovery, and restart are each evaluated for correctness. If recovery requires human intervention, the time required to repair is evaluated to determine whether it is within acceptable limits.

    • Reliability Testing: The various software-testing processes all share the goal of testing the software's reliability. "Reliability Testing", as a part of System Testing, encompasses the testing of any specific reliability factors that are stated explicitly in the SRS. It may not be practical to devise test cases for certain reliability factors; for example, if a system has a downtime objective of two hours or less per forty years of operation, there is no known way of testing this factor.
      However, if a reliability factor is stated as, say, a mean time to failure (MTTF) of 20 hours, it is possible to devise test cases using mathematical models.




    • Security Testing:Security testing attempts to verify that protection mechanisms built into a system will protect it from improper penetration. During security testing, the tester plays the role(s) of the individual who desires to penetrate the system. Security testing involves designing test cases that try to penetrate into the system using all possible mechanisms.

    • Security Testing (Web applications): In the case of web applications, one has to take into account testing with the appropriate firewall set-up. For data security, consider data-transfer checksums, encryption or the use of digital certificates, MD5 hashing of all vulnerable data, and database integrity. For user security, encrypted passwords, audit-trail logs (containing who, where, why, when and what information), automatic log-out based on system specifications (e.g., after 5 minutes of inactivity), and the display of user information on the UI can all be taken care of programmatically in the design and code.
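The data-transfer checksum mentioned above can be sketched with Python's standard hashlib (MD5 is shown only because the text names it; SHA-256 is the safer modern choice):

```python
import hashlib

def transfer_checksum(payload: bytes) -> str:
    """Digest sent alongside the data; MD5 as mentioned in the notes,
    though SHA-256 is preferable for anything security-sensitive."""
    return hashlib.md5(payload).hexdigest()

sent = b"account=42;amount=100.00"
digest = transfer_checksum(sent)
# The receiver recomputes the digest; a mismatch means the data changed in transit.
assert transfer_checksum(sent) == digest
assert transfer_checksum(b"account=42;amount=999.00") != digest
```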

    • Serviceability Testing:Serviceability testing covers the serviceability or maintainability characteristics of the software. The requirements stated in the SRS may include
      i) service aids to be provided with the system, e.g., storage-dump programs, diagnostic programs
      ii) the mean time to debug an apparent problem
      iii) the maintenance procedures for the system
      iv) the quality of the internal-logic documentation
      Test cases are to be devised to ensure the coverage of the stated aspects.

    • Storage Testing:Storage testing is to ensure that the storage requirements are within the specified bounds. For instance, the amounts of the primary and secondary storage the software requires and the sizes of temporary files that get created.

    • Stress Testing: Stress tests are designed to confront programs with abnormal situations. Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume. Test cases may be tailored by keeping some of the following examples in view:
      i) Input data rates may be increased by an order of magnitude to determine how input functions will respond
      ii) Test cases that may cause excessive hunting
      iii) Test cases that may cause thrashing in a virtual operating system may be designed
      iv) Test cases that may create disk resident data.
      v) Test cases that require maximum memory or other resources may be executed
      To achieve this, the software is subjected to heavy volumes of data and the behaviour is observed.

    • Stress Testing (Web applications):This refers to testing system functionality while the system is under unusually heavy or peak load; it is similar to the validation testing but is carried out in a "high-stress" environment. This requires some idea about expected load levels of the Web application. One of the criteria for web applications would be number of concurrent users using the application.

    • Usability Testing:Usability testing is an attempt to uncover the software usability problems involving the human-factor.
      Examples:
      i) Has each user interface been tailored to the intelligence, educational background, and environmental pressures of the end user?
      ii) Are the outputs of the program meaningful, useable, storable, etc.?
      iii) Are the error messages meaningful, easy to understand?
    • Usability Testing (Web Applications):The intended audience will determine the "usability" testing needs of the Web site. Additionally, such testing should take into account the current state of the Web and Web culture.

    • Volume Testing:Volume Testing is to ensure that the software
      i) can handle the volume of data as specified in the SRS
      ii) does not crash with heavy volumes of data, but gives an appropriate message and/or makes a clean exit.
      To achieve this, the software is subjected to heavy volumes of data and the behaviour is observed.
      Examples:
      i) A compiler would be fed an absurdly large source program to compile
      ii) A linkage editor might be fed a program containing thousands of modules
      iii) An operating system's job queue would be filled to capacity
      iv) If a software is supposed to handle files spanning multiple volumes, enough data are created to cause the program to switch from one volume to another
      As a whole, the test cases shall try to test the extreme capabilities of the programs and attempt to break the program so as to establish a sturdy system.

    • Link testing (for web based applications): This type of testing determines if the site's links to internal and external Web pages are working. A Web site with many links to outside sites will need regularly scheduled link testing, because Web sites come and go and URLs change. Sites with many internal links (such as an enterprise-wide Intranet, which may have thousands of internal links) may also require frequent link testing.
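A link checker first harvests the hrefs from each page and then requests each one; the harvesting step can be sketched with the standard library (the sample page is invented, and a real checker would follow up each URL with an HTTP request and verify its status):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects every href on a page so each link can then be requested
    and its HTTP status verified."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

page = '<p><a href="/about.html">About</a> and <a href="http://example.com/">out</a></p>'
collector = LinkCollector()
collector.feed(page)
internal = [link for link in collector.links if not link.startswith("http")]
external = [link for link in collector.links if link.startswith("http")]
print(internal, external)
```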

    • HTML validation (for web based applications):The need for this type of testing will be determined by the intended audience, the type of browser(s) expected to be used, whether the site delivers pages based on browser type or targets a common denominator. There should be adherence to the HTML programming guidelines as defined in Qualify.

    • Load testing (for web based applications): If there is a large number of interactions per unit time on the Web site, testing must be performed under a range of loads to determine at what point the system's response time degrades or fails. The Web server software and configuration settings, CGI scripts, database design, and other factors can all have an impact.
    • Validation or functional testing (for web applications):This is typically a core aspect of testing to determine if the Web site functions correctly as per the requirements specifications. Sites utilising CGI-based dynamic page generation or database-driven page generation will often require more extensive validation testing than static-page Web sites.
    • Extensibility / Promotability Testing: This verifies that software can be moved from one run-time environment to another without requiring modifications to the software, e.g., that the application can move from the development environment to a separate test environment.
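Ramping up concurrency and watching response times, as described under load testing above, can be sketched with the standard library (the simulated request is a stand-in for a real HTTP call):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def simulated_request(_):
    """Stand-in for one hit on the site; a real load test would issue an HTTP GET."""
    start = time.perf_counter()
    sum(range(10_000))  # pretend server-side work
    return time.perf_counter() - start

def worst_response_time(concurrent_users, requests_per_user=5):
    """Fire requests from the given number of concurrent users and
    report the slowest observed response."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        timings = list(pool.map(simulated_request,
                                range(concurrent_users * requests_per_user)))
    return max(timings)

# Step the load up and watch where response time starts to degrade:
for users in (1, 5, 25):
    print(users, "users ->", f"{worst_response_time(users):.6f}s worst response")
```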

5. Acceptance Testing



When custom software is built for one customer, a series of acceptance tests is conducted to enable the customer to validate all requirements as per the user requirement document (URD). Acceptance tests are conducted at the development site or at the customer site, depending on the requirements and mutually agreed principles, and may be carried out by the customer or by the development team, depending on the type of project and the contractual agreement.




**Object Oriented Process**
The Object Oriented Methodology of Building Systems takes the objects as the basis. For this, first the system to be developed is observed and analyzed and the requirements are defined as in any other method of system development. Once this is done, the objects in the required system are identified. For example in case of a Banking System, a customer is an object, a chequebook is an object, and even an account is an object.
In simple terms, Object Modeling is based on identifying the objects in a system and their interrelationships. Once this is done, the coding of the system is done. Object Modeling is somewhat similar to the traditional approach of system designing, in that it also follows a sequential process of system designing but with a different approach. The basic steps of system designing using Object Modeling may be listed as:
System Analysis
As in any other system development model, system analysis is the first phase of development in case of Object Modeling too. In this phase, the developer interacts with the user of the system to find out the user requirements and analyses the system to understand the functioning.
Based on this system study, the analyst prepares a model of the desired system. This model is purely based on what the system is required to do. At this stage the implementation details are not taken care of. Only the model of the system is prepared based on the idea that the system is made up of a set of interacting objects. The important elements of the system are emphasized.
System Design
System Design is the next development stage where the overall architecture of the desired system is decided. The system is organized as a set of sub systems interacting with each other. While designing the system as a set of interacting subsystems, the analyst takes care of specifications as observed in system analysis as well as what is required out of the new system by the end user.
As the basic philosophy of Object-Oriented method of system analysis is to perceive the system as a set of interacting objects, a bigger system may also be seen as a set of interacting smaller subsystems that in turn are composed of a set of interacting objects. While designing the system, the stress lies on the objects comprising the system and not on the processes being carried out in the system as in the case of traditional Waterfall Model where the processes form the important part of the system.
Object Design
In this phase, the details of the system analysis and system design are implemented. The Objects identified in the system design phase are designed. Here the implementation of these objects is decided as the data structures get defined and also the interrelationships between the objects are defined.
Let us here deviate slightly from the design process and understand first a few important terms used in the Object-Oriented Modeling.
As already discussed, the object-oriented philosophy is very close to the real world, and hence is gaining popularity: systems are seen as sets of interacting objects, just as in the real world. To implement this concept, process-based structural programming is not used; instead, objects are created using data structures. Just as every programming language provides various data types, from which variables of those types can be created, with objects we can define our own types.
For example, we can define a data type called pen and then create and use several objects of this data type. This concept is known as creating a class.
Class: A class is a collection of similar objects. It is a template where certain basic characteristics of a set of objects are defined. The class defines the basic attributes and the operations of objects of that type. Defining a class does not create any object; it only creates a template. To actually create objects, instances of the class are created as the case requires.
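The pen example mentioned below can be sketched in Python. This is a minimal illustration of the class-versus-instance distinction; the attribute and method names are chosen for illustration only.

```python
# A class is only a template: it defines the attributes and operations
# shared by all objects of that type.
class Pen:
    """Template describing the common attributes and operations of pens."""

    def __init__(self, color, ink_color, pen_type):
        self.color = color          # attribute: body color
        self.ink_color = ink_color  # attribute: ink color
        self.pen_type = pen_type    # attribute: e.g. "ballpoint"

    def write(self):
        """An operation available to every object of the class."""
        return f"writing in {self.ink_color}"

# Defining the class created no object; objects exist only once
# instances are created:
red_pen = Pen("blue", "red", "ballpoint")
print(red_pen.write())  # -> writing in red
```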
Abstraction: Classes are built on the basis of abstraction, where a set of similar objects are observed and their common characteristics are listed. Of all these, the characteristics of concern to the system under observation are picked up and the class definition is made. The attributes of no concern to the system are left out. This is known as abstraction.
The abstraction of an object varies according to its application. For instance, while defining a pen class for a stationery shop, the attributes of concern might be the pen color, ink color, pen type, etc., whereas a pen class for a manufacturing firm would contain the pen's physical dimensions, such as its diameter, shape and size.
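The two abstractions of the same real-world pen can be sketched side by side. The class and attribute names below are illustrative assumptions, not part of any real system:

```python
# Same real-world object, two different abstractions: each class keeps
# only the attributes of concern to its own system.
class ShopPen:
    """Abstraction for a stationery shop: sales-relevant attributes only."""
    def __init__(self, color, ink_color, pen_type, price):
        self.color = color
        self.ink_color = ink_color
        self.pen_type = pen_type
        self.price = price          # the shop cares about price...

class FactoryPen:
    """Abstraction for a manufacturer: production-relevant attributes only."""
    def __init__(self, diameter_mm, length_mm, body_shape):
        self.diameter_mm = diameter_mm   # ...the factory cares about
        self.length_mm = length_mm       # physical dimensions instead
        self.body_shape = body_shape
```

Neither class is "the" pen; each leaves out the attributes of no concern to its system, which is exactly what abstraction means here.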
Inheritance: Inheritance is another important concept in this regard. This concept is used to apply the idea of reusability of objects. A new class can be defined using a similar existing class with a few new features. For instance, a class vehicle can be defined with the basic functionality of any vehicle, and a new class called car can be derived from it with a few modifications. This saves the developer's time and effort, as existing classes are reused without much change.
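The vehicle/car example can be sketched as follows; the method names are illustrative assumptions:

```python
class Vehicle:
    """Base class with the basic functionality of any vehicle."""
    def __init__(self, wheels):
        self.wheels = wheels

    def describe(self):
        return f"vehicle with {self.wheels} wheels"

class Car(Vehicle):
    """Derived class: reuses Vehicle and adds a few modifications."""
    def __init__(self, make):
        super().__init__(wheels=4)   # reuse the existing functionality
        self.make = make             # ...and add a new feature

    def describe(self):
        return f"{self.make} " + super().describe()

print(Car("sedan").describe())  # -> sedan vehicle with 4 wheels
```

Only the differences had to be written; everything else came from the existing class, which is the reusability the text describes.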
Coming back to our development process: in the Object Design phase, the designer decides on the classes in the system based on these concepts. The designer also decides whether the classes need to be created from scratch, whether existing classes can be used as they are, or whether new classes can be inherited from them.
Implementation
During this phase, the class objects and the interrelationships of these classes are translated and actually coded using the programming language decided upon. The databases are made and the complete system is given a functional shape.
The complete OO methodology revolves around the objects identified in the system. When observed closely, every object exhibits some characteristics and behavior. Objects recognize and respond to certain events. For example, considering a window on the screen as an object, the size of the window changes when the resize button of the window is clicked.
Here the clicking of the button is an event to which the window responds by changing its state from the old size to the new size. While developing systems based on this approach, the analyst makes use of certain models to analyze and depict these objects. The methodology supports and uses three basic Models:
  • Object Model - This model describes the objects in a system and their interrelationships. This model observes all the objects as static and does not pay any attention to their dynamic nature.
  • Dynamic Model - This model depicts the dynamic aspects of the system. It portrays the changes occurring in the states of various objects with the events that might occur in the system.
  • Functional Model - This model basically describes the data transformations of the system. This describes the flow of data and the changes that occur to the data throughout the system.
While the Object Model is most important of all as it describes the basic element of the system, the objects, all the three models together describe the complete functional system.
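The window example above, an object changing state in response to an event, can be sketched as a simple event handler. Class and method names here are illustrative, not from any GUI toolkit:

```python
# An object recognizes an event and responds by changing its state,
# which is what the Dynamic Model captures.
class Window:
    def __init__(self, width, height):
        self.width = width      # current state of the object
        self.height = height

    def on_resize(self, new_width, new_height):
        """Event handler: the object moves from the old size to the new."""
        self.width, self.height = new_width, new_height

w = Window(640, 480)
w.on_resize(800, 600)        # clicking the resize button fires this event
print((w.width, w.height))   # -> (800, 600)
```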
Compared to conventional system development techniques, OO modeling provides many benefits, among them all the benefits of object orientation itself. Some of these are:
  • Reusability - The classes once defined can easily be used by other applications. This is achieved by defining classes and putting them into a library of classes where all the classes are maintained for future use. Whenever a new class is needed the programmer looks into the library of classes and if it is available, it can be picked up directly from there.
  • Inheritance - The concept of inheritance helps the programmer reuse existing code: making small additions to existing classes quickly creates new classes.
  • The programmer spends less time and effort and can concentrate on other aspects of the system, thanks to the reusability feature of the methodology.
  • Data Hiding - Encapsulation is a technique that allows the programmer to hide the internal functioning of objects from the users of those objects. It separates the internal functioning of an object from its external interface, so the implementation can change without affecting the object's users, and the object's internal state is protected against accidental modification by its users.
  • The systems designed using this approach are closer to the real world as the real world functioning of the system is directly mapped into the system designed using this approach.
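The data-hiding point above can be sketched in a few lines. The class below is a hypothetical example, using Python's name-mangling convention to mark state as internal:

```python
class Counter:
    """The internal count is hidden; users interact only through the
    public methods, so the internal representation can change freely."""
    def __init__(self):
        self.__count = 0   # double underscore: not part of the public interface

    def increment(self):
        self.__count += 1

    def value(self):
        return self.__count

c = Counter()
c.increment()
c.increment()
print(c.value())  # -> 2
# Accessing c.__count directly raises AttributeError: the internals
# are hidden behind the object's external interface.
```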
Advantages of Object Oriented Methodology
  • Object Oriented Methodology closely represents the problem domain. Because of this, it is easier to produce and understand designs.
  • The objects in the system are relatively immune to requirement changes, so changes can be accommodated more easily.
  • Object Oriented Methodology designs encourage more re-use. New applications can use the existing modules, thereby reducing the development cost and cycle time.
  • The Object Oriented Methodology approach is more natural. It provides good structures for thinking and abstracting, and leads to modular design.

 

Distributed Test Automation Infrastructure Plan #1

This plan is mainly for those who already know how much value test automation can bring and are trying to apply it to raise the quality of their software. However, I do not exclude readers who have not yet implemented this type of solution because they lack a coherent approach to the more complex infrastructure such tests require. There are two roads that lead to the need to embed automated testing in a larger, structured process that extends test cases across multiple machines, multiple systems and multiple environments. The first starts from the need to improve existing, good tests that still run on a single machine, usually by hand; once another operating system, another processor type or more hosts come into play, things start to get complicated. The second road starts at the planning phase of the test implementation, with the thought: "Can this be achieved in this particular way, under 30 different systems, preferably in parallel? And if so, how do we report it all?"
While the first road is usually travelled with desktop applications, the second starts from network solutions with increased performance requirements.

I do not want to write here about the specific existing tools that cost tens of thousands of dollars, are very extensive, are usually tied to a particular test robot, and of which we would probably use only a few percent of the capabilities.
Generally speaking, we would like to have the “something” with the following characteristics:
  • Client-server architecture
  • Works on multiple system platforms: Windows, Linux, Mac, Xbox, etc.
  • Distributes the tests (mostly scripts) rather than executing them
  • Allows feedback reports from the execution to be transferred back
  • Can, using virtualization, start a machine (or several) with the desired parameters and then run tests on it
  • Simple API
  • Provides information on whether a machine can still start something, based on CPU and memory load
In my opinion, the most important item on the list above is the one about test distribution, not execution. Such a tool should be decoupled both from the last link in the test execution chain, the robot that actually performs the test, and from the first: the base scenarios, test cases and steps. If we have a separate transport layer, a connector with a simple API, we can focus on improving the test process (scenarios, coverage, etc.) rather than on improving test scripts for a particular test robot.
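The connector idea can be sketched as a small dispatcher: it only distributes scripts to agents with spare capacity and collects their reports, leaving execution to the robot on each machine. All class names, thresholds and the host names below are hypothetical assumptions, not a real tool's API:

```python
# Minimal sketch of a test-distribution connector: dispatch scripts,
# check agent load, collect feedback reports. Execution stays with the
# test robot running on each agent.
class Agent:
    def __init__(self, host, cpu_load, mem_load):
        self.host = host
        self.cpu_load = cpu_load   # fraction 0.0-1.0
        self.mem_load = mem_load

    def is_free(self):
        # "Can the machine still start something?" - CPU and memory check.
        # The 0.8 threshold is an arbitrary illustrative choice.
        return self.cpu_load < 0.8 and self.mem_load < 0.8

class Dispatcher:
    """Distributes tests; does not execute them."""
    def __init__(self, agents):
        self.agents = agents
        self.reports = []

    def dispatch(self, script):
        for agent in self.agents:
            if agent.is_free():
                return f"{script} -> {agent.host}"  # hand off to the robot
        return None  # no capacity: queue or fail

    def report(self, result):
        self.reports.append(result)  # feedback reports travel back

agents = [Agent("win-01", 0.95, 0.4), Agent("linux-01", 0.2, 0.3)]
d = Dispatcher(agents)
print(d.dispatch("login_test.py"))  # -> login_test.py -> linux-01
```

With this transport layer in place, the scenarios on one side and the robots on the other can evolve independently.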
Distributed test automation diagram

 

5 Great Automation Tools Based On Image Recognition

This time we take to the workshop a set of tools for automated testing based on image recognition. Treating the application this way corresponds to black-box testing and allows you to reach the intended results quickly.
I advise you to view all the links included; their order is random, so read carefully. I have not seen large production systems based on scripts built with this technology. Do they operate at a professional level? Are they reliable? What about mobile applications? What about changing GUIs, skins and themes? The questions remain open…

Project Sikuli
A very simple scripting language combined with a great screenshot and slicing tool. Scripts are connected visually with pictures. This is a research project developed by people at MIT. The engine and IDE are written in Java, so you can run it on any Java-capable GUI system. It includes a great subcomponent/subpicture feature, very helpful with checkboxes and radio buttons. I will review Sikuli very soon; I am very impressed.


RoutineBot
A well-known tool, mature and rapidly growing. You can read my article "RoutineBot – Test Automation Tool Review – Technology does not matter?". It lets you create test scripts very quickly and is reliable.


Ranorex
An enterprise-class tool, very professional and mature. My article titled "Covers Everything? – Ranorex Automation Tool Review" shows only record- and code-level automation, but Ranorex also includes an image-based engine, described at the following link: Image-Based Automation with Ranorex Recorder http://www.ranorex.com/support/user-guide-20/image-recognition.html


T-Plan Robot
An interesting tool, formerly known as VNCRobot, based on a VNC connection to the test machine. The GUI under test does not matter; VNC is available for nearly all software platforms, including mobile via PocketVNC. In addition to its basic functions, T-Plan Robot also includes test case management. There are GPL and Enterprise versions.

EggPlant
Another technology-agnostic test automation tool, again based on a VNC connection. For now the IDE is available for Linux and Mac, with Windows to come. It includes a scheduler and an image collection tool which lets you collect, for example, icons in different states: disabled, selected, etc.