CSTE & Software Testing

ACID Testing

ACID stands for:

Atomicity
Consistency
Isolation
Durability

For OLTP databases, and wherever a database handles mission-critical business transactions and critical information, the ACID properties are essential for reliability and stability. For customers who demand high-quality databases that keep their information confidential, ACID is what you need.
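
A minimal sketch of the atomicity property, using Python's built-in sqlite3 module (the accounts table, names, and overdraft rule are hypothetical): either both halves of the transfer commit together, or the rollback leaves the database untouched.

```python
import sqlite3

# In-memory database; isolation_level=None lets us manage transactions explicitly.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")

def transfer(src, dst, amount):
    """Move funds atomically: both UPDATEs commit together, or neither does."""
    conn.execute("BEGIN")
    try:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                     (amount, src))
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                     (amount, dst))
        (balance,) = conn.execute(
            "SELECT balance FROM accounts WHERE name = ?", (src,)).fetchone()
        if balance < 0:                  # hypothetical overdraft rule
            raise ValueError("insufficient funds")
        conn.execute("COMMIT")
    except Exception:
        conn.execute("ROLLBACK")         # atomicity: the partial update is undone
        raise

transfer("alice", "bob", 30)             # balances become 70 and 80
```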

Inspection and Reviews

Inspection, in software engineering, refers to the peer review of any work product by trained individuals who look for defects using a well-defined process. An inspection might also be referred to as a Fagan inspection, after Michael Fagan, the inventor of the process.

An inspection is one of the most common review practices found in software projects. The goal of an inspection is for all of the inspectors to reach consensus on a work product and approve it for use in the project. Commonly inspected work products include software requirements specifications and test plans. In an inspection, a work product is selected for review, a team is gathered for an inspection meeting, and a moderator is chosen to run the meeting. Each inspector prepares for the meeting by reading the work product and noting every defect found. The purpose of the meeting is to identify and repair those defects. In this context, a defect is any part of the work product that would keep an inspector from approving it. For example, if the team is inspecting a software requirements specification, each defect will be text in the document with which an inspector disagrees.

Software Testing Glossary

These definitions have been extracted from Version 6.2 of the British Computer Society Specialist Interest Group in Software Testing (BCS SIGIST) Glossary of Testing Terms.

1 acceptance testing: Formal testing conducted to enable a user, customer, or other authorized entity to determine whether to accept a system or component. [IEEE]

2 actual outcome: The behaviour actually produced when the object is tested under specified conditions.

3 ad hoc testing: Testing carried out using no recognised test case design technique.

4 alpha testing: Simulated or actual operational testing at an in-house site not otherwise involved with the software developers.

5 arc testing: See branch testing.

6 Backus-Naur form: A metalanguage used to formally describe the syntax of a
language. See BS 6154.

7 basic block: A sequence of one or more consecutive, executable statements
containing no branches.
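
As an illustration (not part of the glossary), the comments in this small Python function mark its basic blocks; each one is straight-line code with no branch inside it.

```python
def describe(x: int) -> str:
    # Basic block 1: consecutive statements with no branch inside.
    doubled = x * 2
    shifted = doubled + 1
    if shifted > 10:      # the decision ends block 1
        # Basic block 2
        return "large"
    # Basic block 3
    return "small"
```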

8 basis test set: A set of test cases derived from the code logic which ensures
that 100% branch coverage is achieved.

9 bebugging: See error seeding. [Abbott]

10 behaviour: The combination of input values and preconditions and the
required response for a function of a system. The full specification of a
function would normally comprise one or more behaviours.

11 beta testing: Operational testing at a site not otherwise involved with the
software developers.

12 big-bang testing: Integration testing where no incremental testing takes
place prior to all the system's components being combined to form the system.


13 black box testing: See functional test case design.

14 bottom-up testing: An approach to integration testing where the lowest
level components are tested first, then used to facilitate the testing of
higher level components. The process is repeated until the component at the
top of the hierarchy is tested.

15 boundary value: An input value or output value which is on the boundary
between equivalence classes, or an incremental distance either side of the
boundary.

16 boundary value analysis: A test case design technique for a component in
which test cases are designed which include representatives of boundary
values.

17 boundary value coverage: The percentage of boundary values of the
component's equivalence classes which have been exercised by a test case
suite.

18 boundary value testing: See boundary value analysis.
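
To make boundary value analysis concrete, here is a hypothetical sketch using pytest: a validator that accepts ages 18 through 65 inclusive, tested at each boundary and one step either side of it. The function and the range are illustrative inventions, not glossary material.

```python
import pytest

def is_eligible(age: int) -> bool:
    """Hypothetical rule: ages 18 through 65 inclusive are eligible."""
    return 18 <= age <= 65

# Boundary values: each class boundary plus an incremental distance either side.
@pytest.mark.parametrize("age, expected", [
    (17, False),  # just below the lower boundary
    (18, True),   # lower boundary
    (19, True),   # just above the lower boundary
    (64, True),   # just below the upper boundary
    (65, True),   # upper boundary
    (66, False),  # just above the upper boundary
])
def test_boundaries(age, expected):
    assert is_eligible(age) == expected
```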

19 branch: A conditional transfer of control from any statement to any other
statement in a component, or an unconditional transfer of control from any
statement to any other statement in the component except the next statement,
or when a component has more than one entry point, a transfer of control to an
entry point of the component.

20 branch condition: See decision condition.

21 branch condition combination coverage: The percentage of combinations of
all branch condition outcomes in every decision that have been exercised by a
test case suite.

22 branch condition combination testing: A test case design technique in which
test cases are designed to execute combinations of branch condition outcomes.


23 branch condition coverage: The percentage of branch condition outcomes in
every decision that have been exercised by a test case suite.

24 branch condition testing: A test case design technique in which test cases
are designed to execute branch condition outcomes.

25 branch coverage: The percentage of branches that have been exercised by a
test case suite.

26 branch outcome: See decision outcome.

27 branch point: See decision.

28 branch testing: A test case design technique for a component in which test
cases are designed to execute branch outcomes.
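
An illustrative sketch of branch testing and branch coverage (terms 25 and 28): the function below contains two decisions, and three test cases exercise all four branch outcomes. A tool such as coverage.py can confirm this when run with its --branch option.

```python
def classify(n: int) -> str:
    """Two decisions, hence four branch outcomes to cover."""
    if n < 0:                 # decision 1: true / false
        return "negative"
    if n % 2 == 0:            # decision 2: true / false
        return "even"
    return "odd"

# Three tests exercise all four branch outcomes (100% branch coverage).
assert classify(-1) == "negative"   # decision 1 true
assert classify(2) == "even"        # decision 1 false, decision 2 true
assert classify(3) == "odd"         # decision 1 false, decision 2 false
```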

29 bug: See fault.

30 bug seeding: See error seeding.

31 C-use: See computation data use.

32 capture/playback tool: A test tool that records test input as it is sent to
the software under test. The input cases stored can then be used to reproduce
the test at a later time.
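
The record-then-replay idea behind such tools can be sketched in a few lines of Python; this toy harness is only an illustration of the concept, not how any particular tool works.

```python
class CapturePlayback:
    """Toy capture/playback harness: records inputs sent to the system
    under test, then replays them later to reproduce the run."""

    def __init__(self, system_under_test):
        self.sut = system_under_test
        self.recorded = []                 # captured input cases

    def send(self, input_case):
        self.recorded.append(input_case)   # capture the input
        return self.sut(input_case)

    def replay(self):
        return [self.sut(case) for case in self.recorded]  # playback

harness = CapturePlayback(str.upper)       # system under test: a simple function
first = [harness.send(s) for s in ["a", "b"]]
assert harness.replay() == first           # replay reproduces the original run
```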

33 capture/replay tool: See capture/playback tool.

34 CAST: Acronym for computer-aided software testing.

35 cause-effect graph: A graphical representation of inputs or stimuli
(causes) with their associated outputs (effects), which can be used to design
test cases.

36 cause-effect graphing: A test case design technique in which test cases are
designed by consideration of cause-effect graphs.
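
A cause-effect graph is commonly reduced to a decision table from which test cases are read off. The hypothetical sketch below enumerates the combinations of two causes and prints the resulting effect; the membership/discount rule is invented for illustration.

```python
from itertools import product

# Hypothetical causes (inputs) and effect (output): a discount is granted
# only when the customer is a member AND the order total exceeds 100.
def effect(is_member: bool, total_over_100: bool) -> bool:
    return is_member and total_over_100

# Each combination of cause values becomes a test case (a decision-table column).
for is_member, total_over_100 in product([False, True], repeat=2):
    print(f"causes=({is_member}, {total_over_100}) "
          f"-> effect={effect(is_member, total_over_100)}")
```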

37 certification: The process of confirming that a system or component
complies with its specified requirements and is acceptable for operational
use. From [IEEE].

38 Chow's coverage metrics: See N-switch coverage. [Chow]

39 code coverage: An analysis method that determines which parts of the
software have been executed (covered) by the test case suite and which parts
have not been executed and therefore may require additional attention.

40 code-based testing: Designing tests based on objectives derived from the
implementation (e.g., tests that execute specific control flow paths or use
specific data items).

41 compatibility testing: Testing whether the system is compatible with other
systems with which it should communicate.

42 complete path testing: See exhaustive testing.

43 component: A minimal software item for which a separate specification is
available.

44 component testing: The testing of individual software components. After
[IEEE].

45 computation data use: A data use not in a condition. Also called C-use.

46 condition: A Boolean expression containing no Boolean operators. For
instance, A<B is a condition but A and B is not.
