© Copyright: Ben Livson, 1988-2024. All rights reserved.



First and Second Laws of Software Engineering

Evaluation of CASE Tools


Data Collection


NB. This article was written in 1998. IEEE and leading CASE vendors conducted classification and rationalisation work in 1992-1995, with limited results. Whereas object orientation is back in a big way, CASE is still struggling. Both CASE and OO were the great fads of the computer industry in the late eighties.

The first and second laws of thermodynamics are interpreted for Computer Aided Software Engineering (CASE). The first law of software engineering warns CASE advocates that Ex Nihilo Nihil Fit: nothing comes from nothing. The second law of thermodynamics states: the most probable processes that may occur in an isolated system are those in which entropy increases or remains constant.

In the author's view, CASE tools are likely to cause more damage than good. A framework for CASE evaluation is presented.

CASE, as an engineering discipline, has to support software engineering metrics and models. See [1,2,3] for a discussion of quantitative software models covering cost, productivity, reliability and complexity. This paper asserts that the preconditions for accepting CASE shall be models evaluating CASE methods and tools, extensive data collection to validate those models, and positive evaluation results. The disappointment with the current state of software engineering methods is so severe that developing evaluation criteria for CASE is a must that suffers no further delay. Major inventions in the natural sciences and medicine require massive controlled and repeatable experimentation with extensive statistical analysis before qualifying for publication.

The criteria for accepting CASE methods or tools for publication do not extend beyond the referee audit process, which ascertains only minimal subjective qualities of a scientific paper. This is extreme in light of the fact that software engineering is expected to solve problems that may affect our lives. No other engineering discipline would apply methods or tools prior to extensive experimentation and proof of usability and usefulness. In contrast, a business accepts CASE methods or tools after interviewing a few happy (biased? motivated?) users, or perhaps after minimal on-site trials with toy projects.

The First and Second Laws of Software Engineering

The first axiom is that computers and software have no inherent value; until bioware is introduced they are not even edible. The sole value of computers and software is appreciated in terms of the problems we need to solve.

The second axiom is that WORK is needed to solve problems. The WORK required is the sum of the minimal effort required to solve a given problem and the effort wasted as a function of entropy. The minimal work required to solve the problem is defined as the work required for the formal specification of the problem. Assuming the existence of a CASE tool able to automatically generate software from formal specifications of a given class of problems, we have the ideal case of zero entropy, with no work wasted.

The first law of software engineering is thus:

    WORK(PROBLEM, ENTROPY) = WORK(PROBLEM) + WORK(ENTROPY)

This is simply a restatement of the Conservation of Energy and the first law of thermodynamics, see [4]. In particular, there are no miracles: since WORK(ENTROPY) is never negative, no method or tool can reduce the total work below WORK(PROBLEM).
No functional form of work is presented, except that work is an exponentially increasing function of problem complexity and entropy.
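The article deliberately gives no functional form. As a purely illustrative sketch, not from the article, the function `total_work` below and its constants `base_effort` and `k` are assumptions chosen only to show how an exponential dependence behaves: extra entropy multiplies the work rather than adding to it.

```python
import math

def total_work(complexity, entropy, base_effort=1.0, k=1.0):
    # Illustrative assumption only: total work grows exponentially
    # with problem complexity and environmental entropy.
    return base_effort * math.exp(k * (complexity + entropy))

# Raising entropy from 0.5 to 1.0 multiplies the work by e^0.5,
# regardless of the problem's complexity.
low = total_work(complexity=3.0, entropy=0.5)
high = total_work(complexity=3.0, entropy=1.0)
```

The multiplicative effect is why even a small permanent increase in environmental entropy is expensive over a project's life.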

Entropy is a function of management, people, hardware and software logistics, schedule, and all environmental factors. Intuitively, whatever you add or introduce to your software engineering environment is likely to increase entropy and hence the work wasted. Thus, the second law of software engineering is phrased as:

The most probable processes that may occur in a software engineering environment are those in which the entropy either increases or remains constant.

This is a strong impediment to introducing CASE methods or tools. The great importance of the proposed second law of software engineering is that it indicates which processes are more likely to occur in software engineering as a whole. There are many processes that could occur because they comply with other laws, such as the first law of software engineering; however, it is very improbable that they will occur, because they violate the second law.

Basic Software Engineering Metrics

The efficiency E in software engineering is defined as the ratio of the problem solving work to the total work. Thus,

E = E(PROBLEM, ENTROPY) = WORK(PROBLEM) / WORK(PROBLEM, ENTROPY) = WORK(PROBLEM) / TOTAL_WORK

Typical values for E range from 3% to 15%, see [2]. Current data collection does not support accurate estimation of WORK(PROBLEM), that is, the minimal work required to solve a given problem. It is rare to have formal problem statements. Most software engineers intertwine the WHAT (problem statement) with the HOW (design and implementation). Data collection is usually delayed, and thus it is convenient to combine analysis and design. Also, there are considerable variations in life cycle paradigms. Note, however, that measuring efficiency E does not require measuring entropy. Instead of WORK it is possible to use COST as the basic concept; the values, of course, will differ, because there are COST drivers not related to work. Observe also that E does not depend on time.
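The definition above is a straightforward ratio. A minimal sketch (the function name and the person-month figures are illustrative assumptions, not data from the article):

```python
def efficiency(work_problem, total_work):
    # E = WORK(PROBLEM) / TOTAL_WORK: the fraction of total effort
    # that went into solving the problem rather than into entropy.
    if total_work <= 0:
        raise ValueError("TOTAL_WORK must be positive")
    return work_problem / total_work

# A hypothetical project whose formal problem statement took
# 12 person-months out of 160 total lands inside the 3%-15%
# range cited from [2].
e = efficiency(work_problem=12.0, total_work=160.0)  # 0.075
```

Because both quantities are measured in the same unit (work or cost), E is dimensionless, which is why it does not depend on time.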

Evaluation of CASE Solutions

Each CASE solution affects the level of entropy in your software engineering environment. The two conditions for accepting a CASE solution shall be:

  1. The CASE solution increases EFFICIENCY.
  2. The CASE solution does not decrease QUALITY and RELIABILITY. A relaxed condition requires software developed and supported by a CASE solution to have acceptable quality and reliability. More broadly: CASE has to provide ACCEPTABLE software.
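The two conditions can be read as a simple predicate. In this sketch the function name and the before/after measurement scheme are assumptions for illustration; the article does not prescribe how quality or reliability are scored.

```python
def accept_case_solution(e_before, e_after,
                         quality_before, quality_after,
                         reliability_before, reliability_after):
    # Condition 1: the CASE solution increases EFFICIENCY.
    # Condition 2: it does not decrease QUALITY or RELIABILITY.
    return (e_after > e_before
            and quality_after >= quality_before
            and reliability_after >= reliability_before)

# A tool that lifts E from 5% to 8% without hurting quality or
# reliability passes; the same lift with lowered reliability fails.
ok = accept_case_solution(0.05, 0.08, 0.9, 0.9, 0.99, 0.99)
bad = accept_case_solution(0.05, 0.08, 0.9, 0.9, 0.99, 0.95)
```

Note that condition 2 is a hard veto: no efficiency gain compensates for a drop in quality or reliability.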



Data Collection Requirements

Data collection shall start at the first customer contact. There shall be a strict separation between the PROBLEM statement, the "WHAT", and the software design and implementation, the "HOW". This separation is required for an accurate WORK(PROBLEM) estimate. It is immaterial for our discussion whether intertwining the "WHAT" and the "HOW" is necessary for implementation; the only requirement is that both can be measured separately. A major problem with the EFFICIENCY measure is that it may take many years to know the TOTAL_WORK.

The problems with interim estimates of the TOTAL_WORK are the typical difficulties in estimating software cost or productivity.

These estimates can be improved if extensive corporate data collection is available.
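One simple way corporate data can improve an interim estimate is to rescale it by how past estimates turned out. This is a minimal sketch under stated assumptions: the function, the mean-ratio calibration, and the sample figures are all illustrative, not a method the article specifies.

```python
def calibrated_total_work(interim_estimate, past_actuals, past_estimates):
    # Scale the interim estimate by the mean actual/estimate ratio
    # observed on completed corporate projects.
    ratios = [a / e for a, e in zip(past_actuals, past_estimates)]
    return interim_estimate * sum(ratios) / len(ratios)

# Past projects overran by 20% and underran by 10%, so a fresh
# interim estimate of 100 is scaled by the mean ratio 1.05.
work = calibrated_total_work(100.0, [120.0, 90.0], [100.0, 100.0])  # 105.0
```

The more completed projects feed the calibration, the less any single outlier distorts the correction factor.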

The problem is then reduced to the familiar statistical experimentation problem of obtaining approval for a new MEDICINE. Here the medicine is CASE, the patients are projects, and the patient data is the accumulated project E data. The experimentation has to be accurate, repeatable and controllable.

The guinea pigs are trial projects run by software engineering R&D groups. Only after sufficient proof obtained by sacrificing hundreds of poor guinea pigs, and only after extensive evidence that our CASE medicine has no toxic side-effects such as lowered product quality or reliability, does the Drug Administration give permission to start clinical trials with patients (real-life projects).

The author proposes the establishment of Federal, European and Japanese standards bodies for the certification of CASE solutions. The number of CASE vendors would shrink to a few key players: the cost of CASE certification would be several times the cost of developing the system.

CASE customers would benefit in price, efficiency and quality. The larger number of CASE solutions sold would more than compensate for the certification cost, leading to more affordable solutions. The availability of certified CASE systems would boost CASE into a multi-billion dollar business.

New CASE solutions would be patented, very much like medicines. Companies would still be allowed to purchase uncertified CASE solutions and freely try out new methods and tools. However, it is likely that projects funded by the DoD or by major corporations would use certified CASE solutions. A merger of operating systems, basic software and CASE would be likely. Software engineering publications would be separated into an A (certified product) series and a B (free) series. Corporate readers, chronically short of time, would probably restrict themselves to the A series, whereas the B series would be an R&D domain.

References
  1. Quantitative Software Models, Data & Analysis Center for Software, Rome Air Development Center, Griffiss AFB, NY 13441, March 1979.
  2. Software Engineering Metrics and Models, Conte, Dunsmore and Shen, Benjamin/Cummings Publishing Company, Inc., 1986.
  3. P982 Draft Standard for Measures to Produce Reliable Software, IEEE Computer Society, 1987.
  4. Fundamental University Physics, M. Alonso and E. Finn, Addison-Wesley Publishing Company, Inc., 1968.

