The Silver Bullet News

To Drastically Improve Software Reliability and Productivity

Latest News and Issues (November 2004 - March 2005)

 

 

This page is where you will find short news articles and other musings related to the Silver Bullet hypothesis and Project COSA. Articles are listed in reverse chronological order.

More Recent News

March 2005

3/22/2005
Delegate! Delegate!
3/16/2005
Fighting the Devil
3/9/2005
Blind Code and Legacy Systems
3/2/2005
Another One
February 2005
2/24/2005
It's Time for a Change
2/23/2005
COSA and Multicore Processors
2/22/2005
The List
2/19/2005
The Need for a Global Solution
2/15/2005
Embedded Software Systems
2/13/2005
Adaptive Software
2/9/2005
IBM's New Cell Processor Missed the Mark
2-7-2005
The Enemy
2-6-2005
What's Bugging the High-Tech Car?
Killing Multiple Birds With One Stone
2-4-2005
La Bala de Plata (La Solució Definitiva)
2-3-2005
Do Your Part
2-2-2005
Vehicle Automation
2-1-2005
The Big Lie
January 2005
1-31-2005
Costly Failures
1-29-2005
Blind Code Again!
1-27-2005
Things That Suck
COSA Enquiries
1-26-2005
Silence of the Lab
1-23-2005
AI From the Bible
Memory Usage
1-21-2005
Toshiba and IBM's new "Cell Processor"
Cell Processor Update
OSNews
1-6-2005
Transmeta May Exit Processor Business
December 2004
12-28-2004
Airlines Computer Woes
12-27-2004
The Pirelli INTERNETional Award, IX Edition
12-21-2004
The New COSA Reliability Principle
A bon entendeur, salut!
The Blame Game
12-7-2004
Mapou Automation Research, Inc.
November 2004
11-12-2004
Still Crazy After all these Years
My Critics' Dilemma
Stumbling Blocks
Heavy Price
What Took You so Long?
11-10-2004
X-Prize for Software Silver Bullet?
11-05-2004
Failure to Communicate
11-02-2004
The Vision Problem
Sensors and the Vision Problem
Cell-Level Dependencies
Component-Level Dependencies
 

Older News

 

March 22, 2005

Delegate! Delegate! 3:35 PM EST

Software composition in COSA can be summed up in one word: Delegate! That is, a problem should be broken down into as many components as possible. One reason has to do with what I call delayed data dependencies: sometimes it is more efficient to delay a sensor reading until a number of operations are finished. Remember that COSA data sensors (comparison operators) are automatically coupled with relevant effectors (operators).

In the example pictured above, the != comparison sensor is invoked automatically whenever the 100+ effector performs an addition on some data (not shown). This configuration can be used to implement a simple traditional loop that terminates when a specific condition is reached. However, complex loops that consist of multiple sequential steps and/or inner loops cannot be efficiently implemented without delaying certain dependencies. Consider this C++ code snippet excerpted from a QuickSort algorithm:

  while ( left < right ) {                    // outer loop comparison
    while( array[left] <= pivot_item )        // inner loop: scan up from the left
      left++;
    while( array[right] > pivot_item )        // inner loop: scan down from the right
      right--;
    if ( left < right )
      swap(array, left, right);               // exchange the two out-of-place elements
  }

The two inner loops should be implemented as a small separate component. One reason for doing so is that sensors in COSA are not explicitly invoked. A sensor is implicitly and automatically executed whenever a relevant variable is modified in such a way as to potentially affect it. In the example, this would mean that the "left < right" comparison sensors would be invoked every time left or right is incremented or decremented. This would be too expensive, performance-wise. It can easily be prevented by delegating the loop functions to an external component, because effector/sensor associations cannot be made across component boundaries. Even the "if" statement should be its own small component. This way, by associating the sensor with one or more message effectors, the outer loop comparison is executed only when necessary.
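
For readers who prefer to see the idea in conventional code, here is a rough C++ analogy of the delegation (a sketch only: the function names and the bounds guards in the inner loops are my own additions, and real COSA components exchange signals rather than function calls). The point is simply that the outer "left < right" comparison is evaluated only when the delegated inner loops have finished their work, not on every change to left or right:

  #include <utility>   // std::swap

  // Inner loops delegated to their own routine, playing the role of a
  // separate component. Bounds guards are added here for safety.
  void scan_from_both_ends(int array[], int& left, int& right, int pivot_item)
  {
    while (left < right && array[left] <= pivot_item)   // scan up from the left
      left++;
    while (right > left && array[right] > pivot_item)   // scan down from the right
      right--;
  }

  // Outer loop: its "left < right" test runs only after the delegated
  // scan returns, not on every increment or decrement.
  void partition_scan(int array[], int left, int right, int pivot_item)
  {
    while (left < right) {
      scan_from_both_ends(array, left, right, pivot_item);
      if (left < right)
        std::swap(array[left], array[right]);   // swap the two out-of-place elements
    }
  }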

I am preparing a COSA example page that will contain a QuickSort component in which I demonstrate how recursion can be done in COSA with a special cell that I call the STACK cell. I will also introduce another signal pattern sensor called the ALL cell. The latter fires as soon as all of its input synapses have fired regardless of firing order. I should be done with the QuickSort example within the next couple of days, time permitting.

 

March 16, 2005

Fighting the Devil 12:45 PM EST

The day before yesterday, President Bush presented the National Medal of Technology to Watts Humphrey (Fellow of the Software Engineering Institute at Carnegie Mellon University) for applying the principles of engineering and science to software development. Humphrey is famous for his Capability Maturity Model (CMM) and two methods of software process improvement known as the Personal Software Process (PSP) and the Team Software Process (TSP). Essentially, Humphrey believes that the best way to improve software quality is to apply the traditional engineering techniques used by bridge and construction engineers to the software process.

In spite of the fact that he is dead wrong (software engineering has little or nothing to do with mechanical engineering), Humphrey benefits from the incredible propaganda and money-making machine that is Carnegie Mellon University. CMU has managed to convince the U.S. government and a large part of the software industry that it has a better understanding of the software reliability and security problem than anyone else. A few years ago, CMU received a pile of cash (approximately $30 million) from NASA and various big-name international companies to launch the Sustainable Computing Consortium (SCC) with the expressed goal of finding a solution to the software crisis. The SCC is managed by CMU's CyLab.

In my opinion, CyLab's primary goal is to amass as much money (mainly from the U.S. government) as possible and to generate revenue for CMU. Even though CyLab is funded by outside "partners", it has no qualms about requesting royalty payments from its clients for the use of whatever technologies it might come up with. Well, CyLab has been operating since 2002. Has it come up with anything interesting or useful in the way of solving the software crisis? Don't hold your breath: when it comes to the real reason that software is unreliable, insecure and hard to develop, the folks at CyLab and CMU's computer science department are as clueless as it gets.

Will I ever succeed in convincing enough people in the software community of the soundness of the COSA software model so as to make a real difference? When I consider the sort of people and organizations that I am up against, I sometimes feel like I am fighting against the devil himself. It is enough to drive one to despair but I am not about to give up. I must take the battle to the devil's own territory. It is for this reason that I am adding Watts Humphrey, the darling of academia and the software quality industry, to my list of the real enemies of reliable software. Note that the folks at CyLab are very much aware of Project COSA. They just made a conscious decision to ignore it. The reason is obvious: if COSA is successful, they are out of a job.

I cannot fight this battle alone. I must enlist as many people as possible at the grassroots level, the people who are actually writing code for a living. I hope my readers will join me in fighting for a good cause. It is an uphill battle but, together, we can win! And the world will be a better place for it. Write to me and let me know what you can do or what you are already doing to support this fight.

 

March 9, 2005

Blind Code and Legacy Systems 10:45 AM EST

The world has invested a lot of time and money in conventional software systems. These legacy systems are not about to disappear any time soon, regardless of the superiority of any new model. The question is, what should be done with existing systems to improve their reliability and maintainability? As I have pointed out elsewhere on this site, the biggest problem with conventional software has to do with the lack of automatic means to resolve dependencies. This makes complex software systems a nightmare to maintain and modify because there is no guarantee that a minor change or addition will not introduce an unforeseen and potentially catastrophic side effect down the road. Not even the most stringent testing procedures can guarantee total coverage.

The principle that makes it possible to identify data dependencies is rather simple. On the one hand, there are operations (effectors) that modify data variables in memory and, on the other, there are comparison operators (sensors) that test the variables for a specific condition and return either a true or false value. An event dependency exists if a sensor reading on a variable can be affected by an operation on the same variable. Depending on the sensor's output value, one or more data variables are in turn modified by the code. This is a data dependency. An unresolved dependency exists when the modification of a variable is not followed by a comparison operation in a timely manner. In COSA, dependency resolution is handled automatically by the development tool through effector/sensor associations, but in conventional software, the burden is on the programmer to remember to invoke all related comparison operators soon after the change, usually by calling one or more subroutines. Oftentimes, the programmer either forgets or, due to unfamiliarity with the code, is unable to identify the dependencies. This can lead to bad assumptions in parts of the program and, consequently, to failures.
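
To make this concrete, here is a minimal conventional-code illustration (the variable and function names are hypothetical, chosen only for this sketch). The comparison in check_buffer_full plays the role of the sensor, the increment plays the role of the effector, and the first version of the effector leaves the dependency unresolved:

  #include <iostream>

  static int byte_count = 0;
  static const int BUFFER_LIMIT = 512;

  // Sensor: tests the variable for a specific condition.
  void check_buffer_full()
  {
    if (byte_count >= BUFFER_LIMIT)
      std::cout << "buffer full - flushing\n";
  }

  // Effector with an unresolved dependency: the variable is modified
  // but the related comparison is never invoked afterward.
  void append_byte_blind()
  {
    ++byte_count;
  }

  // Effector with the dependency resolved: the sensor is invoked
  // in a timely manner after every change.
  void append_byte()
  {
    ++byte_count;
    check_buffer_full();
  }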

It is possible to devise an automated inspection or analysis tool that can parse existing source code (or even native code) and identify potentially unresolved dependencies. The idea is that any code that modifies a given variable should be coupled with code that tests the condition of the variable. After analysis, flagged dependencies could be presented in textual format as a list of paired (effector/sensor) code segments. Note that such a tool would serve only as a warning aid. It would be up to the programmer to examine the context of the flagged code segments and determine whether or not there is an actual unresolved dependency. If so, the programmer must add code to fix the problem. The tool should be used every time an existing legacy system is modified.

Notice to reliability tool vendors and commercial software developers:  Many of the ideas presented on these pages have obvious commercial value. Please do not try to use intellectual property laws in order to gain an unfair advantage over your competitors. I offer these ideas freely to anybody who wants to implement them in commercial products and make an honest profit.

 

March 2, 2005

Another One 11:30 AM EST

I added a new name to my list of enemies of software reliability.

 

February 24, 2005

It's Time for a Change 7:40 PM EST

What would you say if someone told you that modern computer software is based on a technology that is more than 1.5 centuries old? Well, it's true. Lady Ada Lovelace was the first person to write an algorithm (table of instructions) for a computer. It happened in 1842. The computer was Charles Babbage's analytical engine, a machine built out of gears, cogs, and rotating shafts. We have been using the algorithmic approach ever since. All computer CPUs (central processing units) are optimized for algorithmic software. And yet, as I show in the Silver Bullet article, the algorithm is the reason that the software industry is in a state of crisis. Why? Because complex algorithmic software is simply not reliable.

The algorithmic approach has served us well but, for some time now, it has been showing its age. It has reached the limit of its usefulness and is rapidly becoming a serious liability. The unreliability of software is the biggest crisis the computer industry has ever faced. All the talk about security these days is beside the point. The security problem is really a software reliability problem. Most viruses, worms, Trojans and spyware take advantage of bugs and flaws in either the operating system or software applications like browsers and email programs. I say it is time to retire the dinosaur once and for all. Now is the time for a non-algorithmic approach to software construction. The culprit is not complexity, as we have been wrongly led to believe by the likes of Brooks et al. There is no reason that we cannot guarantee that our programs are completely free of defects, regardless of their complexity. Read the Silver Bullet article to find out why using the algorithm as the basis of software construction is the real reason behind the crisis and why switching to a signal-based, reactive synchronous software model will solve the problem.

 

February 23, 2005

COSA and Multicore Processors 11:05 AM EST

Multicore processors seem to be all the rage lately, what with the recent announcements by major players such as IBM, Toshiba, Sony, Intel and AMD. Essentially, a multicore processor is two or more processors in one. It is a form of parallel processing whereby multiple instruction streams are loaded into separate processor cores (on-chip engines) and executed in parallel. The COSA software model, by virtue of its exclusive use of elementary concurrent objects, is an ideal candidate for multicore processing. A COSA-optimized processor maintains two on-chip lists of objects, an input list and an output list. While one list is being processed, the other is being filled (see the section on the COSA cell processor lists for more details on this topic). The advantage of this approach is that every object in an input list can be processed concurrently. A properly designed multicore processor optimized for a reactive, signal-based software model can easily channel these objects onto available cores for multiprocessing. There is no question, in my opinion, that the COSA software model, once adopted by the computer industry, will revolutionize computing in more ways than one: performance, reliability and productivity will be drastically improved.
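
To give a rough idea of the two-list scheme (a single-threaded sketch only; the types and names are hypothetical, and a real COSA processor would hand the cells of the input list to separate cores instead of iterating over them):

  #include <vector>

  struct Cell {
    std::vector<Cell*> targets;     // destinations of this cell's output signal
    bool fire() { return true; }    // placeholder for the cell's actual operation
  };

  // One processing cycle: every cell in the input list is independent and
  // could be handled by a separate core. Signals emitted during the cycle
  // schedule their target cells into the output list, which then becomes
  // the input list of the next cycle.
  void run_cycle(std::vector<Cell*>& input_list, std::vector<Cell*>& output_list)
  {
    output_list.clear();
    for (Cell* cell : input_list) {
      if (cell->fire()) {
        for (Cell* target : cell->targets)
          output_list.push_back(target);
      }
    }
    input_list.swap(output_list);
  }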

 

February 22, 2005

The List 4:05 PM EST

As I promised, I added a new page to the site. It is a list of people that I consider to be the enemies of reliable software. I will add more names to the list as they come to my attention. If your name is on the list and you feel that you have been unfairly treated or that you were somehow misquoted or misunderstood, do write me a note. And if you think that I have libeled you in any way, have your lawyer contact my lawyer.

 

February 19, 2005

The Need for a Global Solution 3:55 PM EST

The software reliability crisis is not an isolated concern. It affects the entire world. While I would not discourage private companies and individuals from embarking on their own research and development efforts, I think it would be detrimental to society and the world for any single entity to gain a monopoly on any new technological solution to the crisis. COSA merely points the way to the solution. I strongly believe that the implementation of any COSA-compliant operating system and the necessary development tools destined for widespread use should conform to internationally agreed-upon standards. The global importance of software reliability is such that the initial standardization work should be done by some world-renowned standards body, preferably financed by the World Bank Group and working under the auspices of the United Nations or an international consortium.

This does not mean that there is no room for private companies to offer specialized wares to address various needs of the market. There are plenty of opportunities for companies to make a profit. Examples of possible commercial products are COSA-optimized processors, development tools, browsers, email and word processing components, etc... The important thing, in my opinion, is global compatibility. The universal adoption of a single software construction model and the elimination of thousands of incompatible OSes and programming languages would trigger a renaissance of the golden age of computing. It is certain to revolutionize the computer and automation industries and the benefits to humanity will be immense.

 

February 15, 2005

Embedded Software Systems 9:08 AM EST

Embedded software is everywhere. Cameras, cell phones, DVD players, set-top boxes, toys, locomotives, modems, disk drives, printers, copiers, airplanes and microwave ovens are just a few examples of applications using embedded software. A single car may contain a dozen or more embedded control systems. Needless to say, reliability is the most important issue on the minds of embedded system designers. It is a constant headache because developers can never be 100% sure that their creations will not fail at one time or another. Reliability engineers love to talk about software reliability measurements the way bridge engineers talk about structural safety but the probability that a complex algorithmic program will fail is really unknown and will always remain so. The sad reality is that, no matter how reliable a safety-critical software system is estimated to be, it is never good enough. Unless the system is guaranteed to be 100% free of defects, it is potentially catastrophic.

Let's face it. Software reliability measurements based on probability are really accidents waiting to happen. Bridge engineers can reliably predict that a bridge can be safely used under specific conditions, but software engineers cannot do the same for complex algorithmic systems. The proper goal of software quality management is not to measure the reliability of software programs (a waste of time, in my opinion) but to guarantee their total reliability. It is about time that embedded system developers, CPU and tool vendors wake up and smell the COSA coffee, so to speak. Are you paying attention, QNX, Green Hills, Wind River, QuickLogic, OSE, MIPS, etc...? The silver bullet is here. When will you slay that nasty beast? Hear this: The first embedded software tool vendor to guarantee 100% reliability will make a killing in the market and leave its competitors scrambling and begging for mercy. It gets even better. The first CPU vendor to support synchronous reactive (signal-based) software will capture the market for mission-critical automated systems. À bon entendeur... (He who has an ear...)

 

February 12, 2005

Adaptive Software 9:50 AM EST

The similarity between the COSA software model and neurobiology is one of the more striking features of the model. The behavioral logic of a COSA program is determined by the synaptic connections between the cells that comprise the program. Every program interacts with its environment, that is to say, it is affected by changes in the environment and can, in turn, effect changes in it. One of the strengths of the COSA model is that it can automatically test every assumption a program (hence, the program's designer) makes with regard to the relative timing of environmental changes, whether incoming (sensor) or outgoing (motor). This is a direct consequence of the principle of complementarity (PC). There are two principles derived from the PC that a program can use to discover conflicts in its logical assumptions: the principle of sensor coordination (PSC) and the principle of motor coordination (PMC).

Assumptions about the evolution of external events can sometimes be wrong and a COSA program can be used to test them. Every synaptic connection in COSA has a strength property which reflects its past usage. If a bad assumption is discovered, the program can be preset to automatically disconnect it or to alert the developer to the fact. The culprit is invariably the weakest connection. Note that a bad assumption does not mean that there is a defect in the program. It only means that the developer was not sure of the correctness of the assumption. A COSA program can thus be seen as an inductive tool for testing assumptions. In a future article, I will talk about how a COSA system can be used not only to find bad connections, but also to discover new connections based on a simple reward/punishment mechanism.
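
Here is a rough sketch of the bookkeeping involved (the structure and names are mine and purely illustrative): every signal that traverses a connection bumps its strength, and when a conflict is detected the weakest connection is flagged as the likely bad assumption, to be disconnected or reported to the developer.

  #include <algorithm>
  #include <vector>

  struct Connection {
    int source_cell;
    int target_cell;
    unsigned long strength = 0;   // incremented each time a signal traverses the connection
    bool enabled = true;
  };

  void on_signal(Connection& c) { ++c.strength; }

  // Called when a timing conflict (bad assumption) is detected: flag and
  // disconnect the least-used connection, or merely report it to the developer.
  Connection* flag_weakest(std::vector<Connection>& connections)
  {
    if (connections.empty())
      return nullptr;
    auto weakest = std::min_element(connections.begin(), connections.end(),
      [](const Connection& a, const Connection& b) { return a.strength < b.strength; });
    weakest->enabled = false;
    return &*weakest;
  }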

 

February 9, 2005

IBM's New Cell Processor Missed the Mark 12:05 PM EST

IBM, Toshiba and Sony have unveiled their new Cell Processor (CP) with much hype and fanfare. The CP is being touted by some as the Intel x86 killer. Its primary advantage over previous processors is speed. However, this increase in performance comes at the expense of forcing operating system designers and programmers to organize programs to fit the cell structure required by the new processor. A "cell" is a small self-contained bundle of algorithms and data that can be dispatched to one of several computing engines (on-chip cores) for processing. Multiple cells can run simultaneously. Thus the CP is essentially a very fast multi-processor.

The problem with the CP is that it introduces nothing really new to computing. It fails to address the most pressing problems in the computer industry today: software unreliability and low productivity. The reason is that CP-compatible programs are still based on the algorithm. Intel, AMD and other chip manufacturers have nothing to fear from the new kid on the block. In fact, in my considered opinion, the CP is a step backward, if only because it perpetuates the same fundamental flaw that has been the bane of the computer industry from the beginning. I secretly hope that some smaller player, preferably from a third world country, comes out with a killer revolutionary processor that leaves everyone else in the dust. Of course, such a processor would have to support a synchronous, signal-based, COSA-like model. The revolution is still in the future and it may come from unexpected quarters, considering that the big processor vendors have not learned and, indeed, seem incapable of learning the hard lessons of history. Are you listening, Mexico, Brazil, Argentina, China, India, Malaysia, Africa? Are you listening, Eastern Europe? Are you listening, Transmeta?

 

February 7, 2005

The Enemy 4:35 PM EST

There are a few people in the software reliability industry that I consider to be the enemies of reliable software. They thrive on the continued existence of buggy systems. They have a vested interest in seeing that the crisis lasts as long as possible. They have invested a lot in traditional methods of software engineering and they have a lot to lose if a new way is found which solves the problem once and for all. They will fight it tooth and nail. Their favorite battle cry is "there is no silver bullet." It is getting tiresome. One such person is Nancy G. Leveson, a professor in the computer science department of the University of Washington in Seattle. Here is what Leveson has to say about the software crisis in an article posted on her company's (Safeware Engineering) web site:

When a physicist makes an erroneous claim, such as in cold fusion, the idea may stay around for a while on the fringes of the field. However, the insistence on repeatability and careful experimentation allows such claims to be dismissed by the scientific majority within a relatively short period of time. We need to insist on the same level of evaluation and proof with regard to claims about software engineering techniques and tools. Unfortunately, this is rarely done and our belief in silver bullets persist. Even after Brooks' and Parnas' carefully reasoned and widely-acclaimed papers [8, 27], we are still seeing claims that the silver bullet has been found.

I am not advocating that everyone stop the research they are doing in software engineering and start testing hypotheses and building foundations. Invention is a very important part of progress in engineering. Tools and techniques are needed for the serious problems we face today. But inventions that are based on established principles will be more effective in solving the complex problems we are attempting to solve. We need to recognize the unproven assumptions and hypotheses underlying our current software engineering techniques and tools and evaluate them in the context of what has actually been demonstrated about these hypotheses instead of what we would like to believe.

This sort of self-serving, politically biased rant bothers me. The fact is that Brooks' paper is anything but carefully reasoned. It is full of logical holes and unsubstantiated claims, something that I have already demonstrated in the Silver Bullet page. In my opinion, Brooks' essay on the causes of software unreliability has been a disaster, not only for computer science, but for the world at large. And besides, why is "careful reasoning" good enough for Brooks but not good enough for his critics? Why must the critics of Brooks' erroneous ideas show "evaluation and proof" while Brooks himself is exempt from the same rule?

People like Leveson are doing the computer industry and the world a great disservice. They are using their positions of authority to perpetuate a myth, and a very harmful myth at that. They have built a personality cult around Brooks and his fallacious doctrine just as they have done with Alan Turing over the years. They are not helping with the problem, their claims to the contrary notwithstanding. They are, in fact, a hindrance. I am in the process of compiling an enemies list which will contain the names and pronouncements of various prominent personalities in the software quality industry and the computer science community. Frederick P. Brooks will, of course, be at the top of the list. It should be ready within the next week or so. If these folks feel that I am defaming their character or reputation in any way, they are free to have their lawyers contact my lawyer.

 

February 6, 2005

What's Bugging the High-Tech Car? 5:40 PM EST

The software, of course. Just a few days ago, I wrote about the increasing complexity of vehicle automation and how the newer systems come with high risks for both consumers and manufacturers. This story from the New York Times is being discussed on Slashdot. COSA, of course, is the ideal model for mission-critical embedded software systems. It will eliminate all these costly bugs. When is the car industry going to learn? Are you listening, Mercedes, BMW and the others?

Killing Multiple Birds with One Stone 5:30 PM EST

Currently, there are essentially three operating systems competing against each other for the desktop market: Microsoft Windows, Unix/Linux and variants, and Apple's OS X, which is itself a Unix descendant. Windows holds the lion's share of the desktop market and is the subject of much hatred and animosity coming from Linux and Apple fanatics. Personally, I think all three are hopelessly flawed. There is only one way to dethrone these operating systems and that is to replace them with another that offers something that everyone wants desperately, something that is lacking in all the others. That something is 100% guaranteed software reliability, not only in the operating system itself, but in every application that runs on it. A COSA-based OS is just what the doctor ordered.

 

February 4, 2005

La Bala de Plata (La Solució Definitiva) 11:30 AM EST

Translation: The Silver Bullet (The Definitive Solution). It came to my attention recently that the Silver Bullet article has been translated into Catalan, one of the official languages spoken in Catalunya (Catalonia), a large autonomous region of northeast Spain. Catalan is also the official language of the tiny principality of Andorra located between France and Spain in the Pyrenees. The translation was done by Carles, a forum member at ComEsFa?Org. Moltes gràcies, Carles. ComEsFa?Org is a Unix/Linux-oriented site. Apparently, Carles translated the original 2002 Silver Bullet article which can still be found at sbcglobal.net. I am delighted that Project COSA is generating worldwide interest. I would like to see all the COSA articles translated into as many languages as possible. If any reader is fluent in a language other than English and would like to volunteer as a translator, please contact me.

 

February 3, 2005

Do Your Part 1:55 PM EST

Many people in the computer industry and academia have visited the Silver Bullet site and are benefiting from it. Are you doing your part to promote Project COSA? If you find this site useful and/or enlightening, there are a number of things you can do to promote these ideas and draw other people's attention to them. For example: Discuss Project COSA with your colleagues; mention it on an internet software forum; tell your boss about it; email the link to a friend; write an article and submit it to a magazine; make it the subject of your master's or PhD thesis; etc.... My goal is to see COSA adopted by the computer industry as the new universal computing model for both hardware and software. This will undoubtedly start a revolution in the industry and I don't say this to brag. I sincerely believe that COSA will bring about a safer and more prosperous world. Like it or not, COSA is on its way to becoming the next BIG thing in the computer world. Why not be a part of the revolution? Write to me and let me know what you can do or what you have already done. Don't be afraid to stick your neck out. COSA is IT!

 

February 2, 2005

Vehicle Automation 2:40 PM EST

The automobile industry is spending huge sums of money on embedded automation software. Some of the latest offerings include cruise control systems that slow down when traffic slows down, collision avoidance systems, RFID-enabled keys that unlock the car as the driver approaches, etc... These automated systems come with huge risks because any defect is potentially catastrophic and can be very expensive in repair and liability costs. So manufacturers find themselves having to subject the new systems to extremely stringent testing procedures in an effort to ensure safety and reliability.

The problem is that, when it comes to public safety, extremely reliable software is not good enough. What is needed is software that is guaranteed to be 100% free of defects. Unfortunately for the car industry, there is no way to guarantee that conventional software is bug-free. This is because algorithmic systems are temporally inconsistent and there is no way to ensure that all data dependencies are resolved. For that, one must use a reactive signal-based system like the COSA system. The ability to automatically resolve all data dependencies is crucial to reliability and maintainability. It makes modification a breeze: no more unpredictable side effects. This is the primary strength of the COSA software model.

If the automotive industry wants to add value to its products, it must use as much automation as possible. This is what customers want and what designers dream about. But many of the more advanced systems may never see the light of day due to concerns over cost, safety and liability. However, this dark cloud has a silver lining. The industry can unlock the full promise of automation by abandoning the algorithmic model and adopting a synchronous signal-based software construction model.

 

February 1, 2005

The Big Lie 3:10 PM EST

Software reliability experts and the computer science community have been living under a big lie. We have been told by the cognoscenti (see Brooks' 1987 paper) that the unreliability of software comes from the difficulty of enumerating and understanding all the possible states of a program. How did such an obvious myth survive for so long in a community of professionals who fancy themselves among some of the most intelligent creatures on the surface of the earth? I already wrote about this in the Silver Bullet page but it bears repeating. Frederick P. Brooks explains the reason for software unreliability thus:

From the complexity comes the difficulty of enumerating, much less understanding, all the possible states of the program, and from that comes the unreliability.

How did this unfounded assertion make it past peer review? Why was it never challenged? Why was Brooks' paper turned into some sort of religious testament that reliability experts can use to justify their inflated salaries and continued employment? The truth is that it is not the states of a program that matter but how the program reacts to specific state changes. These are the conditions that a program is designed to detect. On the basis of these conditions, it can then effect its own changes in its environment (data properties). To repeat, only the conditions need to be tested: states are irrelevant. The strength of the COSA model is that all conditions are explicit and can thus be exhaustively tested using automatic means. 

The computer science community has been riding the same bandwagon for more than half a century, while preaching the "virtues" of the Turing machine to a captive audience of students the world over. And what did we get for it? Hundreds of deaths and trillions of dollars wasted on projects that never see the light of day, not to mention interminable delays, cost overruns and catastrophic failures. Don't believe it? Just ask NASA, the FAA and the FBI. On top of this we now have a generation of computer programmers and engineers brainwashed into believing in a flawed paradigm and perpetuating the big lie. It is really sad.

Some of my readers have written to me to suggest that I publish a paper in a peer-reviewed scientific publication. I absolutely refuse. The way I see it, my peers are the lay public. My ideas will survive or perish on the basis of their correctness, not on the whims of a community of Turing worshippers.

If you must have someone to blame for the software reliability crisis, just knock on the doors of academia. They did it.

 

January 31, 2005

Costly Failures 12:05 AM EST

Government Computer Blunders Are Common     

Here are a few excerpts from this Associated Press article posted on Yahoo! News regarding the recent FBI $170 million software fiasco:

"There are very few success stories," said Paul Brubaker, former deputy chief information officer at the Pentagon. "Failures are very common, and they've been common for a long time."

"Ever since there's been IT (information technology), there have been problems," said Allan Holmes, Washington bureau chief for CIO, a magazine published for information executives. "The private sector struggles with this as well. It's not just ... the federal government that ... can't get it right. This is difficult."

Experts blame poor planning, rapid industry advances and the massive scope of some complex projects whose price tags can run into billions of dollars at U.S. agencies with tens of thousands of employees.

Of course, the experts are wrong. They were wrong fifty years ago and they are wrong now. They are wrong for the reasons that I explain in the Silver Bullet page. I have only one prescription for what ails the software industry: COSA.

 

January 29, 2005

Blind Code Again! 10:55 PM EST

I keep coming back to this subject over and over again only because unresolved dependencies (also known as blind code) are the biggest cause of unreliability in conventional software systems. The problem is especially severe in situations where complex legacy systems must be maintained by recently hired programmers who are not familiar with the code. Even minor modifications can result in unpredictable side effects that can cause catastrophic failures, sometimes weeks or months after the modification.

I have already shown how the problem can be solved once and for all in a COSA program. But data dependency is not limited to program data residing in memory. It is a system-wide problem and it is particularly dangerous in database systems where several stand-alone applications or components within a program have read/write access to a common pool of data. The two problems belong to different levels of abstraction but are logical analogs of each other. I wrote about it before. If you are truly interested in system-wide reliability, take a look at this article I wrote on November 2 of last year. Pay particular attention to the paragraphs on cell-level and component-level dependencies.

 

January 27, 2005

Things That Suck 7:50 PM EST

Operating systems, assembly languages, high-level languages, compilers and central processing units have one thing in common. They all suck. They suck because they are all based on the algorithm. If the computer industry wants to solve the software reliability problem, it must reinvent itself by gradually getting rid of these things and replacing them with a synchronous, signal-based computing model. The industry does not have to wait for the computer science community to wake up and realize that it has been wrong all these years. In general, scientists have a hard time admitting their errors. They have their careers to think about. The software reliability problem can be fixed by software and hardware engineers in the industry. But first, someone must have the courage to tell them what needs to be done. Sure, it's not going to be easy. It will take years but it can be done. We can start with safety and mission-critical systems and go from there.

Having said that, there is no reason to get stuck in a perpetual fix-it mode. We can also start working on all those super complex systems that we could not build in the past because of concerns over cost, safety and reliability. Defect-free software will open up the full promise of automation. Welcome to the new computing world!

COSA Enquiries 5:40 PM EST

I receive a fair number of enquiries about Project COSA. Unfortunately, I cannot respond to them all. If you have a question or comment about the philosophy, theory or operation of COSA, please post a message on the Silver Bullet Discussion Group. I will try to reply as time permits. There are several other people who have been following Project COSA from the beginning and I am sure they would not mind helping.

 

January 26, 2005

Silence of the Lab 10:45 PM EST

The Silver Bullet pages received more than eleven thousand hits over the last four days with more than a thousand return visitors! People and organizations from all over the world are getting excited about Project COSA. But not a peep out of CyLab. For those of you who don't already know, CyLab is the group at Carnegie Mellon University whose professed goal is to find better ways to develop dependable and secure computing, i.e., to find a viable solution to the software reliability crisis. CyLab is supported by the U.S. federal government and by various international corporations including Microsoft, British Petroleum, Hewlett Packard, Sony and others. Due to the serious nature of the crisis, these companies and NASA (mostly NASA's money, to the tune of $25 million) decided to form what is known as the Sustainable Computing Consortium. The SCC is run by CyLab under the direction of Bill Guttman. When are you going to wake up, Bill, and start using other people's money wisely? I know you know about Project COSA. The COSA software model is going to succeed whether you like it or not. Once it does, that will be the end of the SCC and CyLab, you can bet on it. What will you do then?

 

January 23, 2005

AI From The Bible 6:35 PM EST

AI from the Bible? Where did that come from? Well, there is something I would like to get out of the way before some of my enemies try to use it to distract people's attention from the Silver Bullet message. Stoning the messenger is a time-honored tradition in some circles. So let me be the first to point to my religious beliefs and my stance on physics, not to mention my overall rebellious attitude with regard to the scientific community. Indeed, I place the blame for the software crisis squarely at the doorstep of the computer science community. I had already written about this in a previous news item but now that the Silver Bullet site is getting a lot of attention from around the world (thanks to OSNews, Embedded.com and others), I think it is fitting that I restate my position.

Some people hate me for ideological reasons and they want to see me fail at all costs. Their weapon of choice is ridicule. Others feel threatened by my ideas because any solution to the software reliability crisis will simply put a lot of people out of work. Let me make it perfectly clear: I have nothing to hide and nobody puts me to shame (in the end, every human being is just a pile of dirt). So if the software industry allows my other work and beliefs to become a stumbling block in its way of embracing the soundness of the COSA model, it will only have itself to blame. In other words, if you ignore the message because you think the messenger's clothes are dirty, then you do not deserve the message. If my personal opinion on certain subjects offends you in any way, then don't read it: it was not meant for you. As simple as that.

I have put a lot of work into Project COSA and I have not asked anyone in the business for a penny. Any individual or organization with the needed resources (time and money) can develop their own COSA-like OS and tools without any further help from me. All the information they need to do so is right here. Free of charge! I already did the hard part: I figured it out. I am as convinced now as ever that the COSA software model will solve the software reliability crisis. The choice is simple: Take it or leave it.

Memory Usage 11:40 AM EST 

It came to my attention recently that one of the disadvantages of the COSA model is that, since almost every operation is a cell with one or more destination addresses, about half of the memory space used in a COSA program will be used for storing pointers to cells. In my opinion, this would be a problem if most of the memory allocated by a program were being used to store the active objects (cells) rather than data. This is rarely the case. Exceptions are neural network programs, which consist almost entirely of cells. Another thing to consider is that a cell's address does not have to be a full four-byte pointer. Recall that cells are encapsulated within components. Every component contains a list of its own cells and most cells are connected only to other cells belonging to the same component. Unless the component has more than 256 cells (which is rarely the case), a cell's address can be represented by a single byte.
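
A minimal sketch of what such compact addressing might look like (the types and names are hypothetical, chosen only for illustration): a cell stores one-byte indices into its owning component's cell table instead of full pointers.

  #include <cstddef>
  #include <cstdint>
  #include <vector>

  struct Cell {
    // One-byte destination indices into the owning component's cell table,
    // instead of full four-byte pointers to the target cells.
    std::vector<std::uint8_t> destinations;
  };

  struct Component {
    std::vector<Cell> cells;   // at most 256 cells, so one byte suffices per destination
  };

  // Resolving a destination index back to the target cell:
  Cell& destination(Component& comp, const Cell& cell, std::size_t i)
  {
    return comp.cells[cell.destinations[i]];
  }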

 

January 21, 2005

Toshiba and IBM's new "Cell Processor" 10:35 AM EST

I just read an article by Nicholas Blachford describing the basic internal mechanism of IBM's new "Cell Processor". I had heard of it before but the details were always sketchy. Take a look at the page titled "Cell Architecture Explained - Part 2: Again Inside The Cell." Scroll down to the section labeled "Software Cells." I am totally stunned! The similarity to COSA cells and the COSA cell processor is unmistakable. The main difference that I could see is that a COSA cell can have multiple destination addresses. I could not make out from the article whether IBM's Cell Processor is a true signal-based reactive processor. If not, then the real revolution is still in the future. Check it out.

Cell Processor Update 2:10 PM EST

OK, I just read the cell patent application and it is nothing like COSA. Software Cells, as described in the patent, are just small bundles of algorithmic code and data that can be sent to an available processor for processing. After processing, the cell's data is returned (copied back) to its original location. This is not a signal-based system. It's just a technology for fast multi-processing. So, I was right: the real revolution is still in the future.

OSNews 8:15 PM EST  

I just realized that the Silver Bullet site is being swamped with hits coming from OSNews. I thank editor in chief Eugenia Loli-Queru for mentioning Project COSA on OSNews. God bless you. This project needs all the publicity that it can get. I had intended to write an article for OSNews a while back but never got around to it for various reasons. My apologies, Eugenia. I'm working on it.

 

January 1, 2005

Transmeta May Exit Processor Business 6:05 PM EST

According to News.com, Transmeta is seriously looking at getting out of the microprocessor business. Too bad. Transmeta cannot compete against giants like Intel and others on their own turf. What it needs is a revolutionary killer processor that redefines the computer market and leaves everyone else in the dust. I suggest that Transmeta consider creating a RISC processor optimized for synchronous software based on the COSA model. What else is there in processor design that is truly revolutionary?

 

December 28, 2004

Airlines Computer Woes 7:40 PM EST

This holiday season, the U.S. airline industry, already reeling from losses suffered as a result of higher fuel costs and the 9/11 attack on the World Trade Center in New York, is being plagued by computer woes. Comair, a subsidiary of Delta Airlines, is still in the throes of a massive cancellation of flights due to a computer breakdown. Apparently the software responsible for crew scheduling suffered a catastrophic failure, causing all 1100 Comair flights to be cancelled! The problem was so severe that Department of Transportation Secretary Norman Y. Mineta called for an investigation. Isn't it time that the software industry woke up from its stupor and realized that its approach to software construction is fundamentally flawed? It is never too late. My guess is that the DOT investigators will just recommend more of the same. And the crisis continues...

 

December 27, 2004

The Pirelli INTERNETional Award, IX Edition 12:05 AM EST

The Silver Bullet Site has been accepted into the 2004 Pirelli INTERNETional Award competition. The Pirelli INTERNETional Award is a "multimedia award for scientific and/or technological subjects."

 

December 21, 2004

The New COSA Reliability Principle 1:25 PM EST

It took me a while to arrive at the following conclusions:

Unreliability is not an essential characteristic of complex software systems.
It is possible to construct computer programs of arbitrary complexity and guarantee that they are free of defects.
To solve the software crisis, all software must be guaranteed to be free of defects.

Here are my thoughts in a nutshell:

Accidental Correlation
In conventional software systems, reliability is inversely proportional to complexity. To borrow a couple of expressions from Frederick P. Brooks, is this an essential or accidental correlation? I am now convinced that the correlation exists only because of a historical accident, i.e., because the Turing computability model (TCM) was adopted as the basis of software engineering at the onset of the modern computer era.
The Real Software Reliability Crisis
When it comes to safety and mission-critical applications such as air traffic control, avionics, banking and medical systems, even a single defect is unacceptable because it is potentially catastrophic. Unless we can guarantee that our programs are logically consistent and completely free of defects, the reliability problem will not go away. In other words, extremely reliable software is just not good enough. What we need is 100% guaranteed bug-free software, irrespective of complexity.
Zero Defect Software
So, is it possible to design a software program of arbitrary complexity that is guaranteed to be completely free of defects? Software experts will, of course, reply that the answer is no. But I would like to answer the question by way of another question: Is it possible to design a logic circuit of arbitrary complexity that is guaranteed to be free of defects? I propose that the two questions are equivalent because software can, in fact, be based on the same signal-driven, synchronous model used in logic circuit design. So if a logic circuit can be guaranteed to be free of defects, so can a software application.
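
To make the circuit analogy concrete, here is a toy sketch (the types and names are mine, purely illustrative) of a few logic-gate-like cells updated in lockstep, the way a synchronous circuit is clocked. Every condition such a program reacts to is an explicit, enumerable connection in the inputs lists, which is what makes exhaustive, automatic checking conceivable:

  #include <cstddef>
  #include <vector>

  enum class Kind { AND_CELL, OR_CELL };

  struct LogicCell {
    Kind kind;
    std::vector<int> inputs;   // indices into the external signal vector feeding this cell
    bool output = false;
  };

  // One synchronous cycle: phase 1 computes every cell's new output from the
  // current signals; phase 2 commits all outputs simultaneously, exactly as a
  // clocked logic circuit would.
  void synchronous_cycle(std::vector<LogicCell>& cells, const std::vector<bool>& signals)
  {
    std::vector<bool> next(cells.size());
    for (std::size_t i = 0; i < cells.size(); ++i) {
      bool acc = (cells[i].kind == Kind::AND_CELL);
      for (int in : cells[i].inputs)
        acc = (cells[i].kind == Kind::AND_CELL) ? (acc && signals[in]) : (acc || signals[in]);
      next[i] = acc;
    }
    for (std::size_t i = 0; i < cells.size(); ++i)
      cells[i].output = next[i];
  }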

In light of my new thinking, I have formulated a new Reliability Principle as follows:
 

All COSA programs are guaranteed to be free of internal defects regardless of their complexity.

I have revised the Silver Bullet, Project COSA and the COSA Operating System pages to reflect my new understanding.

A bon entendeur, salut!

False modesty aside, the idea that software programs of arbitrary complexity can be guaranteed to be completely free of defects is a revolutionary concept in today's software market. It has consequences that go well beyond the creation of word processing applications that do not crash every other day. It opens up the full promise of software automation.

Extremely large software automation projects that were once considered out of the question due to concerns over complexity and reliability will suddenly become technically achievable and economically attractive. For example, COSA will make it possible to automate the entire global air traffic control system and, in the process, make it as safe as it can possibly be. The aircraft, too, should be autonomous, both on the ground and in the air. There is no reason that an airplane cannot taxi, take off, fly to its destination and land safely, all without human intervention. The idea of self-driving vehicles and automated highways no longer seems as far-fetched as it used to. And while we are speaking of transportation, why not automate all the rail systems of the world? I won't even go into how vital rock-solid reliability is to medical and financial software systems, and to the defense and power generation industries. The possibilities are just endless.

Any country that gets an early lead in exploiting this technology is likely to leap far ahead of the others, both economically and militarily. This may potentially wreak havoc in certain sectors of the technological and economic landscape and may even threaten the global balance of power. As they say in France, à bon entendeur, salut! In other words, he who has an ear, let him hear.

The Blame Game

Who was most to blame for the current software crisis? Was it Lady Ada Lovelace (table of instructions) or Charles Babbage (analytical engine)? Was it Jacquard (punched cards)? Or was it Alan Turing (Turing machine)? Personally, I think it was Turing. Turing's predecessors were not really interested in a truly general-purpose or universal behaving machine. All they wanted to do was design a tool with which to solve purely algorithmic problems. Turing, on the other hand, was already thinking of using a computer for all sorts of non-calculational tasks, including artificial intelligence. His goals went way beyond solving mathematical problems. Yet he never seemed to have progressed past the algorithmic model.

I am sure I will offend a bunch of people with what I am about to say, but so what? I have been burning bridges for so long that one more will not make much of a difference. Turing's biggest mistake was the so-called Turing Machine (TM), the theoretical basis of software engineering to this day. I believe that, had Turing given it a little bit more thought, i.e., had he really dug deep into the true nature of computing, we would not be in the mess that we're in today. Even his famous test for artificial intelligence was based on text manipulation, which is essentially algorithmic. In my opinion, the Turing test was a conceptual disaster which led to GOFAI (good old-fashioned AI) in the latter part of the twentieth century. As far as I know, it never occurred to Turing that an intelligent machine (or any computer program, for that matter) is a behaving system. It is for these reasons that I put the blame for the software reliability crisis squarely in Turing's lap. This is all explained in several new paragraphs I recently added to the Silver Bullet page.

 

December 7, 2004

Mapou Automation Research, Inc. 4:30 PM EST

I am in the process of forming a privately held corporation called Mapou Automation Research, Inc. MAPOU (pronounced MAHPOO) Automation will offer software engineering and reliability consulting services (based on the COSA model) to private companies and government agencies. The company's initial location will most likely be somewhere in southern Florida. For more information, please contact me via email at the following address: eightwings2002@yahoo.com.

 

November 12, 2004

Still Crazy After all these Years 10:30 PM EST

Crazy? Of course, I'm crazy. Who in his right mind would dare to challenge the collective wisdom of an entire industry? I've been telling people in the computer business for close to twenty-five years that there is a fundamental flaw with the way we write software. Of course, nobody would listen, otherwise we wouldn't be in the sorry mess that we're in today. I was crazy then and I'm still crazy after all these years. It used to bother me a great deal that others could not see something that was so plainly visible to me. But I have grown to become immune to the ridicule, the personal attacks and the putdowns. I now view them as badges of honor.

My Critics' Dilemma

Even though the COSA model is beginning to attract worldwide attention, there is a concerted effort by some to paint me as some sort of crackpot. A search on Google will reveal that I have made myself a few enemies. I am the first to admit that my ideas are unorthodox and controversial, to say the least. Some of my detractors have a bone to pick with my unrepentant stance on physics and my lack of respect for their crackpot idols and con artists in the physics community. Others (mostly atheists and hardcore Darwinists) ridicule me for my religious beliefs and my interpretation of the book of Revelation. Still others feel that I have unfairly criticized Dr. Frederick P. Brooks in my Silver Bullet article. And now, horror of horrors, I have taken on the late great Alan Turing.

Some of my critics are faced with an uncomfortable choice. Even if they do agree with my position on the software crisis, the thought that my more controversial writings might receive additional press (should the COSA software model become the new software model) makes them recoil in horror. Indeed, what if COSA suddenly becomes the talk of the industry? A lot of people will flock to the Silver Bullet site and they will unavoidably surf to my physics and Bible pages. My critics can't let that happen. I have a suggestion for their dilemma. All they have to do is ignore me and adopt all my COSA ideas as their own. I really don't need the credit. I just want to see the software crisis go away. It would make my life and everybody else's life easier. The problem is that it will be kind of hard to pull it off. A lot of people have already read my articles and they will immediately recognize my approach regardless of how clever the disguise happens to be.

Stumbling Blocks

Some of the ill will directed at my person has to do with much more than just disagreement with some of my philosophy. Some people feel genuinely and understandably threatened by my ideas and they react accordingly. So they badmouth me to destroy my reputation. It's a survival instinct. But I am not fazed in the least. My attitude is as follows: If the leaders of the software industry, and the computer science community in particular, are willing to let these side issues become stumbling blocks in the way of solving the software crisis, then they do not deserve the solution. In other words, if they stone the messenger because they think his clothes are filthy, they do not deserve the message. As simple as that. Problem is, the price (PDF) for ignoring the message has been staggering, and it does not show any sign of abating in the near future.

Heavy Price

More than half of the cost of developing software is spent on finding and correcting bugs. By some accounts, between forty and fifty percent of big and expensive software projects are cancelled before deployment due to bugs in the software. The others invariably fall prey to huge cost overruns and interminable delays. Add to this the costly task of maintaining the existing software infrastructure. Catastrophic software failures are the bane of the aerospace, transportation, defense, medical and financial industries. Hundreds of people have already lost their lives as a result of defective software. With the dawning of the age of the internet, "hackers" have found that one of the best ways to break into a network is to exploit obscure bugs in the system's software. The fight against viruses, Trojan horses and worms is a never-ending war, and a very expensive one to boot. Nobody really knows precisely how much software malfunctions are costing the world economy but it is safe to assume that the global yearly cost of reliability-related setbacks easily runs into the trillions of dollars. According to Peter G. Neumann, the principal scientist at the Computer Science Laboratory at SRI International and the moderator of a forum on computer-related risks, the Y2K bug alone cost an estimated $1 trillion worldwide. A heavy price indeed.

What Took You so Long?

The software crisis has gotten to the point where major catastrophes costing billions of dollars and hundreds of human lives will soon be commonplace. Society will not sit by and endure this state of affairs much longer. It will demand that something be done as soon as possible. The question is, how much longer can the industry continue to do business as usual? How long will it continue to ignore the COSA message and delay the inevitable? How many more deaths? How many more wasted billions? Whether or not I am crazy, the COSA model stands on its own two feet. COSA is not rocket science. Anybody with a modicum of common sense, honesty and a passable understanding of computers can see the merit of the approach. My advice to the software industry is this: "You have no excuse. Do the right thing and take your medicine. Otherwise, get ready for more unbearable pain. And the pain will not go away. If anything, it will get worse, much worse."

There will be hell to pay when the word gets out that the solution has been published on the internet for some time and that the leaders of the software community failed to act on it. They will only have themselves to blame. I am not one to say "I told you so," but when they finally decide to do the right thing (it will happen sooner or later), many will ask, "What took you so long? Yeah, how could you have been so blind for so long?"

 

November 10, 2004

X-Prize for Software Silver Bullet? 4:35 PM EST

I am faced with a seemingly intractable chicken and egg dilemma. How do I make the case for the COSA software model without an actual working prototype? But then again, would a working prototype of a COSA system be enough to convince all critics of the soundness of the model? I doubt it. People with money to invest are just not interested in yet another OS. There are already plenty of those to go around. People are looking for actual case studies involving complex real world applications. In this case, the study would have to prove beyond a shadow of a doubt that the COSA model is orders of magnitude more reliable and productive than conventional systems of comparable complexity. Certainly a COSA OS would qualify as a complex real world application in its own right. However, developing an OS is out of the question unless someone is willing to invest the necessary funds, hence the chicken and egg dilemma.

In my estimation, a full COSA OS would take less than two years and cost less than two million dollars to develop. In addition to the RAD tools, it would include a comprehensive suite of applications such as a word processor, database, paint program, internet browser, email program, driver support for multiple devices, desktop interface, video games, etc... It would be an ideal project for a joint industry/government venture such as the Sustainable Computing Consortium (CyLab). Two million dollars is a mere pittance compared to the tens of millions that CyLab and Carnegie Mellon University have already received from NASA and private corporations over the last few years. As I wrote in a previous article, I seriously doubt that the sponsors of the SCC will ever get their money's worth. In my considered opinion, the SCC is a total waste of good money. Just one man's opinion.

It occurred to me, not long ago, that one way to spur interest in the search for a solution to the software reliability crisis would be to offer some sort of prize similar to the Ansari X-Prize. What the X-Prize has done to liven up research in commercial space transportation could also be done for software reliability research. How about ten million dollars for the first team to develop a multi-user, defect-free operating system, one which is able to run for a full year under heavy use without crashing? Of course, the prize committee would have to provide precise and unambiguous definitions for terms like "defect-free" and "operating system." Given the critical importance of bug-free software to society, the prize offer should remain in effect permanently until someone actually wins it. In my opinion, this would do much more to advance the state of the art in software tools and systems than a hundred CyLabs put together.

 

November 5, 2004

Failure to Communicate 12:10 AM EST

In the classic Hollywood chain-gang prison movie "Cool Hand Luke" starring Paul Newman, there is a scene in which a mean-spirited prison warden (Strother Martin), frustrated by the lack of discipline of one of his inmates (Newman), painfully enunciates his displeasure with this famous line: "What we've got here is failure to communicate." These words invariably pop into my consciousness every time I think of the software crisis. The reason is that most malfunctions in algorithmic software systems can be traced to a failure to communicate: one part of a program may depend critically on changes to a property but somehow fails to be notified of a change. I have already shown how this problem can be easily solved in a signal-based, synchronous system. Furthermore, it is necessary to address the problem not only at the cell level (within a program), but also at the component level (within a system or network of interacting programs). This was explained in the preceding news articles.
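
To make the point concrete, here is a minimal, hypothetical C++ sketch (not COSA code, and the names are illustrative only) of a failure to communicate: a display object depends on a temperature value, but nothing in the program guarantees that it will be told when the value changes, so it keeps showing stale data.

  // Hypothetical example of a "failure to communicate" in a conventional
  // algorithmic program: the display depends on 'temperature', but nothing
  // in the language guarantees it will be told of the change.
  #include <iostream>

  struct Display {
      double shown = 0.0;
      void refresh(double value) { shown = value; }  // must be called by hand
  };

  int main() {
      double temperature = 20.0;
      Display display;
      display.refresh(temperature);

      temperature = 35.0;              // the data changes...
      // display.refresh(temperature); // ...but the dependent object is never notified

      std::cout << display.shown << '\n';  // prints 20, not 35: stale data
      return 0;
  }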

Some of you may have noticed that I continually return to the subject of dependencies. I do it, not just because it is a major facet of Project COSA, but because it is at the crux of the software reliability crisis. Unresolved data dependencies are the main cause of unforeseen software failures. This can never be overemphasized.

 

November 2, 2004

The Vision Problem 4:20 PM EST

Software objects must be constantly kept informed of important changes in data. They must, so to speak, have eyes in the back of their heads. This is what is known as the data dependency problem. Unresolved data dependencies are the leading cause of software unreliability. The reason is that the burden of resolving dependencies in algorithmic systems falls entirely on the programmer. It is easy for a programmer to overlook dependencies, especially in a complex program. I call it the software vision problem.

In order to ensure rock solid reliability in software systems, it is vital that data dependencies be resolved automatically at the operating system and development tool level. The COSA software development environment completely removes the data dependency burden from the programmer's shoulders. It does so both at the cell level and at the component level. I like to think of a COSA software system as having total vision.

Sensors and the Vision Problem

The vision problem does not exist in electronic logic circuits because hardware sensors are synchronous, that is to say, they are always active. As a result, changes in a hardware system are always detected and acted upon, barring some physical failure. The synchronous nature of hardware is the primary reason that electronic circuits are so much more reliable than software. By contrast, software sensors (comparison operators) must be explicitly invoked or updated at the right time, otherwise important changes in the system will go unnoticed. Updating every sensor at every tick of the clock would, of course, solve the software vision problem but only at the expense of very poor performance. As seen below, the COSA software model has an efficient solution to this problem.
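
The brute-force alternative can be sketched in a few lines of C++ (purely illustrative, not COSA code): a scheduler that re-evaluates every comparison sensor on every tick achieves total vision, but most of that work is wasted on data that has not changed.

  // Naive "total vision" policy: re-test every sensor on every clock tick,
  // whether or not the data it watches has changed. Reliable but wasteful.
  #include <functional>
  #include <vector>

  struct Sensor {
      std::function<bool()> test;   // the comparison, e.g. left < right
      std::function<void()> fire;   // signal emitted when the test succeeds
  };

  void tick(std::vector<Sensor>& sensors) {
      for (auto& s : sensors)
          if (s.test())
              s.fire();
  }

The cell-level mechanism described next avoids most of this cost by re-testing a sensor only when an associated effector has actually modified the data it watches.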

Cell-Level Dependencies

As I wrote elsewhere, COSA automatically resolves in-memory data dependencies between cells (elementary objects) within a given component. Normally, a cell has access only to data within its parent component. Effector-sensor associations, the mechanism used in COSA to resolve dependencies, are forbidden across components. That is to say, an effector in one component cannot be associated with a sensor in another. The COSA development environment will not allow it because, for organizational and security purposes, no component is allowed to have direct sensory access to what is happening in another. If there is a need for one component to directly sense data changes in another at the elementary cell level, the application designer should combine the two into a single component. Indeed, this is the method one should use to determine the composition of a newly created component. External dependencies should be handled at the component level.
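
A rough C++ sketch of the idea, under my own simplifying assumptions about how cells might be represented (the structures and names below are mine, not part of any COSA specification): each effector carries the list of sensors that watch the data it modifies, and it invokes only those sensors, and only after it has actually performed its operation.

  // Simplified sketch of cell-level effector/sensor coupling within one
  // component. The development environment would build the 'associated'
  // list automatically; no association may cross a component boundary.
  #include <functional>
  #include <vector>

  struct Sensor {
      std::function<bool()> test;   // comparison on the shared data
      std::function<void()> fire;   // signal sent when the comparison succeeds
  };

  struct Effector {
      std::function<void()> operate;    // e.g. an addition on the shared data
      std::vector<Sensor*> associated;  // sensors that watch that same data

      void execute() {
          operate();                     // modify the data first...
          for (Sensor* s : associated)   // ...then test only the relevant sensors
              if (s->test())
                  s->fire();
      }
  };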

The COSA message-passing mechanism cannot be used to enforce data dependencies at the cell level. The reason is that, unlike data sensors which are always active, a message receptor must wait for an input signal to do something. In this light, messages should be thought of as commands.

Component-Level Dependencies

There is more to data than what is contained in memory during program execution. Data is also kept in files and database records. In a large system, multiple components may have read and write access to a common pool of stored data. This will inevitably result in data dependency problems at the component level. Resolving data dependencies among mass storage records is an absolute must for system-wide reliability. The problem is analogous to the cell-level dependency problem.

I personally subscribe to the notion that every correct solution to every problem in computing must somehow involve some form of complementarity. Indeed, complementarity is the most important of the principles on which Project COSA is based. In a COSA system, data dependencies at the cell level are resolved automatically through the use of two complementary types of cells: sensors and effectors. The COSA development environment automatically associates every effector with its relevant sensors, completely relieving the software developer of the burden of spotting dependencies. I think that a similar approach based on complementarity should be used with regard to data dependencies in mass storage repositories. Record writers should be automatically associated with record readers. This can and should be implemented solely on the basis of individual records since not all records need to be monitored. Of course, the only way to resolve data dependencies at the component level is to abandon our traditional chaotic file systems and adopt a database solution for all data storage needs.

All data I/O should be done via a special data access component (DAC). Part of the DAC's job is to make sure that all data dependencies are correctly identified and resolved, leaving nothing to chance. It should automatically associate record writers with pertinent readers. The DAC essentially knows which data records are of interest to any given component. This way it can send an alert message to the component whenever its associated data records are modified. This will go a long way toward improving system-wide reliability.
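
A bare-bones C++ sketch of such a component, under my own assumptions about its interface (record keys as strings, alerts as callbacks; nothing here is prescribed by COSA): reader components register their interest in individual records, all writes go through the DAC, and every registered reader is alerted when one of its records changes.

  // Hypothetical data access component (DAC): record writers are associated
  // with record readers on a per-record basis, so no dependency is left to chance.
  #include <functional>
  #include <map>
  #include <string>
  #include <vector>

  using RecordKey = std::string;
  using Alert     = std::function<void(const RecordKey&)>;

  class DataAccessComponent {
      std::map<RecordKey, std::string>        records;  // the stored data
      std::map<RecordKey, std::vector<Alert>> readers;  // who depends on which record
  public:
      // A reader component registers its interest in a specific record.
      void watch(const RecordKey& key, Alert onChange) {
          readers[key].push_back(onChange);
      }

      // All writes go through the DAC; every interested reader is alerted.
      void write(const RecordKey& key, const std::string& value) {
          records[key] = value;
          for (auto& alert : readers[key])
              alert(key);
      }
  };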

As I mentioned in a previous news item, data-centered software systems are not new. There are researchers who are very aware of the crucial importance of resolving data dependencies in mass storage repositories. Keep an eye, for example, on the work being conducted at the computer science department (DistriNet research group) of the Katholieke Universiteit Leuven in Belgium. For more information, contact Lieven Desmet.

I will have more to say on this important subject in a future page on COSA database service components. 

Older News

 

©2004-2006 Louis Savain

Copy and distribute freely