
The Silver Bullet News

To Drastically Improve Software Reliability and Productivity

Latest News and Issues (April 2005 - April 2007)

 

 

This page is where you will find short news articles and other musings related to the Silver Bullet hypothesis and Project COSA. Articles are listed in reverse chronological order.

April 2007
4/19/2007
Are You Offended by My Biblical Research?
March 2007
3/27/2007
The Entire Computer Industry Is Wrong!
September 2006
9/3/2006
Jaron Lanier
August 2006
8/30/2006
I Am a Crank
8/7/2006
Hiatus
July 2006
7/6/2006
Functional Programming
June 2006
6/20/2006
Dataflow vs. COSA
Keep It Complex, Stupid!
6/15/2006
Timing
April 2006
4/28/2006
The Devil's Advocate
February 2006
2/11/2006
Design Consistency
2/8/2006
Where Are the Gutsy Investors?
2/7/2006
Counter Intuition
December 2005
12/30/2005
Happy New Year!
Inertial Forces
Dilemma
12/27/2005
The Rebel in Me
12/7/2005
Project Free COSA
November 2005
11/30/2005
Network Security's Achilles' Heel
11/29/2005
What If...
11/25/2005
Revolution
11/23/2005
Digg, Del.icio.us, etc...
October 2005
10/20/2005
The COSA Reactive Database System
10/12/2005
Thinking of Everything
Optimization via Synchronous Sequences
September 2005
9/24/2005
Allchin's Predicament
9/8/2005
Advertising and Fund Raising
August 2005
8/30/2005
Academic Failure
Reinventing Computing
8/17/2005
Intel's New Processor Architecture
8/1/2005
New Forum
April 2005
4/18/2005
Not Enough Time
 

Older News

 

April 19, 2007

Are You Offended by My Biblical Research? 11:30 AM EST

Some of my readers write to advise me that I should refrain from mixing my software reliability work with my Biblical research on artificial intelligence and particle physics on the same site. My response is the same as always: it just ain't gonna happen! Their rationale is that most computer geeks are atheists and that people who are first attracted to the computer related stuff will stop taking me seriously as soon as they find out about my Bible stuff. Let me make it clear once and for all. I am not running for political office and this is not a popularity contest. If my writings on the Bible offend you, then don't read my site. It's not meant for you, sorry. Besides, it's not as if I'm making friends in the Christian community either. Only cowards fail to live by their convictions. If you think I'm a crackpot, more power to you.

 

March 27, 2007

The Entire Computer Industry Is Wrong! 12:10 PM EST

Both the hardware and the software industries are wrong. The hardware industry is wrong because its processors are optimized to execute algorithms; the software industry is wrong because its use of the algorithm as the basis of software construction is the reason for the software crisis. Once we change our software model to a non-algorithmic, synchronous, reactive model, the hardware industry will have to follow suit and base its processor designs on the new model. The industry's ultimate goal should be to replace all computers and software systems with the new paradigm. Then we will witness a fabulous renaissance of the computer age, one that will make the first computer revolution look boring in comparison.

 

September 3, 2006

Jaron Lanier 10:20 AM EST

I recently stumbled upon a January 2003 interview with Jaron Lanier conducted by Janice J. Heiss of Sun Microsystems. Lanier is an interesting individual. First off, he looks like a Rasta man, dreadlocks and all, the type of dude you'd expect to hand you a mini Dutch rolled with Hawaiian Gold. ahahaha... But don't let that fool you. Whether or not the man hits the bong, even a cursory look at his accomplishments will reveal that Lanier is what most people would call a deep thinker. The man has been busy. Here are a few excerpts from the Sun interview that are relevant to Project COSA:

"I think the whole way we write and think about software is wrong. If you look at how things work right now, it's strange -- nobody -- and I mean nobody -- can really create big programs in a reliable way. If we don't find a different way of thinking about and creating software, we will not be writing programs bigger than about 20 to 30 million lines of code, no matter how fast our processors become."

These are my thoughts exactly. Since the reliability of algorithmic programs is tied to their complexity, this effectively puts an upper limit to the complexity of our software systems. The goal of Project COSA is to rectify this problem.

"I'm working on some things, but you know, what most concerns me is what amounts to a lack of faith among programmers that the problem can even be addressed. There's been a sort of slumping into complacency over the last couple of decades. More and more, as new generations of programmers come up, there's an acceptance that this is the way things are and will always be. Perhaps that's true. Perhaps there's no avoiding it, but that's not a given. To me, this complacency about bugs is a dark cloud over all programming work."

How true! I believe that this attitude on the part of programmers is a direct result of the inherent inadequacy of our tools and operating systems. Fred Brooks' "No Silver Bullet" paper did not help either. It reinforced the complacency to the point of it becoming pathological.

"The problem with software is that we've never learned how to control the side effects of choices, which we call bugs."

Here Lanier hits the nail right on the head. What he calls "the side effects of choices" is what I have been calling "data and event dependencies" or the blind code problem. This is the main reason that software is unreliable and it is due to the practice of using the algorithm as the basis of software construction. The solution is to abandon the algorithmic software model and adopt a concurrent, signal-based, synchronous model. This, in effect, solves the dependency problem.

Lanier goes on to explain the inadequacy of using the Turing computing model (transmitting signals or instructions sequentially over a single wire). His proposed solution is to use pattern recognition. I agree. Pattern recognition is a concurrent process whereby multiple streams of instructions are operated on simultaneously. My thesis is that, in order to reliably perform pattern recognition, one must use a synchronous paradigm. The computer industry (and the world) desperately needs people like Lanier, dreadlocks and all!

 

August 30, 2006

I Am a Crank 12:30 PM EST

My enemies love to ridicule me and accuse me of being a crank and they are right: I am a crank. And I would not have it any other way. Problem is, to their eternal chagrin, I, too, am right. I am right about the crackpottery of the spacetime physics community. I am right about the personality cult of Brooks and Turing that has captivated and crippled the computer science community. It is the real reason for the software reliability crisis. Worst of all (and I intend to prove it in due time), I am right about the Biblical metaphors pertaining to the riddle of artificial intelligence and the brain. So my detractors (the smarter ones, at least) have a dilemma on their hands. They know that my thesis regarding software reliability is correct. They also know that acknowledging its correctness would enhance my credibility in other matters as well. Damned if they do and damned if they don't. I am laughing as I write this. ahahaha...

Years ago, I remember thinking that it would have been better for me to separate my work on software reliability from my much more politically incorrect ideas on physics and the Bible. I even thought of using a pseudonym for my non-reliability related work. But the rebel in me dismissed that thought. I may be a crank but I am not a coward when it comes to my religious and scientific views. So there you have it. What is a politically correct computer scientist to do? He or she is between a rock and a hard place if you ask me. But all is not lost because I have a plausible solution for their predicament. I think they should ignore me completely and introduce their own solution to the crisis without acknowledging my work. All they have to do is use new clever labels (e.g., concurrent synchronous threads) and avoid repeating the same terminology that I use. This way, they can claim that they originated the solution on their own. I would not mind. It would give them a way out and I would be happy to see the computer industry come to its senses. Of course, this would amount to applying a Band-Aid to a bullet wound and expecting the patient to recover. What will they do when a "self-deluded, superstitious nut" like me comes out with the solution to the artificial intelligence puzzle? I know. They think the possibility of this happening is very remote but they have a big surprise coming their way. I am not one to say "I told you so!" but, I got to admit, I live for that day. This crank is human after all. ahahaha...

 

August 7, 2006

Hiatus 12:40 PM EST

As many of you already know, the rebelscience site was down for a couple of weeks. I intentionally allowed it to expire because I had become very disappointed with the lackluster support and even outright hostility that Project COSA had received from the programming and computer science communities. A handful of people managed to contact me at my Yahoo email address and convinced me to bring the site back online. I have made myself many enemies (especially in the software quality industry) and some have stopped at nothing to discredit me and the value of my work. The COSA model should have been a success by now given its potential to revolutionize computing as we know it. I just had not expected such virulent opposition from my detractors. Although I think that the software development community does not deserve COSA, I believe that the world at large will eventually benefit from it. I guess I need to be a little bit more patient.

 

July 6, 2006

Functional Programming 12:40 PM EST

Someone recently wrote to me about a functional programming language called Erlang. Apparently, functional languages are being used by various European corporations with great success. FP advocates love to point out that FP applications are extremely reliable, easy to debug and can be very robust even in the presence of bugs. They attribute these qualities to the concurrent nature of FP programs. Essentially, functions communicate exclusively by sending "asynchronous messages" to one another. Unlike a procedure call, an asynchronous message does not halt the calling function. This, in essence, is what I have been calling a non-algorithmic software model. The advantage is that functions (elementary operations in COSA) are not only concurrent but also distributed. As a result, defects are localized. I have already written about the benefits of concurrency with regard to failure localization and the resultant application robustness.
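The non-blocking messaging described above can be sketched in a few lines of Python (an illustration only: the Process class, the handler, and the messages are invented for this example and are not Erlang or COSA constructs). The point is that send() enqueues the message and returns immediately, so the sender is never halted the way a procedure call would halt it:

```python
import queue
import threading

# Each "process" owns a mailbox and a thread that drains it.
class Process:
    def __init__(self, name, handler):
        self.name = name
        self.handler = handler
        self.mailbox = queue.Queue()
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def send(self, msg):        # asynchronous: enqueue and return at once
        self.mailbox.put(msg)

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:     # stop sentinel
                break
            self.handler(self, msg)

results = queue.Queue()

def doubler(proc, msg):
    # Hypothetical worker: it replies with a message, never a return value.
    results.put(('doubled', msg * 2))

p = Process('doubler', doubler)
p.send(21)                      # does not block the caller
p.send(None)                    # ask the worker to stop
p.thread.join()
out = results.get()
print(out)                      # ('doubled', 42)
```

A defect inside one such handler stays localized to its own process, which is the failure-localization benefit mentioned above.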

It is true that COSA shares these advantages with FP languages but there the similarity ends. One of the things that distinguishes the COSA model from the FP model is that a COSA program is synchronous at its core. That is to say, all elementary operations in COSA have equal durations and are synchronized to a common clock. This is absolutely essential to the COSA model as it makes it possible to enforce temporal determinism and overall consistency, not to mention the use of certain debugging strategies that simply cannot exist in a non-synchronous environment. So yes, due to their concurrent nature, FP applications will automatically benefit from a high degree of robustness in the presence of misbehaving objects but, as I tried to explain in these pages, this is only one aspect of the reliability problem. The COSA model is much more than just concurrent objects.
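The synchronous core described here, in which every elementary operation fires once per tick of a common clock, can be mimicked with a toy lockstep loop (a sketch only; the two-cell network and its update rules are invented for this example). Double buffering ensures that every cell reads the previous tick's signals, so execution order within a tick cannot affect the result:

```python
# Toy lockstep simulation: all cells fire once per tick and read only
# the previous tick's state (double buffering), so the outcome is the
# same no matter in what order the cells are updated within a tick.
def run(cells, state, ticks):
    for _ in range(ticks):
        snapshot = dict(state)              # signals as of the last tick
        for name, fn in cells.items():
            state[name] = fn(snapshot)      # every cell reads the snapshot
    return state

# Hypothetical two-cell network: 'a' counts ticks, 'b' echoes the value
# that 'a' had on the previous tick.
cells = {
    'a': lambda s: s['a'] + 1,
    'b': lambda s: s['a'],
}
final = run(cells, {'a': 0, 'b': 0}, ticks=3)
print(final)                                # {'a': 3, 'b': 2}
```

Because every update is a pure function of the snapshot, re-running the program yields the same state at every tick, which is the temporal determinism the article emphasizes.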

 

June 20, 2006

Dataflow vs. COSA 1:05 PM EST

I frequently receive emails from people who insist that COSA is nothing but a dataflow programming model and that I am just reinventing the wheel. The functional programming adherents tell me the same thing but that's another story for a future article. Dataflow is not new. I have been attracted to visual dataflow languages (e.g., LabVIEW, Prograph, etc...) for many years. I remember thinking that the graphical representation, the powerful reuse mechanism and the flow-through style combined beautifully to create a much more intuitive and easy way to develop computer applications. People have been using dataflow languages for over twenty-five years. The problem is that, if the dataflow model were the panacea that would cure all that ails the software industry, it would have gained widespread acceptance by now. I mean, shouldn't we be using dataflow for everything? The reason it hasn't is obvious: something is missing. Dataflow languages tend to be rather high-level and are not as flexible as some of the more conventional languages. Adding a new, non-composite object into a dataflow language is not easy for the average user and usually requires the use of a compiled language such as C or Java.

Let me come right out and say it: COSA is not a dataflow programming model. COSA is a signal-based, synchronous, reactive software model in the tradition of reactive languages like Esterel, Lustre, Signal, etc... There are certainly similarities between COSA components and dataflow objects in that COSA components can be made to pass queued messages (data) to one another in a unidirectional (input/output) and asynchronous manner but that's about it. COSA is much more flexible than this. Here are a few differences:

Above all, COSA is synchronous at its fundamental level (the operator level). As far as I know, this is not true of dataflow languages. Heck, I don't think it's true of other reactive languages either.

Whereas dataflow languages are algorithmic internally, COSA's elementary cells are entirely signal-based.

COSA solves the data and event dependency problem.

There are more basic differences but these will do for now. I plan to create a special page where I will compare COSA to various existing paradigms. Stay tuned.

 

Keep It Complex, Stupid! 5:05 PM EST

Now this is advice we don't hear very often. It flies in the face of common sense. Yet, this is the advice I would give to any COSA application developer. Under the COSA software model, the best way to ensure the design correctness of a program is to make it as complex as possible, whether or not the added complexity adds new functionality to the existing design. This is very counterintuitive, I know, since we are accustomed to believing that the number of design defects in a system is proportional to its complexity. This is true for traditional algorithmic systems but it is not true for other types of behaving systems. In fact, we already have an existing proof of it: our own brain! Our behavior becomes more robust and dependable as we increase our knowledge and experience. Practice makes perfect, as they say. The brain is essentially a temporal processing mechanism. It uses the temporal correlations between signals to discover temporal constraints for testing new knowledge. Conflicts/violations are quickly eliminated and the addition of new knowledge introduces new constraints that conspire to increase overall robustness. It turns out that the same general principle that is used by the brain to increase its robustness is also present in a COSA program. See the previous article on timing for more on this subject.

 

June 15, 2006

Timing 10:45 AM EST

If I had to choose a single word to convey the most important aspect of dependability in behavioral systems, it would have to be timing. It is vital that the temporal behavior of a system be consistent during the lifetime of the system, hence the necessity of using synchronous elementary processes. In addition, the system's ability to detect the relative timing of events within a complex collection of interacting entities is essential to the formation of temporal constraints for use in the enforcement of design and operational consistency. An event (signal) is a temporal marker that indicates that a specific change occurred within a system. There are only two fundamental types of temporal correlations between events: they can either be sequential or concurrent. There is also a higher level form of temporal correlation based on the proportionality of intervals. A constraint discovery and enforcement mechanism simply finds invariant (unchanging over time) temporal correlations while the system is running and creates multiple temporal detectors that trigger an alarm whenever a constraint is violated. If need be, it can be more sophisticated; the constraint enforcer can even pinpoint the most likely culprits making troubleshooting a breeze. In a strictly deterministic system, constraints must never be violated. By contrast, in a non-deterministic system (as might result from uncertain or incomplete sensory data), constraints are probabilistic, in which case an alarm is triggered only if the number of violations rises above a given percentage. This is the basis of the COSA design consistency mechanism. It is not rocket science but its importance is paramount. It will become an essential part of all future COSA development systems.
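The precedence-invariant idea can be illustrated with a toy sketch (the event names and traces below are invented, and this is only a generic stand-in for the approach, not the COSA mechanism itself): record event traces while the system runs, keep the orderings that hold in every trace as constraints, then raise an alarm when a later trace violates one:

```python
from itertools import permutations

def discover(traces):
    """Keep 'x precedes y' orderings that are invariant across all traces."""
    events = set(e for t in traces for e in t)
    invariants = set()
    for x, y in permutations(events, 2):
        if all(t.index(x) < t.index(y)
               for t in traces if x in t and y in t):
            invariants.add((x, y))
    return invariants

def check(trace, invariants):
    """Return the violated constraints (the 'alarms') for a new trace."""
    return {(x, y) for (x, y) in invariants
            if x in trace and y in trace
            and trace.index(x) > trace.index(y)}

# Hypothetical training traces recorded while the system is running.
training = [
    ['open', 'read', 'close'],
    ['open', 'read', 'read_again', 'close'],
]
inv = discover(training)        # e.g., ('open', 'close') held in every run
alarms = check(['read', 'open', 'close'], inv)
print(alarms)                   # {('open', 'read')}: 'read' fired too early
```

A probabilistic variant would merely count violations and trigger an alarm only above a given percentage, as described above for non-deterministic systems.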

Note: The idea of using temporal constraints for the enforcement of consistency in COSA systems is a direct result of my ongoing research in artificial intelligence. It is a non-symbolic approach in the sense that the meaning of signals (what they represent) is irrelevant. Only their temporal correlations are important. Its power and generality stem from its simplicity.

 

April 28, 2006

The Devil's Advocate 12:30 PM EST

I have prepared a new page to address various criticisms of my arguments in favor of the synchronous software model. More items will be added as time permits.

 

February 11, 2006

Design Consistency 3:30 PM EST

I have repeatedly claimed in the past that the COSA software model can be used to find logical inconsistencies in a design. Indeed, there is a very simple method, based on temporal constraints, that will ensure that a complex software system is free of internal logical contradictions. With this method, it is possible to increase design correctness simply by increasing complexity. The consistency mechanism can find all temporal constraints in a complex program automatically, while the program is running. The application designer is given the final say as to whether or not any discovered constraint is retained.

Normally, logical consistency is inversely proportional to complexity. The COSA software model introduces the rather counterintuitive notion that higher complexity is conducive to greater consistency. The reason is that both complexity and consistency increase with the number of constraints without necessarily adding to the system's functionality. Any new functionality will be forced to be compatible with the existing constraints while adding new constraints of its own, thereby increasing design correctness and application robustness. Consequently, there is no limit to how complex our future software systems will be. It is for this reason that I have always maintained that computing will finally reach its true potential with the widespread adoption of the COSA model.

I had earlier made up my mind to add a new page to the site in order to explain the constraint discovering mechanism. It is a critical part of the COSA model. However, it has slowly dawned on me that I have given away a lot of information while getting very little in return. For this reason, I decided to hold on to this knowledge until I see some willingness on the part of some of the people who are benefiting from the COSA model to invest some hard cash into this project. I have put a lot of thought and hard work into COSA over the years and it is about time I see some reward. If you can figure it out on your own, more power to you but, as of today, I will no longer put all my cards on the table. I am sorry.

 

February 8, 2006

Where Are the Gutsy Investors? 11:25 AM EST

There are those who believe that the COSA software model is much too radical to invest big money in. They lack long-term or even short-term vision, in my opinion. I am convinced that, sooner or later, the computer industry will have to adopt a signal-based, synchronous approach to software construction. This paradigm is not going to go away. The industry is lugging around a heavy weight attached to its collective ankle. Were it not for the astronomical costs of developing highly complex and reliable software applications destined for mass consumption, we would all be traveling safely in completely autonomous vehicles and aircraft by now. But safe and automated mass transportation is just a small taste of the full COSA promise. COSA is designed to unleash the full potential of computing which will bring about a glorious renaissance of the computer revolution of the last century. Where are the gutsy investors with the vision and conviction to take a gamble on the unavoidable future of computing? It is either now or later but the early bird gets the worm, as they say. Last year, I compiled a list of possible COSA business models for potential investors. I reproduce it below just in case someone with the willingness and the guts to make a difference and be the first mover in an exciting new technology is listening. A bon entendeur, salut! (A word to the wise is enough.)

One of the nice things about COSA is that it can accommodate several types of business models that target specific markets. Below is a list of products and/or services for which COSA is ideally suited.

Embedded COSA Operating System (ECOS). COSA would be perfect as the basis for a small embedded operating system for mission-critical applications and/or portable devices such as automotive control systems, avionics, cell (mobile) phones, set-top boxes, PDAs, etc...

COSA Virtual Machine (CVM). Similar to the Java Virtual Machine (JVM), the CVM could serve as an application execution engine for use in existing legacy operating systems such as Windows, Linux, OSX, etc... CVM and ECOS would have largely compatible execution kernels. This means that the same software construction tools (see below) could be used to develop applications for both environments.

COSA Development Studio (CDS). The CDS would consist of a set of graphical tools for designing and testing COSA applications. It could be used as a proprietary rapid application development (RAD) tool with which to create software for either CVM, ECOS or COS (see below). CDS could be hosted on any of a number of existing desktop OSs. It could also be sold to the public as a RAD tool for legacy systems (CVM), embedded systems (ECOS) or the COSA operating system (COS).

COSA Operating System (COS). COS could be either an open or closed source OS depending on the business model. It is a full operating system in the sense that it would include all the usual service components and applications found in systems like Linux, MacOS and Windows. In addition, COS would, due to its very nature, automatically support cluster computing for high-performance applications such as weather forecasting and scientific/technical simulations. COS should be initially marketed to businesses and government agencies, especially for mission-critical environments.

COSA-Optimized Processors (COP). These are RISC-like central processing units (CPU) designed and built especially for the COSA software model. COPs would process COSA cells directly and would replace most of the COSA execution kernel. The end result would be extremely fast processing and simulated parallelism implemented at the chip level. COP chips can be designed for various markets such as end-user products (desktop computers, cell (mobile) phones, set top boxes, game boxes, notebook computers, laptops, etc...) and mission-critical business systems.

COSA Neural Processors (CNP). The COSA project was heavily influenced by my ongoing work in spiking (pulsed) neural networks or SNNs. Since COSA cells are similar to spiking neurons, it makes sense to extend the capabilities of COSA-optimized processors so as to add support for fast SNN processing. Neural network driven applications are bound to multiply in the near future. The nice thing about CNPs is that they would be ideal for large-scale distributed SNN applications that require hundreds of millions or even billions of neurons.

 

February 7, 2006

Counter Intuition 12:05 PM EST

The problem of software unreliability never seems to go away. Normally, software systems become less reliable as they grow in complexity. Even if all "accidental" defects could be corrected by our development tools, there isn't much they can do about the design errors. This type of error is a consequence of what Frederick Brooks calls the "essential complexity of software".

But all is not lost. There is a way to construct software such that the number of design errors decreases with complexity. This approach is the essence of COSA, the signal-based, synchronous software model promoted in these pages. In this model, logical contradictions are automatically nipped in the bud. By increasing complexity while retaining a fixed functionality, the software developer effectively increases the chances of finding all the logical contradictions in the design. This is the exact opposite of what one would expect. A silver bullet? You bet.

 

December 30, 2005

Happy New Year! 1:05 PM EST

I wish everyone who has supported my work over the years a happy, prosperous and fruitful 2006. I have tried my best to bring the COSA paradigm into the collective consciousness of the computer industry. However, the work is far from completion. It has not been easy and I doubt that it's going to be easier in the near future. Still, I expect 2006 to be a banner year. In the last two months, COSA has been getting a lot of exposure all over the world. Many seeds have been planted in excellent soil. It's only a matter of time before they start to germinate. We'll see.

Inertial Forces

As seen in the list below, there are powerful inertial forces that must be overcome before this model can be accepted by the mainstream.

Huge legacy infrastructure.
Not invented here syndrome.
Threat to reliability industry.
It has been done before.

The last item on the list is particularly bothersome. There are those who claim that a COSA program is either a finite state machine or a dataflow system similar to National Instruments' LabVIEW. They are wrong on both counts. A COSA program is a synchronous reactive system in the purest sense of the term. Certainly it encompasses the utility of a dataflow system since operations on data are performed at the right time (only when necessary) but it is much more than that. COSA contains innovations (e.g., automatic resolution of data dependencies) that do not exist in current systems. Likewise, COSA is not a finite state machine because the execution of operations depends on multiple state transitions (conditional changes). While a COSA program is causally deterministic, the machine's states are not determined in advance by the programmer. More importantly, COSA pushes the reactive model to its logical extreme, down to the individual instruction level. It abolishes algorithmic programming altogether, something that current reactive systems have not done. The only exception is a small execution kernel, and even the kernel will eventually be replaced by hardware logic once a COSA-optimized processor is available.

Dilemma

Looking back on 2005, I realize that I may have been targeting the wrong audience, or rather, I have failed to target the right one. I am beginning to think that embarking on a COSA development project is not so much a technology decision as it is a business one. True, investors, CEOs and company presidents must be technologically savvy enough to understand the problem and the solution being offered. The reason is that they cannot rely on their CTOs (chief technical officers) or engineers to suggest a radically new course of action. The risks are simply too high for middle-level management. Besides, there is the "not invented here" syndrome that kills almost every proposed new solution. The decision must therefore come from upper executive management independently of the company techies. They must be able to estimate the long-term cost of doing business as usual (using unreliable software) and compare it to the cost of making a drastic change in company technology (adopting  the COSA model). Again, to do so, they should not rely too much on the advice of their own technical staff: there is way too much ego and self-interest standing in the way of good judgment. Rather, they should consult with a third party from outside the company. There are many technology-oriented consulting agencies and qualified experts out there who would be willing to take a close look at the COSA model and render an impartial go or no-go judgment, for a price.

The dilemma is, how does one go about communicating something like COSA to CEOs and investors? Should they be the target audience in the first place? Would it not be easier and more sensible to get the message to those who have the CEOs' ears, i.e., to the consultants? I realize I cannot be a fair judge of the merits of the COSA approach since mine is an obviously biased position. All I can say is that I sincerely believe there is a fabulous gold mine at the end of this tunnel. My advice is, take it or lose it but don't be too selfish or greedy. There is plenty of opportunity for everyone. Besides, this is too revolutionary and too far-ranging a change to keep it to oneself. It calls for the forming of industry-wide alliances, the establishment of global standards and widespread cooperation among many players. As I have said in the past, it is a reinvention of computing as we know it and it will bring about a glorious rebirth of the computer software and hardware industries.

 

December 27, 2005

The Rebel in Me 10:25 AM EST

I regularly get emails from readers who advise me to cool it down. In their opinion, by criticizing academia (mostly physicists and computer scientists), I am alienating the very people who are in a position to help me achieve my goals. They are right. It is no secret that I can be vicious in my criticism of certain sectors of the scientific community. One of my favorite epithets that I use to characterize my enemies is "ass kissers". I make no bones about it. In turn, they blacklist me and call me a crackpot and a crank, which is to be expected. A quick search on Google will reveal that I do have a lot of enemies. They hate my guts and the feeling is mutual.

Truth is, I cannot help it. It's the rebel in me. I am a renegade, a revolutionary, a free thinker, a guerilla, a loner, a barbarian and an insurgent, all wrapped into one. It is no coincidence that this site is called rebelscience.org. My goal is not to be accepted by my enemies but to fight them with everything at my disposal, even if I lose in the end. I rebel against what I perceive as intellectual dishonesty, laziness, elitism, and favoritism in science. I cannot stand it when certain individuals in high places do their best to prevent or to slow down progress in humanity's search for truth and knowledge. I am particularly bothered by what I call incestuous thinking. It is a sort of deviant scientific thought that develops when a tight-knit and elitist group of scientific (or religious) leaders develop a private paradigm and then effectively isolate themselves from public scrutiny and criticism through incessant and careful propaganda. They don't do it for scientific reasons because theirs is a political and, when you stop to think about it, a religious platform. A perfect example of this is spacetime physics. The scientific monstrosities that the spacetime physics community has spawned in the last century are now legendary: time travel, motion in spacetime, temporal dimension, wormholes, black holes, etc... The sort of in-your-face crackpottery in high places that is prevalent in this field is so entrenched that no amount of rational or logical debate can put a dent in it. They get away with it by convincing the lay public that they are too stupid to understand it, the same public who pay their salaries. Worse, they have made a career of their crackpottery. Any perceived criticism, good or bad, is seen as a threat to their livelihood: they will fight it tooth and nail. I know. I have seen it. I can make a similar observation with regard to software reliability experts.

As I write in my admittedly irreverent essay on voodoo physics on this site, "to succeed, the rebels must form a hostile political stronghold outside the walls and hope that they can gain enough converts from the lay public (the despised peasantry) and enough defections from the enemy camp to eventually breach through. Once they are in, they must pillage and destroy the old order through terror. The leaders of the fortified castle must be put in chains, tarred and feathered and paraded through the streets for all to see (allegorically of course). This is war!"

 

December 7, 2005

Project Free COSA 1:35 PM EST

Many people have written to me to suggest the formation of a collaborative project to develop a COSA operating system or a COSA virtual machine to be used in an existing OS. I think it's an excellent idea. I have added a new forum topic titled 'Project Free COSA' for this purpose. If you have participated in a collaborative software project in the past, please share your experience and ideas on the forum. If you understand how SourceForge operates and would like to contribute to a SourceForge-hosted COSA project, please post your thoughts. Currently, I do not have the time to head such a project and I invite anyone interested to contact me via email.

I once wrote an article in the Silver Bullet news listing various COSA business models. Please take a look. I need suggestions regarding which one would be the best candidate for an initial SourceForge project. Some people believe that a textual COSA language would be a good way to start building development tools. I disagree but I am willing to hear counter-arguments. I am convinced that development tools should be entirely graphical. There are several reasons for my stance. I will expand on them in the near future.

 

November 30, 2005

Network Security's Achilles' Heel 8:55 AM EST

In our age of terrorism and rising crime, the need to secure one's assets, both tangible and intellectual, is a constant preoccupation of government agencies and the private sector. Computer systems, in particular, are under recurring attacks from intruders. Comprehensive measures must be taken to counter this threat. The problem with data security is not the lack of adequate technology or know-how because current methods of securing computers do work. Otherwise, our electronic banking and commerce systems would have collapsed years ago. The problem is that hackers look for and often find ways of getting around the security barriers by exploiting defects in the underlying software. A network's security is thus intimately tied to the reliability and robustness of the network's software. Security companies have no way of guaranteeing that the various software modules used in their systems are defect-free. This uncertainty is the Achilles' heel of the security industry. The solution is to move away from the algorithmic model of software construction and adopt a signal-based synchronous software model. Only then will we be able to build bug-free software, guaranteed. This is part of the motivation behind Project COSA.

 

November 29, 2005

What If... 10:10 AM EST

What if there were no limit to the complexity of the software systems we could build? How different would the world be? But even more important, what is keeping us from designing and implementing extremely complex automated systems? The answer is simple: bugs. It turned out that the biggest problem faced by the winner of the recent 2005 DARPA Grand Challenge competition was not writing the software but debugging it. Don't expect to see self-driving cars at your neighborhood dealer anytime soon, though. Even though the Stanford team won the competition, the software system they developed is certain to remain experimental for years to come, if not decades. Why? For the simple reason that they cannot guarantee that it is defect-free. Which brings me back to my original question. What if we could build bug-free software of arbitrary complexity? The mind boggles at the possibilities but here is something that sure would be welcome in these days of high energy costs.

Imagine that we could manufacture safe self-driving vehicles. How could this be used to lessen the world's dependence on fossil fuel? The solution is not hard to envision. Big cities like New York, London and Paris would simply acquire a large fleet of self-driving cars and ban privately owned vehicles. Everybody would then be given RFID and/or GPS-enabled beepers with which to summon a vehicle at the click of a button. Enter your current location, destination and desired urgency and click. The nearest parked vehicle would then drive itself to the customer and take them to their destination. The system could even be set up to figure out the most economical route and use carpooling to save fuel. Non-carpooling customers would have to pay a premium. Such a system would eliminate the need to have so many vehicles on the road. Sure, the private transportation industry would not like it but we would save a bundle in energy costs. This is just one possible scenario of what could be done with complex automation as soon as we can rid ourselves of our algorithmic shackles. There are countless other possibilities. Join me and let's get the revolution started already.

 

November 25, 2005

Revolution 4:50 PM EST

A century and a half after Lady Ada Lovelace penned the first algorithm (table of instructions) for Babbage's analytical engine, the algorithmic model of software construction is finally approaching the end of its reign. It has served us well but it can no longer keep up with the ever-growing complexity of our software systems. The cost of software unreliability is staggering and it severely limits what we can accomplish with our machines. This situation can no longer be tolerated. It is time for a change. Project COSA is about a revolution in the making. It calls for a total rethinking of the way we design and program our computers. It will be costly but, in the long run, the benefits will far outweigh the disadvantages. I foresee a glorious renaissance of the field of computing, bigger and better than the revolution of the last century. We will build machines more complex and powerful than we ever dreamed possible. Question is, who, in the industry, is brave and wise enough to take the first step into the new golden era of computing?

 

November 23, 2005

Digg, Del.icio.us, etc...  2:15 PM EST

During the past twenty-four hours or so, rebelscience.org received close to ten thousand hits and eight thousand visitors from around the world. The sudden surge in traffic began with a front page mention on digg.com which then migrated to other social bookmarking sites like del.icio.us and Blinklist. After that, word of mouth (emails, forums, etc...) added to the traffic. I thank all the diggers at digg.com for allowing the link to make it to the front page. I have already received several interesting emails from interesting people. I also noticed a sizable interest from a number of defense-related companies and embedded systems manufacturers. Unfortunately, due to the increased correspondence, I am afraid I cannot reply to everyone.

Some time ago, I submitted a similar article to slashdot but it never made it past their censors. Thank you, Kevin Rose, for democratizing editorial privilege on a wonderful site targeted toward science and technology geeks. By the way, peer review is choking the scientific world. Progress in physics has pretty much come to a standstill and part of it has morphed into blatant crackpottery. How about a scientific publishing site where the public at large gets to decide what is scientific or not? After all, most scientific research is funded with the public's money. Shouldn't they have a say in how it is spent? Just a thought.

 

October 20, 2005

The COSA Reactive Database System 3:50 PM EST

I have been meaning to add a new web page about a database management system based on the COSA model for quite a while. Unfortunately, I have very little time to spare these days. I hope this short article is enough to convey the basic principles of what I envision a COSA database should be like.

Essentially, a fully compliant COSA operating system treats mass storage data the same way a COSA program treats in-memory data. All applications must access the database via special sensors, effectors, and message connectors provided by the database interface layer. A COSA database must therefore follow the COSA principle according to which all actions are driven by changes: the database code itself should be COSA-compliant (synchronous reactive) and should consist of COSA components. But even more important is the way access sensors and effectors are implemented. The COSA Reliability Principle must be adhered to. That is to say, the database interface layer should automatically associate effectors with sensors so as to make sure that every change to a record is immediately communicated to any application that is affected by the change. In addition to sensors and effectors, a COSA database must provide general message connectors that will be used for traditional relational database operations such as sending an SQL query.

Having said that, there is no reason that a COSA interface layer cannot be designed to take advantage of existing database systems. Most databases already provide various NOTIFY-type triggers that can be used to implement sensors. The idea is to provide a mechanism that will automatically find the associations without the developer having to do it manually. This would do wonders for reliability because it would automatically solve the data dependency problem at the system level. For example, if a sensor is used to detect when a certain record reaches a specified value, the sensor should be automatically associated with any operation that can potentially satisfy the condition.
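To make the idea concrete, here is a minimal Python sketch of what automatic sensor-effector association might look like. The names (ReactiveTable, watch, write) are purely illustrative and are not part of any actual COSA or database implementation; the point is only that sensors are wired to changes once, at registration time, never at individual update sites.

```python
# Hypothetical sketch of a change-driven database interface layer.
# All names are illustrative, not an actual COSA API.

class ReactiveTable:
    """A tiny in-memory 'table' where every write notifies interested sensors."""

    def __init__(self):
        self.rows = {}      # key -> value
        self.sensors = {}   # key -> list of (condition, callback)

    def watch(self, key, condition, callback):
        """Register a sensor: fire `callback` whenever `condition(value)` holds
        after a change to `key`. The association with writes is automatic;
        the developer never hooks sensors to individual update operations."""
        self.sensors.setdefault(key, []).append((condition, callback))

    def write(self, key, value):
        """The lone effector: every change is immediately communicated to
        every sensor that watches the changed record."""
        self.rows[key] = value
        for condition, callback in self.sensors.get(key, []):
            if condition(value):
                callback(key, value)

db = ReactiveTable()
events = []
db.watch("inventory", lambda v: v <= 0, lambda k, v: events.append((k, v)))
db.write("inventory", 5)   # condition false, sensor stays quiet
db.write("inventory", 0)   # condition true, sensor fires
```

In a real system the `write` hook would be implemented with the database's own NOTIFY-type triggers, as noted above, rather than in application code.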

I'll have more to say on this subject in a future article...

 

October 12, 2005

Thinking of Everything 2:40 PM EST

In a recent discussion on the slashdot.org forum about software reliability, someone wrote the following in response to one of my messages:

So, if this non-algorithmic, signal-based, synchronous piece of software will encounter a situation that I, the programmer, didn't think about, and therefore couldn't give instructions to the computer about, it will automagically know what I would have wanted it to do in that particular situation?

The COSA software model does not prevent one from creating destructive software. If someone wants to write a program that launches a ballistic missile every time the moon comes into view, that is his or her prerogative. What COSA guarantees is that the program will do what it is designed to do flawlessly. Note, however, that because of the temporal constraints used in COSA, the more complex the software, the less likely it will fail due to design oversight. The reason is that the number of temporal constraints is proportional to complexity. What this means is that the best way to ensure that one has thought of everything is to make the code as complex as possible while maintaining the desired functionality. This is a rather counterintuitive notion but it is one of the many advantages of the COSA model.
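As an illustration only (COSA itself is graphical and non-algorithmic, so any textual rendering is an approximation, and every name below is hypothetical), the following Python sketch shows the flavor of the argument: when timing expectations are declared explicitly as constraints checked on every tick, a design oversight that violates the expected timing is flagged the moment it occurs rather than lying dormant.

```python
# Hypothetical sketch of explicit temporal constraints checked every tick.
# Not an actual COSA mechanism; names are invented for illustration.

class TemporalChecker:
    def __init__(self):
        self.constraints = []   # (name, predicate over the tick history)
        self.history = []       # one set of signal names per tick

    def require(self, name, predicate):
        """Declare a temporal constraint as a predicate over the history."""
        self.constraints.append((name, predicate))

    def tick(self, signals):
        """Advance one synchronous step; return the violated constraints."""
        self.history.append(set(signals))
        return [name for name, pred in self.constraints
                if not pred(self.history)]

def ack_follows_req(history):
    """'ack' must appear no later than one tick after 'req'."""
    prev = history[-2] if len(history) > 1 else set()
    return "req" not in prev or "ack" in history[-1]

good = TemporalChecker()
good.require("ack follows req", ack_follows_req)
ok_run = [good.tick({"req"}), good.tick({"ack"})]   # no violations

bad = TemporalChecker()
bad.require("ack follows req", ack_follows_req)
bad.tick({"req"})
violated = bad.tick(set())   # the ack never arrives: flagged at once
```

The more constraints a design declares, the denser this safety net becomes, which is the sense in which added complexity helps rather than hurts.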

Optimization via Synchronous Sequences

One of the disadvantages of the COSA model, as I originally conceived it, is its low performance. The reason is that instructions (sensors and effectors) are not sequentially ordered in memory but connected via synapses. The kernel (or processor) is forced to look up the next instruction in a sequence using the time-consuming method of indirect memory access. Hopefully, a future COSA-optimized processor will solve this problem once and for all. At any rate, it occurred to me recently that there is a way out of this bottleneck. In spite of its non-algorithmic nature, a COSA application still consists of multiple sequences of operations linked together via pointers. It is possible to compile a program so as to remove all the links (synapses). The compiler would simply generate multiple lists of successive operations that the processor would just sequence through by incrementing a traditional instruction pointer. The main difference is that there would have to be several instruction pointers working concurrently, one for each running sequence or list. In addition, in order to maintain synchronicity, all the pointers would have to be incremented synchronously. I think this approach would bring COSA performance on a par with languages like FORTH and Java. Something to think about.
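A rough Python sketch of this optimization, with hypothetical names and no claim to match an actual COSA kernel: each compiled sequence is a flat list of operations, and a single loop executes one operation from every live sequence per tick, then advances all instruction pointers together so that synchronicity is preserved.

```python
# Sketch of the 'synchronous sequences' idea described above (illustrative
# code, not an actual COSA implementation). Synapse chasing is replaced by
# flat operation lists with one instruction pointer each, all incremented
# in lockstep.

def run_synchronous(sequences):
    """Execute several operation lists in lockstep.

    Each operation is a zero-argument callable. Every tick executes the
    current operation of every still-running sequence, then advances all
    instruction pointers synchronously."""
    pointers = [0] * len(sequences)
    while any(p < len(seq) for p, seq in zip(pointers, sequences)):
        # Phase 1: run the current operation of every live sequence.
        for i, seq in enumerate(sequences):
            if pointers[i] < len(seq):
                seq[pointers[i]]()
        # Phase 2: advance every instruction pointer together.
        pointers = [p + 1 for p in pointers]

trace = []
seq_a = [lambda: trace.append("a0"), lambda: trace.append("a1")]
seq_b = [lambda: trace.append("b0"), lambda: trace.append("b1"),
         lambda: trace.append("b2")]
run_synchronous([seq_a, seq_b])
# Each tick runs one op from every live sequence: (a0,b0), (a1,b1), (b2)
```

A real kernel would of course keep hardware instruction pointers rather than Python lists, but the two-phase structure (execute all, then advance all) is the essential point.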

 

September 24, 2005

Allchin's Predicament 12:30 PM EST

Yesterday, an article in the Wall Street Journal's Online Edition gave a vivid description of the costly software reliability problems that Microsoft has had to endure in its effort to develop the next version of its Windows operating system. It drove home a point that I have repeatedly made in the past. The biggest problem with software is communication. I am not talking about the lack of perfect communication between programmers (nothing can really be done about that since programmers come and go) but about communication between various parts of the software. Microsoft is suffering from a classic case of the "right hand not knowing what the left hand is doing" syndrome. Here's a relevant excerpt from the article (emphasis added):

Mr. Allchin's [senior executive at Microsoft] reforms address a problem dating to Microsoft's beginnings. Old-school computer science called for methodical coding practices to ensure that the large computers used by banks, governments and scientists wouldn't break. But as personal computers took off in the 1980s, companies like Microsoft didn't have time for that. PC users wanted cool and useful features quickly. They tolerated -- or didn't notice -- the bugs riddling the software. Problems could always be patched over. With each patch and enhancement, it became harder to strap new features onto the software since new code could affect everything else in unpredictable ways.

The problem has to do with what I call blind code and it is not just Microsoft's problem. It is an old problem that has plagued the entire software development industry from the beginning. It is proportional to complexity but it does not have to be. In fact, it can be completely eliminated. The solution requires a rethinking of software construction, not only at the single program level but also at the operating system level. It calls for the reinvention of computing at the fundamental level. Eventually, even basic microprocessor architecture will have to be overhauled. This is what Project COSA is all about. For more on this important subject, see these previous news articles:

Component-Level Dependencies
Cell-Level Dependencies
Sensors and the Vision Problem
The Vision Problem
Failure to Communicate

 

September 8, 2005

Advertising and Fund Raising 2:30 PM EST

I have put a lot of work over the years in my research and I provide my findings free of charge to everybody. In order to help pay for my time and the cost of maintaining the site, I have decided to accept advertising from third parties, starting with Google ads. This is something I had always sworn I would never do, but the fact of the matter is that I need time and money to continue this work. The Rebel Science site gets a fair amount of traffic from all over the world. I don't think that a few ads would inconvenience anyone. It would help pay for some of my costs and would give me additional time to work on the things that are dear to me. Please email me and let me know what you think.

 

August 30, 2005

Academic Failure 2:15 PM EST

Over the years, I watched helplessly as the computer science community turned software engineering into a veritable tower of Babel. There is no doubt about it, academia is to blame for the mess that we find ourselves in. They turned a scientific discipline into a religion with its own pantheon of infallible deities: Alan Turing, Frederick P. Brooks, Ada Lovelace, etc... They only have themselves to blame because it was their job to fix the software reliability problem and they failed. In fact, they created the problem in the first place. For more than half a century, they indulged themselves in an incestuous algorithmic orgy, the monstrous offspring of which society has been forced to endure for far too long. In spite of countless examples of synchronous systems (e.g., biological neural systems, logic circuits, etc...) everywhere, they stuck to the algorithmic model. It never occurred to them that their fundamental approach to software construction might be flawed. After all, Turing and Brooks are gods, and the wisdom of the gods must not be questioned.

Inevitably, solving the reliability problem will point the finger at their own glaring incompetence and even stupidity. Of course, they cannot let that happen. They have the propaganda machine and resources to ensure that the problem lasts as long as possible. They have created an entire reliability industry, the sole purpose of which is to point the finger in another direction and, of course, to make money in the process. They have succeeded in convincing the industry that the best way to improve reliability is to embrace the same techniques used by other engineering disciplines. Any dissenting voice will be marginalized. But this little game can only last for so long because the monster is not going to go away. Computer programs are not bridges. Like global warming and high energy costs, the software crisis is getting more unbearable with every passing day. Something has to be done, soon. And they know it.

Reinventing Computing 2:15 PM EST

The only way to cure what ails the computer industry is to start over from the beginning. We must reinvent computing as we have come to know it. We must reinvent both software and processor technologies: software, because it is based on the algorithm; and processors, because they are designed and optimized to execute algorithms. As I explain on the Silver Bullet page, the algorithmic software model is the problem. Move to a signal-based, synchronous model and the problem will disappear. Certainly, nothing is stopping anyone from designing and implementing COSA-compliant operating systems and development tools for current algorithm-optimized processors. However, such a system, in spite of its guaranteed reliability, would be somewhat limited in the market because of its relatively slow performance (see disadvantages). That does not mean that we should all wait for the arrival of new processors to develop COSA-compliant applications. This is not going to happen any time soon. Besides, no chip designer should even consider designing a COSA-optimized processor until the software has been in use long enough for all the kinks to be ironed out. Data structures should be stable and hopefully standardized by a world standards body. Of course, any company so inclined is always free to implement its own proprietary standards so as to be the first to market with a revolutionary technology. The potential windfall is undeniable. But so is the risk because, later, when standards are agreed upon, an early entrant may find itself holding a bunch of obsolete technology. But the risk is small, in my opinion. The worst that can happen is that an early bird will have to change its initial design to ensure standards compliance. It would still be months ahead of everyone else and would still have a good chance to corner the market.

 

August 17, 2005

Intel's New Processor Architecture 1:20 PM EST

Intel Corp. recently announced its plans to overhaul its chip architecture, according to the Wall Street Journal.

Consider that all processor architectures are based on and optimized for the algorithm, a custom started by a guy named Babbage more than 150 years ago. A really new architecture should abandon the algorithmic model and adopt a non-algorithmic, signal-based synchronous software model. It would revolutionize computing and solve the nastiest problem in the computer industry: software unreliability.

But we cannot expect big companies like Intel, AMD or IBM to be truly innovative when it comes to revamping processor architectures. Their approach is evolutionary, not revolutionary and they are doing just fine as it is. They have no great incentive to change. Hopefully, a bright upstart will get the message and make a killing while the behemoths are busy fighting each other for market share. They won't know what hit them until it's too late.
The message is simple: There is a solution to the software reliability crisis. The disadvantage is that it will require a radical change in both processor architecture and software construction methodology. The advantage is too good to ignore: 100% software reliability! Guaranteed!

This is the stuff that revolutions and great companies are made of. After a century and a half, I think it's time for a change. He who has an ear (and the venture capital) let him hear!

 

August 1, 2005

New Forum 12:40 AM EST

RebelScience.org now has its own discussion forums. All the forums are housed under one roof. Please feel free to register and let me know what you think. 

 

April 18, 2005

Not Enough Time 10:00 AM EST

Lately, it has been rather difficult for me to find time to spend on Project COSA. I must admit that COSA is not my main interest. Those of you who have been following my work over the years know that my primary interests are artificial intelligence and fundamental physics, in that order. Unless something unexpected happens to free me from other duties, I am afraid that my contribution to Project COSA will be limited in the next few months. In the meantime, take a look at this site, Synchronous Reactive Programming in Germany. I just recently found out about it. I am not affiliated with that project but the author seems to have taken a special interest in COSA and reactive programming in general.

 

Older News

 

© 2004-2007 Louis Savain

Copy and distribute freely