Abstract:
This page contains a description of the COSA operating system. COSA is
an alternative software construction and execution environment designed to improve software productivity by several orders of
magnitude. In addition, COSA will enable
the creation of bug-free software applications of arbitrary complexity. It
is
based on the premise that the primary reason that computer programs are
unreliable is the age-old practice of using the algorithm as the basis
of software construction. Switch to a synchronous, signal-based model
and the problem will disappear. Please refer to the previous articles (Silver
Bullet, Project COSA) in the series for
important background information.
COSA is a complete operating
system. It consists of two major subsystems, the execution kernel and
the support system. The latter consists of common service components
that are needed for a modern operating system. Note that there is no clear distinction between a
system service component and a user application other than the
convenience of labeling them as such. Every application is seen as an
extension of the operating system. Thus, every COSA system can be
custom-tailored for a given purpose. That is to say, if, for example,
the application requirements do not call for file I/O and graphics
capabilities, the file and graphics components are simply left out.
The
execution kernel is the part of COSA that runs everything. It consists
of the cell processor, the cells proper, and the data objects. The term
"cell" is borrowed directly from neurobiology because
biological neurons are similar to the elementary processors used in
COSA, as explained below. Components are high-level
objects which are made of cells and data objects or other components.
Components are not considered part of the execution kernel. They are explained in detail in
the Software Composition page. The following is a
short listing of the items that comprise the execution kernel.
The
cell processor is an elementary object processor. It runs everything in
COSA including applications and services.
Data
objects are also called passive objects or properties because they do not do anything
of themselves. They are operated on by cells (active objects). A data
object can be a variable, a constant, or a collection of properties.
Cells
are active synchronous objects that reside in computer memory. A cell
communicates with other cells via signals. There are two types of cells:
sensors and effectors. Sensors detect changes or patterns of changes and
effectors execute changes. Together with data (passive objects) cells
comprise the basic building blocks of every COSA application.
A
synapse is part of a cell. It is a small data structure which is used to
connect one cell to another.
A
sensor is a type of cell that detects change. Sensors detect changes
either in data or in the external environment.
A
logic detector is a special sensor that detects a logical combination of
events.
A
sequence detector is a special sensor that detects a pattern of events
(changes) occurring over time.
An effector is a type of cell that operates on data variables and/or the external
environment.
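Since COSA prescribes no particular implementation language, the kernel objects just listed can only be illustrated speculatively. The following Python sketch shows one plausible shape for cells and synapses; every name in it is invented for illustration and is not part of the COSA specification.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the kernel's core records; all names are illustrative.

@dataclass
class Synapse:
    source: "Cell"          # the cell that owns this synapse
    destination: "Cell"     # the cell that receives the signal

@dataclass
class Cell:
    kind: str                                       # e.g. "sensor" or "effector"
    synapses: list = field(default_factory=list)    # output connections
    operands: list = field(default_factory=list)    # assigned data objects, if any

    def connect(self, other: "Cell") -> Synapse:
        s = Synapse(self, other)
        self.synapses.append(s)   # a synapse is an integral part of its source cell
        return s

# Usage: wire a sensor to an effector.
sensor = Cell("sensor")
effector = Cell("effector")
link = sensor.connect(effector)
```

Note how the synapse is stored inside its source cell, mirroring the description above: there are no separate pathway objects, only cells and their synapses.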
The cell processor is a highly optimized
interpreter. It is the part of the COSA
operating system that handles the reactive logic for the entire system.
The processor’s function is to emulate parallelism by performing an operation for every cell
which is in need of processing. It should be viewed as a
necessary evil. It is needed only because software objects in a von
Neumann machine cannot process themselves. The cell processor is
designed to be completely transparent to the application developer.
In COSA, every computation is considered
to be a form of either input or output processing. For example,
detecting a change in a data variable (a comparison operation) is just as much a
sensory event as a key-press or a button-down signal. Likewise,
incrementing a counter variable is no less a motor action than sending a character
to the printer. The following elaborates on various aspects of the
cell processor.
An
ideal COSA operating system would have no algorithmic processes
whatsoever as all objects would be self-processing and would communicate
via direct connection pathways, not unlike biological neurons. Unfortunately, the
architecture of von Neumann computers is such that a software system
must have at least one algorithmic thread (a set of CPU instructions).
As a result, communication pathways between objects are virtual.
In
COSA, there is a single loop thread called the execution thread which
runs the entire system, including applications and services. The cell
processor is essentially a virtual machine similar to a FORTH
interpreter or the Java™ virtual machine. It is designed to
be completely transparent to the software designer. It is the only
directly executable code in the entire system. No new executable code is
allowed, not even for traditional system services such as file I/O,
memory management, device drivers, etc... Software construction is achieved by connecting
elementary cells
together using simple signal connectors called synapses.
Cells
can be combined into larger modules or plug-compatible components. This
is mostly
for
the benefit of the application designer because the cell processor sees
only the cells, their synapses and assigned data operands, if any.
Nothing else.
There
are a few exceptions to the single thread rule. Every operating system
must be able to handle hardware interrupts in real time. Interrupts are,
in essence, sensory events indicating that something important has
happened which needs immediate handling. Examples are mouse, keyboard,
network card or hard drive events. COSA uses small interrupt service
routines to modify relevant data/message structures and mark appropriate
sensors for updating by the cell processor at the earliest opportunity.
Neither applications nor services have permission to access interrupts
directly. Indeed they cannot do so since no new native (directly executable) code is
allowed in COSA. All possible interrupt eventualities must be handled by
the operating system and, if the need arises, corresponding sensors are
provided for use by applications and services. This eliminates a lot of
headaches, especially in safety-critical domains. Another
exception has to do with copying memory buffers. Sometimes, it is
imperative that such tasks be executed as rapidly as possible. It would
be best to delegate them to a dedicated DMA or graphics hardware chip.
However, this may not always be possible. The alternative is to use a
super-optimized assembly language thread. To the software designer, a
buffer-copy component would be no different than any other COSA service
component. It would have an input connector for service request messages
and an output connector for acknowledging when a task is completed. (See
the next article in the series for more information on components and
connectors).
The
job of the cell processor is to update all cells that need updating at
every tick of the master clock. This is done with the help of two update
lists, one for input processing and the other for output processing. The
lists contain pointers to all objects that need updating at any given
time. As one list is processed, the other is filled. The reason for
using two lists instead of one is to prevent signal racing conditions
that would otherwise arise.
During
every cycle, the lists are processed one after the other starting with
the input list. During input processing, the cell processor performs a
primitive operation on behalf of every cell in the input list according
to the cell’s type. The cell is stamped with an integer value
representing the time of activation (I shall explain later why this is
important). After each operation is performed, the cell is placed in the
output list for subsequent output processing. The input list is emptied
after processing is completed. Output processing is much simpler. It
consists of placing the destination targets of every cell currently in
the output list into the input list. After completion, the output list
is emptied, the master clock is incremented and the cycle begins anew.
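The two-list cycle described above can be rendered as a short sketch. This is a hypothetical Python illustration, with each cell's type-specific operation reduced to a placeholder and each synapse reduced to its destination cell:

```python
# Illustrative sketch of the cell processor's two-list update cycle.

class Cell:
    def __init__(self, name):
        self.name = name
        self.synapses = []           # destination cells (synapses simplified)
        self.activation_time = None  # stamped with the master clock on activation
        self.fired = 0

    def operate(self):
        self.fired += 1              # placeholder for the type-specific operation

class CellProcessor:
    def __init__(self):
        self.master_clock = 0
        self.input_list = []         # cells awaiting input processing
        self.output_list = []        # cells awaiting output processing

    def tick(self):
        # Input processing: perform each cell's operation, stamp it with
        # the time of activation, and move it to the output list.
        for cell in self.input_list:
            cell.operate()
            cell.activation_time = self.master_clock
            self.output_list.append(cell)
        self.input_list = []

        # Output processing: place every cell's destination targets into
        # the input list for the next cycle, then empty the output list.
        self.input_list = [dest for cell in self.output_list
                           for dest in cell.synapses]
        self.output_list = []

        # Increment the master clock; the cycle begins anew.
        self.master_clock += 1
```

Using two lists rather than one means that signals emitted during a cycle are not processed until the following cycle, which is what prevents the signal racing conditions mentioned above.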
Of course, in
order for any processing to occur, cells must first be placed in the
input list. This normally happens when an application or component is
loaded into memory and started or when a sensor detects a change in the
hardware.
Unfortunately
for the COSA software model, current microprocessors are optimized for
conventional algorithmic software. As I mentioned elsewhere on this
site, this is an old tradition that has its roots in Charles Babbage's
and Lady Lovelace's ideas for the analytical engine, a machine conceived
more than one hundred and sixty years ago as an assembly of metal gears
and rotating shafts! The use of the algorithm as the
basis of computing may have been a necessity in the
early days of the modern computer era when clock speeds were measured in
kilohertz and computer memory was at a premium, but with the advent of megahertz and gigahertz CPUs, there is no longer any excuse for
processor manufacturers to continue doing business as usual. A
specially designed COSA-optimized processor could maintain both update lists
in the processor's on-chip cache, obviating
the need to access main memory to read and write to them, thus saving processing time.
And since COSA cells are concurrent objects, they can be dispatched to
available execution cores within a single multicore processor for
simultaneous processing. Vector processing techniques can be used extensively to increase performance.
It should even be possible
to keep copies of the most often used cells in cache memory in order to
further improve performance. In addition, the processor should have
intimate knowledge of every cell type. As with FORTH processors, this
would eliminate the need for a software interpreter or microkernel. The CPU
itself would
be the cell processor. These
optimizations, among others, could bring performance to a level on a par
with or better than that of RISC processors. See
How
to Design a Fine-Grain, Self-Balancing, Multicore CPU. One
of the advantages of the COSA model is that it is a change-based
computing environment. As a result, a comparison test is made only
once, i.e., when a related change occurs. Contrast
this with algorithmic software where unnecessary comparison operations
are often performed every time a subroutine is called. Sooner
or later, the computer industry will come to its senses and recognize the
soundness of the synchronous software model. The traditional algorithmic
microprocessor will then join the slide rule and vacuum-tube computer as
mere historical curiosities.
In a
COSA program, there
are obviously no tangible signals--comparable to the changes of potential in a
digital circuit--that travel from one cell to another. There are no
physical pathways between cells. Signals are
virtual and signal "travel" time is always one cycle. The mere placement of a
cell in an update list means that it has either received or is on the
verge of transmitting
a signal. This is a plus for software because the signal racing
conditions that commonly occur in hardware are non-existent. It should
be noted that signals do not carry any information in COSA.
Particularly, a signal does not have a Boolean value. A signal is a simple
temporal marker that indicates that some phenomenon just occurred. The
nature or origin of the phenomenon cannot be determined by examining the
signal.
There
is a difference between a signal and a message in COSA. As explained above, a signal
is an abstract temporal marker. With regard to effector cells (see
below), it always marks either the end of an elementary operation or the
beginning of another or both. With regard to sensor cells it marks
either a positive or negative change in the environment. Signals are
always processed synchronously (i.e., immediately) by the cell
processor.
A message, on the other hand, is a data structure that can be
placed in a queue (FIFO) or a stack (LIFO) and assigned a priority level.
Messages are shared data structures between two or more components
and are associated with a set of male and female connectors. Message
queuing and stacking are not
handled directly by the execution kernel. That is the job of the message
server, one of the many service components in COSA (this will be
explained further in a future article).
One
of the most important aspects of using a master clock is that every
elementary operation lasts exactly one execution cycle. An
uncompromising consistency in the relative order of signals and
operations must be maintained at all costs. Logically speaking, it does
not matter whether the durations of the clock cycles are not equal. As
long as the master clock (really an integer count) is incremented once
every cycle and as long as it serves as the reference clock for timing
purposes, logical order consistency is maintained. This is because
coincidence and sequence detection are based on the master clock, not on
a real-time clock. Thus all concurrent operations execute synchronously
between heartbeats or clock ticks. Because of this deterministic
concurrency, COSA can be said to belong to a class of software systems
known as synchronous
reactive systems.
The
temporal deterministic nature of cell activation is maintained even
though the heart rate of a COSA system is non-deterministic relative to
real time: each cycle lasts only as long as necessary. This is not
entirely unlike the digital processor design approach known as
'clockless computing.' There is no need to waste time waiting for an
external clock tick if one does not have to. This does not mean that a
COSA system cannot react in real-time. It can. In fact, the software
designer will have ample flexibility to tweak message priorities to
handle the most demanding environments. The important thing is that the
temporal integrity of cell activations is maintained relative to the
virtual clock.
Sometimes
(e.g., simulation systems, parallel computing, etc...) it is necessary
to synchronize a COSA program with an external clock. The best way to
achieve this is to use the clock ticks emitted by one of the system's
interrupt-driven sensors as reference signals. In such cases, all operations must execute
within the fixed interval allotted between real-time ticks. In the event that the
execution time exceeds the clock interval, the application designer has
several options: a) use a slower real-time clock; b) use a faster CPU; c)
split long operations into shorter ones; d) lower the priority of some
message servers; or e) lighten CPU load by adding more CPUs to the
system and redistributing the load.
There
is no need for multi-threaded applications in COSA. The performance
overhead and latency problems associated with context switching are
nonexistent. In COSA, every cell is its own tiny task or process, so to
speak. This does not mean that the cells are always running. Since the system is driven by change, only
the cells that are
directly affected by events need to be processed during any given cycle.
This makes for a very efficient approach to concurrency because only a
relatively small percentage of the cells in a system are active at any
one time. As a result, a single-CPU COSA operating system can easily
support tens of thousands and even millions of concurrent cells. Priority
processing is controlled at the message server level, not at the kernel
level. The kernel never processes messages directly. High-priority
messages are immediately placed at the head of their queues. In
addition, all message servers automatically stop serving low-priority
messages while any high-priority message is still in a queue. This
technique ensures that time-critical tasks always get a disproportionate
share of processor runtime. Please refer to the Software
Composition page for more details on this topic.
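The priority rule just described can be sketched in a few lines. The two-queue discipline below is an assumption drawn directly from the text (low-priority messages are not served while any high-priority message is queued); a real message server would also handle stacking and multiple priority levels:

```python
from collections import deque

# Illustrative sketch of the priority rule: low-priority messages wait
# as long as any high-priority message remains in a queue.

class MessageServer:
    def __init__(self):
        self.high = deque()   # high-priority queue
        self.low = deque()    # low-priority queue

    def post(self, message, high_priority=False):
        (self.high if high_priority else self.low).append(message)

    def serve(self):
        # High-priority messages always go first; only when the
        # high-priority queue is empty are low-priority messages served.
        if self.high:
            return self.high.popleft()
        if self.low:
            return self.low.popleft()
        return None
```

This is how time-critical tasks get a disproportionate share of processor runtime without any kernel-level scheduling.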
As
mentioned above, one of the things that distinguishes concurrent COSA
objects from concurrent algorithms is that all COSA objects perform
their assigned elementary operations within one execution cycle. This
uniformity of execution lends itself to the creation of verification
tools and techniques for both static and dynamic analysis. For example,
it is easy to determine at run time whether or not a given data operand
is being accessed concurrently for modification and/or reading by
multiple effectors and/or sensors. It is also possible to determine
statically--by counting and analyzing convergent decision
pathways--whether the potential exists for conflicting data access. In
addition, since a COSA program is a condition-driven behaving system and
since all conditions are explicit in the design, it is easy to create
tools that automatically test every condition before deployment. This is
part of the mechanism that guarantees the reliability of COSA programs.
A COSA cell is a primitive
concurrent object. Cells communicate with one another via simple
connectors called synapses, to borrow a term from neurobiology. The
function of cells is to provide a general purpose set of basic
behavioral objects which can be combined in various ways to produce
high-level components and/or applications. There is no computational
problem that can be solved with a general purpose algorithmic language
that cannot also be solved with a combination of COSA cells.
The number of input synapses
a cell may have depends on its type. The number of output synapses is
unlimited. Comparison sensors, for example, do not have input synapses. Internally,
a cell is just a data structure that contains the cell's type, a list of
synapses and pointers to data operands, if any. When a cell is processed
by the execution thread during input processing, an elementary operation
is performed on its behalf according to the cell's type. After
processing, the cell immediately emits an output signal to any
destination cell it may be connected to.
COSA
cells are divided into two complementary categories: sensors and effectors.
The latter operate on data while the former detect changes or patterns
of changes. What follows is a discussion of various topics related to
cells and their operations.
A
synapse is a simple data structure that serves as a connector between
two cells. It contains two addresses, one for the source cell and
another for the destination cell. A synapse is an integral part of its
source cell. It is used by the cell processor to effect communication
between cells. When two cells are connected, they share a synapse.
Note that, even though cells are graphically depicted as being
connected via signal pathways, internally there are no
pathways. There are only the cells and their synapses. Still, it is
beneficial to think of a source and a destination cell as communicating
over a unidirectional pathway. The pathway is considered an essential
part of the source cell, similar to the axon of a neuron. All
synapses maintain a strength variable at least during the development
phase. The variable is incremented every time a signal arrives at the
synapse. Synaptic strength is an indication of usage and/or experience.
It can serve as a powerful debugging aid. In case of failure, the
culprit is almost invariably a young or rarely exercised synapse. From
the point of view of reliability, it is important to verify that every
signal pathway has seen activity at least once. Since all signals must
pass through synapses, the use of synaptic strengths can be used to
ensure total coverage of all decision pathways during testing. This is a
tremendous asset in failure diagnosis (blame assignment).
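A minimal sketch of the synaptic strength mechanism, assuming nothing more than the per-synapse counter described above:

```python
# Illustrative sketch of synaptic strength as a coverage/debugging aid.

class Synapse:
    def __init__(self, source, destination):
        self.source = source
        self.destination = destination
        self.strength = 0        # incremented every time a signal arrives

    def signal(self):
        self.strength += 1

def untested_synapses(synapses):
    # A strength of zero means the pathway has never seen activity:
    # a prime suspect during failure diagnosis (blame assignment).
    return [s for s in synapses if s.strength == 0]
```

Because every signal must pass through a synapse, scanning the counters after a test run gives total coverage information for free.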
Sensors
have no input synapses (except Logic and Sequence
detectors) but they can have as many output synapses as necessary. Their function is to detect
specific changes or state transitions, either
in the computer's hardware or in data variables in memory. When a change
is detected, a sensor immediately sends a signal to all its
destination cells. A sensor should be thought of as being always
active in the sense that it is always ready to fire upon detection of a
change. Every sensor in COSA has a complement or opposite. The following
table lists several examples of sensors and their complements:
Sensor | Complement
mouse-button-up | mouse-button-down
key-up | key-down
greater-than | not-greater-than
less-than | not-less-than
equal | not-equal
bit-set | bit-clear
Comparison
sensors--also called data sensors--must be associated with one or more
effectors (see the discussion on sensor/effector associations below).
Their function is to detect specific types of changes in their assigned
data. As mentioned earlier, a signal-based system is a change-driven
system.

As
an example of how this works, let us say a given comparison sensor's job
is to detect equality between two variables A and B. If A and B are
equal both before and after an operation on either A or B, the sensor
will do nothing. There has to be a change from not-equal to equal in
order for the sensor to fire. The same change requirement applies to all
sensors. COSA does not draw a process distinction between external
(interrupts) and internal (changes in data) sensory phenomena.
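The change requirement can be illustrated with a hypothetical equality sensor that fires only on the not-equal to equal transition; the class below is an invented sketch, not COSA's actual mechanism:

```python
# Illustrative sketch: a comparison sensor fires only on a transition
# into its condition, not while the condition merely continues to hold.

class EqualSensor:
    def __init__(self):
        self.was_equal = None   # previous state of the comparison

    def update(self, a, b):
        equal = (a == b)
        # Fire only on the not-equal -> equal edge.
        fired = equal and self.was_equal is False
        self.was_equal = equal
        return fired
```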
Note: A cell's output synapses are always depicted as
small red circles in order to distinguish them from input synapses which
can be either white or black.
There is an implied logic to sensory signals, one
which is assumed a priori. The logic is so obvious and simple
as to be easily overlooked. Sensors react to specific
phenomena or changes in the environment. The offset of a
given phenomenon is assumed to follow the onset of the same phenomenon.
This is what I call the principle of sensor coordination or PSC.
It can be stated thus:
No phenomenon can start if it has already started or stop if it is
already stopped.
As an example, if the intensity of a light source increases to
a given level, it must first go back below that level in order to increase to
that level again. The logical order of events is implicit in all sensory
systems. That is to say, it is imposed externally (in the environment) and no special sensory
mechanism is needed to enforce it. This may seem trivial but its significance will become clear when I discuss
motor coordination and effectors below. As the principle of complementarity would have it, effector
coordination is the exact mirror opposite of sensor coordination.
That is, effector logic is the complementary opposite of sensory logic. I
like to say that effecting is sensing in reverse.
It
goes without saying that a complementary pair of sensors (e.g., button-up
and button-down) should never fire simultaneously. If they do,
something is obviously malfunctioning.
An important corollary of
the PSC is that positive and negative sensors always alternate. For
example, a mouse-button-up signal cannot be followed by another
mouse-button-up signal without an intervening mouse-button-down signal.
The PSC can be used as a debugging aid during development and/or as part
of a malfunction alert mechanism during normal operation.
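As a sketch, a PSC monitor need only remember the last signal seen from a complementary pair and flag a repeat. This is an illustrative reconstruction of the debugging aid described above, not part of any COSA specification:

```python
# Illustrative PSC check: signals from a complementary sensor pair must
# alternate; two consecutive onsets (or offsets) indicate a malfunction.

class PscMonitor:
    def __init__(self):
        self.last = None   # "onset" or "offset"

    def observe(self, event):
        assert event in ("onset", "offset")
        violation = (event == self.last)   # same signal twice in a row
        self.last = event
        return violation
```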
An
effector is roughly analogous to a simple code statement in a conventional
program. Usually, it performs a single operation on data such as (A =
B + 1) or (A = B x C). An
operation is any effect that modifies a passive object (a variable).
Effectors operate on assigned data items using either direct, indirect or
indexed addressing. Data types (integer, floating point, etc...) are
specified at creation time.
Effectors are
self-activating cells, not unlike the tonically active neurons found in
the brain's motor system. What this means is that, once triggered, they
will repeatedly execute their operation for a
prescribed number of cycles. The left effector in the figure is
preset to repeat an addition 10 times while the right effector is preset to
perform 40 multiplications. The number of operations may range anywhere from 1
(for one-shot effectors) up to
an indefinitely high value. For this reason, all effectors, except one-shot
effectors, have two
input synapses, one for starting and the other for stopping the
activation.
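The start/stop behavior just described might be sketched as follows, with the cell processor's per-tick work reduced to a `cycle()` method; all names are illustrative:

```python
# Illustrative sketch of a self-activating effector: once triggered, it
# repeats its operation once per cycle for a preset count unless stopped.

class Effector:
    def __init__(self, operation, repeat=1):
        self.operation = operation
        self.repeat = repeat        # preset number of operations
        self.remaining = 0          # cycles left to run

    def start(self):                # signal on the start synapse
        self.remaining = self.repeat

    def stop(self):                 # signal on the stop synapse
        self.remaining = 0

    def cycle(self):
        # Called once per tick by the cell processor while active.
        if self.remaining > 0:
            self.operation()
            self.remaining -= 1
```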
In
keeping with the principle of complementarity, an effector is defined as the opposite of a
sensor. The complementarity is evident in the following table:
Sensor | Effector
Acted on by the environment | Acts on the environment
No input connections | One or more input connections
One or more output connections | No output connections
It
often turns
out that one or more objects need to be notified whenever a given
operation is performed. Normally, this calls for the use of a
complementary activation sensor,
one for every effector. However, rather than having two separate
cells, it seems much more practical to combine the
function of effector
and sensor into a single hybrid cell, as seen in the figure
below.
On receipt of a start signal,
a hybrid effector performs its assigned operation and then emits an outgoing
signal immediately afterwards. When creating a new effector, it is up to
the software developer to choose between the normal and the hybrid type.
An
effector may have any number of start and stop synapses. If an effector does not receive a
stop command signal after initial activation, it will repeat its operation until it
has run its course, at which time it will emit an output signal to
indicate that it has finished its task. There
is a strict rule that governs the manner in which start and stop command
signals may arrive at an effector. I call it the Principle of Motor
Coordination or PMC. The PMC is used to detect command timing conflicts as
they happen.
No action can be started if it has already started, or stopped if it is
already stopped.
In other words, a start
signal must not follow another start signal and a stop signal must not
follow another stop signal. The PMC further stipulates that an effector must not receive more than one signal at a
time, regardless of type. The reason is that an effector must not be
invoked for two different purposes simultaneously. In other words, one
cannot serve more than one master at the same time. Bad timing
invariably results in one or more signals arriving out of turn, which
can lead to failures. During development, or even during normal
operation in mission-critical environments, the system can be set to
trigger an alert whenever the PMC is
violated. In addition, it is possible to create analysis tools to
determine whether or not the PMC can potentially be violated as a result
of one or more combinations of events. This can be done by analyzing all
relevant signal pathways, event conditions and cells leading to a particular effector.
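A runtime PMC check could be sketched like this; the `cycle` argument stands in for the master clock reading, and the three rules enforced are exactly the ones stated above. The class is an invented illustration:

```python
# Illustrative PMC check on one effector: a start must not follow a
# start, a stop must not follow a stop, and no two command signals may
# arrive during the same cycle.

class PmcMonitor:
    def __init__(self):
        self.running = False
        self.last_cycle = None

    def command(self, kind, cycle):
        assert kind in ("start", "stop")
        violation = False
        if cycle == self.last_cycle:
            violation = True            # two signals in the same cycle
        elif kind == "start" and self.running:
            violation = True            # started while already started
        elif kind == "stop" and not self.running:
            violation = True            # stopped while already stopped
        self.running = (kind == "start")
        self.last_cycle = cycle
        return violation
```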
In a
truly parallel system (such as an electronic circuit), every cell
is its own processor. Since a hardware sensor is always active, it
continually keeps watch on whatever change event it is designed to detect and fires upon detection.
In a software system, however, sensors must be explicitly updated.
Updating every sensor at every tick of the clock would be prohibitively
time-consuming in anything but the simplest of applications. The reason is that a
single main processor must do the work of many small processors.
But
all is not lost. There is an easy and efficient technique for getting around the
bottleneck. I call it dynamic pairing or coupling. It suffices to associate one or more comparison sensors with
one's chosen effector (see figure above) and the cell processor does the rest.
The dotted line shown in the figure indicates that the 100+
effector is associated with the != comparison sensor. In other words, the
sensor does a comparison every time the effector performs an addition.
This is taken care of automatically by the cell processor. The sensor
shown in the example sends a stop signal to the very
effector with which it is associated. This way, as soon as the comparison is
satisfied, the sensor terminates the iteration. As will be seen in the Software
Composition page, this arrangement can be used to implement simple traditional
loops.
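The effector-sensor arrangement just described behaves like a conventional counting loop. The sketch below collapses the cycle-by-cycle mechanics into a single function for illustration: the increment stands in for the effector's elementary operation, and the `!=` test stands in for the associated comparison sensor that fires a stop signal when satisfied.

```python
# Illustrative rendering of dynamic pairing as a traditional loop.

def run_paired_loop(start, limit):
    value = start
    running = True
    while running:                 # each pass stands in for one cycle
        value += 1                 # the effector's elementary operation
        if not (value != limit):   # the paired "!=" sensor, checked after the op
            running = False        # the sensor's stop signal ends the iteration
    return value

# Usage: run_paired_loop(0, 10) counts from 0 up to 10,
# exactly like a conventional loop.
```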
Note
that, even though, underneath, the sensor does a comparison test
immediately after the operation, both the operation and the comparison
should be thought of as happening simultaneously, i.e., within one
execution cycle. The idea is that the sensor immediately reacts to the
change, as it happens. Note also that an indefinite number of comparison
sensors may be associated with a given effector. This way, multiple related comparisons can be done every time an action is taken.
Conversely, a given comparison sensor may be associated with more than one
effector, that is, with any effector whose actions may potentially affect
the sensor. I
originally conceived of the concept of dynamic
pairing
as a
way to get around the problem of updating every sensor in a program at
every tick of the master clock. I soon realized that the development
system can be designed so that the programmer can be relieved of the
burden of finding and creating associations by hand. The system can take care of it automatically. The result is that
all data
dependencies are resolved, thereby completely
eliminating blind code!
A
message effector is a special hybrid
effector that is used to effect message communication between components.
An ME is an integral part of every message
connector and is the only cell that is shared by two components: a sender
and a receiver.

The sole restriction is that only the sender can start
an ME and only the receiver can stop
it. Remember that a hybrid cell emits a signal immediately at the end of
its operation. The signal is interpreted by the sender component to mean
that the message has been received and that the receiver is ready to
receive another. Note that the effector pictured above stops
automatically after 10 cycles. This way, the sender component can send a
new message automatically every 10 cycles. It is up to the component
designer to choose an appropriate interval.
It
is important to pick the right granularity level for basic effector
operations. It must be neither too high nor too low. For example, a
typical addition operation like A = B + C is performed in several
steps by the CPU. First B is fetched from memory and then C is fetched.
A binary addition is then performed and the result placed in a temporary
register. Finally, the result is copied into A. Should each step be
considered a separate action to be performed by a separate cell? The
answer is no.
As
mentioned earlier, a signal-based program is driven by change. The only
relevant changes are changes in the data operands that reside in memory.
Whichever method is used by the processor to effect a change is
irrelevant to the program because it belongs to a level of abstraction
that is below that of the program's reactive logic. In the addition
example, the only operand that may have changed after the operation is
the variable A. The changing of A is the only event that has the
potential of affecting the behavior of the program. So the entire
operation should be considered a single effect that takes place during a
single execution cycle.
When
an application designer creates an effector in COSA, he or she is
presented with a choice of operations. For example one can choose between
simple assignment (e.g., A = B), and complex assignment (A =
B + C). Once an operation
is chosen, one is given a choice of addressing modes for each variable,
direct, indirect, indexed, etc... Here are some examples of possible
effector operations:
1. A = B;
2. A = B + C;
3. A[x] = B[y];
4. A = A / 3.1416;
5. A = B[x][y][z];
Note
that every operation assigns a value to a target variable. This is the
general form of all primitive COSA operations: one assignment and one
arithmetic operation, if any. This is what is meant by an atomic
operation: it cannot be divided any further while remaining within the
application's level of abstraction.
A further advantage of effector-sensor association (ESA) is that it
allows the creation of tools that automatically identify and correct weak or missing data
dependencies in an application. For example, let
us say we create a comparison sensor to detect when variable A changes to
a value greater than 0. As explained earlier, sensor A must be
associated with one or more existing effectors. This could be left to the
discretion of the developer, but why leave anything to chance if it can be
handled by the development system? At the moment of creation, the system
should immediately find every effector that can potentially change variable
A to a positive value and associate it with our sensor. This can be done
automatically, without any intervention from the programmer.
In sum, the system should enforce associations in such a way that every
comparison sensor is associated with every effector that may potentially
affect the comparison. This way, new additions (sensors and/or effectors)
to an existing program will not introduce hidden side effects which are
potentially catastrophic. The end result is extremely robust
applications.
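The association step described above amounts to a simple scan over the program's effectors. The sketch below is a minimal illustration under my own assumed names (`associate`, `ComparisonSensor`); a real development system would also inspect operand values and addressing modes, not just the target variable.

```python
# Sketch of ESA-style automatic dependency resolution: when a
# comparison sensor on variable A is created, the system scans all
# effectors, finds those whose target is A, and wires the sensor to
# each of them. Names are hypothetical.

class Effector:
    def __init__(self, name, target):
        self.name, self.target = name, target
        self.sensors = []  # sensors to be updated after this effector fires

class ComparisonSensor:
    def __init__(self, variable, predicate):
        self.variable, self.predicate = variable, predicate

def associate(sensor, effectors):
    """Wire the sensor to every effector that may affect its variable."""
    linked = [e for e in effectors if e.target == sensor.variable]
    for e in linked:
        e.sensors.append(sensor)
    return linked

effectors = [Effector("inc_A", "A"), Effector("set_B", "B"), Effector("copy_A", "A")]
sensor = ComparisonSensor("A", lambda v: v > 0)
linked = associate(sensor, effectors)
print([e.name for e in linked])  # ['inc_A', 'copy_A']
```

Because the scan is exhaustive, adding a new effector later simply re-triggers the same association pass, which is what prevents hidden side effects.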
Perhaps
the most revolutionary consequence of the automatic elimination of
blind code, as seen above, is what I call the COSA
Reliability Principle or CRP. It can be stated thus:
All COSA programs are guaranteed
to be free of internal defects regardless of their
complexity.
Here, complexity is defined simply as the
number of connections or synapses in the program. Why is the CRP true?
The reason is simple. A COSA program obeys the same reactive principles as a
logic circuit. If a
logic circuit can be guaranteed to be free of defects, so can a COSA
program. As an example, let us say
a programmer creates a simple thermostat-based program that sends a signal A whenever it detects that integer variable
T changes to a value greater
than 70 and a signal B when T changes to a value less than 60.
Does this mean that the programmer has to test every possible value of T
in order to prove that the program is correct? Of course not. The
programmer needs to test only the prescribed conditions. This is true
regardless of the complexity of the program.
Note that a condition is not a state but a state transition. Since all signal activation pathways
and all conditions in a COSA program are explicit, circuit analyzers
can be created to exhaustively test every possible eventuality (i.e., every
condition) before it happens.
Thus, all potential conflicts can be resolved long before deployment.
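The distinction between a state and a state transition in the thermostat example can be made concrete. In this sketch (the class name `TransitionSensor` is mine), a sensor fires only when its predicate changes from false to true, so holding T at 75 produces no further signals after the initial crossing.

```python
# Hedged sketch of the thermostat example: comparison sensors fire on
# state *transitions*, not states. Signal A fires only when T changes
# to a value > 70; signal B only when T changes to a value < 60.

class TransitionSensor:
    def __init__(self, predicate):
        self.predicate = predicate
        self.prev = None  # last observed truth value of the predicate

    def update(self, value):
        """Return True (fire) only on a false -> true transition."""
        now = self.predicate(value)
        fired = (self.prev is False) and now
        self.prev = now
        return fired

hot = TransitionSensor(lambda t: t > 70)   # emits "signal A"
cold = TransitionSensor(lambda t: t < 60)  # emits "signal B"

events = []
for t in [65, 72, 75, 58, 59, 64]:
    if hot.update(t):
        events.append(("A", t))
    if cold.update(t):
        events.append(("B", t))
print(events)  # [('A', 72), ('B', 58)]
```

Only two events fire over six readings: testing the program means testing these two prescribed transitions, not every possible value of T.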
As
I wrote in the Silver Bullet article,
the claim by Frederick P. Brooks and others that unreliability comes from the
difficulty of enumerating and/or understanding all the possible
states of a program is not a valid claim. The truth is that only the conditions
(state changes) to which the program is designed to react must be
tested. All other conditions are simply ignored because they are irrelevant.
A
logic detector is a special sensor that detects a logical combination of
events. Logic detectors are not to be confused with logic gates. Whereas
a logic gate operates on Boolean states, a logic detector operates on
transient signals. Like all sensors, logic detectors come in
complementary pairs, positive and negative. A positive logic detector
fires if it receives a combination of input signals that satisfies its
function. Conversely, a negative detector fires if it receives a
combination of signals that does not satisfy its function. There are three
types of logic detectors in COSA: AND, OR, and XOR.
Note:
The labels "AND", "OR", and "XOR" are used
below only for their clear correspondence with the language (English) of
the accompanying text. In an actual COSA development environment, they
would be replaced with the more traditional and universal Boolean logic
symbols.
An
AND detector is used to detect concurrency. It fires if all its input
synapses fire at the same time.

The
AND detector depicted above has three positive synapses (white) and one
negative synapse (black). The cell is updated only when a signal arrives
at a positive synapse. If the three positive signals arrive
simultaneously, the cell fires. But if the negative signal arrives at
the same time as the others, the cell will not fire. The principle of
sensory coordination (PSC) is strictly enforced during development and,
in mission-critical environments, even during normal operation. This
means that the system will not allow the developer to create an AND cell
that receives signals from complementary sensors unless, of course, the
synapses are opposites.
An
OR detector fires whenever a signal arrives at one or more of its positive
inputs.

The
OR cell shown above has three positive synapses and one negative
synapse. It will not fire if a negative signal arrives alone or at the
same time as a positive signal.
An
XOR detector fires if it receives a single signal. It will not fire if
it receives two or more signals simultaneously. Unlike Boolean XOR
operators, a COSA XOR detector can have an unlimited number of input
synapses.

Note
that the XOR detector shown above has a negative synapse. Negative
synapses are used only as a way to exclude other signals. They cannot
cause the cell to fire.
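The three detector types can be modeled as predicates over the set of signals arriving in a single cycle. This sketch is a simplification under assumed names (`and_detector`, etc.): it evaluates each detector against one cycle's arrivals and, per the text, lets negative synapses exclude but never cause firing.

```python
# Illustrative sketch of COSA logic detectors operating on transient
# signals: each cycle delivers a set of arriving signals, and the
# detector is evaluated against that cycle only (unlike a Boolean
# gate, which operates on held states). Function names are assumed.

def and_detector(positive, negative, arrivals):
    """Fires if every positive synapse fires this cycle and no negative one does."""
    return positive <= arrivals and not (negative & arrivals)

def or_detector(positive, negative, arrivals):
    """Fires if any positive synapse fires and no negative one does."""
    return bool(positive & arrivals) and not (negative & arrivals)

def xor_detector(positive, negative, arrivals):
    """Fires on exactly one positive signal, with no excluding negative
    signal. Unlike Boolean XOR, any number of input synapses is allowed."""
    return len(positive & arrivals) == 1 and not (negative & arrivals)

pos, neg = {"s1", "s2", "s3"}, {"s4"}
print(and_detector(pos, neg, {"s1", "s2", "s3"}))        # True
print(and_detector(pos, neg, {"s1", "s2", "s3", "s4"}))  # False: negative excludes
print(xor_detector(pos, neg, {"s2"}))                    # True: exactly one signal
print(xor_detector(pos, neg, {"s1", "s2"}))              # False: two signals
```

Note that `xor_detector` with only the negative signal present does not fire, matching the rule that negative synapses cannot cause firing on their own.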
There
are times when the temporal order of signals is important for decision
making. This is the function of the sequence detector. All sequence
detectors have a special input synapse which is called the master or
reference synapse. The other synapses are called slaves. A sequence
detector is updated only when a signal arrives at the master input. In
order for a sequence detector to fire, its slave signals must arrive at
their prescribed times relative to the master signal. The designer must
specify whether the prescribed setting for each input connection
represents a precise temporal interval relative to the master signal or
just a relative order of arrival or rank. That is to say, there are two
types of sequence detectors, time-based and rank-based.
In
the figure above, the negative integer next to a slave is the precise
timing value for the connection. It indicates the temporal position of the
slave relative to the arrival time of the master. As with AND detectors,
the system will enforce the principle of sensory coordination
(PSC). This means that, if two slaves receive signals from complementary
sensors, they cannot have equal temporal settings.
If
a sequence detector is rank-based, a slave setting represents the
order of arrival relative to the other connections. In this case, if two
slaves have equal settings, it does not mean that they must arrive
concurrently. It only means that they must precede a connection with a
higher setting or come after one with a lesser setting. Rank-based
detectors are perfect for the detection of key activation sequences. A
good example is the Ctrl-Alt-Delete pattern used in the MS Windows™
operating system:
As
seen in the above figure, for the detector to fire, the Ctrl-Down and
the Alt-Down signals must arrive before the Del-Down signal. But the Alt
key and the Ctrl key must not be released in between. The order in which
the Alt and Ctrl keys are pressed is unimportant.
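The Ctrl-Alt-Delete example can be sketched as a rank-based detector with Del-Down as the master synapse and the two key-down signals as equal-rank slaves. The class below is my own minimal model, not COSA's design: it fires on the master signal only if both slaves have arrived (in either order) and neither key was released in between.

```python
# Hypothetical sketch of a rank-based sequence detector for the
# Ctrl-Alt-Del pattern. The detector is updated only when the master
# signal (Del-Down) arrives; slaves of equal rank may arrive in any
# order. All names are assumptions for illustration.

class RankSequenceDetector:
    def __init__(self, master, slaves):
        self.master = master
        self.slaves = set(slaves)
        self.pending = set()  # slave signals seen so far

    def signal(self, name):
        """Feed one signal; return True if the detector fires."""
        if name in self.slaves:
            self.pending.add(name)
            return False
        if name == self.master:
            # Evaluation happens only on the master signal.
            fired = self.pending == self.slaves
            self.pending.clear()
            return fired
        return False

    def release(self, name):
        """A key-up on a slave invalidates the pattern."""
        self.pending.discard(name)

cad = RankSequenceDetector("Del-Down", ["Ctrl-Down", "Alt-Down"])
cad.signal("Alt-Down")
cad.signal("Ctrl-Down")        # order of Ctrl/Alt is unimportant
print(cad.signal("Del-Down"))  # True

cad.signal("Ctrl-Down")
cad.release("Ctrl-Down")       # Ctrl released before Del arrives
cad.signal("Alt-Down")
print(cad.signal("Del-Down"))  # False
```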
All
sequence detectors come in complementary pairs, positive and negative. A
negative detector is just like its positive sibling, except that it fires
if its prescribed temporal conditions are not met when the master signal
arrives. It is sometimes a good idea to test for both eventualities. A
good COSA-compliant design tool must be able to automatically create
negative and positive detectors at the click of a mouse button.
In
this light, it is easy to see how a pattern detector can be used as an
alarm mechanism to enforce timing constraints. For example, a software
designer may have reasons to expect certain events to always occur in a
given temporal order called an invariant order. Conversely, there may be
reasons to expect certain events to never occur in a particular temporal
order. In such cases, a temporal cell can be used to sound an alarm or
take an appropriate action whenever the expectation is violated.
Many
catastrophic failures in software have to do with one or more violated
assumptions about event timing. Adding as many error detectors as possible
is a good way to detect problems that would otherwise go unnoticed until
too late.
A
timer cell emits a signal a specified number of cycles after receiving a
start signal. Like an effector, a timer cell can have any number of
start and stop input synapses. The arrival of a stop signal inhibits the
cell's internal counter. 
By
default, a timer stops after it times out. However, the designer has the
option of specifying that the timer resets itself automatically so as to
emit a signal at regular intervals.
The
timer cell described above is called a virtual timer because its
internal counter is synchronized to the virtual master clock. There is
another type of timer cell called a real-time timer, which is
synchronized to an external clock. It can be used as part of a watchdog
to enforce real-time constraints. Sometimes, especially in embedded
applications, it is imperative that a computation finishes within a
predetermined interval. A watchdog cell can easily determine whether or
not a given signal A arrived before a timer signal B and alert the user
if necessary.
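The timer behavior described above reduces to a counter driven by the master clock. This sketch (class name and methods assumed) shows a one-shot timer and the auto-reset option that emits at regular intervals; a stop signal inhibits the counter as described.

```python
# Minimal sketch of a virtual timer cell: it counts master-clock
# cycles after a start signal and emits a timeout signal. With
# auto_reset it re-arms itself to emit periodically. Names assumed.

class TimerCell:
    def __init__(self, interval, auto_reset=False):
        self.interval = interval
        self.auto_reset = auto_reset  # re-arm to emit at regular intervals
        self.count = None             # None means the timer is stopped

    def start(self):
        self.count = 0

    def stop(self):
        # A stop signal inhibits the cell's internal counter.
        self.count = None

    def tick(self):
        """Advance one master-clock cycle; return True on timeout."""
        if self.count is None:
            return False
        self.count += 1
        if self.count >= self.interval:
            self.count = 0 if self.auto_reset else None
            return True
        return False

timer = TimerCell(interval=3)
timer.start()
print([timer.tick() for _ in range(5)])  # [False, False, True, False, False]
```

A watchdog then amounts to a sequence check: did signal A arrive before the timer's signal B? The rank-based detector sketched earlier could serve that role.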
Reactive
software is not a new idea but many of the concepts and principles explained
here are new to reactive computing. False modesty aside, I consider
several of them to be genuine advances because of their value to software
reliability and productivity. I list some of the innovations below, not necessarily in
order of importance:
- The principle of complementarity (PC).
- The principle of motor coordination (PMC).
- The principle of sensory coordination (PSC).
- The concept of effector-sensor association (ESA).
- The automatic resolution of data dependencies at the cell level.
- The separation of implementation details from organizational structure.
- Self-activated effectors.
- Comparisons as sensory processes.
- No process distinction between internal and external sensory events.
- No process distinction between internal and external effects.
- Logical consistency. See this news item.
Next: Software Composition

©2004-2006 Louis Savain
Copy and distribute freely