

> From: Johnny Thunderbird [mailto:jthunderbird@nternet.com]
> The point is, tech should remain humanly comprehensible. You should be
> able to grasp the fundamentals of the design so you could duplicate the
> functionality by another route, if need be, after a primary device has
> failed. But I don't think the same set of rules applies to computers as
> to mechanical systems.

That was my question: the authors seemed to imply that this was a failing of
ALL organized systems. That would include software.

> Logic circuitry can be made fault-tolerant in the extreme; it is quite
> feasible today to build a CPU-memory module which, if kept cold enough
> and shielded from ionizing radiation, could be expected to grind
> continuously for a thousand years before its first error, let alone any
> kind of hard failure. If data were coupled in and out optically, you
> could also power the thing photovoltaically, it would use so little
> power. Put that together with a holographic mass storage, and you have
> a chunk of glass that just doesn't stop thinking, doesn't forget
> anything, and just doesn't fail.

Sounds like you just described a magic crystal ball; at least, that is what
it would appear to be from the outside...

> It's an error to want dumb tools just because they're simpler. That is,
> if digitally-controlled systems make sense because of their precision,
> and if they can be designed so failure of the control unit will not
> sabotage the primary function of the system, but just degrade its
> performance, then by all means go for fancy. From now on, humanity
> won't be able to forget anything, including starship crews. Knowledge
> is strictly cumulative from this point, including the knowledge of how
> to build digital circuitry. We have machines which keep us from
> forgetting. We always will.

I didn't mean simpler per se, just that robustness should be given more
weight in the design process than intelligence. A case to illustrate, using
your own comment above:

Rather than build an expensive, screaming, state-of-the-art supercomputer,
build a somewhat less expensive, less cutting-edge array of redundant
microcomputers. You get almost the same amount of sheer processing power,
the technology is more mature and therefore presumably more reliable, and if
one microcomputer (out of hundreds, or even thousands) fails, we have lost
only a small fraction of total capacity. Now repeat that same exercise over
and over again on a local scale, so that individual subsystems are not
dependent upon the central processing unit to function; each one of them is
also redundant. Then look at the software that runs everything and design it
to be fault tolerant, auto-recovering, and if need be self-repairing. This
system is by no means "simpler"; it's just not fragile, that's all.
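To make the idea concrete, here is a toy sketch in Python of that kind of
graceful degradation. Everything in it is hypothetical (the worker class,
the squaring "workload"): the point is only that when tasks are spread over
many small units, losing one unit mid-run costs a sliver of capacity instead
of the whole system.

```python
# Toy model of a redundant array with graceful degradation.
# One worker out of a hundred fails during the run; the scheduler
# marks the fault and routes the task to a surviving worker.

class Worker:
    def __init__(self, wid, doomed=False):
        self.wid = wid
        self.alive = True
        self.doomed = doomed  # this unit will fail on its first task

    def compute(self, x):
        if self.doomed:
            self.alive = False
            self.doomed = False
        if not self.alive:
            raise RuntimeError(f"worker {self.wid} failed")
        return x * x  # stand-in for real processing


def run_with_redundancy(tasks, workers):
    """Assign each task to a live worker; route around failures."""
    results = []
    for i, task in enumerate(tasks):
        attempt = i
        while True:
            live = [w for w in workers if w.alive]
            if not live:
                raise RuntimeError("total system failure")
            worker = live[attempt % len(live)]
            try:
                results.append(worker.compute(task))
                break
            except RuntimeError:
                worker.alive = False  # mark the fault, retry elsewhere
                attempt += 1
    return results


workers = [Worker(i, doomed=(i == 0)) for i in range(100)]
results = run_with_redundancy(list(range(10)), workers)
# one unit died mid-run; every task still completed, and
# 99 of 100 workers remain available
```

The "self-repairing" part of the argument would go a step further than this
sketch: instead of merely marking the dead unit, the system would also spawn
or fabricate a replacement.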

> I am totally in agreement that the crew of a starship should know how
> to build any part of that ship, but I believe that efficiency should be
> a stronger criterion of design than sheer simplicity.

With modern tech, all that is really necessary is real-time access to that
database of knowledge you mentioned. A reasonably smart person can follow
instructions in the database to correct most problems. The ability to build
a whole new ship from scratch could even be included, along with the tech to
build the tech to build the...

In some ways we have already touched on this subject, when we discussed
sending a semi-one-way mission ahead to build either a fueling station or a
beam transmitter for the follow-on missions. There is little difference in
the amount of knowledge required either way. Given a reasonable machine-tool
base and decent automation to start with, it would be doable, just not as
efficiently as we are used to doing it now.