
starship-design: Re: unmanned missions



Lindberg writes:
 > A common assumption is that any AI will think "like a human" and
 > therefore be subject to human mental troubles.  This is theoretically
 > possible, but almost completely useless for practical applications. 

Considering that we don't fully understand human cognition or the
role emotions play in it, I don't think it's justified to say that
AIs will be emotionless drones.  Lacking the biochemical factors
that influence human emotion, the most that can be said is that
AIs probably won't have *human* emotions, but they could very well
have emotional behavior of some sort.

Greg Bear's _Queen of Angels_ has, as one of its four or so plot
threads, an unmanned interstellar probe called AXIS that is
controlled by a "thinker", and the efforts of an earthbound
thinker to diagnose the problems AXIS is having, which center on
its development of consciousness.  It's an interesting book for
several reasons, not the least of which is that it touches on
some of the issues involved in using an AI to operate an
autonomous interstellar probe.