Re: Argosy Mission Overhaul
- To: KellySt@aol.com, firstname.lastname@example.org, email@example.com, firstname.lastname@example.org, email@example.com, firstname.lastname@example.org, email@example.com, David@InterWorld.com, firstname.lastname@example.org, email@example.com
- Subject: Re: Argosy Mission Overhaul
- From: T.L.G.vanderLinden@student.utwente.nl (Timothy van der Linden)
- Date: Tue, 12 Mar 1996 15:11:36 +0100
>>We "know" AI-robots could make anything work but that solution would be a
>>bit too simple, unless we could come up with a rough design for such kind
>Actually, I'm assuming that robots would have limits based on their
>programming. I imagine that the first working, completely automated systems
>would, in some ways, be less efficient in computer-controlled hands than if
>humans were doing the same job. For example: how do you think computers and
>robots would have handled the job of bringing home the Apollo 13 crew?
In my opinion, such robots are either intelligent or they aren't; there is no
in-between. Say you have figured out a machine with an IQ of 40. Then you
could probably link them up in such a way that ten of them together would
have an IQ of 100.
>Suppose that computers and robots were acting as mission control. Also
>suppose that these computers were dependent on programming that told them
>what to do when hazardous "what if" situations threatened the mission (like
>an exploding oxygen tank disabling the Odyssey). If the only programming
>that the computers had for dealing with problems was what the programmers
>had anticipated, then Apollo 13 would never have made it back to Earth. The
>computers would have never used the LEM to do course corrections because
>none of the gremlin guys responsible for anticipating problems had even
>bothered to simulate using the LEM for course corrections and so that would
>never have been programmed in to be considered as a possibility (that is a
If you want to write a program that says "go there, look around, take care
of yourself, and get back", then you need a very good programmer.
>I realize this is a sketchy and even inaccurate description of just some of
>the complications of artificial intelligence. And the purpose of our
>discussion is to build a starship that will take us to TC. But needless to
>say, a computer would have lost the Apollo 13 mission because it was a dead
>ship, with no power to even run the guidance computers. So even if the
>computers had thought to use the LEM to correct the ship's course, it
>wouldn't have been able to carry that decision through for lack of power
You aren't writing about AI, but about an expert system. (The difference is
that real AI can make new interconnections on its own.)
>Why worry about a dying drone when you have a
>million others to handle its job? Fortunately, this system doesn't apply to
>human societies where we do bother to heal the sick.
Our strength is that we are all different, and making us so different takes
many years. If you simply preprogrammed us, we would all make the same
mistakes and die out quickly. (This isn't a complete argument, but I hope it
makes you see that mass AI production may not be as nice as you think.)
>The Argosy design that I have in mind is, in fact, a maser driven sail
>attached to an ion rocket with a habitat that carries colonists and explorers
>to a starsystem already visited by Pathfinding/Pathmaking robots. Those
>robots are assumed to have set up a maser system for decelerating the ship.
> This solves what has always been our biggest problem, stopping.
Of course, you still need to stop the robots first (though that may be easier
because their mass is smaller).