- To: KellySt@aol.com, email@example.com, firstname.lastname@example.org, email@example.com, firstname.lastname@example.org, email@example.com, firstname.lastname@example.org, David@InterWorld.com, email@example.com, firstname.lastname@example.org, DotarSojat@aol.com
- Subject: Robots
- From: T.L.G.vanderLinden@student.utwente.nl (Timothy van der Linden)
- Date: Tue, 12 Mar 1996 23:14:49 +0100
>>>Actually, I'm assuming that robots would have limits based on their
>>>programming. I imagine that the first working, completely automated systems
>>>would, in some ways, be less efficient in computer-controlled hands than if
>>>humans were doing the same job. For example: how do you think computers and
>>>robots would have handled the job of bringing home the Apollo 13 crew?
>>In my opinion, such robots are either intelligent or they aren't (nothing in between).
>>Say that you have figured out a machine with an IQ of 40. Then you could
>>probably link them up in such a way that 10 of them together would have an
>>IQ of 100.
>Have you ever tried putting a room full of morons together and expecting them
>to do one intelligent person's work? It doesn't work. Mobs tend to be less
>than the sum of their parts.
I was expecting someone to say this.
What if we taught each moron a different set of tasks? Together they might
be able to solve more complicated problems.
>Given that we have no idea how
>to make an A.I. work, it's hard to tell what it could do, or what its
>limitations would be. It could be far more intelligent than humans, or be
>an idiot savant: great at one thing, and hopeless in general.
As I already said: nothing in between.