Path: news.cac.psu.edu!news.math.psu.edu!hudson.lm.com!newsfeed.pitt.edu!godot.cc.duq.edu!news.duke.edu!agate!msunews!netnews.upenn.edu!news.cc.swarthmore.edu!haneef
From: [email protected] (Omar Haneef '96)
Newsgroups: alt.cyberpunk
Subject: Re: Frankenstein (was Dystopia cont.)
Date: 11 Sep 1995 01:34:27 GMT
Organization: Swarthmore College Engineering, Swarthmore PA
Lines: 118
Message-ID: <[email protected]>
References: <[email protected]> <[email protected]> <[email protected]> <[email protected]>
NNTP-Posting-Host: garnet.engin.swarthmore.edu
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
X-Newsreader: TIN [version 1.2 PL2]

Nesta Stubbs ([email protected]) wrote:
> In article <[email protected]>,
> Omar Haneef '96 wrote:
> >
> > Technology doesn't have to take away our humanity. It changes us
> >which is fine IF the change is to empower us - a la Case's implants and
> >Molly's claws - but I want to bring up the potential of problematic changes.
> >I will grant that Case and Molly were fine with their toys but surely you
> >will concede that Molly as a meat puppet (or any of my examples which you
> >did not address) was disempowered and entirely not human.
> >
> 	I have a very good paper by Jaron Lanier that discusses the
> dehumanizing effects of "intelligent agents" and also such pathetic
> manifestations of them as PDAs (Apple Newton), in which you change your
> schedule and your thinking patterns in ways that make the software look
> smart. Instead of making normal human associations, you limit your
> associations to those that can be interpreted by the
> software (intelligent agents): when you're investigating a
> particular topic you may find yourself limited to searching only key
> phrases, rather than large "ideas", since search engines can't
> comprehend requests like "give me something that is sorta like
> this, but more dense, possibly with a touch of that". Instead, your
> associative thinking takes on patterns easily translatable into the
> language the intelligent agent uses.

That's pretty interesting. I remember my professor telling me that AI
research was re-assessed when we realised that making a computer do what
humans do is silly - humans can already do it. The point is to make it do
what we cannot.

> 	One example is the filter. The kill file is very, very mundane:
> it just searches for matching phrases. An intelligent agent is
> something that would follow your selections and then build an idea of
> what you do and don't like. The problem is that there is very little
> software available capable of making such "analogs" of your likes,
> dislikes and habits, since the human machine is so diverse compared to
> what we now have in "machine intelligence." Jaron argues that it is
> better to develop skills within your own machine, since the human mind
> is so much better at drawing associations and making qualitative and
> relational judgments, rather than relying on a piece of software to do
> it for you. It's the dumbing-down of the human in order to make the
> interface to the computer look better.
> 	The PDA is a perfect example. I have several friends who
> purchased them, and then watched as they went through grueling months of
> trying to structure their lives around it to make it appear useful.
> They would do such idiotic things as force themselves to use it to
> write down addresses and such, when a paper notepad was much more
> efficient. Now all of their nice toys sit and gather dust, because
> the toys were basically trying to make them conform to the designer's
> idea of "efficiency and ease of use" rather than their own. Each of us
> works in very different ways; we gather info and process it a little
> bit differently than anyone else. Trying to modify these routines in
> your brain, which are highly optimized, so that they coincide with
> some piece of external hardware is what I would call "de-humanizing."
> 	The key, I think, and the one that Lanier puts forward, is not to
> design better "intelligent agents" but rather to design better
> interfaces, so that the work is done by that which is best at it: your
> own mind. Gee, I wonder why Jaron is so into VR? Could it be that a
> good VR environment would provide a magnificent interface to
> computers? Perhaps something along the lines of Cyberspace itself is
> really nothing more than an attempt to get an interface to information
> on computers that is easier to process for the human mind, which
> normally works much better than machines at making qualitative
> judgments about information.
> 	For a while I was enamored with the idea of making my own
> software agent, and I am still working on it (using a combination of
> Python and C and some existing search engines), but now I wonder if my
> time would be better spent designing a better interface for myself,
> like perhaps getting those V IO glasses I want, and then designing the
> interface for it. Interface work is, IMO, much harder than "software
> agent" work, because the software agent allows you to structure the
> response you will get from the human side into something your machine
> can read. The real goal is to move the machine's side toward being able
> to receive something from the human side in a less "machine"-like
> manner. Keyword searches can only go so far.
> 	The MIT WebHound is one nice idea, though. It lets you rate pages
> and then makes recommendations of pages you may also like depending on
> how you and your neighbors rate them. It's really pretty nice,
> actually. I have used it a few times myself, and it does a decent job:
> about one out of three pages it recommends brings me something of
> interest. The hard part is building the database of pages within the
> WebHound and giving it a good idea of what you like. This is OK, but
> what I would prefer is an interface that allows me to see ALL of the data
> available, and then makes it easier for my mind to do the work of
> picking what I like. Our mind is much better at making these judgments
> than any algorithm AI can think of to represent them.
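
The kill file vs. intelligent agent contrast above is worth making concrete. Here is a hypothetical sketch in Python (the language Nesta says he's using) - the names and scoring scheme are invented for illustration, not taken from any real newsreader or agent: a kill file just matches phrases, while the crudest possible "agent" keeps per-word counts from articles you rated and scores new ones against them.

```python
# Hypothetical sketch: a mundane kill file next to a tiny learning filter.
# Names and the scoring scheme are illustrative, not any real software's.

def kill_file_match(article, kill_phrases):
    """The mundane kill file: drop an article if any phrase appears."""
    text = article.lower()
    return any(phrase.lower() in text for phrase in kill_phrases)

class LikeDislikeAgent:
    """Crude 'intelligent agent': count words in liked/disliked articles."""
    def __init__(self):
        self.likes = {}     # word -> count in articles you liked
        self.dislikes = {}  # word -> count in articles you disliked

    def train(self, article, liked):
        counts = self.likes if liked else self.dislikes
        for word in article.lower().split():
            counts[word] = counts.get(word, 0) + 1

    def score(self, article):
        """Positive score: the article resembles what you liked."""
        return sum(self.likes.get(w, 0) - self.dislikes.get(w, 0)
                   for w in article.lower().split())

agent = LikeDislikeAgent()
agent.train("cyberpunk novels and neural interfaces", liked=True)
agent.train("make money fast with this offer", liked=False)
print(agent.score("new cyberpunk interfaces"))             # positive
print(kill_file_match("MAKE MONEY FAST", ["make money"]))  # True
```

Even this toy makes Nesta's point: the agent only "understands" whatever you can express as countable words, so you start phrasing your interests in its vocabulary.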

The WebHound sounds like a splendid idea. How can I try it?
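
The scheme Nesta describes - you rate pages, and it recommends pages based on how you and your "neighbors" rated them - can be sketched in a few lines. This is a hypothetical illustration with invented users, page names and ratings, not MIT's actual WebHound code: find the neighbor who agrees with you most on shared pages, then suggest pages they rated highly that you haven't seen.

```python
# Hypothetical sketch of WebHound-style neighbor recommendations.
# Users, page names and ratings are invented for illustration.

ratings = {
    "you":   {"wired.html": 5, "gopher.faq": 2, "vr.page": 5},
    "nesta": {"wired.html": 5, "gopher.faq": 1, "vr.page": 4, "agents.txt": 5},
    "case":  {"wired.html": 1, "gopher.faq": 5, "spam.html": 4},
}

def distance(a, b):
    """Mean rating disagreement on pages both users rated (smaller = closer)."""
    shared = set(ratings[a]) & set(ratings[b])
    if not shared:
        return float("inf")
    return sum(abs(ratings[a][p] - ratings[b][p]) for p in shared) / len(shared)

def recommend(user):
    """Suggest unseen pages the nearest neighbor rated 4 or higher."""
    others = [u for u in ratings if u != user]
    neighbor = min(others, key=lambda u: distance(user, u))
    seen = set(ratings[user])
    return [p for p, r in ratings[neighbor].items() if p not in seen and r >= 4]

print(recommend("you"))  # "nesta" is the nearest neighbor here
```

The "one out of three" hit rate Nesta reports is plausible for exactly this reason: the judgment is delegated to whoever happens to rate most like you, not to your own mind looking at all the data.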

More to the point. Nesta, I agree that every piece of interaction with our
environment changes us, that every food affects our body chemistry and that
every piece of feedback is training. However, you will grant that there is a
difference between that and, well, manual labourers working 16-hour days to
compete with factories. You describe human beings changing in ways that are
not disturbing to me, but that does not mean there are no other ways human
beings can change that are disturbing. The search engine that forces you
into an interface built for IT is de-humanizing in the way a text adventure's
limited parser is, but surely there are worse examples.

> >Again, thinking of the meat puppets or the meaties or any of the other human
> >bodies but not minds makes me think that technology may transform us to a
> >point where language will apply different terms to us.

> 	It already has done that. Face it - at least for myself, I always put
> people into little "categories" when I see them, or associate them
> with a given media-generated image. Fuck yeah, I'm controlled by what I
> see, and what media gets into my head. For instance, Omar, like I
> described you in the other post, I see you as a college kid, slightly
> snobbish and a little out of touch for where I come from, walking into
> a bar that was right next to the coffee shop you meant to hit. That's
> my image of you, be it good or bad; I don't know, it just is. And I
> develop images like that for all the people I meet, or hear about. It
> may or may not be the best way of organizing all of the input I get,
> but it's what I use now, and yes, I am developing some other methods.

Hey, I hope you still think I am human!

-Omar Haneef
