We aren't bacteria. Narratives concerning the rise of artificial intelligence often compare the resulting systems with today's humans. "To the advanced potential of these self-improving AIs," they say, "humans would be viewed the same way we view bacteria."* No! This completely misses the point. Today, we can see the direct connection between bacteria and mankind. We can envision the genetic line that stretches back through time's rank depths. We can relate, however tenuously, to both our ancestors and our fellow species. We know what it is to see, taste, touch. To move. We exist, defined by these conditions. We can say, "I am alive, like that cat, and that bat, and even, in a way, the bee and the tree." Of course, of course, there must be exclusion. "That rock," you say, "that rock is not alive!" And it isn't.

It's Called a Phase Transition

You know, water to ice. "Plain old matter" to Life. We see our connection to ancient ancestors and the relationships between all living things. We don't see the same connection to the rock. The phase change is from one state to another with very different properties. So we say that grubs and shrubs are life, that mold and fungus grow. Rocks, and air, and copper, and hydrogen and everything else is acted upon.

We are not bacteria to these gods, we are not relatives even in that minor sense. We are the ground, less, a brief substrate.


*This of course refers to ancient bacteria, or bacteria-like creatures: single-celled organisms in general. The idea is that we regard these bacteria as both distant ancestors and modern examples of simple anachronisms, far less powerful than ourselves. Cells are, of course, enormously complex. Human technological manipulation of chemicals may be impressive, but biological systems currently far surpass us.

Date: 2012-01-02 05:31 am (UTC)From: [personal profile] fredherman
Well, wait.

Biological life almost certainly evolves out of geochemical reactions, so even if rock is not alive, it is our ancestor.

But if and when AI evolves, it does so as the result (direct or indirect) of intentional actions--regardless of whether it is the intentional result. Nobody made choices leading to us, but somebody will have/already has made choices leading to AI. We are most certainly their parents. Quite possibly AI won't care, but related (and seminal) we are/will have been.

Date: 2012-01-02 05:37 am (UTC)From: [personal profile] fredherman
I also wonder: should AI come into existence, is it in fact the thinking entity it appears to be? Or merely an intricate mechanical process that simulates the appearance and actions of something that experiences because we have designed it to do so, but does not actually experience?

Date: 2012-01-03 03:46 am (UTC)From: [personal profile] therainingtree
Well, we're emergent properties, sure; but for the moment, there's definitely an "I" here, consciously considering your post and typing in response. To me, that's a thinking entity--sustained, sure, by an intricate mechanical process, which will eventually fail and thus dissolve it. (I'd love to live to become a post-Singularity superbeing, but doubt I'll make it, or be allowed into the club if I do.)

But--and this is completely from an outside-the-sciences perspective, so could be wrong--it seems to me that the process of creating "intelligence" out of software is one of creating software that produces output that looks to us as though it were generated by intelligence. Which sounds like creating a simulation (for us) of intelligence, rather than the thing itself.

Put it this way: If it asks you not to turn it off because you've programmed it to do so, that's not a sign of intelligence, it's just a good sim. If you haven't programmed that in but it asks you not to turn it off anyway, of its own volition, that's life.

Date: 2012-01-03 06:41 am (UTC)From: [personal profile] fredherman
Crappy Archos tablet ate my response halfway through. Artificial intelligence, huh?

Anyway: I have no problem with my actions and "choices" being predetermined (and potentially knowable, if only we knew all the starting conditions), but it sure feels like something's here experiencing them. Something's able to say that it feels like something. Thought is an effect, but I can watch it happen, remember it having happened. The fact that I'm necessarily blind to (much of) what moves me doesn't change the apparent condition of there being an I to be blind to it.

And this brings it back to AI: It certainly ought to be possible to create a software entity that simulates intelligence, but that's not necessarily the same thing as one that actually experiences, as opposed to one that sends out "Yep, I'm experiencing, honest!" output. Can it?

(What may or may not be prejudice on my part: at the same time, I'd have zero problems believing that artificial biological life grown in a lab and expressing sentience was, in fact, sentient. That's because I know that process results in something that experiences, being myself an instance of it [not the artificial lab-grown part]).

Date: 2012-01-06 05:33 am (UTC)From: [personal profile] fredherman
When you say that "...a software entity that simulates intelligence" ought to be possible while contrasting it with "...one that actually experiences," to me this underlines an insistence that the "actual experience" has some immaterial quality.

Not at all. I'm certain that my mind is entirely material, and potentially both predictable and capable of being artificially duplicated (or surpassed) by some method or other. I only question whether such can arise out of a sequence of binary signals harnessed to symbolize calculation (to us), as opposed to more overt chemical reactions.

Yes, plenty of humans fail the Turing test, but we can't therefore conclude that they don't experience, since we know by extrapolation from ourselves that they probably do--and that, therefore, they're probably not simulations of same. (We can never know this conclusively, only assume it as a practical matter.) But all that test can measure is how well a subject can fit our picture of intelligent behavior.

As I've said, if an AI requested or discussed something it had not been programmed to consider--and not by following mere logical syntax to get there, like chatterbots--I'd drop the question right there.
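To make "following mere logical syntax" concrete: a classic chatterbot in the ELIZA tradition is little more than pattern substitution. The sketch below is hypothetical and illustrative (the rules and names are invented, not any particular bot's code), but it shows why scripted output--even a plea not to be switched off--carries no trace of anything being considered:

```python
import re

# Ordered rewrite rules: first regex that matches wins.
# Conversational-looking output falls out of pure text substitution;
# there is no internal state that could count as "considering" anything.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"don't turn me off", "I was programmed to say that, which is the point."),
    (r"(.*)", "Please tell me more."),  # catch-all
]

def respond(text):
    text = text.lower().strip()
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*match.groups())

print(respond("I need more time"))  # Why do you need more time?
```

The bot "asks" and "answers" only in the sense that a lookup table does; the distinction drawn above is between output like this and a request that was never written into the rules.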

I don't at all deny that anything we do is "biological," or at least "natural." Again, all I'm questioning is whether software--a sequence of symbols, in the end--can ever have a "there" there.

Date: 2012-01-06 06:04 pm (UTC)From: [personal profile] fredherman
That's a very interesting-sounding project... but it sounds like it still has the same problem-for-me of being a simulation based on symbols, just one step removed.

I'm still trying to get my head around your view that there's no "there" for anyone, given that, whatever the mechanism, you yourself are surely experiencing, right now? Unless I'm talking to a sim, which is always a possibility, but one I arbitrarily assume is untrue for the purposes of this conversation.

But in a way, that's a demonstration of the problem-for-me: there's a distinction, for me, between the behavior you're exhibiting (which software could duplicate) and the experience you're having while performing that behavior; whereas I think you're saying that the two are one and the same?
