We aren't bacteria. Narratives concerning the rise of artificial intelligence often compare the resulting systems with today's humans.
"To the advanced potential of these self improving AIs," they say, "humans would be viewed the same way we view bacteria."* No! This is completely missing the point. Today, we can see the direct connection between bacteria and mankind. We can envision the genetic line that stretches back through time's rank depths. We can relate, however tenuously, with both our ancestors and fellow species. We know what it is to see, taste, touch. To move. We exist, defined by these conditions. We can say, "I am alive, like that cat, and that bat, and even in a way with the bee and the tree." Of course, of course, there must be exclusion. "That rock," you say, "That rock is not alive!" And it isn't.
It's Called a Phase Transition
You know, water to ice. "Plain old matter" to Life. We see our connection to ancient ancestors and the relationships between all living things. We don't see the same connection to the rock. The phase change is from one state to another with very different properties. So we say that grubs and shrubs are life, that mold and fungus grow. Rocks, and air, and copper, and hydrogen, and everything else are acted upon.
We are not bacteria to these gods, we are not relatives even in that minor sense. We are the ground, less, a brief substrate.
*This refers, of course, to ancient bacteria, or bacteria-like creatures: single-celled organisms in general. The idea is that we regard these bacteria as both distant ancestors and modern examples of simple anachronisms, far less powerful than ourselves. Cells are, of course, enormously complex. Human technological manipulation of chemicals may be impressive, but biological systems currently far surpass us.
Date: 2012-01-02 05:31 am (UTC)
Biological life almost certainly evolves out of geochemical reactions, so even if rock is not alive, it is our ancestor.
But if and when AI evolves, it does so as the result (direct or indirect) of intentional actions--regardless of whether it is the intentional result. Nobody made choices leading to us, but somebody will have/already has made choices leading to AI. We are most certainly their parents. Quite possibly AI won't care, but related (and seminal) we are/will have been.
Date: 2012-01-03 03:46 am (UTC)
But--and this is completely from an outside-the-sciences perspective, so could be wrong--it seems to me that the process of creating "intelligence" out of software is one of creating software that produces output that looks to us as though it were generated by intelligence. Which sounds like creating a simulation (for us) of intelligence, rather than the thing itself.
Put it this way: If it asks you not to turn it off because you've programmed it to do so, that's not a sign of intelligence, it's just a good sim. If you haven't programmed that in but it asks you not to turn it off anyway, of its own volition, that's life.
Date: 2012-01-03 05:09 am (UTC)
Through a strictly materialist lens: We do not make decisions. Thought is an effect, not a cause.
Date: 2012-01-03 06:41 am (UTC)
Anyway: I have no problem with my actions and "choices" being predetermined (and potentially knowable, if only we knew all the starting conditions), but it sure feels like something's here experiencing them. Something's able to say that it feels like something. Thought is an effect, but I can watch it happen, remember it having happened. The fact that I'm necessarily blind to (much of) what moves me doesn't change the apparent condition of there being an I to be blind to it.
And this brings it back to AI: It certainly ought to be possible to create a software entity that simulates intelligence, but that's not necessarily the same thing as one that actually experiences, as opposed to one that sends out "Yep, I'm experiencing, honest!" output. Can it?
(What may or may not be prejudice on my part: at the same time, I'd have zero problems believing that artificial biological life grown in a lab and expressing sentience was, in fact, sentient. That's because I know that process results in something that experiences, being myself an instance of it [not the artificial lab-grown part]).
Date: 2012-01-06 04:42 am (UTC)
We generally don't want our experience of awareness to be understood. A system that fully described sentience would essentially destroy the myths we tell about ourselves. Imagine understanding yourself the way you might understand algebra or internal combustion engines. The idea that our experience of self could be broken down and understood the same way we understand the periodic table, with controlled and predictable results, is powerfully opposed by almost everyone I encounter. Yet, bring up anything else with someone: weather, space travel, medicine, cosmology. All the workings of the so-called physical world! But no, not this one thing. Not me, not myself. So the claim must be one of immateriality, essentially supernatural.
The only useful measure of whether or not an AI has been successfully created is whether or not it convinces enough people. This is a far higher standard than we hold our fellow biologicals to.
RE: your faith in "biological life"
Dawkins talks about what he calls the extended phenotype. Your genotype is the information described by your DNA. Phenotype is the physical effects of that information: cell structure, body, behavior... which blends into the idea of the extended phenotype. Here we have all the effects that a gene has on its environment, inside or outside the body of the individual organism. So, in this way, there is nothing that we do that is not biological. Creating computer software is biology at work.
Date: 2012-01-06 05:33 am (UTC)
Not at all. I'm certain that my mind is entirely material, and potentially both predictable and capable of being artificially duplicated (or surpassed) by some method or other. I only question whether such can arise out of a sequence of binary signals harnessed to symbolize calculation (to us), as opposed to more overt chemical reactions.
Yes, plenty of humans fail the Turing test, but we can't therefore conclude that they don't experience, since we know by extrapolation from ourselves that they probably do--and that, therefore, they're probably not simulations of same. (We can never know this conclusively, only assume it as a practical matter.) But all that test can measure is how well a subject can fit our picture of intelligent behavior.
As I've said, if an AI requested or discussed something it had not been programmed to consider--and not by following mere logical syntax to get there, like chatterbots--I'd drop the question right there.
I don't at all deny that anything we do is "biological," or at least "natural." Again, all I'm questioning is whether software--a sequence of symbols, in the end--can ever have a "there" there.
Date: 2012-01-06 01:34 pm (UTC)
Have you heard of the Blue Brain Project? They're approaching the problem from the perspective of modeling actual neurons, worrying less about writing software with sentient behaviors and more about getting the (virtually constructed) physical hardware right, and seeing what happens.
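To give a flavor of the "model the neuron, see what happens" approach: the simplest standard neuron model is the leaky integrate-and-fire unit. The sketch below is only an illustration of that idea, not the Blue Brain Project's actual method (they use far more detailed biophysical models); all names and parameter values here are my own.

```python
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_reset=-65.0, v_threshold=-50.0, resistance=10.0):
    """Toy leaky integrate-and-fire neuron.

    input_current: list of input currents, one per time step.
    Returns (voltage_trace, spike_times). Units are nominal
    (mV, ms, nA) and chosen for illustration only.
    """
    v = v_rest
    trace, spikes = [], []
    for step, i_in in enumerate(input_current):
        # Leaky integration: the voltage decays toward rest
        # while being driven by the input current.
        dv = (-(v - v_rest) + resistance * i_in) / tau
        v += dv * dt
        if v >= v_threshold:        # threshold crossed: spike, then reset
            spikes.append(step * dt)
            v = v_reset
        trace.append(v)
    return trace, spikes

# A constant input strong enough to push the cell past threshold
# produces a train of spikes; no "behavior" was programmed in,
# only the membrane dynamics.
trace, spikes = simulate_lif([2.0] * 200)
```

The point of the example is the division of labor the comment describes: nothing in the code says "fire periodically"; the spiking pattern falls out of the modeled physics.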
Date: 2012-01-06 06:04 pm (UTC)
I'm still trying to get my head around your view that there's no "there" for anyone, in that, whatever the mechanism, you yourself are surely experiencing, right now? Unless I'm talking to a sim, which is always a possibility, but one I arbitrarily assume is untrue for the purposes of this conversation.
But in a way, that's a demonstration of the problem-for-me: there's a distinction, for me, between the behavior you're exhibiting (which software could duplicate) and the experience you're having while performing that behavior; whereas I think you're saying that the two are one and the same?