Member # 163
posted 08. October 2003 14:23
The title of this topic should be self-explanatory, at least to those acquainted with my occasional contributions to Brainstorms.
For those who are not clear on what I mean by the irreducible complexity of metaphysical questions, I'm not sure how much I can help you, but I will give you a 'simple' example:
Suppose we take the Golden Rule as fundamental: 'do unto others as we would that they do unto us'. (You may notice that this is the positive, not the negative, formulation of the rule; the positive formulation addresses the negative by implication, but not the other way around.) A possible question, then, is: how do we know when this rule has been violated? There is at least a minimum set of ideas that we must not fail to account for when applying the Golden Rule in the world that we all know. The Golden Rule would be quite simple for everyone to understand were it not that the world contains instances of its violation. If the Golden Rule cannot rightly be violated, then how do we deal with instances of its violation where it might seem that we would be violating it ourselves in the very attempt to stop its being violated? Do we attack the liberty of the rapist in the act, or not?
To think that pacifism is the way never to violate the Golden Rule is to take the Golden Rule in a mindlessly absolutist sense, rendering the rule itself essentially a product of a mindless, unfeeling machine that somehow rules over us mindful, feeling beings. If the Golden Rule is to be meaningful, not to mention fundamental, then it must be able to address instances of its violation. It is not like the button in the cartoon where a curious child sees a button on the dashboard of his dad's car that reads "Do not push this button". Nor is it like the cartoon where a person is perplexed as to how he is ever going to microwave his frozen dinner according to the instructions on its package when the front of the package reads, in bold, "Keep Frozen". The Golden Rule is simple enough in itself, but this does not mean it is unintelligent in the face of its own violation. One could not allow all life to be annihilated by some violent act, nor allow life to be degraded by some erroneous assumptions.
You see, if we take the Golden Rule to be as simple as it would be in a perfect world, then we would be violating it simply by posing a word of disagreement to anyone (something like 'sensitivity training'). And yet the rule applies equally in the direction of courtesy. And, again, in the direction of correction, where the person corrected is thankful that you took the trouble to correct them. As 'simple' as it is, the genuine Golden Rule is irreducibly complex, and so must be our understanding of it. That is metaphysics.
Now, on to the 'main' topic of this post.
First, a recollection, and then a re-formed question:
In the first online chat hosted by ISCID (see http://www.iscid.org/chat-events.php and http://www.iscid.org/arewespiritualmachines-chat.php ), for which Ray Kurzweil was the main featured guest, I asked Mr. Kurzweil the following question.
While we can have the idea that there are things outside ourself that are aware only because we ourself are aware, nearly everyone grants as objective fact that there are things outside himself that do not possess awareness (that do not feel, either sensorily or emotionally, that do not think, etc.). In my opinion, central to the problem of multiple realizability is the question of how, in the first place, we each get the very idea that there are things that are not aware. How do you, personally, know that there are things that are not aware?
Here is Kurzweil's response:
I don't know that absolutely. Maybe I'm the only person who's conscious. That's consistent with a dream. Even by common wisdom, there seem to be both people and objects in my dream that are outside myself, but clearly they were created in myself and are part of me, they are mental constructs in my own brain. Or maybe every other object is conscious. I'm not sure what that would mean as some objects may not have much to be conscious of. It's hard to even define what each object or thing is that might be conscious as there are no clear boundaries. Or maybe there's more than one conscious awareness associated with my own brain and body. There are plenty of hints along these lines with multiple personalities, or people who appear to do fine with only half a brain (either half will do). We appear to be programmed with the idea that there are 'things' outside of our self, and some are conscious, and some are not. That's how we're constructed. But it's not entirely clear that this intuitive conception matches perfectly to ultimate ontology.
(end of response)
Speaking for myself and for those who agree: we are aware, and our awareness comes implicit with (at least) the three following assumptions: 1) there are things outside ourself; 2) there are things outside ourself that are aware; 3) there are things outside ourself that are not aware. (These three assumptions are central to the SETI program.)
In my opinion, central to the philosophical problem of multiple realizability is the question of whether or not these assumptions necessarily follow logically from our awareness. Mr. Kurzweil, your reply seems to me to be that we are merely programmed with these assumptions, implying that we can logically be programmed otherwise. Are you certain that this is just programming, or do you admit that you are ignorant of whether or not your own awareness necessarily implies these assumptions?
Another question: Are you certain as to whether these assumptions are fundamental to, or, contrarily, merely coincident with, the existence of the field of research known as AI? If 'Strong' AI is to mean much, then it must be concerned with human technological reproduction of awareness per se, not limited to human technological reproduction of human awareness. Can an object made entirely of non-aware parts be aware of anything? I find the positive answer to this question self-evidently false. Yet that positive answer is what 'Strong' AI at least originated by assuming, and this assumption drew its intellectual strength from Neo-Darwinism (represented, in my opinion, by the views of people like Richard Dawkins).
Some people would argue that all objects have a 'subjective' (i.e., conscious) sense of something. But when Kurzweil says, "...maybe every other object is conscious. I'm not sure what that would mean as some objects may not have much to be conscious of", he seems to assume that consciousness comes implicit with the ability and/or the desire to act (or to act in an intelligently reciprocal manner with you or me, suggesting consciousness). I don't think this assumption is logically compulsory from a 'Strong' AI metaphysic (nor is it necessary from a purely medical standpoint, where, for example, a person may be totally paralyzed as an active human and yet be able to understand everything going on around him). It does not seem to me a logical impossibility that some objects have no 'subjective' sense. I think that, for most objects, there is "nobody in there."
But if we take the 'Strong' AI metaphysic as the reality of the matter, then we can have no reason to assume that a person is conscious of a particular thing at a moment when he is not expressing anything in regard to it. I, for instance, must not be conscious of the sky at any moment when my behavior does not suggest that I am aware of the sky; or, I must not be thinking anything (nor intending to say anything) in those moments when I am not talking. Then again, we can have no reason to assume that a rock does not see, feel, and think everything that we see, feel, and think.
As I hope is plain here, the way in which we all take for granted that some object is conscious is by way of two things: probability (pertaining to the behavior and to the constitution/origin of the object), and empathy. We recognize that, without the fact that we think and feel, we would be incapable of assuming that our parents, siblings, or children think and feel. We assume this, in part, because we have not engineered them into existence. This is a negative, or implicit, prediction, and of a metaphysical sort. An example of a non-metaphysical negative prediction would be going to town any old day and predicting that we are not going to find the streets filled with elephants doing circus tricks: we simply do not think of it happening in the first place, so we would rightly be surprised if it did. Metaphysical negative predictions are a lot stronger in every sense. We would be surprised, metaphysically, to be soberly told that an object made entirely of non-conscious parts can be conscious ('Strong' AI). Only if we reduce consciousness to behavior can we have the false epiphany that 'Strong' AI is logically possible.
The bigger question for us is not whether it is logically possible, but why someone would want to believe that it is logically possible. What is the motive for this belief? The deepest motive, I mean. I do not think that 'Strong' AI'ers have even begun to deal with the irreducible complexity of their motive. They are focused mainly on the idea that it is logically possible (as opposed to logically impossible), and do not cognize the main reason why they wish to think it so. We are, after all, motivated beings, and such an idea as 'Strong' AI is hardly metaphysically neutral. It is not a simple pastime; it has every level of metaphysical motive. Even from the very beginning.
Those who think I have violated the Golden Rule by this post should feel free to do as they see fit. I'm just here to offer what I think are truths in response to what I think are errors.
[ 24. October 2003, 00:08: Message edited by: Danpech ]