• Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

What is Conscious? (Split from 'Morality in Bible stories that you don't understand')

For those agreeing that non-human creatures are "conscious" it might be fun to see where the line is drawn. Are bacteria conscious? Trees? Jellyfish? Coral colonies? Snails? Ants? Ant colonies? Volcanoes? The planet Jupiter?

Very simple living creatures are much more complex than most man-designed sensors and switches. I think those participating in this discussion should state where they stand on the questions above.

I'll start. I follow Julian Jaynes who ascribes subjective consciousness only to H. sapiens, and indeed only to most H. sapiens during or after the Age of Empires.

I follow Jaynes NOT out of certainty that he is correct, but by default since I lack the knowledge, imagination and gumption to form my own contrary viewpoint.

(Roger Penrose thinks that human cognition is more intelligent and "conscious" than one might expect BECAUSE it takes advantage of quantum effects in neurons' microstructures. Perhaps he's correct.)


DBT said:
Roughly speaking, defined as ''the state of being aware of and responsive to one's surroundings,''
Self driving cars, computers, AI, etc, are aware of nothing. Sensors and processors and mechanical actions work unconsciously....as does most of the activity of a brain.
I was working from YOUR definition. Now we're going in circles. WHY do you know the one is conscious, the other not?

I'm not saying you're wrong. Just that your definition does not define.

Equivocation.(sic)

... or, being aware of your awareness. ...

You added the new phrase "being aware of your awareness." This may conform to Jaynes' view, and certainly does restrict. But I'm not sure how easy it will be to apply this test.

What I added was more information. You can't say everything that may need to be said in a brief remark, so more information may be needed.

No. You added an important five-word phrase: "being aware of your awareness." With more than five unnecessary words, your "brief remark" could have been made even briefer yet more correct! :)

Being aware is to be aware of your awareness; there is no need to say 'being aware of your awareness.'

I agree to disagree here. "Being aware of your awareness" implies self-awareness, not the same thing as awareness. In fact I'm not at all sure that self-awareness implies awareness of the self-awareness! :)
 
DBT said:
Roughly speaking, defined as ''the state of being aware of and responsive to one's surroundings,''
Self driving cars, computers, AI, etc, are aware of nothing. Sensors and processors and mechanical actions work unconsciously....as does most of the activity of a brain.
I was working from YOUR definition. Now we're going in circles. WHY do you know the one is conscious, the other not?

I'm not saying you're wrong. Just that your definition does not define.

Equivocation.(sic)

The ability to detect does not equate to conscious awareness. A motion sensor, for instance, detects motion without being conscious or aware of motion, objects in motion, or itself or what it is doing. It functions unconsciously.

To be aware is to be conscious. A mechanical detector is not aware. A motion sensor is not in a 'state of being aware,' just the ability to detect what it was designed to detect.
...
''Consciousness is being aware of your surrounding and being able to process that information when it is given to you. This is in contrast with conscious awareness, which is being aware of that consciousness, or, being aware of your awareness. Consciousness can exist without awareness, but awareness cannot exist without consciousness.''

''Conscious awareness is a twofold state of being, in which the mind is both awake as well as cognizant of its surroundings''

You added the new phrase "being aware of your awareness." This may conform to Jaynes' view, and certainly does restrict. But I'm not sure how easy it will be to apply this test.

A millipede alters its travel path to cope with obstacles or poison. Is it aware, but not aware of its awareness? How about a tiger prowling through the jungle? The first chapter in Jaynes' book points out that the most creative human thinking is usually unconscious or subconscious. Some brain skills may require LACK of awareness: A pianist will falter if he's overly aware that he's playing the piano.

Here's an interesting 6-page article:
Plant Consciousness: The Fascinating Evidence Showing Plants Have Human Level Intelligence, Feelings, Pain and More
Here's another article.

When I read this article, I conclude that a tree might "think" in the same sense that a prowling tiger or a human pianist does, but may lack "awareness of awareness" -- the type of cognition Jaynes calls "subjective consciousness" and seeks to define and describe in great detail.
More, brain function is as much a process of insulation as of transmission.

I would pose that the ideas of "unconscious" and "subconscious" are only relative in nature: you are not conscious of whatever-it-is, but some other part of the brain IS conscious of it, or you have only limited access to be aware of it.

It's an anthropic fallacy to think that, just because you can't see it directly or observe it directly, there is no experience happening somewhere by something, especially since there is behavior being rendered there according, ostensibly, to exactly some experience of phenomena by some stuff that acts contingently.

Indeed, some part of the pianist must lack specific awareness that "I am playing the piano" of the part that is playing the piano, just as some part of the eye must lack awareness of the blue photons hitting another part of the eye, so that a later portion can observe a line from where the line is not. The whole point is that each cone or rod collects only from a single narrow direction; otherwise it would only be capable of detecting general light levels and incapable of inferring details.

Indeed the only thing you need for awareness of awareness is a memory of some kind on the output or on some interstitial value, whose value comes back in as an input. Really it's more "awareness of past awareness" though since you can't remember the future or even the present.
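The feedback idea in that last paragraph can be sketched as a toy program (a hypothetical illustration only, not a claim about real brains): a system whose previous output is routed back in as an input can report on its own prior state, and that report is always one step behind.

```python
# Toy sketch (hypothetical): a system that feeds its own output back in,
# giving it "awareness of past awareness" via a one-step memory.

def sense(stimulus):
    """First-order 'awareness': classify the raw input."""
    return "bright" if stimulus > 0.5 else "dark"

def step(stimulus, memory):
    """One tick: perceive the stimulus AND report the previous percept."""
    percept = sense(stimulus)
    # Second-order report: what was I aware of last tick?
    meta = f"now {percept}, previously {memory}"
    return percept, meta

memory = None  # no prior state on the first tick
for s in [0.9, 0.2, 0.7]:
    memory, report = step(s, memory)
    print(report)
# Each report refers only to the PAST percept: the fed-back memory
# is always one step behind, as the post notes.
```

The names `sense` and `step` are invented for the sketch; the only structural point being illustrated is that an output (or interstitial value) stored in memory and re-presented as an input yields "awareness of past awareness."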
 
May I kindly continue to explicate Jaynes a bit? Here is the Conclusion section of one of his book's 18 chapters.
Tell me what you think.

Julian Jaynes Book II Chapter 3 Conclusion said:
THE CAUSES OF CONSCIOUSNESS

This chapter must not be construed as presenting any evidence about the origin of consciousness. That is the burden of several ensuing chapters. My purpose in this chapter has been descriptive and theoretical, to paint a picture of plausibility, of how and why a huge alteration in human mentality could have occurred toward the end of the second millennium B.C.

In summary, I have sketched out several factors at work in the great transilience from the bicameral mind to consciousness:
(1) the weakening of the auditory by the advent of writing;
(2) the inherent fragility of hallucinatory control;
(3) the unworkableness of gods in the chaos of historical upheaval;
(4) the positing of internal cause in the observation of difference in others;
(5) the acquisition of narratization from epics;
(6) the survival value of deceit; and
(7) a modicum of natural selection.

I would conclude by bringing up the question of the strictness of all this. Did consciousness really come de novo into the world only at this time? Is it not possible that certain individuals at least might have been conscious in much earlier time? Possibly yes. As individuals differ in mentality today, so in past ages it might have been possible that one man alone, or more possibly a cult or clique, began to develop a metaphored space with analog selves. But such aberrant mentality in a bicameral theocracy would, I think, be short-lived and scarcely what we mean by consciousness today. It is the cultural norm that we are here concerned with, and the evidence that that cultural norm underwent a dramatic change is the substance of the following chapters. The three areas of the world where this transilience can be most easily observed are Mesopotamia, Greece, and among the bicameral refugees. We shall be discussing these in turn.
 
For those agreeing that non-human creatures are "conscious" it might be fun to see where the line is drawn. Are bacteria conscious? Trees? Jellyfish? Coral colonies? Snails? Ants? Ant colonies? Volcanoes? The planet Jupiter?

Very simple living creatures are much more complex than most man-designed sensors and switches. I think those participating in this discussion should state where they stand on the questions above.

Well, yes, but this doesn't establish the presence of consciousness in computers, which is Jarhyn's claim. I haven't come across anyone besides him who has made that claim.

I'll start. I follow Julian Jaynes who ascribes subjective consciousness only to H. sapiens, and indeed only to most H. sapiens during or after the Age of Empires.

I follow Jaynes NOT out of certainty that he is correct, but by default since I lack the knowledge, imagination and gumption to form my own contrary viewpoint.

(Roger Penrose thinks that human cognition is more intelligent and "conscious" than one might expect BECAUSE it takes advantage of quantum effects in neurons' microstructures. Perhaps he's correct.)

My dispute is with Jarhyn's claim for the presence of consciousness in computers. Are you supporting his claim?



DBT said:
Roughly speaking, defined as ''the state of being aware of and responsive to one's surroundings,''
Self driving cars, computers, AI, etc, are aware of nothing. Sensors and processors and mechanical actions work unconsciously....as does most of the activity of a brain.
I was working from YOUR definition. Now we're going in circles. WHY do you know the one is conscious, the other not?

I'm not saying you're wrong. Just that your definition does not define.

Equivocation.(sic)

... or, being aware of your awareness. ...

You added the new phrase "being aware of your awareness." This may conform to Jaynes' view, and certainly does restrict. But I'm not sure how easy it will be to apply this test.

What I added was more information. You can't say everything that may need to be said in a brief remark, so more information may be needed.

No. You added an important five-word phrase: "being aware of your awareness." With more than five unnecessary words, your "brief remark" could have been made even briefer yet more correct! :)

Being conscious is to be aware of yourself and your surroundings.

And it's still beside the point. Which is Jarhyn's claim for the presence of consciousness in computers.

My point is that we know what it is to be conscious, to be aware of ourselves and our environment, while computers do not have the necessary complexity, architecture or design for generating consciousness as we experience it.


Being aware is to be aware of your awareness; there is no need to say 'being aware of your awareness.'

I agree to disagree here. "Being aware of your awareness" implies self-awareness, not the same thing as awareness. In fact I'm not at all sure that self-awareness implies awareness of the self-awareness! :)

Self-awareness is the point. Not only self-awareness, but being aware of your environment. Which falls into the broader category of 'consciousness.'

To be conscious is to be aware: to see, to feel, to touch, to smell, to taste.

Which is not something being attributed to computers, as far as I know, Jarhyn being the only one here claiming that.

Are you supporting his claim?
 
Which is not something being attributed to computers, as far as I know, Jarhyn being the only one here claiming that.

Are you supporting his claim?

:confused: :confused2: :confused: :confused2: :confused: :confused2: :confused: :confused2:

(A) I've expressed my opinion, however ignorant, on the entire matter already.
(B) Is Jarhyn's allegedly wrong position the ONLY question in this subthread? Was it wrong for me to participate at all if I thought I had something to contribute beyond a Yea or Nay on Jarhyn's proposition?
(C) You seem to treat "awareness" and "self-awareness" as nearly synonymous. Am I missing something?
(D) You have still not addressed my questions, not commented on Jaynes' position.
 
Which is not something being attributed to computers, as far as I know, Jarhyn being the only one here claiming that.

Are you supporting his claim?

:confused: :confused2: :confused: :confused2: :confused: :confused2: :confused: :confused2:

(A) I've expressed my opinion, however ignorant, on the entire matter already.

Yet I am still not quite sure what your point is. Quibbling over the distinction between being aware and being self-aware seems pointless because they are related.

(B) Is Jarhyn's allegedly wrong position the ONLY question in this subthread? Was it wrong for me to participate at all if I thought I had something to contribute beyond a Yea or Nay on Jarhyn's proposition?

Jarhyn's claim of consciousness and will in computers is precisely what I was disputing. The nature of consciousness is part of the issue, but I don't see any evidence to support the claim that computers have a conscious experience of the world as we experience mind and consciousness.

(C) You seem to treat "awareness" and "self-awareness" as nearly synonymous. Am I missing something?

I don't know why you made an issue of it. Self-awareness specifically refers to being aware of oneself, your thoughts and feelings, how you relate to others. Awareness refers to being aware of something, anything, the world, existence, our surroundings, etc.

(D) You have still not addressed my questions, not commented on Jaynes' position.

What questions? I'm sure I have, but maybe I have overlooked something.

Not commented on Jarhyn's position? Come on. I questioned his position. I challenged his claim. I pointed out that he has shown no evidence to support what appears to be a fantastic claim, that computers have consciousness and will.

My comment is that the claim is absurd, that there is no evidence to support it. He, you or anyone can try to prove the proposition.
 
What even is consciousness? What creatures or machines are conscious?
  • Define consciousness
  • Humans are presumably conscious. What about chimps, whales, dogs, birds, or octopi?
    Other creatures? Ants? Ant colonies? Trees? A planet? A storm?
  • Are artificial switching devices conscious? If not, when, if ever, will a robot, auto-car, or chat-bot become conscious?
This list barely scratches the surface of interesting and important questions.

Another thread was hijacked to discuss these questions. I've decided to move those messages here, making a new thread.

Perhaps those interested in participating can post a summary or manifesto of their position.
 
Consciousness can be a name for a subjective human perception.

Love is another.

What is pain? We see somebody fall down and hear them say 'Ouch, that hurts, I am in pain.' I fall down and I associate the words pain and ouch with my experience.

Same with consciousness. We understand how the word relates to our experience, but we cannot define it.

Answers range from the scientific neuroscience to the philosophical to the psychological to mystical-supernatural.

As there is no evidence of a mind-body duality, it all comes down to how our brain is wired and works.

If you define it with a set of attributes then you have to test it, which is the rub: subjective terms used to define a subjective concept. Which to me describes an aspect of philosophy.

Modern psychology does have an experimental basis, and there are probably relevant studies and experiments. Are chimps conscious as humans are?

Kick a dog and it yelps and whimpers; do I conclude the dog feels 'pain' as I do?
 
I am sympathetic to the idea of panpsychism, though I suspect complex consciousness may require a complex brain.

Today Google News popped up with Biologist Says the Sun May Be Conscious "Consciousness does not need to be confined to brains."

Futurism.Com has other articles on consciousness including an article titled "Neuroscientist Warns That Current Generation AIs Are Sociopaths" -- though the author believes it is the LACK of consciousness that makes AIs sociopathic.

I keep on saying that I think people assumed consciousness was "deeper" than it was.

All complex statements on data require complex systems to render those statements.

My thought here is that people mistake the fundamental process of rendering a system "conscious" for this magical hidden "ensouled" property that people seem to be trying to imply exists.

Whenever I hear someone say "is it conscious" to me it sounds like they are trying to ask "does it have a soul?"

All of that is built on so much intellectually rotten wood that the whole structure needs to be scrapped and rebuilt exclusively from well understood concepts.

Yet again, being a neuroscientist will not tell you how behavior happens. Being a molecular biologist will tell you jack shit about consciousness, awareness, and behavior. Many such people are well equipped to do nothing but talk out their ass on the nature of behavior and agency. Unless the person studying it is already aware of how to (and that they may) abstract and de-abstract neurons into switching and logical systems, they just aren't going to see it.

Turing and the people of his day largely assumed that computational systems, despite neurons being switches, acted in a different capacity than neurons do.

In the past I've seen panpsychism sold as the idea of the brain as a "receiver for consciousness", which is in some ways close but still misses the mark. It is close because the brain receives information, not consciousness, and the brain converts that information into meta-information: information about the information.

So your cones become conscious of "> (small quantity) joules of blue light", and pass that on to another structure that generates consciousness of a line existing across many such instances of consciousness of blue light, forming consciousness of a blue line; that then passes on to accrete into consciousness of a blue letter, etc.

The photon, the information, was received by the brain, wherein the switches in it (the neurons) mediated state transitions that reflected metadata about the information (not merely the presence or absence of energy, but a statement about the shape of that energy), converting it into consciousness thanks to the constraints within the switching system.

Eventually, even that "consciousness of a line" (the heuristic, with some certainty of signal being greater than noise, that encodes line-ness) requires a fairly complex system just by itself. This can be contrasted with the greater complexity required in any image-identification process to take "line" to "series of words containing a treatise on consciousness".

In this way complexity is necessary for all consciousness, and only constrained families of complexity (only certain structures within the class of "switches") seem capable of mediating such encoding and identifying meta-information, but not all complexity implies such organization.

You won't get consciousness of "A && B?" without specifically having an object within the family of "and gates" taking in the A and the B as differential signals no matter how intricate the design.
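The "and gate" point above can be made concrete with a toy illustration (hypothetical names, not a model of neurons): however intricate a system is, if nothing in it actually combines A and B as differential inputs, its output can carry no information about the conjunction.

```python
# Toy illustration (hypothetical): complexity alone does not register "A && B";
# only a structure that takes both A and B as inputs can do so.

def and_gate(a: bool, b: bool) -> bool:
    """The minimal structure that registers the conjunction:
    its output responds differentially to A-and-B together."""
    return a and b

def elaborate_but_blind(a: bool, b: bool) -> bool:
    """Arbitrarily intricate processing of A alone. However 'complex',
    its output carries no information about b, hence none about A && B."""
    x = a
    for _ in range(1000):      # lots of machinery...
        x = not (not x)        # ...that never touches b
    return x

# The gate distinguishes all four input cases:
assert [and_gate(a, b) for a in (False, True) for b in (False, True)] == \
       [False, False, False, True]
# The blind pipeline cannot: its output is identical whether b is True or False.
assert elaborate_but_blind(True, True) == elaborate_but_blind(True, False)
```

The contrast is the post's claim in miniature: membership in the "family of and gates" is about wiring (taking both signals in), not about how much computation happens elsewhere in the design.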
 

How the brain generates consciousness is not understood.

Well, right … and, given this fact, maybe it behooves everyone to be a little more cautious about sweeping declarations of what is, or isn’t true, or what can or cannot be true, about brains, minds, computers, etc.?

David Chalmers points out that we have a functionalist account of how brains generate consciousness, through the firing of neurons etc., and we can map those firings to specific experiences (just this morning I was reading about a study that used brain scans to determine that dogs understand nouns, and plenty of them, in the same way we do), but we still don’t know what consciousness IS — how these neuron firings produce, for example, the quale red.

On the other hand, there are eliminativists who say that once we identify neuron firings and such, we’ve explained the experience of red — there is nothing more to explain. I find that dubious.

The idea seems to be that mind is an emergent property of brain, in the same way that wetness is an emergent property of H2O. And that while individual water molecules are not wet, the arrangement as a whole can be wet, or gaseous, or solid, depending on temperature and air pressure.

And I’d say that is a perfectly OK explanation for how wetness is an emergent property of water molecules, but would counter that we have no such stepwise account of mind emerging from brain. So the Hard Problem of Consciousness, as Chalmers called it, remains unresolved.

I think to have a discussion like this, to avoid talking past one another, we should agree on some operational definitions of the terms being used. Personally I would operationally define intelligence as the ability to solve problems, and consciousness as being aware of one’s environment, and perhaps self-aware, too, and aware that we are aware (meta-consciousness). However, I think it’s possible to be conscious without being meta-conscious, at least in principle, though maybe that’s not true in fact. If these definitions are valid, it may be possible to be intelligent without being conscious, and even to be conscious without being intelligent.

We presume that a chess-playing computer like Deep Blue is really, really intelligent at solving chess problems — it beat the world chess champ, for example, back in 1997. I know that today, I lose chess games to computers all the time, provided they are set to maximum mastery level. The lower levels I can beat easily.

But we also presume that while Deep Blue was really intelligent at solving chess problems, it had no consciousness whatever of what it was doing, or why it was doing it, or who or what it was. Perhaps that’s false, though — how would we really know, after all? Precisely because we DON’T know how consciousness arises from brains, as you point out, we ought to be very cautious about saying what is and isn’t possible.

Today we have the growing field of artificial general intelligence, which will raise serious concerns as it gets better and better. If we were to accept the idea that machines can be intelligent without being conscious, then this raises the serious problem of having what someone in this thread called sociopathic machines. One could imagine a dystopian scenario in which we turn over to AGI machines certain problems to intelligently solve, like, for example, finding a solution to racism. And the machine, not being conscious and therefore possessing no ethical or moral sense, and no empathy for others, might decide that the most straightforward solution to the problem is to eliminate all people who are not white — problem solved!

OTOH, perhaps it’s not possible to have intelligent machines without them possessing at least some degree of consciousness. This seems to be Jarhyn’s position. You deny this, but then again, you also correctly note that we don’t know how consciousness arises from brains, and lacking that knowledge, how can you definitively say Jarhyn is wrong?

Then, too, there is this thing about free will, which of course you and I have discussed, sometimes acrimoniously. I have taken the stand that consciousness is selected for because it provides a huge evolutionary advantage over organisms that act merely on instinct — consciousness implies the ability to override our instincts and make genuine choices among genuinely available options. If consciousness is not able to do that — if free will is just an illusion, and in reality we are all taking our marching orders from a chain of causal events stretching back to the big bang — of what use is consciousness? I see no selective advantage to it, and therefore no reason for it to have evolved. (Although, admittedly, plenty of properties of living things did not evolve through selection but are simply happy accidents, like spandrels. But it seems to me highly unlikely that consciousness could be one of those sorts of things.)

Then again, if Jarhyn is right, and even low-level things like individual neurons have some degree of consciousness, or things like thermostats are conscious of what they do, we’d have to say that neurons and thermostats have consciousness but not free will, since it does not seem they can do other than what they do.

How would we know if a machine were conscious, as opposed to just intelligent (able to solve problems without having any idea it is doing so or who or what it is)? I don’t think the Turing Test is sufficient. I think ChatGPT can arguably pass that test, at least sometimes, but most people don’t think it’s aware of anything. Sometimes I think the answer to this might be found in the visionary 2001 movie, in which HAL expresses fear of death (being disconnected). Unless we programmed “fear of death/disconnection” into a computer, it would have no reason to fear death. If it DID fear death, without being programmed to do so, that might be a clear indication it was conscious and not just intelligent.

But then, on the other hand, if an intelligent machine did NOT fear death/disconnection, that would still fail to rule out that it might be conscious after all. That’s because fear of death is an evolved trait, and computers are not evolved but made. I can well imagine a conscious, self-aware machine not fearing death, simply because it was not programmed to do so and because since it was not evolved, it had no evolved tendency to self-preservation.

Bilby raised the philosophical problem of other minds. I don’t think this problem, as originally formulated, was intended to make us seriously question the existence of other minds, but rather to drive home the point of how difficult it is to actually KNOW anything, in an absolute sense. Yet, still, we may question whether others have minds, or even whether we ourselves do — perhaps it’s an elaborate illusion? But then again, if the illusion is indistinguishable from the “real thing,” whatever that might be, then it should follow that the illusion IS the real thing.

Finally, I’d point out that we have another metaphysical option on offer, apart from metaphysical naturalism and metaphysical supernaturalism, and that is metaphysical idealism, the idea that reality consists primarily or exclusively of mental states — that mind does not depend on brain, but the other way around: brain depends on mind. At first glance this idea might seem absurd, but I don’t think it’s absurd at all, and there could be a number of strong reasons to believe it’s true. That in itself would be worthy of discussion.

This is a great thread, one of the best I’ve seen in a long time, and when I get more time I’ll read it more carefully. I’ve only had time to skim it so far.
 