Talk:Imagination
This is the talk page for discussing improvements to the Imagination article. This is not a forum for general discussion of the article's subject.
Older comments, questions
Looks like this page could be improved: for example, ways to stimulate one's own imagination, imagination tests, external links, etc. Actually, a link should really be made to:
http://en.wikipedia.org/wiki/Creativity
What the article previously said wasn't entirely accurate... elvenscout742 20:22, 4 Mar 2005 (UTC)
Could someone provide a link or reference to any research into the evolutionary origin of imagination? Just saying that evolving imagination provides an increase in fitness doesn't cut it.
What I want is information on over-active imaginations. At what point is an over-active imagination considered unhealthy?
Boring
This article doesn't just need a cleanup; it needs some spice as well.
I nominate this article for the "most boring article of the year" award.
- Boredom is a sign of a dull mind. An intelligent mind can always find something of interest. This article is definitely informative and interesting. Read some Aristotle and Kant to get used to spiceless writing. Lestrade 23:51, 5 October 2007 (UTC)Lestrade
And I have a feeling you were the one who added "Fairy Tales are Real!!!" and "For more information, watch Spongebob". That is really pathetic. If you want to make this article more interesting, why don't you research it?
It is a little boring, but you may be exaggerating a little. It would be good if people put in things such as how the imagination can be used, if they talked about the problems with imagination, or information about very vivid imaginations. This may include psychosis, weird dreams, writer's block and so on. I would have included extracts of what sort of things people with vivid imaginations talk about, such as the fact that I can imagine what it would feel like if it were so hot outside that the roads turned to molten rock and people were burning in the street.
I think the article could be spiced up if there was something like an imagination championship that people could compete for. Does such a thing exist?
Theboombody (talk) 22:09, 21 May 2013 (UTC)
Voted for "the most boring article of the year"
When I got here, I imagined pages and pages about how imagination has helped inspire people like the Wright brothers or even Leonardo da Vinci. HECK.
- Use your imagination and you will visualize the humans that you mention. Lestrade 23:55, 5 October 2007 (UTC)Lestrade
Too wordy
This article is far too wordy and has too many commas. I have to read every sentence twice to make any sense of it.
I agree with the nominator. This article is a boring, poorly-written piece of crap and if I could I would recommend it for speedy deletion. Smith Jones 00:55, 14 September 2006 (UTC)
Here is the proposal.
"Imagination as the mind appears the...... so wiki failed.
"A person is faced with the thoughts and memories and feelings of experience and the person is to then allow the imagination of all such to exist."
Now if five order experience allows the definition as such why delete my article?
Get it correct or I get to make it correct.
--Eaglesondouglas 00:15, 22 October 2006 (UTC)
Plato's Form is translated into the quite unique term a form, distinct from all other words. For lack of a better term, it is called Plato's form.
Who gets to make imagination?
Who gets to allow the student to be artistic or predicate patterned?
Who gets to make the article on imagination where all experience is the realm and all possible things expect existence?
Imagination appears so quite British Isles like. It is pathologically written.
So I argue. And the Imagination is to be a source. Abstracted source of wonder.
Yearling philosopher you have.
Imagination without wonder? So your competence is shown. The wiki is grown. So if no dissent occurs, I will erase the current article and submit mine to the editors.
The current wonder for the child is so poor it hurts.
So I tried editing Imagination. It is difficult and the change is far superior to the original.
Imagination as a reality
The world as experienced is actually an interpretation of data apparently arriving from the senses; as such, it is experienced as real by contrast to most thoughts and imaginings. This difference is only one of degree and can be altered by several causes, namely changes to brain chemistry, hypnosis or other altered states of consciousness, meditation, many hallucinogenic drugs, and electricity applied directly to specific parts of the brain. The difference between the imagined and the perceived real can be so imperceptible as to cause acute states of psychosis. Many mental illnesses can be attributed to this inability to distinguish between the sensed and the internally created worlds. Some cultures and traditions even view the apparently shared world as an illusion of the mind, as with the Buddhist maya, or go to the opposite extreme and accept the imagined and dreamed realms as of equal validity to the apparently shared world, as the Australian Aborigines do with their concept of dreamtime.
Imagination, because it is free from external limitations, can often become a source of real pleasure and pain. A person of vivid imagination often suffers acutely from the imagined perils besetting friends, relatives, or even strangers such as celebrities.
Imagination can also produce some symptoms of real illnesses. In some cases, these can seem so "real" that specific physical manifestations occur, such as rashes and bruises appearing on the skin, as though imagination had passed into belief or the events imagined were actually in progress. See, for example, psychosomatic illness and Folie à deux. Jiohdi 22:22, 16 February 2007 (UTC)
WikiProject class rating
This article was automatically assessed because at least one WikiProject had rated the article as start, and the rating on other projects was brought up to start class. BetacommandBot 04:08, 10 November 2007 (UTC)
Now I am confused.
So people can actually "SEE" in their mind? I don't get it. Could we have a section maybe on what percentage of people can do this? I am not sure I can figure it out. So when these people think 'cat' do they see a cat in their head or just think the word 'cat'? Could this be clarified? Sorry for the trouble but, wow, this is confusing and needs rewording in places. —Preceding unsigned comment added by Melune (talk • contribs) 19:15, 29 September 2008 (UTC)
- This article pretty much sucks in numerous ways, but in any case, the question you're asking has been involved in a lot of controversy: the question is basically the extent to which "visualizing" uses the same mental mechanisms as vision. There is a neuroscientist, Steve Kosslyn, who has devoted his career to trying to figure this out, and has come to the conclusion that mental imagery uses most of the parts of the brain that are involved in actual vision. Others aren't convinced. It's a tricky topic: there are some people who claim that they think in pictures, and others that they think in words, but it's hard to come up with any empirical test that distinguishes between the two. Looie496 (talk) 19:47, 29 September 2008 (UTC)
If anyone cares, there's been a few times when I've been really sleepy, and could think of things, and see them almost as if they were right in front of me. 68.0.86.130 (talk) 23:17, 6 November 2008 (UTC)
I see in my mind; I always have. When I say see, I mean I can visualize landscapes and places; sometimes when I'm mad I just close my eyes and think. I didn't even know this was something to be debated: I see with my mind and I also hear my "consciousness". It's hard for me to imagine what it would be like to not be able to visualize things; it's always been a part of my thought process. I would assume that everyone can "see with their mind's eye"; I mean, isn't your imagination visualizing your surroundings during a dream? When someone says "cat" I both think the word cat and see an idealized version of what I think of cats as. — Preceding unsigned comment added by 174.134.204.43 (talk) 18:36, 16 October 2011 (UTC)
Final sentence
Hi Looie496, I had a go at rewording that sticky sentence at the end of the article and corrected it according to what I could find on the subject. Still needs a valid reference though -- I can only see old books (that I can't access fully) and webpages right now. Hope this helps ~ Ciar ~ (Talk) 07:56, 17 September 2009 (UTC)
Correction and improvement
"Imagination is the work of the mind that helps create fantasy". Create what? The monitor I'm looking at right now certanely isn't a fantasy. Yet it is a piece of techology that appeared first in imagination of someone. I'm changing this. And this sentence - "The things that we touch, see and hear coalesce into a "picture" via our imagination". It is actually the other way around.
In Our Time
The BBC programme In Our Time presented by Melvyn Bragg has an episode which may be about this subject (if not, moving this note to the appropriate talk page earns cookies). You can add it to "External links" by pasting * {{In Our Time|Imagination|p00548lc}}. Rich Farmbrough, 03:15, 16 September 2010 (UTC).
Song title called Imagination
Song title called Imagination — Preceding unsigned comment added by 2.25.86.130 (talk) 04:02, 28 January 2012 (UTC)
The imagination
Human 'self' and the memory in which the 'self' remains in a dynamic state are located in the immaterial space-time. The observed unit is located in an interval of standing time, 'now', and it is a static picture, or an idea. The 'self' is motivated by the 'emotional energy' arising from the difference between two parts of the static unit observed in the 'now'. The motivation allows the 'self' to bring from the memory another picture in the next 'now'. Change from one 'now' to the next 'now' manifests flowing time. KK (178.43.116.236 (talk) 16:57, 12 November 2012 (UTC))
What does this mean?
Can someone try and improve this incomprehensible sentence (I'm just passing by)? There are probably other similar sentences:
- A basic training for imagination is listening to storytelling (narrative),[1][5] in which the exactness of the chosen words is the fundamental factor to "evoke worlds".[6]
Rwood128 (talk) 16:17, 16 March 2014 (UTC)
Article can be improved if these insights are contemplated
Imagination and reason.
The two are indissociable. In order to reason that A is true and B is false, one must imagine each. One naturally also imagines what others do believe, and may believe, about the truth of A and B.
One question in artificial intelligence asks whether computers will eventually have imagination. It is a common thought that computers are capable of reason, so the question about imagination is a natural progression in inquiry.
I personally believe that an entity must be a conscious living being in order to have reason, imagination, and even the ability to compute or play chess. In fact, I argue that computers are incapable of anything simply on the basis that they lack consciousness. My reasoning is that humans compute and play chess using computers as a tool -- we design, operate, and interpret the computer systems.
Now one may ask, "Are those robotic vacuum cleaners that are currently on the market able to detect obstacles?" Most people would say 'yes'. I argue 'no' on the basis above -- humans detect the obstacles by virtue of the fact that we designed and built the robot to detect the obstacles and generate some output interpretable by humans of the fact.
Then one may ask, "If an object rolls down a hill, is it correct to say that it did indeed roll down the hill, even though the object is neither conscious of its rolling down the hill nor conscious in any way?" At this point, I would have to admit that it would be correct to say that the object rolled down the hill.
Hence my argument falls apart.
There are at least a couple of points here. First, these answers entirely depend on one's subjective, opinionated definition of these various words.
Second, at many points in this writing I referred to objects doing things, while asking such themed questions as, "Can objects really do anything? If so, what are the limits to what an object can presently do, and do in the future in theory?"
When John Searle wrote his famous Chinese Room argument article in 1980 he disagreed with John McCarthy's (1979) statement that thermostats have beliefs (i.e. the belief of 'On' vs. 'Off').
This argument rages on today. Searle's camp says that machines will never have beliefs or intentionality (i.e. they will never fulfill the strong artificial intelligence argument that machines can have minds either now or in the future) because they lack the causative powers of biological brains. (Some call Searle's view biological naturalism; some don't, but still agree with Searle.)
McCarthy's camp stems from his extreme philosophical view that machines, even thermostats, can have beliefs. Most people in his camp don't share this extreme viewpoint, but they do believe that computers will someday either have consciousness or the ability to do the work of human professionals such as doctors, lawyers, engineers, and scientists.
People on Searle's side say things like, "Computers will never have emotions or imagination or creativity."
Even some people on Searle's side will admit that computers can already reason, and this brings us back to my argument that reason can not exist without imagination. I argue that reason and imagination are not two separate things; rather they are one, inseparable and indissociable from each other. I would not even say that they are intertwined unless I clarified that they can't be separated.
When someone imagines something, they conceive of something in their mind rather than through direct observation. The act of doing so requires one to think of matters influenced by the existence of other people, even if the person is either socially isolated or deprived of sensation (all people are at least born to a mother, so all people have some knowledge of other people). I argue, therefore, that imagination requires reason. When I say reason, I imply experience, or some kind of brain activity as a general rule, constrained by the outside world, including the belief that oneself and others have thoughts and desires that are interdependent. In short, to reason is to care what others think, or at least to have some inkling of what others think.
Some imagined thoughts do not involve the presence of others. Let's take as an example a caveman who is alone in a cave trying to stay warm -- if he imagines a new way to stay warm, he has to build on things he is already aware of, and this extrapolation from previous experience is reason.
I argue that both reason and imagination are inescapable, and that no brain activity can occur in a conscious person without reason and imagination.
How about reminiscing, day-dreaming, unintended thoughts, dreams, subconscious thoughts, streams of consciousness, subliminal thoughts, and even hallucinations?
These all depend on what the person desires, and desires are inescapable because people always feel the need for something -- food, water, temperature, comfort, excretion, electrolytes, respiration, reproduction, stimulation, contact, and so on. The brain activity of even someone in a coma depends on these things (basic endocrinology).
Reason and imagination exist to fulfill the desires.
How about actions? Do they necessarily involve reason and imagination?
Yes, except for reflexes, i.e. knee-jerks, blinking, hiccups, coughing, sneezing, shivering, gagging, vomiting, and so on.
Let's look at the caveman again as an example. Do cavemen imagine and reason? Yes -- I exemplified this above, and here is another example.
The caveman figures that if he throws his sharp harmful object at his prey, it could result in him being able to eat it. His use of a projectile to incapacitate the prey is reasoning. He must imagine the possibility that the projectile could incapacitate the prey enough so he can kill and eat it.
Why all the fuss about reasoning and imagination?
Scientists have long wondered if computers can someday invent, create, think of new ideas, build new things, solve scientific problems, make new scientific discoveries, solve problems of the human condition, and so on.
Some scientists, and I, wondered, "Can we create a computer program that can write new computer programs? That would be a start."
The answer lies in the question of whether computers will ever be conscious, and eventually I came to the conclusion that only living things can be conscious, not highly developed supercomputers of the future.
There are computer programs that can generate programs, but that isn't nearly enough. The question is whether they will be able to write programs autonomously and consciously, like humans do.
The problem with achieving this is that computers, since they are not conscious, do not perceive the world as humans can (I argue they don't perceive anything, but again it all depends on one's subjective opinion regarding the definition of 'perceive'). Humans exist in the physical world. We know what desktops, Windows OS's, Apple OS's, computer languages, and software programs are; and, more importantly, we know what they are from the point of view of other humans.
I argue that machines will never have this point of view because machines will never be human.
One may argue that computers understand computer languages, using his or her subjective opinion regarding the definition of 'understand'.
I disagree, but even if I agreed with him or her, the far more important thing is that computers will never understand computer languages from the point of view of humans.
After all, the entire point of artificial intelligence is to try to get machines to do what humans can do, and then try to get them to do it better and faster.
Most people would agree that computers calculate and compute, using their subjective opinion regarding the definition of 'calculate' and 'compute'.
I only agree with such a notion as long as it does not imply that computers are conscious, self-aware, or autonomous.
If one says that computers do calculate and compute, it could be said that they do so faster than humans, and that they exceed human ability in this regard (computers early on were used to solve mathematical proofs and equations that had stumped the world, i.e. Hilbert equations).
Those people designing those machines naturally wondered, "What other human abilities could machines emulate (or simulate, or be used for) effectively?"
In the end, machines will never know what it's like to be living things or to be human.
This fact is what plagues the field of artificial intelligence in the end.
Humans are creative, imaginative, rational, autonomous, mortal, emotional, irrational, intuitive, spiritual, reproductive, loving, fearful, compassionate, social, jealous, envious, hateful, joyful, sorrowful, grievous, nurturing, ingenious, resourceful, opportunistic, vengeful, conscientious, obedient, domineering, passionate, dispassionate, lustful, insightful, empathetic, sharing, selfish, ambitious, vulnerable, narcissistic, admirable, admiring, exultant, exclusive, exploitative, receptive, subordinate, violent, motivated, and so on. Such a list could fill an Oxford dictionary or a Wikipedia Portal.
If a machine has no idea what these things mean from a human's point of view, how could they ever approach the status of having either human, human-like, or human-level abilities?
Even if I admit that computers can compute, calculate, and perform operations pertaining to math and logic in a manner or speed superior to humans, it isn't by virtue of the computer itself; rather it is by virtue of the computer's designer. Computers can only exist if we conceive, design, and make them. One can only say that they out-compute humans if humans make it so. Computers can not make themselves. They are not autonomous or conscious. They can do nothing unless we instruct them to, and we can pull the plug on them at any time.
The IBM computer that beat humans at Jeopardy was said to have performed operations that were incomprehensible to its designers, and this fact was used to insinuate that we are close to having either so-called human-level, conscious, or genuinely thinking computers. The reasoning went as follows: "Human minds are partly incomprehensibly complex, and the IBM computer's operations were too; therefore the computer is like a human mind."
I disagree.
I side with John Searle by saying that "you can't get semantics from syntax."
I also argue that the computer, since it is a computer as opposed to a living thing, and as opposed therefore to being conscious, knows nothing whatsoever. It doesn't know it was playing a game, it doesn't know what a game is, it doesn't know that it's a computer, it doesn't know what a human is, it doesn't know that anyone or anything was competing against it, it doesn't know what the word 'against' means, it doesn't know what Jeopardy means, it doesn't know what 'compete' means, and it doesn't know what any of the words, phrases, sentences, questions, numbers, dates, or even symbols mean.
Let's say that we allow ourselves to say that it knows what the questions mean by virtue of connectionism and syntax, i.e. Lincoln is somehow related to president, 16, and 16th, Abraham, U.S., United States, Illinois, abolition, Gettysburg address, Civil War, Honest Abe, tall, beard, four score and seven years ago, and all the other related data in the world's repositories. It's been programmed with knowledge such as subjects preceding predicates, objects of prepositions following prepositions, objects of verbs follow transitive verbs, verbs follow subjects a certain percentage of the time, these words are nouns, these are proper nouns, these are verbs, these kinds of sentences appeared most often, and the relations and statistics were maxed out.
Functionalists would say that the computer understands the questions and that the proof is that it even did so well that it beat the best humans. They argue, "Well if it didn't understand the questions, then how the heck did it beat the human champs!?"
Searle, his followers, and I would maintain that the computer knows nothing and does nothing. We would say, "Fine -- go ahead and claim that computers understand, believe, know, think, compute, calculate, out-perform humans, will eventually subordinate humans, will eventually have so-called human-level intelligence, are intelligent in the first place, have goals, desires, motivations, have mental states, have minds; and will increasingly be able to experience qualia, embodied cognition, and situated cognition; will have imagination, creativity, and emotion -- say whatever you want. Just don't claim, insinuate, imply, or misunderstand that they are, or ever will be, conscious, autonomous, self-aware; or capable of having beliefs, intentionality, or minds; or causative powers of biological brains. Use whatever language you want. Like Wittgenstein said, all human knowledge is but a language game."
One could reply, "You say computers, since they are objects lacking consciousness, autonomy, self-awareness, etc., don't do anything. Can I correctly say that a computer fell on the floor?"
We would say, "Say whatever you like, as long as you don't say or imply that they're conscious, self-aware, autonomous; or capable of beliefs, intentionality, or minds; or causative powers of biological brains. You can even say they're intelligent -- how do ya like that?"
We would also say, "What do you mean by human-level intelligence?" Many billions of humans have lived, today and over the eons, and every single one of them had a unique perspective, had a unique collection of knowledge, and therefore had his/her own unique opinion of what is factual and correct reality. For example, 1+1=2. Whom shall we consult about this fact? Pre-literate humans? Should we ask the very first person who wrote this? If so, what forms of written number system should we accept? Do tally marks on a tusk or bone count? If not, what if he or she got the gist of the fact that 1+1=2? Should we ask for a computer's point of view? If so, which one? The first one capable of the operation? Should we ask a computer that is programmed to understand Gödel's incompleteness theorem or one that isn't? Is a computer's understanding of 1+1=2 the same as a human's? Can computers really understand 1+1=2? If so, could they ever understand it the way diverse humans could? Should we ask a comedian while he is being sarcastic about the matter of 1+1=2? Should we ask the world's smartest number theorist? Should we ask an infant who has just learned that 1+1=2? If so, should we ask him just after he has learned how to repeat the phrase, how to apply it in his life, or how to write it? Should we ask a human who existed before either Arabic or Hindu-Arabic numerals existed? Should we ask someone under the influence of a mind-altering substance? Should we ask someone in psychosis or having delusions or hallucinations? Should we ask someone before Descartes invented the Cartesian system or after? Should we ask a human before zero was invented or after? Should we ask an ancient Greek philosopher-mathematician or one from another hemisphere? Should we ask all the repositories in the world containing recorded information? Should we account for the advancement of human knowledge on the matter in ten years? If so, should we also account for the difference that computing power and artificial intelligence will have on the matter in that time? Should we ask the hunter-gatherer who has just realized he should try to carry home two watermelons instead of one? Should we ask a cognitive scientist or a frustrated thug on the street who at least knows that if he mugs you, he'll have two bucks instead of one? Should we ask a person prior to the theories of relativity, quantum theory, or multiverse theory, or after? Should we ask someone before the '+' and '=' signs were invented? Should we ask someone before the concept of addition was invented? If so, what qualifies as addition conceptualization? Should we ask Neanderthals? Homo erectus? Homo habilis? What if he's somewhere in between two human species? Should we consider the cognitive ethology of bonobos and chimps? Whales and dolphins? What about a squirrel who knows that caching two acorns is better than one? If we apply artificial intelligence to the matter, which proprietary algorithm should we use? Google's? That of the start-up that will put Google out of business?
Let's say we asked a subject who spent his entire life avoiding learning anything about 1+1=2, and then we asked a subject who did nothing but learn everything about 1+1=2. What would be the difference? Should we pick one human and endow a computer with his/her knowledge of 1+1=2, or should we try to interview every living human for his whole life about it, or should we try to infer by asking the world's data repositories?
The theme here is that artificial intelligence as a field, if one chooses, can involve every field of human knowledge and every experience of every person from every time and place. After all, humans invented computers; computers can do nothing without humans, and the minds of humans will command their direction and abilities. What a human experiences is all a computer has for instructions, and what a human experiences includes what he knows, but is there a limit to what one person can know? (I argue yes.) Is there a limit to what humans overall can know? (I argue no.)
What if you know that your religious beliefs are true, while the atheists believe theirs are too? Whose beliefs should we instill into the machine? Should we give machines the ability to understand paradoxes? Should we make the machine logical? If so, should we teach it Gödel's incompleteness theorem, which will invalidate everything it knows? Should we teach it the Duhem-Quine thesis? If so, then it might struggle to assume anything. Since humans lack knowledge, should we make the machine lack knowledge? I thought we were supposed to maximize its knowledge and capabilities. After all, they say artificial intelligence these days is advancing mainly due to the vast amount of data available for crunching as opposed to advances in algorithm design. How heuristic should we make computers? They say that people are cognitive misers, meaning that we naturally opt for the least expensive reaction to all tasks and problems, thus allowing us to make educated guesses, operate on rules of thumb, assume things, and do everything we do without getting paralyzed with indecisive over-analysis. They say that people are inherently irrational, which is largely saying the same thing, but that this, in addition to reason, is a quality that enables us as living beings. They say that a biological brain naturally ignores stimuli, i.e. information, both sensory and cognitive, every moment so that we can do what we do, perceive what we perceive, and think what we think.
What about optical illusions? Should we give computers the ability to perceive optical illusions if they naturally occur in humans?
The theme is that reason, rationality, and objective reality are illusions; further that dichotomies such as reason vs. imagination, subjective vs. objective, truth vs. falsity, and so on, are too. Our struggle for objective truth and understanding of reality is perpetuated every time a genius is born.
Humans used to believe that the sun went around the earth. Are we going to someday discover that machines have subordinated us?
Will our definition of machine eventually include living things and vice versa?
Most cognitive scientists hold that both computers and human brains can be reduced to atoms and electrons obeying natural laws, and most of them infer that it is therefore possible for computers to evolve and surpass humans.
In order for this to happen, computers would have to become capable of prospering without any help from humans. If humans pulled the plug, computers would need to overcome such an attempt. Computers would need autonomy from humans and consciousness, in other words self-awareness. They would need a will of their own.
However, no one has even the slightest clue how this could come about, other than mentioning evolution. The world's smartest people haven't the slightest clue how.
Subordinating the human race would require the ability to deceive the human race too.
Will they have all the sensory abilities we have or need them? Could this happen in virtual reality if a majority of our time were spent in virtual reality?
Would they need physical bodies, i.e. robots? If it happened in virtual reality, would it be realistic to simply wean ourselves off virtual reality to escape them?
It's already unclear what is man-made versus natural, and this trend is exponential. In other words, if a machine is partly a living thing, what's preventing it from having consciousness?
If super-intelligence required physical bodies, they would have to be able to take over our control of energy production and take over manufacturing plants, i.e. solar powerpacks, bioreactors, battery plants, artificial photosynthesis hardware; and if they were cyborgs, they'd need to be able to take over our biology labs, or wherever their tissue would be grown.
Regarding the word 'intelligence' -- definition of the word depends entirely on one's subjective opinion. Not a single word in the world can be exactly translated in all languages, so should we endow this computer with intelligence according to an English-speaker's definition of it and ignore all the human cultures that lack an exact translation of the word or concept? Michael Jordan was said to have high kinesthetic intelligence. Should we endow the computer/robot with that kind of intelligence? What about that of a gifted ballerina? How about social intelligence? Who is the most socially intelligent person? First of all, that will always be purely a matter of opinion, and second, being social requires one to be aware of what others are thinking, hence what the self is thinking by comparison, i.e. empathy, so it'd have to be empathetic and therefore self-aware, conscious, and autonomous from others (how will this ever be possible?). How about spatial intelligence? Computers can run sophisticated three-dimensional virtual reality programs, but they will never have the full perceptive repertoire of humans -- they will never gain the ability to have a human's viewpoint. For example, the sensory nervous system and the central nervous system of any animal can not be separated (keyword: embodied cognition), so the exact state of a human mind is at all times a function of every single sensory neuron, even the ones that don't work (i.e. phantom limbs). Proprioception, which is one's perception of the relative location, alignment, and direction of the body parts, is also a function of the sensory neurons (not just one's brain). The vestibular system detects tilting of the head and/or whole body in three-dimensional space. We detect minute concentrations of molecules and atoms with taste and smell. Theorists in embodied cognition for such reasons insist "you can't have a human mind without a physical body" (this is why robots were emphasized by the artificial intelligence pioneer Rodney Brooks). How about verbal intelligence? Computers can write rudimentary poems, business and sports articles, but the smartest people in the world have not a clue how to give a machine human-like semantic understanding, except through syntax, which is a word's relation with other words. For example, most people know what love is, not only from reading about it, but also experiencing it. Every word ever conceived, including those in computer languages, originated with the intention and assumption that other humans could understand the word from a human's point of view, hence the mystery of how to give machines this semantic understanding, knowing that machines will never be human.
In short, if Albert Einstein lived with a so-called uncontacted society in some rain forest for some time, they surely would not vote him the most intelligent, assuming they have an analogous concept of intelligence. The so-called civilized world agrees that Einstein was very intelligent but we all know that those particular abilities are only a few among the countless human abilities that exist.
For example, should not the definition of human intelligence from the point of view of non-sighted people, non-hearing people, and people who are both non-sighted and non-hearing -- for example people such as Helen Keller -- be included in the computer programmer's definition of human intelligence? Helen Keller knew what people were saying by feeling their throat and mouth, and by feeling vibrations using all her non-auditory sensory nerves. She taught herself to speak in this way; by feeling movements of the diaphragm, abdomen, and chest; and by feeling the vibrations produced by her voice, using her non-auditory sensation. She learned how to write by imitating the trace of the other's index finger on the surface of her own skin. These abilities of hers exceeded those of any person in recorded history. All of this was made possible as a result of one day when her teacher taught her that words exist -- for example that people use spoken and written words to communicate the ideas of water and wetness, concepts she was instinctively able to understand by touch. It never occurred to anyone, even her family, that she could learn and be taught any of these things. Virtually any neuroscientist would venture to say that her entire nervous system adapted in her own unique way such that parts of it could outperform any recorded human. For example, non-sighted people use their visual cortex, which covers more than any other sensory cortex, for hearing, speech, motor control, language comprehension, somatosensation, proprioception, object recognition, and probably numerous other functions. Helen Keller used her visual cortex and auditory cortex to supplement her sense of touch and probably a number of other abilities. The fibers of brain cells literally re-route to different (some distal) brain regions when this kind of adaptation occurs, and the necessary brain parts become significantly more developed in terms of cell density and arborization (branching).
Therefore, what is Helen Keller's definition of human intelligence?
Socrates was told by a prophet that he was the wisest man in the land. Socrates did not understand this statement until he realized that he knew he was not wise.
All the religions of the world have their version of the teaching that the meek shall inherit the kingdom of Heaven.
These teachings ask, "What can you know about life's difficulties if you have not experienced them? What can you possibly learn if you believe you know everything? What can you possibly learn if you believe you are all-powerful? What can you possibly learn if you believe you never make mistakes?" These are all variations of "He who knows he is not wise, is the wisest of all."
Many religions have a concept of some kind of afterlife where one's level of fortune and luck may very well be different from one's level now, thus balancing out the seemingly unavoidable inequities in life, i.e. "the more you have, the less you know, but someday you will know much more because you will have much less." After all, at some point we will all become disabled.
Helen Keller could not see or hear, yet she knew more than anyone. She had almost nothing, yet she had things the rest of us can only dream of having.
The Stoics, Cynics, ascetics, and Spartans said these things, along with all the religions that teach fasting, moderation, frugality, chastity, modesty, and humility. In many Native American teachings the wisest ones are those who have been deprived the most and endured the most pain (i.e. shaman rituals). The seven deadly sins pre-date written language. All militaries and social orders teach salvation through humility first.
As for artificial intelligence, who knows more about human values -- those with superior abilities who need not know what virtues are, or those with inferior abilities who have no choice but to rely on human virtues?
As artificial intelligence advances, those who dream of so-called human-level intelligence eventually have to look deep within themselves and ask who is intelligent, and on what basis.
-Nicholas Noh
Discussion continued at [[1]] "The article can be improved by..."; see also the keyword cognitive philology
Nn9888 (talk) 14:52, 23 September 2015 (UTC)