
[Original title] Artificial Intelligence Is Lost in the Woods

[Chinese title] 人工智能的丛林迷失

[Author] David Gelernter

[Note] July 1

A conscious mind will never be built out of software, argues a Yale University professor.

Artificial intelligence has been obsessed with several questions from the start: Can we build a mind out of software? If not, why not? If so, what kind of mind are we talking about? A conscious mind? Or an unconscious intelligence that seems to think but experiences nothing and has no inner mental life? These questions are central to our view of computers and how far they can go, of computation and its ultimate meaning–and of the mind and how it works.

They are deep questions with practical implications. AI researchers have long maintained that the mind provides good guidance as we approach subtle, tricky, or deep computing problems. Software today can cope with only a smattering of the information-processing problems that our minds handle routinely–when we recognize faces or pick elements out of large groups based on visual cues, use common sense, understand the nuances of natural language, or recognize what makes a musical cadence final or a joke funny or one movie better than another. AI offers to figure out how thought works and to make that knowledge available to software designers.

It even offers to deepen our understanding of the mind itself. Questions about software and the mind are central to cognitive science and philosophy. Few problems are more far-reaching or have more implications for our fundamental view of ourselves.

The current debate centers on what I'll call a "simulated conscious mind" versus a "simulated unconscious intelligence." We hope to learn whether computers make it possible to achieve one, both, or neither.

I believe it is hugely unlikely, though not impossible, that a conscious mind will ever be built out of software. Even if it could be, the result (I will argue) would be fairly useless in itself. But an unconscious simulated intelligence certainly could be built out of software–and might be useful. Unfortunately, AI, cognitive science, and philosophy of mind are nowhere near knowing how to build one. They are missing the most important fact about thought: the "cognitive continuum" that connects the seemingly unconnected puzzle pieces of thinking (for example analytical thought, common sense, analogical thought, free association, creativity, hallucination). The cognitive continuum explains how all these reflect different values of one quantity or parameter that I will call "mental focus" or "concentration"–which changes over the course of a day and a lifetime.

Without this cognitive continuum, AI has no comprehensive view of thought: it tends to ignore some thought modes (such as free association and dreaming), is uncertain how to integrate emotion and thought, and has made strikingly little progress in understanding analogies–which seem to underlie creativity.

My case for the near-impossibility of conscious software minds resembles what others have said. But these are minority views. Most AI researchers and philosophers believe that conscious software minds are just around the corner. To use the standard term, most are "cognitivists." Only a few are "anticognitivists." I am one. In fact, I believe that the cognitivists are even wronger than their opponents usually say.

But my goal is not to suggest that AI is a failure. It has merely developed a temporary blind spot. My fellow anticognitivists have knocked down cognitivism but have done little to replace it with new ideas. They've shown us what we can't achieve (conscious software intelligence) but not how we can create something less dramatic but nonetheless highly valuable: unconscious software intelligence. Once AI has refocused its efforts on the mechanisms (or algorithms) of thought, it is bound to move forward again.

Until then, AI is lost in the woods.

What Is Consciousness?

In conscious thinking, you experience your thoughts. Often they are accompanied by emotions or by imagined or remembered images or other sensations. A machine with a conscious (simulated) mind can feel wonderful on the first fine day of spring and grow depressed as winter sets in. A machine that is capable only of unconscious intelligence "reads" its thoughts as if they were on cue cards. One card might say, "There's a beautiful rose in front of you; it smells sweet." If someone then asks this machine, "Seen any good roses lately?" it can answer, "Yes, there's a fine specimen right in front of me." But it has no sensation of beauty or color or fragrance. It has no experiences to back up the currency of its words. It has no inner mental life and therefore no "I," no sense of self.

But if an artificial mind can perform intellectually just like a human, does consciousness matter? Is there any practical, perceptible advantage to simulating a conscious mind?

Yes.

An unconscious entity feels nothing, by definition. Suppose we ask such an entity some questions, and its software returns correct answers.

"Ever felt friendship?" The machine says, "No."

"Love?" "No." "Hatred?" "No." "Bliss?" "No."

"Ever felt hungry or thirsty?" "Itchy, sweaty, tickled, excited, conscience-stricken?"

"Ever mourned?" "Ever rejoiced?"

No, no, no, no.

In theory, a conscious software mind might answer "yes" to all these questions; it would be conscious in the same sense you are (although its access to experience might be very different, and strictly limited).

So what's the difference between a conscious and an unconscious software intelligence? The potential human presence that might exist in the simulated conscious mind but could never exist in the unconscious one.

You could never communicate with an unconscious intelligence as you do with a human–or trust or rely on it. You would have no grounds for treating it as a being toward which you have moral duties rather than as a tool to be used as you like.

But would a simulated human presence have practical value? Try asking lonely people–and all the young, old, sick, hurt, and unhappy people who get far less attention than they need. A made-to-order human presence, even though artificial, might be a godsend.

AI (I believe) won't ever produce one. But it can still lead the way to great advances in computing. An unconscious intelligence might be powerful. Alan Turing, the great English mathematician who founded AI, seemed to believe (sometimes) that consciousness was not central to thought, simulated or otherwise.

He discussed consciousness in the celebrated 1950 paper in which he proposed what is now called the "Turing test." The test is meant to determine whether a computer is "intelligent," or "can think"–terms Turing used interchangeably. If a human "interrogator" types questions, on any topic whatever, that are sent to a computer in a back room, and the computer sends back answers that are indistinguishable from a human being's, then we have achieved AI, and our computer is "intelligent": it "can think."

Does artificial intelligence require (or imply the existence of) artificial consciousness? Turing was cagey on these questions. But he did write,

I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper.

That is, can we build intelligent (or thinking) computers, and how can we tell if we have succeeded? Turing seemed to assert that we can leave consciousness aside for the moment while we attack simulated thought.

But AI has grown more ambitious since then. Today, a substantial number of researchers believe one day we will build conscious software minds. This group includes such prominent thinkers as the inventor and computer scientist Ray Kurzweil. In the fall of 2006, Kurzweil and I argued the point at MIT, in a debate sponsored by the John Templeton Foundation. This piece builds, in part, on the case I made there.

A Digital Mind

The goal of cognitivist thinkers is to build an artificial mind out of software running on a digital computer.

Why does AI focus on digital computers exclusively, ignoring other technologies? For one reason, because computers seemed from the first like "artificial brains," and the first AI programs of the 1950s–the "Logic Theorist," the "Geometry Theorem-Proving Machine"–seemed at their best to be thinking. Also, computers are the characteristic technology of the age. It is only natural to ask how far we can push them.

Then there's a more fundamental reason why AI cares specifically about digital computers: computation underlies today's most widely accepted view of mind. (The leading technology of the day is often pressed into service as a source of ideas.)

The ideas of the philosopher Jerry Fodor make him neither strictly cognitivist nor anticognitivist. In The Mind Doesn't Work That Way (2000), he discusses what he calls the "New Synthesis"–a broadly accepted view of the mind that places AI and cognitivism against a biological and Darwinian backdrop. "The key idea of New Synthesis psychology," writes Fodor, "is that cognitive processes are computational. … A computation, according to this understanding, is a formal operation on syntactically structured representations." That is, thought processes depend on the form, not the meaning, of the items they work on.

In other words, the mind is like a factory machine in a 1940s cartoon, which might grab a metal plate and drill two holes in it, flip it over and drill three more, flip it sideways and glue on a label, spin it around five times, and shoot it onto a stack. The machine doesn't "know" what it's doing. Neither does the mind.

Likewise computers. A computer can add numbers but has no idea what "add" means, what a "number" is, or what "arithmetic" is for. Its actions are based on shapes, not meanings. According to the New Synthesis, writes Fodor, "the mind is a computer."

But if so, then a computer can be a mind, can be a conscious mind–if we supply the right software. Here's where the trouble starts. Consciousness is necessarily subjective: you alone are aware of the sights, sounds, feels, smells, and tastes that flash past "inside your head." This subjectivity of mind has an important consequence: there is no objective way to tell whether some entity is conscious. We can only guess, not test.

Granted, we know our fellow humans are conscious; but how? Not by testing them! You know the person next to you is conscious because he is human. You're human, and you're conscious–which moreover seems fundamental to your humanness. Since your neighbor is also human, he must be conscious too.

So how will we know whether a computer running fancy AI software is conscious? Only by trying to imagine what it's like to be that computer; we must try to see inside its head.

Which is clearly impossible. For one thing, it doesn't have a head. But a thought experiment may give us a useful way to address the problem. The "Chinese Room" argument, proposed in 1980 by John Searle, a philosophy professor at the University of California, Berkeley, is intended to show that no computer running software could possibly manifest understanding or be conscious. It has been controversial since it first appeared. I believe that Searle's argument is absolutely right–though more elaborate and oblique than necessary.

Searle asks us to imagine a program that can pass a Chinese Turing test–and is accordingly fluent in Chinese. Now, someone who knows English but no Chinese, such as Searle himself, is shut up in a room. He takes the Chinese-understanding software with him; he can execute it by hand, if he likes.

Imagine "conversing" with this room by sliding questions under the door; the room returns written answers. It seems equally fluent in English and Chinese. But actually, there is no understanding of Chinese inside the room. Searle handles English questions by relying on his knowledge of English, but to deal with Chinese, he executes an elaborate set of simple instructions mechanically. We conclude that to behave as if you understand Chinese doesn't mean you do.
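A rough illustration of the point (a toy sketch, not Searle's actual setup): a "room" that answers by looking up symbol patterns in a table it does not understand. The questions, canned answers, and rule table below are invented for illustration.

```python
# A toy "room" that answers questions by mechanical symbol matching, with no
# understanding of what the symbols mean. The rule table is an invented
# stand-in for the elaborate set of simple instructions executed by hand.

RULES = {
    "你最近看到好玫瑰了吗？": "是的，我面前就有一朵很好的玫瑰。",  # "Seen any good roses lately?" -> "Yes, a fine one right in front of me."
    "今天天气怎么样？": "阳光明媚。",                              # "How is the weather?" -> "Sunny."
}

def chinese_room(question: str) -> str:
    """Return a canned answer by pure pattern matching on the input symbols."""
    return RULES.get(question, "请再问一次。")  # default: "please ask again"

print(chinese_room("你最近看到好玫瑰了吗？"))
```

The room's output looks fluent, yet nothing in the lookup involves meaning at all.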

But we don't need complex thought experiments to conclude that a conscious computer is ridiculously unlikely. We just need to tackle this question: What is it like to be a computer running a complex AI program?

Well, what does a computer do? It executes "machine instructions"–low-level operations like arithmetic (add two numbers), comparisons (which number is larger?), "branches" (if an addition yields zero, continue at instruction 200), data movement (transfer a number from one place to another in memory), and so on. Everything computers accomplish is built out of these primitive instructions.

So what is it like to be a computer running a complex AI program? Exactly like being a computer running any other kind of program.

Computers don't know or care what instructions they are executing. They deal with outward forms, not meanings. Switching applications changes the output, but those changes have meaning only to humans. Consciousness, however, doesn't depend on how anyone else interprets your actions; it depends on what you yourself are aware of. And the computer is merely a machine doing what it's supposed to do–like a clock ticking, an electric motor spinning, an oven baking. The oven doesn't care what it's baking, or the computer what it's computing.

The computer's routine never varies: grab an instruction from memory and execute it; repeat until something makes you stop.
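That routine can be written down in a few lines. The sketch below describes an invented toy machine whose only operations are the primitives listed above: arithmetic, data movement, a comparison-driven branch, and halting. The instruction names and the sample program are illustrative assumptions, not any real instruction set.

```python
# Minimal sketch of the fetch-execute routine: grab an instruction, execute it,
# repeat until something makes it stop. The machine and its program are toys.

def run(program, memory):
    pc = 0  # program counter: which instruction to grab next
    while pc < len(program):
        op, *args = program[pc]
        if op == "ADD":                  # arithmetic: memory[a] += memory[b]
            a, b = args
            memory[a] += memory[b]
        elif op == "MOVE":               # data movement: copy one cell to another
            dst, src = args
            memory[dst] = memory[src]
        elif op == "BRANCH_IF_ZERO":     # branch: jump if a cell holds zero
            cell, target = args
            if memory[cell] == 0:
                pc = target
                continue
        elif op == "HALT":               # something makes it stop
            break
        pc += 1                          # otherwise, grab the next instruction
    return memory

# A tiny invented program: memory[0] = memory[1] + memory[2]
print(run([("MOVE", 0, 1), ("ADD", 0, 2), ("HALT",)], [0, 3, 4]))  # [7, 3, 4]
```

Nothing in the loop refers to what the numbers mean; the machine shuffles forms, exactly as the essay says.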

Of course, we can't know literally what it's like to be a computer executing a long sequence of instructions. But we know what it's like to be a human doing the same. Imagine holding a deck of cards. You sort the deck; then you shuffle it and sort it again. Repeat the procedure, ad infinitum. You are doing comparisons (which card comes first?), data movement (slip one card in front of another), and so on. To know what it's like to be a computer running a sophisticated AI application, sit down and sort cards all afternoon. That's what it's like.

If you sort cards long enough and fast enough, will a brand-new conscious mind (somehow) be created? This is, in effect, what cognitivists believe. They say that when a computer executes the right combination of primitive instructions in the right way, a new conscious mind will emerge. So when a person executes the right combination of primitive instructions in the right way, a new conscious mind should (also) emerge; there's no operation a computer can do that a person can't.

Of course, humans are radically slower than computers. Cognitivists argue that sure, you know what executing low-level instructions slowly is like; but only when you do them very fast is it possible to create a new conscious mind. Sometimes, a radical change in execution speed does change the qualitative outcome. (When you look at a movie frame by frame, no illusion of motion results. View the frames in rapid succession, and the outcome is different.) Yet it seems arbitrary to the point of absurdity to insist that doing many primitive operations very fast could produce consciousness. Why should it? Why would it? How could it? What makes such a prediction even remotely plausible?

But even if researchers could make a conscious mind out of software, it wouldn't do them much good.

Suppose you could build a conscious software mind. Some cognitivists believe that such a mind, all by itself, is AI's goal. Indeed, this is the message of the Turing test. A computer can pass Turing's test without ever mingling with human beings.

But such a mind could communicate with human beings only in a drastically superficial way.

It would be capable of feeling emotion in principle. But we feel emotions with our whole bodies, not just our minds; and it has no body. (Of course, we could say, then build it a humanlike body! But that is a large assignment and poses bioengineering problems far beyond and outside AI. Or we could build our new mind a body unlike a human one. But in that case we couldn't expect its emotions to be like ours, or to establish a common ground for communication.)

Consider the low-energy listlessness that accompanies melancholy, the overflowing jump-for-joy sensation that goes with elation, the pounding heart associated with anxiety or fear, the relaxed calm when we are happy, the obvious physical manifestations of excitement–and other examples, from rage to panic to pity to hunger, thirst, tiredness, and other conditions that are equally emotions and bodily states. In all these cases, your mind and body form an integrated whole. No mind that lacked a body like yours could experience these emotions the way you do.

No such mind could even grasp the word "itch."

In fact, even if we achieved the bioengineering marvel of a synthetic human body, our problems wouldn't be over. Unless this body experienced infancy, childhood, and adolescence, as humans do–unless it could grow up, as a member of human society–how could it understand what it means to "feel like a kid in a candy shop" or to "wish I were 16 again"? How could it grasp the human condition in its most basic sense?

A mind-in-a-box, with no body of any sort, could triumphantly pass the Turing test–which is one index of the test's superficiality. Communication with such a contrivance would be more like a parody of conversation than the real thing. (Even in random Internet chatter, all parties know what it's like to itch, and scratch, and eat, and be a child.) Imagine talking to someone who happens to be as articulate as an adult but has less experience than a six-week-old infant. Such a "conscious mind" has no advantage, in itself, over a mere unconscious intelligence.

But there's a solution to these problems. Suppose we set aside the gigantic chore of building a synthetic human body and make do with a mind-in-a-box or a mind-in-an-anthropoid-robot, equipped with video cameras and other sensors–a rough approximation of a human body. Now we choose some person (say, Joe, age 35) and simply copy all his memories and transfer them into our software mind. Problem solved. (Of course, we don't know how to do this; not only do we need a complete transcription of Joe's memories, we need to translate them from the neural form they take in Joe's brain to the software form that our software mind understands. These are hard, unsolved problems. But no doubt we will solve them someday.)

Nonetheless: understand the enormous ethical burden we have now assumed. Our software mind is conscious (by assumption) just as a human being is; it can feel pleasure and pain, happiness and sadness, ecstasy and misery. Once we've transferred Joe's memories into this artificial yet conscious being, it can remember what it was like to have a human body–to feel spring rain, stroke someone's face, drink when it was thirsty, rest when its muscles were tired, and so forth. (Bodies are good for many purposes.) But our software mind has lost its body–or had it replaced by an elaborate prosthesis. What experience could be more shattering? What loss could be harder to bear? (Some losses, granted, but not many.) What gives us the right to inflict such cruel mental pain on a conscious being?

In fact, what gives us the right to create such a being and treat it like a tool to begin with? Wherever you stand on the religious or ethical spectrum, you had better be prepared to tread carefully once you have created consciousness in the laboratory.

The Cognitivists’ Best Argument

But not so fast! say the cognitivists. Perhaps it seems arbitrary and absurd to assert that a conscious mind can be created if certain simple instructions are executed very fast; yet doesn't it also seem arbitrary and absurd to claim that you can produce a conscious mind by gathering together lots of neurons?

The cognitivist response to my simple thought experiment ("Imagine you're a computer") might run like this, to judge from a recent book by a leading cognitivist philosopher, Daniel C. Dennett. Your mind is conscious; yet it's built out of huge numbers of tiny unconscious elements. There are no raw materials for creating consciousness except unconscious ones.

Now, compare a neuron and a yeast cell. "A hundred kilos of yeast does not wonder about Braque," writes Dennett, "… but you do, and you are made of parts that are fundamentally the same sort of thing as those yeast cells, only with different tasks to perform." Many neurons add up to a brain, but many yeast cells don't, because neurons and yeast cells have different tasks to perform. They are programmed differently.

In short: if we gather huge numbers of unconscious elements together in the right way and give them the right tasks to perform, then at some point, something happens, and consciousness emerges. That's how your brain works. Note that neurons work as the raw material, but yeast cells don't, because neurons have the right tasks to perform. So why can't we do the same thing using software elements as raw materials–so long as we give them the right tasks to perform? Why shouldn't something happen, and yield a conscious mind built out of software?

Here is the problem. Neurons and yeast cells don't merely have "different tasks to perform." They perform differently because they are chemically different.

One water molecule isn't wet; two aren't; three aren't; 100 aren't; but at some point we cross a threshold, something happens, and the result is a drop of water. But this trick only works because of the chemistry and physics of water molecules! It won't work with just any kind of molecule. Nor can you take just any kind of molecule, give it the right "tasks to perform," and make it a fit raw material for producing water.

The fact is that the conscious mind emerges when we've collected many neurons together, not many doughnuts or low-level computer instructions. Why should the trick work when I substitute simple computer instructions for neurons? Of course, it might work. But there isn't any reason to believe it would.

My fellow anticognitivist John Searle made essentially this argument in a paper that referred to the "causal properties" of the brain. His opponents mocked it as reactionary stuff. They asserted that since Searle is unable to say just how these "causal properties" work, his argument is null and void. Which is nonsense again. I don't need to know anything at all about water molecules to realize that large groups of them yield water, whereas large groups of krypton atoms don't.

Why the Cognitive Spectrum Is More Exciting than Consciousness

To say that building a useful conscious mind is highly unlikely is not to say that AI has nothing worth doing. Consciousness has been a "mystery" (as Turing called it) for thousands of years, but the mind holds other mysteries, too. Creativity is one of the most important; it's a brick wall that psychology and philosophy have been banging their heads against for a long time. Why should two people who seem roughly equal in competence and intelligence differ dramatically in creativity? It's widely agreed that discovering new analogies is the root (or one root) of creativity. But how are new analogies discovered? We don't know. In his 1983 classic The Modularity of Mind, Jerry Fodor wrote, "It is striking that, while everybody thinks analogical reasoning is an important ingredient in all sorts of cognitive achievements that we prize, nobody knows anything about how it works."

Furthermore, to speak of the mystery of consciousness makes consciousness sound like an all-or-nothing proposition. But how do we explain the different kinds of consciousness we experience? "Ordinary" consciousness is different from your "drifting" state when you are about to fall asleep and you register external events only vaguely. Both are different from hallucination as induced by drugs, mental illness–or life. We hallucinate every day, when we fall asleep and dream.

And how do we explain the difference between a child's consciousness and an adult's? Or the differences between child-style and adult-style thinking? Dream thought is different from drifting or free-associating pre-sleep thought, which is different from "ordinary" thought. We know that children tend to think more concretely than adults. Studies have also suggested that children are better at inventing metaphors. And the keenest of all observers of human thought, the English Romantic poets, suggest that dreaming and waking consciousness are less sharply distinguished for children than for adults. Of his childhood, Wordsworth writes (in one of the most famous short poems in English), "There was a time when meadow, grove, and stream, / The earth, and every common sight, / To me did seem / Apparelled in celestial light, / The glory and the freshness of a dream."

Today's cognitive science and philosophy can't explain any of these mysteries.

The philosophy and science of mind has other striking blind spots, too. AI researchers have been working for years on common sense. Nonetheless, as Fodor writes in The Mind Doesn't Work That Way, "the failure of artificial intelligence to produce successful simulations of routine commonsense cognitive competences is notorious, not to say scandalous." But the scandal is wider than Fodor reports. AI has been working in recent years on emotion, too, but has yet to understand its integral role in thought.

In short, there are many mysteries to explain–and many "cognitive competences" to understand. AI–and software in general–can profit from progress on these problems even if it can't build a conscious computer.

These observations lead me to believe that the "cognitive continuum" (or, equally, the consciousness continuum) is the most important and exciting research topic in cognitive science and philosophy today.

What is the "cognitive continuum"? And why care about it? Before I address these questions, let me note that the cognitive continuum is not even a scientific theory. It is a "prescientific theory"–like "the earth is round."

Anyone might have surmised that the earth is round, on the basis of everyday observations–especially the way distant ships sink gradually below (or rise above) the horizon. No special tools or training were required. That the earth is round leaves many basic phenomena unexplained: the tides, the seasons, climate, and so on. But unless we know that the earth is round, it's hard to progress on any of these problems.

The cognitive continuum is the same kind of theory. I don't claim that it's a millionth as important as the earth's being round. But for me as a student of human thought, it's at least as exciting.

What is this "continuum"? It's a spectrum (the "cognitive spectrum") with infinitely many intermediate points between two endpoints.

When you think, the mind assembles thought trains–sequences of distinct thoughts or memories. (Sometimes one blends into the next, and sometimes our minds go blank. But usually we can describe the train that has just passed.) Sometimes our thought trains are assembled–so it seems–under our conscious, deliberate control. Other times our thoughts wander, and the trains seem to assemble themselves. If we start with these observations and add a few simple facts about "cognitive behavior," a comprehensive picture of thought emerges almost by itself.

Obviously, you must be alert to think analytically. To solve a set of mathematical equations or follow a proof, you need to focus your attention. Your concentration declines as you grow tired over the day.

And your mind is in a strange state just before you fall asleep: a free-associative state in which, rather than one thought following logically from another, each thought "suggests" the next. In this state, you cannot focus: if you decide to think about one thing, you soon find yourself thinking about something else (which was "suggested" by thing one), and then something else, and so on. In fact, cognitive psychologists have discovered that we start to dream before we fall asleep. So the mental state right before sleep is the state of dreaming.

Since we start the day in one state (focused) and finish in another (free-associating, unfocused), the two must be connected. Over the day, focus declines–perhaps steadily, perhaps in a series of oscillations.

Which suggests that there is a continuum of mental states between highest focus and lowest. Your "focus level" is a large factor in determining your mode of thought (or of consciousness) at any moment. This spectrum must stretch from highest-focus thought (best for reasoning or analysis) downward into modes based more on experience or common sense than on abstract reasoning; down further to the relaxed, drifting thought that might accompany gazing out a window; down further to the uncontrolled free association that leads to dreaming and sleep–where the spectrum bottoms out.

Low focus means that your tendency (not necessarily your ability) to free-associate increases. A wide-awake person can free-associate if he tries; an exhausted person has to try hard not to free-associate. At the high end, you concentrate unless you try not to. At the low end, you free-associate unless you try not to.

Notice that the role of associative recollection–in which one thought or memory causes you to recall another–increases as you move down-spectrum. Reasoning works (theoretically) from first principles. But common sense depends on your recalling a familiar idea or technique, or a previous experience. When your mind drifts as you look out a window, one recollection leads to another, and to a third, and onward–but eventually you return to the task at hand. Once you reach the edge of sleep, though, free association goes unchecked. And when you dream, one character or scene transforms itself into another smoothly and illogically–just as one memory transforms itself into another in free association. Dreaming is free association "from the inside."

At the high-focus end, you assemble your thought train as if you were assembling a comic strip or a storyboard. You can step back and "see" many thoughts at once. (To think analytically, you must have your premises, goal, and subgoals in mind.) At the high-focus end, you manipulate your thoughts as if they were objects; you control the train.

At the bottom, it's just the opposite. You don't control your thoughts. You say, "my mind is wandering," as if you and your mind were separate, as if your thoughts were roaming around by themselves.

If at high focus you manipulate your thoughts "from the outside," at low focus you step into each thought as if you were entering a room; you inhabit it. That's what hallucination means. The opposite of high focus, where you control your thoughts, is hallucination–where your thoughts control you. They control your perceived environment and experiences; you "inhabit" each in turn. (We sometimes speak of "surrendering" to sleep; surrendering to your thoughts is the opposite of controlling them.)

At the high-focus end, your "I" is separate from your thought train, observing it critically and controlling it. At the low end, your "I" blends into it (or climbs aboard).

The cognitive continuum is, arguably, the single most important fact about thought. If we accept its existence, we can explain and can model (say, in software) the dynamics of thought. Thought styles change throughout the day as our focus level changes. (Focus levels depend, in turn, partly on personality and intelligence: some people are capable of higher focus; some are more comfortable in higher-focus states.)
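What such a software model might look like, in barest outline: treat focus as a single number that declines over the day and selects the prevailing thought mode. The mode names, thresholds, and decay rate below are invented for illustration; this is a sketch of the idea, not a worked-out model.

```python
# A minimal sketch of a "focus level" parameter steering a simulated mind
# between modes on the cognitive spectrum. All numbers are illustrative.

import random

def thought_mode(focus: float) -> str:
    """Map a focus level in [0, 1] to a thought style on the spectrum."""
    if focus > 0.75:
        return "analytic"        # controlled, step-by-step reasoning
    if focus > 0.5:
        return "common sense"    # experience-based recall
    if focus > 0.25:
        return "drifting"        # loosely linked recollections
    return "dreaming"            # unchecked free association

def day_of_thought(hours: int = 16, focus: float = 0.9, decay: float = 0.05):
    """Simulate focus declining over a waking day, with small oscillations."""
    for hour in range(hours):
        yield hour, round(focus, 2), thought_mode(focus)
        focus = max(0.0, focus - decay + random.uniform(-0.02, 0.02))

for hour, focus, mode in day_of_thought():
    print(f"hour {hour:2d}: focus={focus:.2f} mode={mode}")
```

The point of the sketch is only that one continuously varying parameter can account for qualitatively different-seeming modes of thought.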

It also seems logical to surmise that cognitive maturing increases the focus level you are able to reach and sustain–and therefore increases your ability and tendency to think abstractly.

Even more important: if we accept the existence of the spectrum, an explanation and model of analogy discovery–thus, of creativity–falls into our laps.

As you move down-spectrum, where you inhabit (not observe) your thoughts, you feel them. In other words, as you move down-spectrum, emotions emerge. Dreaming, at the bottom, is emotional.

Emotions are a powerful coding or compression device. A bar code can encapsulate or encode much information. An emotion is a "mental bar code" that encapsulates a memory. But the function E(m)–the "emotion" function that takes a memory m and yields the emotion you in particular feel when you think about m–does not generate unique values. Two different-seeming memories can produce the same emotion.
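In schematic terms, E can be pictured as a lookup from memories to short codes that is deliberately not one-to-one. The memories and the crude three-component code below are illustrative assumptions only, not a claim about how emotions are actually represented.

```python
# A minimal sketch of the essay's E(m): a mapping from memories to "emotional
# bar codes" that is not one-to-one. Everything here is invented for illustration.

EMOTION_CODE = {  # E(m) for a few hypothetical memories
    "a summer's day in childhood": (0.9, 0.3, 0.8),
    "the face of an old friend":   (0.9, 0.3, 0.8),   # same code, different memory
    "a near-miss on the highway":  (-0.7, 0.9, 0.0),
}

def E(memory: str):
    """Return the emotional bar code this particular mind attaches to a memory."""
    return EMOTION_CODE[memory]

# Two different-seeming memories can produce the same emotion:
assert E("a summer's day in childhood") == E("the face of an old friend")
```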

How do we invent analogies? What made Shakespeare write, "Shall I compare thee to a summer's day?" Shakespeare's lady didn't look like a summer's day. (And what does a "summer's day" look like?)

An analogy is a two-element thought train–"a summer's day" followed by the memory of some person. Why should the mind conjure up these two elements in succession? What links them?

Answer: in some cases (perhaps in many), their "emotional bar codes" match–or were sufficiently similar that one recalled the other. The lady and the summer's day made the poet feel the same sort of way.

We experience more emotions than we can name. "Mildly happy," "happy," "ebullient," "elated"; our choice of English words is narrow. But how do you feel when you are about to open your mailbox, expecting a letter that will probably bring good news but might be crushing? When you see a rhinoceros? These emotions have no names. But each "represents" or "encodes" some collection of circumstances. Two experiences that seem to have nothing in common might awaken–in you only–the same emotion. And you might see, accordingly, an analogy that no one else ever saw.

The cognitive spectrum suggests that analogies are created by shared emotion–the linking of two thoughts with shared or similar emotional content.
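A minimal sketch of that linking step, under the same illustrative assumptions: store each memory with an emotional code, and recall the stored memory whose code lies closest to the feeling of the moment. The memories, codes, and numbers are invented.

```python
# A sketch of analogy discovery by shared emotion: a new feeling recalls the
# stored memory with the most similar emotional bar code. Illustrative only.

from math import dist

MEMORIES = {  # memory -> hypothetical emotional bar code
    "a summer's day":            (0.9, 0.3, 0.7),
    "a crowded subway car":      (-0.4, 0.6, 0.1),
    "an exam you nearly failed": (-0.6, 0.8, 0.2),
}

def recall_by_emotion(code, memories=MEMORIES):
    """Return the stored memory whose emotional code is closest to `code`."""
    return min(memories, key=lambda m: dist(memories[m], code))

# Suppose the lady makes the poet feel (0.85, 0.35, 0.75): the closest stored
# code belongs to the summer's day, so the two are linked into an analogy.
print(recall_by_emotion((0.85, 0.35, 0.75)))  # "a summer's day"
```

The matching rule here is the simplest possible one; the essay's claim is only that some such similarity of felt emotion does the linking.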

To build a simulated unconscious mind, we don't need a computer with real emotions; simulated emotions will do. Achieving them will be hard. So will representing memories (with all their complex "multi-media" data).

But if we take the route Turing hinted at back in 1950, if we forget about consciousness and concentrate on the process of thought, there's every reason to believe that we can get AI back on track–and that AI can produce powerful software and show us important things about the human mind.
