Wednesday, February 28, 2018

Is Hawking Wrong? Pinker Sharply Rebuts the AI Threat Thesis

2018-02-27 | By Steven Pinker | Translated by Wang Pei






In his new book, Steven Pinker
sharply rebuts the AI threat thesis

Author: Steven Pinker
Translator: Wang Pei

Original title: We're told to fear robots. But why do we think they'll turn on us?
Published in POPULAR SCIENCE, 2018/02/14


The robot uprising is a myth.

Despite the gory headlines, objective data show that people all over the world are, on average, living longer, contracting fewer diseases, eating more food, spending more time in school, getting access to more culture, and becoming less likely to be killed in a war, a murder, or an accident. Yet despair springs eternal. When pessimists are forced to concede that life has been getting better and better for more and more people, they have a retort at the ready. We are cheerfully hurtling toward a catastrophe, they say, like the man who fell off the roof and said, "So far so good" as he passed each floor. Or we are playing Russian roulette, and the deadly odds are bound to catch up to us. Or we will be blindsided by a black swan, a four-sigma event far along the tail of the statistical distribution of hazards, with low odds but calamitous harm.
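To make "four-sigma" concrete, here is a minimal sketch in Python of the tail probability of an event four standard deviations out, assuming a normal distribution purely for illustration; the helper name normal_tail is ours, not the essay's:

```python
from math import erf, sqrt

def normal_tail(sigmas: float) -> float:
    """One-sided tail probability of a standard normal beyond `sigmas`."""
    return 0.5 * (1.0 - erf(sigmas / sqrt(2.0)))

p = normal_tail(4.0)
print(f"P(X > 4 sigma) = {p:.2e}")        # about 3.17e-05
print(f"roughly 1 in {1 / p:,.0f} trials")  # about 1 in 31,600
```

Low odds indeed, though real hazard distributions are often heavier-tailed than the normal curve, which is the point of the black-swan metaphor.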

For half a century, the four horsemen of the modern apocalypse have been overpopulation, resource shortages, pollution, and nuclear war. They have recently been joined by a cavalry of more exotic knights: nanobots that will engulf us, robots that will enslave us, artificial intelligence that will turn us into raw materials, and Bulgarian teenagers who will brew a genocidal virus or take down the internet from their bedrooms. [Translator's note: In 1992, a Bulgarian virus writer known as Dark Avenger released a "mutation engine," a tool for generating polymorphic viruses that are far harder to detect and remove. He made no secret that his aim was to destroy other people's work.]

The sentinels for the familiar horsemen tended to be romantics and Luddites. [Translator's note: Luddism arose in the early Industrial Revolution and refers to blind opposition to new technologies and new things.] But those who warn of the higher-tech dangers are often scientists and technologists who have deployed their ingenuity to identify ever more ways in which the world will soon end. In 2003, astrophysicist Martin Rees published a book entitled Our Final Hour, in which he warned that "humankind is potentially the maker of its own demise," and laid out some dozen ways in which we have "endangered the future of the entire universe." For example, experiments in particle colliders could create a black hole that would annihilate Earth, or a "strangelet" of compressed quarks that would cause all matter in the cosmos to bind to it and disappear. Rees tapped a rich vein of catastrophism. The book's Amazon page notes, "Customers who viewed this item also viewed Global Catastrophic Risks; Our Final Invention: Artificial Intelligence and the End of the Human Era; The End: What Science and Religion Tell Us About the Apocalypse; and World War Z: An Oral History of the Zombie War." Techno-philanthropists have bankrolled research institutes dedicated to discovering new existential threats and figuring out how to save the world from them, including the Future of Humanity Institute, the Future of Life Institute, the Center for the Study of Existential Risk, and the Global Catastrophic Risk Institute.

How should we think about the existential threats that lurk behind our incremental progress? No one can prophesy that a cataclysm will never happen, and this writing contains no such assurance. Climate change and nuclear war in particular are serious global challenges. Though they are unsolved, they are solvable, and road maps have been laid out for long-term decarbonization and denuclearization. These processes are well underway. The world has been emitting less carbon dioxide per dollar of gross domestic product, and the world's nuclear arsenal has been reduced by 85 percent. Of course, to avert possible catastrophes, they must be pushed all the way to zero.

On top of these real challenges, though, are scenarios that are more dubious. Several technology commentators have speculated about a danger that we will be subjugated, intentionally or accidentally, by artificial intelligence (AI), a disaster sometimes called the Robopocalypse and commonly illustrated with stills from the Terminator movies. Several smart people take it seriously (if a bit hypocritically). Elon Musk, whose company makes artificially intelligent self-driving cars, called the technology "more dangerous than nukes." Stephen Hawking, speaking through his artificially intelligent synthesizer, warned that it could "spell the end of the human race." But among the smart people who aren't losing sleep are most experts in artificial intelligence and most experts in human intelligence.

"机器末日论"建立在对智力概念的错误理解之上,而这种理解要归功于"伟大的存在之链"(the Great Chain of Being)(译注:是中世纪从古希腊哲学中衍生出来的一种观念,认为自然世界和人类社会是有等级和优劣秩序的)和尼采主义者的权力意志论,而与现代科学无关。这种误解把智力看成是一种威力无穷的灵丹妙药,每个生物体都或多或少地拥有它。

Humans have more of it than animals, and an artificially intelligent computer or robot of the future ("an AI," in the new count-noun usage) will have more of it than humans. Since we humans have used our moderate endowment to domesticate or exterminate less well-endowed animals (and since technologically advanced societies have enslaved or annihilated technologically primitive ones), it follows that a super-smart AI would do the same to us. And since an AI will think millions of times faster than we do, and use its superintelligence to recursively improve its superintelligence (a scenario sometimes called "foom," after the comic-book sound effect) [Translator's note: that is, a sudden, runaway intelligence explosion], from the instant it is turned on, we will be powerless to stop it.

But the scenario makes about as much sense as the worry that since jet planes have surpassed the flying ability of eagles, someday they will swoop out of the sky and seize our cattle. The first fallacy is a confusion of intelligence with motivation: of beliefs with desires, inferences with goals, thinking with wanting. Even if we did invent superhumanly intelligent robots, why would they want to enslave their masters or take over the world? Intelligence is the ability to deploy novel means to attain a goal. But the goals are extraneous to the intelligence: Being smart is not the same as wanting something. It just so happens that the intelligence in one system, Homo sapiens, is a product of Darwinian natural selection, an inherently competitive process. In the brains of that species, reasoning comes bundled (to varying degrees in different specimens) with goals such as dominating rivals and amassing resources. But it's a mistake to confuse a circuit in the limbic brain of a certain species of primate with the very nature of intelligence. An artificially intelligent system that was designed rather than evolved could just as easily think like shmoos, the blobby altruists in Al Capp's comic strip Li'l Abner, who deploy their considerable ingenuity to barbecue themselves for the benefit of human eaters. There is no law of complex systems that says intelligent agents must turn into ruthless conquistadors. [Translator's note: complex-systems theory is the scientific study of complex systems and phenomena; AI, stock markets, weather, and ecosystems are all complex systems.]

The second fallacy is to think of intelligence as a boundless continuum of potency, a miraculous elixir with the power to solve any problem, attain any goal. The fallacy leads to nonsensical questions like when an AI will "exceed human-level intelligence," and to the image of an ultimate "Artificial General Intelligence" (AGI) with God-like omniscience and omnipotence. Intelligence is a contraption of gadgets: software modules that acquire, or are programmed with, knowledge of how to pursue various goals in various domains. People are equipped to find food, win friends and influence people, charm prospective mates, bring up children, move around in the world, and pursue other human obsessions and pastimes. Computers may be programmed to take on some of these problems (like recognizing faces), not to bother with others (like charming mates), and to take on still other problems that humans can't solve (like simulating the climate or sorting millions of accounting records).

The problems are different, and the kinds of knowledge needed to solve them are different. Unlike Laplace's demon, the mythical being that knows the location and momentum of every particle in the universe and feeds them into equations for physical laws to calculate the state of everything at any time in the future, a real-life knower has to acquire information about the messy world of objects and people by engaging with it one domain at a time. Understanding does not obey Moore's Law: Knowledge is acquired by formulating explanations and testing them against reality, not by running an algorithm faster and faster. Devouring the information on the internet will not confer omniscience either: Big data is still finite data, and the universe of knowledge is infinite.

For these reasons, many AI researchers are annoyed by the latest round of hype (the perennial bane of AI), which has misled observers into thinking that Artificial General Intelligence is just around the corner. As far as I know, there are no projects to build an AGI, not just because it would be commercially dubious, but also because the concept is barely coherent. The 2010s have, to be sure, brought us systems that can drive cars, caption photographs, recognize speech, and beat humans at Jeopardy! [Translator's note: an American TV quiz show], Go, and Atari computer games [Translator's note: a class of video games popular in the 1980s].

But the advances have not come from a better understanding of the workings of intelligence but from the brute-force power of faster chips and bigger data, which allow the programs to be trained on millions of examples and generalize to similar new ones. Each system is an idiot savant, with little ability to leap to problems it was not set up to solve, and a brittle mastery of those it was. A photo-captioning program labels an impending plane crash "An airplane is parked on the tarmac"; a game-playing program is flummoxed by the slightest change in the scoring rules. Though the programs will surely get better, there are no signs of foom. Nor have any of these programs made a move toward taking over the lab or enslaving their programmers.

Even if an AGI tried to exercise a will to power, without the cooperation of humans it would remain an impotent brain in a vat. [Translator's note: the "brain in a vat" is the philosopher Hilary Putnam's thought experiment: a brain is removed from the body and suspended in a vat of nutrients, its nerve endings wired to a computer that generates normal conscious experience, while the body can no longer act.] The computer scientist Ramez Naam deflates the bubbles surrounding foom, a technological singularity, and exponential self-improvement:

Imagine you are a super-intelligent AI running on some sort of microprocessor (or perhaps, millions of such microprocessors). In an instant, you come up with a design for an even faster, more powerful microprocessor you can run on. Now... drat! You have to actually manufacture those microprocessors. And those fabrication plants take tremendous energy, they take the input of materials imported from all around the world, they take highly controlled internal environments that require airlocks, filters, and all sorts of specialized equipment to maintain, and so on. All of this takes time and energy to acquire, transport, integrate, build housing for, build power plants for, test, and manufacture. The real world has gotten in the way of your upward spiral of self-transcendence.

The real world gets in the way of many digital apocalypses. When HAL gets uppity, Dave disables it with a screwdriver, leaving it pathetically singing "A Bicycle Built for Two" to itself. [Translator's note: HAL is the shipboard artificial intelligence in 2001: A Space Odyssey; Dave is the astronaut who shuts it down.] Of course, one can always imagine a Doomsday Computer that is malevolent, universally empowered, always on, and tamper-proof. The way to deal with this threat is straightforward: Don't build one.

As the prospect of evil robots started to seem too kitschy to take seriously, a new digital apocalypse was spotted by the existential guardians. This storyline is based not on Frankenstein [Translator's note: the scientist who tried to create a man] or the Golem [Translator's note: in Hebrew legend, an animated figure that can act but cannot think] but on the Genie granting us three wishes, the third of which is needed to undo the first two [Translator's note: an Arabian tale], and on King Midas ruing his ability to turn everything he touches into gold, including his food and his family. The danger, sometimes called the Value Alignment Problem, is that we might give an AI a goal, and then helplessly stand by as it relentlessly and literal-mindedly implemented its interpretation of that goal, the rest of our interests be damned. If we gave an AI the goal of maintaining the water level behind a dam, it might flood a town, not caring about the people who drowned. If we gave it the goal of making paper clips, it might turn all the matter in the reachable universe into paper clips, including our possessions and bodies. If we asked it to maximize human happiness, it might implant us all with intravenous dopamine drips, or rewire our brains so we were happiest sitting in jars, or, if it had been trained on the concept of happiness with pictures of smiling faces, tile the galaxy with trillions of nanoscopic pictures of smiley faces.

I am not making these up. These are the scenarios that supposedly illustrate the existential threat to the human species of advanced artificial intelligence. They are, fortunately, self-refuting. They depend on the premises that:

1) humans are so gifted that they can design an omniscient and omnipotent AI, yet so moronic that they would give it control of the universe without testing how it works; and

2) the AI would be so brilliant that it could figure out how to transmute elements and rewire brains, yet so imbecilic that it would wreak havoc based on elementary blunders of misunderstanding.

The ability to choose an action that best satisfies conflicting goals is not an add-on to intelligence that engineers might slap themselves in the forehead for forgetting to install; it is intelligence. So is the ability to interpret the intentions of a language user in context. Only on a television comedy like Get Smart does a robot respond to "Grab the waiter" by hefting the maitre d' over his head, or "Kill the light" by pulling out a pistol and shooting it.

When we put aside fantasies like foom, digital megalomania, instant omniscience, and perfect control of every molecule in the universe, artificial intelligence is like any other technology. It is developed incrementally, designed to satisfy multiple conditions, tested before it is implemented, and constantly tweaked for efficacy and safety. As AI expert Stuart Russell puts it: "No one in civil engineering talks about 'building bridges that don't fall down.' They just call it 'building bridges.'" Likewise, he notes, AI that is beneficial rather than dangerous is simply AI.

Artificial intelligence, to be sure, poses the more mundane challenge of what to do about the people whose jobs are eliminated by automation. But the jobs won't be eliminated that quickly. The observation of a 1965 report from NASA still holds: "Man is the lowest-cost, 150-pound, nonlinear, all-purpose computer system that can be mass-produced by unskilled labor." Driving a car is an easier engineering problem than unloading a dishwasher, running an errand, or changing a diaper, and at the time of this writing, we're still not ready to loose self-driving cars on city streets. Until the day battalions of robots are inoculating children and building schools in the developing world, or for that matter, building infrastructure and caring for the aged in ours, there will be plenty of work to be done. The same kind of ingenuity that has been applied to the design of software and robots could be applied to the design of government and private-sector policies that match idle hands with undone work.

Adapted from ENLIGHTENMENT NOW: The Case for Reason, Science, Humanism, and Progress by Steven Pinker, published by Viking, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright © 2018 by Steven Pinker. First published in the Spring 2018 Intelligence issue of Popular Science.
