[The Economist] Is Xi an AI doomer?

English reading notes on The Economist

China’s elite is split over artificial intelligence.

  1. be split over: to be divided over (an issue); A disagrees internally about B.
IN JULY OF last year Henry Kissinger travelled to Beijing for the final time before his death. Among the messages he delivered to China’s ruler, Xi, was a warning about the catastrophic risks of artificial intelligence (AI). Since then American tech bosses and ex-government officials have quietly met with their Chinese counterparts in a series of informal meetings dubbed the Kissinger Dialogues. The conversations have focused in part on how to protect the world from the dangers of AI. On August 27th American and Chinese officials are expected to take up the subject (along with many others) when America’s national security advisor, Jake Sullivan, travels to Beijing.

  1. catastrophic risks: risks of disastrous, large-scale harm.
  2. informal meeting: a meeting held outside official channels.
  3. dub: to give (someone or something) a name, nickname or label.
  4. take up the subject: to begin discussing the topic.
Many in the tech world think that AI will come to match or surpass the cognitive abilities of humans. Some developers predict that artificial general intelligence (AGI) models will one day be able to learn on their own, which could make them uncontrollable. Those who believe that, left unchecked, AI poses an existential risk to humanity are called “doomers”. They tend to advocate stricter regulations. On the other side are “accelerationists”, who stress AI’s potential to benefit humanity.

  1. existential risk: a risk that threatens the very survival of humanity.
  2. doomer: someone who believes AI poses an existential threat.
  3. accelerationist: someone who favours speeding up AI development.
Western accelerationists often argue that competition with Chinese developers, who are uninhibited by strong safeguards, is so fierce that the West cannot afford to slow down. The implication is that the debate in China is one-sided, with accelerationists having the most say over the regulatory environment. In fact, China has its own AI doomers—and they are increasingly influential.

  1. be uninhibited by: to be unrestrained by.
  2. safeguard: a protective measure.
  3. one-sided: lopsided; dominated by one side.
  4. have the most say over: to hold the greatest influence over.
Until recently China’s regulators have focused on the risk of rogue chatbots saying politically incorrect things about the Communist Party, rather than that of cutting-edge models slipping out of human control. In 2023 the government required developers to register their large language models. Algorithms are regularly marked on how well they comply with socialist values and whether they might “subvert state power”. The rules are also meant to prevent discrimination and leaks of customer data. But, in general, AI-safety regulations are light. Some of China’s more onerous restrictions were rescinded last year.

  1. cutting-edge: most advanced; at the forefront.
  2. slip out of: to escape from.
  3. subvert: to undermine or overthrow.
  4. be meant to: to be intended to.
  5. onerous: burdensome; imposing heavy obligations or legal liability.
  6. rescind: to revoke; to repeal.
China’s accelerationists want to keep things this way. Zhu Songchun, a party adviser and director of a state-backed programme to develop AGI, has argued that AI development is as important as the “Two Bombs, One Satellite” project, a Mao-era push to produce long-range nuclear weapons. Earlier this year Yin Hejun, the minister of science and technology, used an old party slogan to press for faster progress, writing that development, including in the field of AI, was China’s greatest source of security. Some economic policymakers warn that an over-zealous pursuit of safety will harm China’s competitiveness.

  1. over-zealous: excessively eager or enthusiastic.
But the accelerationists are getting pushback from a clique of elite scientists with the Communist Party’s ear. Most prominent among them is Andrew Chi-Chih Yao, the only Chinese person to have won the Turing award for advances in computer science. In July Mr Yao said AI poses a greater existential risk to humans than nuclear or biological weapons. Zhang Ya-Qin, the former president of Baidu, a Chinese tech giant, and Xue Lan, the chair of the state’s expert committee on AI governance, also reckon that AI may threaten the human race. Yi Zeng of the Chinese Academy of Sciences believes that AGI models will eventually see humans as humans see ants.

  1. a clique of: a small, close-knit group of.
  2. reckon: to think; to suppose.
The influence of such arguments is increasingly on display. In March an international panel of experts meeting in Beijing called on researchers to kill models that appear to seek power or show signs of self-replication or deceit. A short time later the risks posed by AI, and how to control them, became a subject of study sessions for party leaders. A state body that funds scientific research has begun offering grants to researchers who study how to align AI with human values. State labs are doing increasingly advanced work in this domain. Private firms have been less active, but more of them have at least begun paying lip service to the risks of AI.

  1. be increasingly on display: to be ever more visible.
  2. deceit: deception; dishonesty.
  3. pay lip service to: to voice concern about something without acting on it.
The debate over how to approach the technology has led to a turf war between China’s regulators. The industry ministry has called attention to safety concerns, telling researchers to test models for threats to humans. But most of China’s securocrats see falling behind America as a bigger risk. The science ministry and state economic planners also favour faster development. A national AI law slated for this year quietly fell off the government’s work agenda in recent months because of these disagreements. The impasse was made plain on July 11th, when the official responsible for writing the AI law cautioned against prioritising either safety or expediency.

  1. turf war: a fight over territory or jurisdiction.
  2. securocrat: a security official or bureaucrat.
  3. slate: to schedule; to plan.
  4. impasse: a deadlock; a dead end.
  5. prioritise: to give priority to.
  6. expediency: convenience or practical advantage, as opposed to principle.
The decision will ultimately come down to what Mr Xi thinks. In June he sent a letter to Mr Yao, praising his work on AI. In July, at a meeting of the party’s central committee called the “third plenum”, Mr Xi sent his clearest signal yet that he takes the doomers’ concerns seriously. The official report from the plenum listed AI risks alongside other big concerns, such as biohazards and natural disasters. For the first time it called for monitoring AI safety, a reference to the technology’s potential to endanger humans. The report may lead to new restrictions on AI-research activities.

  1. come down to: to depend ultimately on.
  2. the party’s central committee: the Communist Party’s top decision-making body.
  3. third plenum: the third plenary session of the party’s central committee.
  4. alongside: together with; side by side with.
  5. biohazards: biological hazards.
More clues to Mr Xi’s thinking come from the study guide prepared for party cadres, which he is said to have personally edited. China should “abandon uninhibited growth that comes at the cost of sacrificing safety”, says the guide. Since AI will determine “the fate of all mankind”, it must always be controllable, it goes on. The document calls for regulation to be pre-emptive rather than reactive.

  1. party cadre: a Communist Party official.
  2. uninhibited growth: unrestrained growth.
  3. at the cost of: at the expense of.
  4. pre-emptive: acting in advance to prevent a problem.
  5. reactive: responding only after a problem arises.
Safety gurus say that what matters is how these instructions are implemented. China will probably create an AI-safety institute to observe cutting-edge research, as America and Britain have done, says Matt Sheehan of the Carnegie Endowment for International Peace, a think-tank in Washington. Which department would oversee such an institute is an open question. For now Chinese officials are emphasising the need to share the responsibility of regulating AI and to improve co-ordination.

  1. guru: an expert or leading authority in a field.
  2. think-tank: a research institute that studies and advises on policy.
  3. oversee: to supervise.
  4. share the responsibility of: to jointly bear responsibility for.
If China does move ahead with efforts to restrict the most advanced AI research and development it will have gone further than any other big country. Mr Xi says he wants to “strengthen the governance of artificial-intelligence rules within the framework of the United Nations”. To do that China will have to work more closely with others. But America and its friends are still considering the issue. The debate between doomers and accelerationists, in China and elsewhere, is far from over.

  1. be far from over: to be nowhere near finished.
