Opinions about artificial intelligence tend to fall on a wide spectrum. At one extreme is the utopian view that AI will cause runaway economic growth, accelerate scientific research and perhaps make humans immortal. At the other extreme is the dystopian view that AI will cause abrupt, widespread job losses and economic disruption, and perhaps go rogue and wipe out humanity. So a paper published earlier this year by Arvind Narayanan and Sayash Kapoor, two computer scientists at Princeton University, is notable for the unfashionably sober manner in which it treats AI: as "normal technology". The work has prompted much debate among AI researchers and economists.
Both utopian and dystopian views, the authors write, treat AI as an unprecedented intelligence with agency to determine its own future, meaning analogies with previous inventions fail. Messrs Narayanan and Kapoor reject this, and map out what they see as a more likely scenario: that AI will follow the trajectory of past technological revolutions. They then consider what this would mean for AI adoption, jobs, risks and policy. "Viewing AI as normal technology leads to fundamentally different conclusions about mitigations compared to viewing AI as being humanlike," they note.
The pace of AI adoption, the authors argue, has been slower than that of innovation. Many people use AI tools occasionally, but at an intensity in America (in hours of usage per day) that is still low as a fraction of overall working hours. For adoption to lag behind innovation is not surprising, because it takes time for people and companies to adapt habits and workflows to new technologies. Adoption is also hampered by the fact that much knowledge is tacit and organisation-specific, data may not be in the right format and its use may be constrained by regulation. Similar constraints were in place a century ago, when factories were electrified: doing so took decades, because it needed a total rethink of floor layouts, processes and organisational structures.
Moreover, constraints on the pace of AI innovation itself may be more significant than they seem, argues the paper, because many applications (such as drug development, self-driving cars or even just booking a holiday) require extensive real-world testing. This can be slow and costly, particularly in safety-critical fields that are tightly regulated. As a result, economic impacts "are likely to be gradual", the authors conclude, rather than involving the abrupt automation of a big chunk of the economy.
Even a slow spread of AI would change the nature of work. As more tasks become amenable to automation, "an increasing percentage of human jobs and tasks will be related to AI control." There is an analogy here with the Industrial Revolution, in which workers went from performing manual tasks, such as weaving, to supervising machines doing those tasks—and handling situations machines could not (like intervening when they get stuck). Rather than AI stealing jobs wholesale, jobs might increasingly involve configuring, monitoring and controlling AI-based systems. Without human oversight, Messrs Narayanan and Kapoor speculate, AI may be "too error-prone to make business sense".
That, in turn, has implications for AI risk. Strikingly, the authors criticise the emphasis on "alignment" of AI models, meaning efforts to ensure outputs align with their human creators' goals. Whether a given output is harmful often depends on context that humans may understand, but the model lacks, they argue. A model asked to write a persuasive email, for example, cannot tell if that message will be used for legitimate marketing or nefarious phishing. Trying to make an AI model that cannot be misused "is like trying to make a computer that cannot be used for bad things", the authors write. Instead, they suggest, defences against the misuse of AI, for example to create computer malware or bioweapons, should focus further downstream, by strengthening existing protective measures in cyber-security and biosafety. This also increases resilience to forms of these threats not involving AI.
Such thinking suggests a range of policies to reduce risk and increase resilience. These include whistleblower protection (as seen in many other industries), compulsory disclosure of AI usage (as happens with data protection), registration to track deployment (as with cars and drones) and mandatory incident-reporting (as with cyber-attacks). In sum, the paper concludes that lessons from previous technologies can be fruitfully applied to AI—and treating the technology as "normal" leads to more sensible policies than treating it as imminent superintelligence.