
Roman Yampolskiy出生地与早期生活:
- 他于苏联时期出生(多数公开资料称其出生于拉脱维亚的里加)。
- 他的家庭是犹太人。在当时的苏联环境下,作为犹太裔,他曾在学校里经历过反犹主义的困扰。这段经历可能影响了他日后对系统性和存在性风险的关注。

移民与国籍:
- 在苏联解体前后,他与家人以犹太难民的身份移民到了美国。
- 他在美国完成了高等教育和职业生涯,目前在美国路易斯维尔大学任教,并已获得终身教职。
- 因此,他通常被介绍为出生于苏联、后入籍美国的计算机科学家,现为美国公民。
Host:You’ve been working on AI safety for two decades at least. Yeah. 你从事AI安全研究至少二十年了。是的。
Dr. Roman Yampolskiy:I was convinced we can make safe AI, 我一度相信我们能制造出安全的AI,
——>> but the more I looked at it, 但我研究得越深入,
——>> the more I realized it’s not something we can actually do. 就越意识到这不是我们实际能做到的事情。
Host:You have made a series of predictions 你做过一系列预测
——>> about a variety of different dates. 关于不同的日期。
——>> So, what is your prediction for 2027? 那么,你对2027年的预测是什么?
Host:Dr. Roman Yampolskiy is a globally recognized voice on AI safety, 扬波利斯基博士是全球公认的AI安全领域的声音,
——>> and associate professor of computer science. 也是计算机科学副教授。
——>> He educates people on the terrifying truth of AI, 他教育人们关于AI的可怕真相,
——>> and what we need to do to save humanity. 以及我们需要做什么来拯救人类。
Dr. Roman Yampolskiy:In two years, the capability to replace 在两年内,取代大多数人类
——>> most humans and most occupations will come very quickly, 和大多数职业的能力将非常迅速地到来,
——>> I mean in five years 我的意思是,在五年内
——>> we’re looking at a world 我们将面对一个世界
——>> where we have levels of unemployment we never seen before. 那里有我们从未见过的高失业率水平。
——>> I’m not talking about 10 percent, 我指的不是10%,
——>> but 99 percent. 而是99%。
——>> and that’s without super intelligence. 而且这还没有超级智能。
——>> A system smarter than all humans in all domains, 一个在所有领域都比所有人类更聪明的系统,
——>> so it would be better than us at making new AI. 所以它在制造新AI方面会比我们更擅长。
——>> But it’s worse than that, we don’t know how to make them safe, 但更糟的是,我们不知道如何使它们安全,
——>> and yet we still have the smartest people in the world 然而我们仍然有世界上最聪明的人
——>> competing to win the race to super intelligence. 在竞争赢得超级智能的竞赛。
Host:But what do you make of people like 但你对像
If you say this phrase to someone, it means you’re asking their opinion, impression, or judgment about a person, especially when you are still trying to figure them out.
——>> Sam Altman’s journey with AI? 山姆·奥特曼这样的人在AI领域的旅程有什么看法?
Dr. Roman Yampolskiy:So a decade ago, we published guardrails for how to do AI right, 所以十年前,我们发布了关于如何正确发展AI的安全准则(护栏),
——>> they violated every single one and he’s gambling eight billion lives 他们违反了每一条规则,而且他正在用八十亿人的生命赌博
——>> on getting richer and more powerful. 以变得更富有和更强大。
——>> So I guess some people want to go to Mars 所以我猜有些人想去火星
——>> others want to control the universe. 其他人想控制宇宙。
——>> But it doesn’t matter who builds it, 但谁建造它并不重要,
——>> the moment you switch to superintelligence, 一旦你切换到超级智能,
——>> we will most likely regret it terribly. 我们很可能会非常后悔。
Host:And then by 2045, 然后到2045年,
——>> now this is where it gets interesting. 现在这里变得有趣了。
——>> Dr. Roman Yampolskiy, let’s talk about simulation theory. 扬波利斯基博士,让我们谈谈模拟理论。
Simulation theory:it is the philosophical and scientific hypothesis that reality as we know it, including the Earth, the universe, and everything within it, might actually be an artificial simulation—most likely a highly sophisticated computer program created by an advanced civilization. (一种哲学和科学假说,认为我们所认知的现实——包括地球、宇宙以及其中的一切——可能实际上是一个人工模拟,很可能是由某个高度先进的文明创造的超级计算机程序。这个观点暗示,我们所感知的“真实”可能不过是虚拟构造,就像电子游戏里的角色,而我们的意识也处于模拟之中。)
Dr. Roman Yampolskiy: I think we are in one, 我认为我们就在一个模拟中,
——>> and there is a lot of agreement on this 而且对此有很多共识
——>> and this is what you should be doing in it. 而这是你应该在模拟中做的事情。
——>> So we don’t shut it down. First… 所以我们不关闭它。首先…
Host I see messages all the time in the comments section 我一直在评论区看到消息
——>> that some of you didn’t realize you didn’t subscribe, 说你们有些人没意识到自己没有订阅,
——>> So if you could do me a favor and double check 所以如果你们能帮我个忙,再仔细确认一下
——>> if you’re a subscriber to this channel, 是否订阅了这个频道,
——>> that would be tremendously appreciated. 我将不胜感激。
——>> It’s the simple, it’s the free thing that 这是一个简单、免费的事情,
——>> anybody that watches the show frequently can do 任何经常观看节目的人都可以做
——>> to help us here to keep everything going in this show in the trajectory [trə’dʒektəri] it’s on. 以帮助我们的节目保持当前轨迹,继续运行下去。
——>> So please do double check if you subscribed and thank you so much, 所以请务必检查一下是否已订阅,非常感谢!
——>> because in a strange way you are, 因为以一种奇妙的方式,你们是
——>> you’re part of our history and you’re on this journey with us 我们历史的一部分,你们与我们同行于此旅程
——>> and I appreciate you for that, so, yeah, thank you! 我为此感谢你们,所以,是的,谢谢!
Host Dr. Roman Yampolskiy 扬波利斯基博士
——>> What is the mission that you’re currently on? 你现在的使命是什么?
——>> Because it’s quite clear to me 因为我很清楚
——>> that you are on a bit of a mission 你肩负着某种使命
——>> and you’ve been on this mission 而且你从事这个使命
——>> for I think the best part of two decades at least, 至少二十年了,
Dr. Roman Yampolskiy:I’m hoping to make sure that super intelligence 我希望确保我们正在创造的超级智能
——>> we are creating right now does not kill everyone. 不会杀死所有人。(语不惊人死不休:Leave no stone unturned to draw sighs of awe. To turn every phrase until it hits home, and settle for nothing less than jaw-dropping )
Host:Give me some… give me some context on that statement, 请给我一些……给我一些这个声明的背景,
——>> because it’s quite a shocking statement. 因为这是一个相当惊人的声明。
Dr. Roman Yampolskiy:Sure so, in the last decade, 当然,在过去的十年里,
——>> we actually figured out 我们实际上已经找到了
——>> how to make artificial intelligence better. 让AI更强大的方法。
——>> Turns out, 事实证明,
——>> if you add more compute, more data 只要你增加更多的计算能力和数据
——>> it just kind of becomes smarter and so now 它就会变得更聪明,所以现在
——>> the smartest people in the world 世界上最聪明的人
——>> billions of dollars 数十亿美元
——>> all going to create the best possible super intelligence we can. 都在竞相创造我们能实现的最好的超级智能。
——>> Unfortunately, while we know how to make those systems much more capable, 不幸的是,虽然我们知道如何让这些系统能力更强,
——>> we don’t know how to make them safe. 但我们不知道如何让它们安全。
——>> How to make sure they don’t do something we will regret, 如何确保它们不会做出让我们后悔的事情,
——>> and that’s the state-of-the-art right now. 这就是目前人工智能发展的最高水平。
State-of-the-art:it is used to describe something that is the most advanced, modern, or sophisticated of its kind, usually thanks to the latest technology or innovation. (近义词:Cutting-edge)
The hospital is equipped with state-of-the-art medical technology.
这家医院配备了最先进的医疗技术。
We use state-of-the-art software to ensure data security.
我们使用最尖端的软件来确保数据安全。
——>> Then we look at just prediction markets, 然后我们看看预测市场,
——>> how soon will we get to advanced AI. 关于我们何时能实现高级AI。
——>> The timelines are very short, a couple of years, 时间线非常短,只有几年,
——>> Two, three years according to prediction markets, 根据预测市场的说法,只有两三年,
——>> according to CEOs of top labs and at the same time, 顶级实验室的CEO们也这么认为,与此同时,
——>> we don’t know how to make sure 我们不知道如何确保
——>> that those systems are aligned with our preferences. 这些系统与我们的偏好保持一致。(In accordance with, be coordinated with)
——>> So we’re creating this alien intelligence, 所以我们正在创造一种外星智慧,
——>> if aliens were coming to earth, 如果外星人即将来到地球,
——>> you had three years to prepare; you would be panicking right now. 而你只有三年时间准备;你现在肯定会感到恐慌。
——>> But most people don’t even realize this is happening, 但大多数人甚至没有意识到这件事正在发生,
Host:So, some of the counter arguments might be… 所以,一些反对论点可能是……
——>> well, these are very very smart people, 嗯,这些人非常非常聪明,
——>> these are very big companies with lots of money, 这些是非常大的公司,有很多钱,
——>> they have an obligation and a moral obligation 他们有义务和道德义务
——>> but also just a legal obligation to make sure they do no harm. 也有法律义务确保他们不造成伤害。
——>> So I’m sure it’ll be fine. 所以我确信它会没事的。
Dr. Roman Yampolskiy:The only obligation they have is to make money for the investors, 他们唯一的义务是为投资者赚钱,
——>> that’s the legal obligation they have. 这是他们的法律义务。
——>> They have no moral or ethical obligations, 他们没有道德或伦理义务,
——>> also, according to them, 而且,据他们说,
——>> they don’t know how to do it yet, 他们还不知道怎么做,
——>> the state-of-the-art answers are:we’ll figure it out, 最先进的回答是我们会想办法解决,
The speaker is mocking the fact that even our top-level solutions sound more like hand-waving than real solutions. (在这里,“state-of-the-art” 并非强调真实的“尖端科技”,而是带有 讽刺意味,指 所谓的“最先进”或“最权威”的答案,但这些答案听上去并不靠谱。)
——>> when we get there, or AI will help us control more advanced AI. 等到了那一步再想办法,或者AI会帮助我们控制更先进的AI。
——>> That’s insane! 这太疯狂了!
Host:In terms of probability, what do you think 在概率方面,你认为
——>> is the probability that something goes catastrophically [ˌkætə’strɒfɪkli] wrong? 发生灾难性错误的概率是多少?
Dr. Roman Yampolskiy:So nobody can tell you for sure what’s going to happen, 所以没人能确切告诉你会发生什么,
——>> but if you’re not in charge, 但如果你不负责,
——>> you’re not controlling it. 不控制它。
——>> you will not get outcomes you want. 你就不会得到你想要的结果。
——>> The space of possibilities is almost infinite, 可能性的空间几乎是无限的,
——>> the space of outcomes we will like is tiny. 我们喜欢的结果的空间很小。
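注:A minimal Python sketch of the intuition behind this last point (the numbers are assumptions for illustration only, not real estimates): if an outcome has many independent aspects and each one has to land in a humanly-acceptable range, the chance that a random outcome is acceptable shrinks exponentially with the number of aspects. (一个极简示例:如果一个结果由许多相互独立的方面组成,而每个方面都必须落在人类可接受的范围内,那么随机结果恰好可接受的概率会随方面数量呈指数级缩小。)
# Toy model (assumed numbers, for illustration only):
# an outcome has n independent features, and each feature happens to be
# "acceptable to humans" with probability p, so
# P(random outcome is acceptable) = p ** n, which vanishes quickly.
p = 0.5
for n in (10, 50, 100):
    print(f"n={n:3d} independent features -> P(acceptable) = {p ** n:.3e}")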
Host:And, who are you 那么,你是谁
——>> and how long have you been working on this? 你从事这项工作多久了?
Dr. Roman Yampolskiy:I’m a computer scientist by training, 我是一名受过专业训练的计算机科学家,
——>> I have a PhD in computer science and engineering 我拥有计算机科学与工程博士学位
——>> I probably started work in AI safety 我大概在15年前开始从事AI安全工作
——>> mildly [‘maɪldli] defined as control of bots at the time, 15 years ago. 当时宽泛地定义为对机器人的控制,15年前。
注:这里的 ”mildly defined” 用得比较随意,相当于 ”loosely defined”(宽松地、非严格地定义),指当时对 ”AI safety” 的界定还很初步、范围很窄,而不是字面意义上的“轻微地”。
Host:15 years ago, so you’ve been working on AI safety before it was cool, 15年前,所以你早在AI安全成为热门话题之前就开始研究了,
Dr. Roman Yampolskiy:before the term existed, I coined the term AI safety. 甚至在这个词出现之前,我创造了‘AI安全’这个术语。
Host:So you’re the founder of the term AI safety? 所以你是‘AI安全’这个术语的创始人?
Dr. Roman Yampolskiy:The term yes, not the field, 是这个术语,但不是这个领域,
——>> there are other people who did brilliant work before I got there. 在我之前已经有其他人在做着卓越的工作。
Host:Why were you thinking about this 15 years ago? 为什么你15年前就在思考这个问题?
Dr. Roman Yampolskiy:Because most people have only been talking about the term AI safety 因为大多数人在过去两三年
——>> for the last two or three years. 才开始谈论AI安全这个术语。
——>> It started very mildly, just as a security project: I was looking at poker bots, 这一切始于一个非常普通的网络安全项目:我当时在研究扑克机器人,
——>> and I realized that the bots are getting better and better, 我意识到机器人变得越来越好,
——>> and if you just project this forward enough, 如果你向前投射得足够远,
——>> they’re going to get better than us, smarter, more capable 它们将超越我们,变得更聪明、更有能力
——>> and it happened they are playing poker way better than average players, 事实上它们玩扑克已经比普通玩家好得多了,
——>> but more generally, 但更普遍地说,
——>> it will happen with all other domains, 这将在所有其他领域上发生,
——>> all the other cyber resources. 所有网络资源上发生。
——>> I wanted to make sure AI is a technology 我希望确保AI是一项
——>> which is beneficial for everyone, 对所有人都有益的技术,
——>> so I started to work on making AI safer. 所以我开始致力于让AI更安全。
Host:Was there a particular moment in your career 在你职业生涯中,是否有某个特定时刻
——>> where you thought “Oh my god”! 让你感到‘天啊’!
Dr. Roman Yampolskiy:First five years at least, 至少在最初的五年里,
——>> I was working on solving this problem, 我一直在致力于解决这个问题,
——>> I was convinced we can make this happen, 我坚信我们能够实现目标,
——>> we can make safe AI and that was the goal. 能够制造出安全的AI,那就是目标。
——>> But, the more I looked at it, 但是,我研究得越深入,
——>> the more I realized 就越意识到
——>> every single component of that equation is not something we can actually do, 这个等式的每一个组成部分都是我们实际上无法做到的,
——>> and the more you zoom in, 而且你放大得越多,
——>> it’s like a fractal [‘fræktl], you go in and you find 10 more problems, 它就像一个分形,你深入进去,会发现10个更多的问题,
注:Here “fractal” serves as a metaphor for a problem or situation of infinite and self-similar complexity. It means that upon closer examination, a single problem breaks down into numerous smaller, similar sub-problems, each of which in turn reveals yet another layer of complexity. The process of digging deeper never ends, and the total difficulty seems to multiply endlessly. (在这个语境中,“fractal”(分形) 是一个隐喻,用于形容一个具有无限、自相似复杂性的问题或情境。它意味着,当你深入研究一个问题时,它会分解成许多更小的、结构相似的子问题,而每一个子问题又会暴露出另一层的复杂性。这个过程永无止境,问题的总体难度似乎在无限地倍增。)
——>> and then a hundred more problems, 然后100个更多的问题,
——>> and all of them are not just difficult, 而且所有这些问题不仅困难,
——>> they’re impossible to solve. 甚至是不可能解决的。
——>> There is no seminal work in this field 在这个领域,没有开创性的工作
——>> where it’s like: we solved this, we don’t have to worry about this, 能宣称‘我们解决了这个问题,不必再担心了’,
——>> there are patches, there are little fixes we put in place 有的只是补丁,是我们临时设置的小修复
——>> and quickly people find ways to work around them, 然后人们很快就能找到绕过它们的方法,
Work around means to find an alternative method to deal with a problem, obstacle, or restriction, usually without directly fixing or removing the problem itself. It’s more about being resourceful and flexible rather than confronting the issue head-on.
Work around = 合法合规的变通办法(正面,中性,强调灵活性)。
Jailbreak = “越狱”,指绕过或破解系统、模型内置的限制与安全机制(原指破解手机系统限制,现常指绕过AI的安全防护;负面或中性,强调突破)。
Get around = 取巧、规避障碍的办法(中性或带点狡猾,口语化)。
——>> they jailbreak whatever safety mechanisms we have, 它们会“越狱”绕过我们设置的任何安全机制,(jailbreak:像越狱一样突破内置的安全限制。)
Users quickly found prompts that jailbreak the chatbot’s safety filters.
👉 用户很快找到了能让聊天机器人“越狱”、绕过安全过滤的提示词。Researchers showed the new model could be jailbroken within hours of release.
👉 研究人员展示,这个新模型在发布后几小时内就被“越狱”破解了。
——>> so while progress in AI capabilities is exponential [ˌekspə’nenʃl], 所以当AI能力呈指数级进步时,
——>> or maybe even hyperexponential, 或者甚至超指数级进步时,
——>> progress in AI safety is linear or constant, AI安全的进展却是线性的,甚至是停滞不前的,
——>> the gap is increasing. 这个差距正在扩大。
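注:A minimal Python sketch of the claim just made (the growth rates are assumptions for illustration, not measurements): if capability compounds every year while safety progress only adds a roughly constant amount per year, the gap between the two keeps widening. (一个极简示例:如果能力每年成倍增长,而安全研究每年只增加一个大致固定的量,两者之间的差距会逐年拉大。)
# Toy illustration (assumed numbers): capability grows exponentially,
# safety progress grows linearly, so the capability-safety gap widens.
capability = 1.0   # arbitrary starting "capability" units
safety = 1.0       # arbitrary starting "safety" units
for year in range(1, 11):
    capability *= 2.0   # assumed: capability doubles each year
    safety += 1.0       # assumed: safety improves by a fixed amount per year
    print(f"year {year:2d}: capability={capability:7.1f}  safety={safety:4.1f}  gap={capability - safety:7.1f}")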
Host:The gap between?… 什么之间的差距?……
Dr. Roman Yampolskiy:How capable the systems are 系统的能力
——>> and how well we can control them, 与我们控制它们的能力之间的差距,
——>> predict what they’re going to do, explain their decision-making. 预测它们行为、解释其决策能力之间的差距。
Host:I think this is quite an important point, 我认为这一点很重要,
——>> because you said they were basically patching over the issues that we find, 因为你说他们基本上是在修补发现的问题,
——>> so we’re developing this core intelligence 所以我们正在开发这种核心智能
——>> and then to stop it doing things, 然后为了阻止它做某些事情,
——>> or to stop it showing some of its unpredictability, 或阻止它表现出不可预测性,
——>> or its threats, the companies that are developing this AI 或威胁,开发AI的公司
——>> are programming in code over the top to say, 在其之上编写代码,比如,
——>> okay, don’t swear, don’t say that rude word 好吧,不许说脏话,不许说那个粗鲁的词
——>> don’t do that bad thing. 不许做那件坏事。
Dr. Roman Yampolskiy:Exactly, and you can look at other examples of that, so HR manuals, right? 完全正确,你可以看看其他类似的例子,比如人力资源手册,对吧?
——>> We have those humans, 我们是人类,
——>> they’re general intelligences, 是通用智能,
——>> but you want them to behave in a company, 但你希望他们在公司里行为得体,
——>> so they have a policy– 所以制定了政策——
——>> no sexual harassment, 禁止性骚扰,
——>> no this, no that, but if you’re smart enough, 禁止这个,禁止那个,但如果你足够聪明,
——>> you always find a workaround, 你总能找到变通办法,
——>> so you’re just pushing behavior into a different, 所以你只是把行为推到了一个
——>> not yet restricted subdomain. 尚未受到限制的子领域。
Host:We should probably define some terms here, 我们可能需要在这里定义一些术语,
——>> so there’s narrow intelligence which can play chess 有擅长下棋等的狭义AI,
——>> or whatever, there’s artificial general intelligence, 有跨领域运作的人工通用智能,
——>> which can operate across domains, 有跨领域运作的人工通用智能,
——>> and then superintelligence 然后是超级智能
——>> which is smarter than all humans in all domains, 它在所有领域都比所有人类更聪明,
——>> and where are we? 我们现在处于哪个阶段?
Dr. Roman Yampolskiy:So that’s a very fuzzy [‘fʌzi] boundary, right? 这是一个非常模糊的边界,对吧?
——>> We definitely have many excellent narrow systems, 我们肯定有许多优秀的狭义系统,
——>> no question about it, 毫无疑问,
——>> and they are super intelligent in that narrow domain, 它们在特定领域是超级智能的,
——>> so, protein folding, is a problem which was solved using narrow AI, 所以,蛋白质折叠问题就是用狭义AI解决的,
👉 Protein folding is the biological process by which a protein chain, made up of amino acids, folds into a specific three-dimensional structure that allows it to function properly.
If proteins fail to fold correctly, it can lead to diseases such as Alzheimer’s or Parkinson’s. (蛋白质折叠 是一种生物过程,指由氨基酸组成的蛋白质链折叠成特定的三维结构,从而使其能够正常发挥功能。如果蛋白质折叠出错,可能会导致阿尔茨海默症或帕金森病等疾病。)蛋白质就像一把长条的铁丝,刚造出来时是一根直条。只有把它折弯、绕曲,最后弯成合适的形状,它才能变成“钥匙”,去打开正确的“锁”(也就是完成特定的生物功能)。
👉 如果折错了形状,钥匙就插不进锁,甚至可能卡住锁孔(对应蛋白质错误折叠导致疾病)。
——>> and it’s superior to all humans in that domain, 它在该领域超越了所有人类,
——>> in terms of AGI, again, 至于AGI,我说过,
——>> I said if we showed what 如果我们把今天拥有的东西
——>> we have today to a scientist from 20 years ago, 展示给20年前的科学家,
——>> they would be convinced we have full-blown AGI, 他们会确信我们已经拥有了成熟的AGI,
——>> we have systems which can learn, they can perform in hundreds of domains 我们拥有的系统可以学习,可以在数百个领域中表现
——>> and be better than human in many of them. 并在许多领域中超越人类。
——>> So you can argue we have a weak version of AGI. 所以你可以认为我们拥有一个弱版本的AGI。
——>> Now, we don’t have superintelligence yet, we still have brilliant humans 现在,我们还没有超级智能,我们仍然有杰出的人类
——>> who are completely dominating AI, 完全主导着AI,
——>> especially in science and engineering. 尤其是在科学和工程领域。
——>> But that gap is closing so fast, you can see 但这个差距正在迅速缩小,你可以看到
——>> especially in the domain of mathematics, three years ago, 特别是在数学领域,三年前,
——>> large language models couldn’t do basic algebra [‘ældʒɪbrə], 大语言模型连基础代数都做不好,
——>> multiplying three digit numbers was a challenge. 乘三位数都是个挑战。
——>> Now they’re helping with mathematical proofs, 现在它们正在帮助进行数学证明,
——>> they’re winning mathematics Olympiads [ə’lɪmpiædz], competitions, 赢得数学奥林匹克竞赛,
——>> they are working on solving Millennium [mɪ’leniəm] Prize Problems, 它们致力于解决千禧年大奖难题,
——>> hardest problems in mathematics, so in three years, 数学中最难的问题,所以在三年内,
——>> we closed the gap from subhuman performance 我们从低于人类水平
——>> to better than most mathematicians [mæθə’mətɪʃnz] in the world, 提升到了比世界上大多数数学家都厉害的水平,
——>> and we see the same process 我们看到同样的过程
——>> happening in science and in engineering. 正发生在科学和工程领域。
Host:You have made a series of predictions 你做过一系列预测
——>> and they correspond to a variety of different dates, 它们对应着不同的日期,
——>> and I have those dates in front of me here, 我这里面前面就放着这些日期,
——>> what is your prediction for the year 2027? 你对2027年的预测是什么?
Dr. Roman Yampolskiy:We’re probably looking at AGI 我们很可能在那时看到AGI
——>> as predicted by prediction markets and the top labs. 正如预测市场和顶级实验室所预测的那样。
Host:So we’d have artificial general intelligence by 2027, 所以我们到2027年就会有人工通用智能了,
——>> and how would that make the world different to how it is now? 那会让世界与现在有何不同?
Dr. Roman Yampolskiy:So, if you have this concept of a drop-in employee, 所以,如果你有这种”即插即用员工”的概念,
“a drop in employee”(即插即用的员工) 是一个强烈的隐喻,指代一种可以无缝、即时接入工作流程并执行任务的人工智能系统或自动化工具,它不需要支付薪水、福利或任何传统雇佣成本。“Drop-in” 在软件开发中是一个成熟概念,指一个组件、库或服务可以无缝替换另一个,而无需修改系统其他部分。比如 drop-in replacement(即插即用的替代品)。
对这个隐喻的解析:
技术正将人类历史上最复杂、最昂贵的“人力资源”系统,压缩成一个可以随意复制、缩放和丢弃的软件模块。
“Drop-in employee” 是这个梦想的终极表达:一个无需招聘、培训、支付薪水、管理情绪、也不会请病假或离职的“员工”。它直接将劳动力从“可变成本”变成了“固定基础设施”。
——>> you have free labor, physical and cognitive, trillions of dollars of it, 你就拥有免费的劳动力,包括体力和脑力劳动,价值数万亿美元,
——>> it makes no sense to hire humans for most jobs. 那么雇佣人类来做大多数工作就毫无意义了。
——>> If I can just get, you know a 20 dollar subscription, 如果我只需要花20美元订阅一个服务,
——>> or free model to do what an employee does, 或者使用免费模型,就能完成一个员工的工作,
——>> first, anything on a computer will be automated and next, 首先,任何在电脑上完成的工作都将被自动化,接下来,
——>> I think humanoid [‘hjuːmənɔɪd] robots are maybe five years behind, 我认为人形机器人可能落后五年左右,
——>> so in five years all the physical labor can also be automated. 所以五年内,所有体力劳动也可以被自动化。
——>> So we’re looking at a world 所以我们正在面对一个世界
——>> where we have levels of unemployment we never seen before, 那里有我们从未见过的高失业率水平,
——>> not talking about 10 percent unemployment which is scary, 我指的不是10%这种可怕的失业率,
——>> but 99 percent, all you have left is jobs where, 而是99%,剩下的只有那些工作,
——>> for whatever reason you prefer another human would do it for you, 出于某种原因你更希望由另一个人来为你做的工作,
——>> but anything else can be fully automated. 但其他一切都可以完全自动化。
——>> It doesn’t mean it will be automated in practice, 这并不意味着一夜之间所有工作都会自动化,
——>> a lot of times technology exists 很多时候技术存在了
——>> but it’s not deployed, video phones were invented in the 70s, 但并未部署,可视电话在70年代就发明了,
——>> nobody had them until iPhones came around. 但直到iPhone出现人们才真正拥有它。
——>> So we may have a lot more time with jobs and with world 所以我们可能还有更多时间保有工作,世界看起来
——>> which looks like this, 也依然如此,
——>> but capability to replace 但是,取代
——>> most humans and most occupations will come very quickly. 大多数人类和大多数职业的能力将会非常迅速地到来。
Host:Hmm… okay, 嗯……好吧,
——>> so let’s try and drill down into that and stress-test it, so… 那么让我们深入探讨并压力测试一下,所以……
——>> a podcaster like me, 像我这样的播客主持人,
——>> would you need a podcaster like me? 还会需要吗?
Dr. Roman Yampolskiy:So, let’s look at what you do, 那么,看看你做什么,
——>> you prepare, you ask questions, you ask follow-up questions 你做准备,你提问,你追问,
——>> and you look good on camera. 而且你在镜头前形象不错。
Host:Thank you so much. 太感谢了。
Dr. Roman Yampolskiy:Let’s see what we can do, 让我们看看我们能做什么,
——>> large language model today can easily read everything I wrote 今天的大语言模型可以轻松阅读我写的一切,
——>> and have very solid understanding, better than you, 并且有非常扎实的理解,比你理解得更透彻,
——>> I assume you haven’t read every single one of my books, 我猜你并没有读过我所有的书,
——>> that thing would do it, it can train on every podcast you ever did, 但那东西会做到,它可以训练学习你做的每一个播客,
——>> so it knows exactly your style, the types of questions you ask, 所以它确切知道你的风格,你提问的类型,
——>> it can also find correspondence between what worked really well, 它还能找到哪些问题效果特别好之间的关联,
——>> like this type of question really increased views, 比如这类问题大大增加了浏览量,
——>> this type of topic was very promising, 这类话题非常有潜力,
——>> so it can optimize I think better than you can, 所以我认为它可以比你做得更好,
——>> because you don’t have the data set. 因为你没有这样的数据集。
——>> Of course, visual simulation is trivial at this point. 当然,视觉模拟在这一点上已经很容易了。
Host:So can you make a video within seconds of me sat here and…? 所以你是说几秒钟内就能生成一个我坐在这里的视频,然后……?
Dr. Roman Yampolskiy:So we can generate videos of you interviewing anyone 所以我们可以生成你采访任何人
——>> on any topic very efficiently, 关于任何话题的视频,非常高效,
——>> and you just have to get likeness approval, whatever. 而你只需要获得肖像使用权之类的。
Host:Are there many jobs 还会有很多工作
——>> that you think would remain in a world of AGI? 在你认为AGI的世界里会保留下来吗?
——>> If you’re saying AGI’s potentially going to be here 如果你说AGI可能在2027年出现
——>> whether it’s deployed or not by 2027 and then… 无论是否部署,然后……
——>> okay, so let’s take out of this, any physical labor jobs, for a second, 好吧,那么让我们暂时把体力劳动的工作排除在外,
——>> are there any jobs that you think a human would be able to do 还有哪些工作是你认为人类在AGI的世界里
——>> better in a world of AGI, still? 仍然能够做得更好的?
Dr. Roman Yampolskiy:So that’s the question I often ask people: in a world with AGI, 所以这是我经常问人们的问题:在一个拥有AGI的世界里,
——>> and I think almost immediately we’ll get super intelligence as a side effect, 而且我认为几乎立刻就会产生超级智能作为副作用,
——>> So the question really is, in a world of super intelligence, 所以问题实际上是,在一个拥有超级智能的世界里,
——>> which is defined as better than all humans in all domains, 它被定义为在所有领域都比所有人类更优秀
——>> what can you contribute? 你能贡献什么?
——>> And so, you know better than anyone what it’s like to be you, 所以,你比任何人都更了解做你自己的感受,
——>> you know what ice cream tastes to you, 你知道冰淇淋对你来说是什么味道,
——>> can you get paid for that knowledge? 你能靠这种知识获得报酬吗?
——>> Is someone interested in that? Maybe not, not a big market. 有人对此感兴趣吗?可能没有,市场不大。
——>> There are jobs where you want a human, 有些工作你可能想要一个人类,
——>> maybe you’re rich and you want a human accountant 也许你很有钱,你想要一个人类会计师,
——>> for whatever historic reasons, 出于某种历史原因,
——>> old people like traditional ways of doing things, 老年人喜欢传统的做事方式,
——>> Warren Buffett would not switch to AI, 沃伦·巴菲特不会转向AI,
——>> he would use his human accountant, 他会继续用他的人类会计师,
——>> but it’s a tiny subset of a market. 但这只是市场的一小部分。
——>> Today we have products which are handmade in the US, 今天我们有些产品标榜”美国手工制造”,
——>> as opposed to mass produced in China, 以区别于中国大规模生产的产品,
——>> and some people pay more to have those, 有些人愿意为此支付更高价格,
——>> but it’s a small subset, 但这只是一小部分,
——>> it’s almost a fetish [‘fetɪʃ] , there is no practical reason for it, 几乎是一种癖好,没有实际理由,
Fetish:an object, body part, or activity that someone has an unusually strong sexual or obsessive interest in.
Some people develop a fetish for certain materials, like leather or silk. (有些人会对某些材质产生迷恋,比如皮革或丝绸)。
Her fetish for cleanliness made her reorganize the office daily. (她对清洁的迷恋让她每天都要整理办公室。)
——>> and I think anything you can do on 而且我认为任何你可以在
——>> a computer could be automated using that technology. 电脑上完成的工作都可以用该技术实现自动化。
Host:You must hear a lot of rebuttals [rɪ’bʌtl] to this when you say it, 当你这么说的时候,肯定听到很多反驳吧,
——>> because people experience a huge amount of mental discomfort 因为人们听到他们的工作,
——>> when they hear that their job, 他们的职业,
——>> their career, the thing they got a degree in, 他们获得学位的东西,
——>> the thing they invested a hundred thousand dollars into 他们投入了十万美元的东西
——>> is gonna be taken away from them, 将被夺走时,会感到巨大的心理不适,
——>> so their natural reaction, for some people is that cognitive dissonance [‘dɪsənəns] that 所以一些人的自然反应是那种认知失调,
Cognitive dissonance is a psychological theory stating that individuals experience mental discomfort or psychological stress when they hold two or more contradictory beliefs, ideas, or values at the same time, or when their behavior conflicts with their beliefs. This discomfort motivates people to reduce the inconsistency by changing their beliefs, justifying their behavior, or avoiding contradictory information. (认知失调是一个心理学理论,指当个体同时拥有两种或多种相互矛盾的信念、观点或价值观,或其行为与其信念相冲突时,所经历的心理不适或精神压力。这种不适感会驱动人们通过改变自身信念、为其行为寻找合理解释、或回避矛盾信息等方式,来减轻这种不一致性。
一个吸烟者(行为)明知吸烟导致肺癌(信念),他便处于认知失调中。他可能通过告诉自己“戒烟的压力反而对健康更不好”(改变信念)或“人生总需要些享受”(合理化行为)来缓解失调。)
——>> “No you’re wrong, AI can’t be creative, “不,你错了,AI没有创造力,
——>> it’s not this, it’s not that, it’ll never be interested in my job, I’ll be fine… 它不能做这个,不能做那个,它永远不会对我的工作感兴趣,我会没事的……”
——>> because you hear these arguments all the time right? 你肯定经常听到这些论点,对吧?
Dr. Roman Yampolskiy:It’s really funny, I ask people, 这很有趣,我问人们,
——>> and I ask people in different occupations, 我问不同职业的人,
——>> I’ll ask my Uber driver, 我问我的优步司机,
——>> are you worried about self-driving cars? And they go no. “你担心自动驾驶汽车吗?”他们说”不”。
——>> No one can do what I do, “没人能做到我做的,
——>> I know the streets of New York, I can navigate like no AI, I’m safe. 我熟悉纽约的街道,我能以任何AI都无法做到的方式导航,我很安全。”
——>> and it’s true for any job. Professors are saying this to me, 任何工作都是如此。教授们也对我这么说,
——>> “Oh nobody can lecture like I do”, like this is so special, “哦,没人能像我这样讲课”,好像这太特别了,
——>> but you understand it’s ridiculous, 但你应该明白这很荒谬,
——>> we already have self-driving cars replacing drivers, 我们已经有了自动驾驶汽车取代司机,
——>> that is not even a question if it’s possible, 这甚至已经不是是否可能的问题,
——>> it’s like how soon before you’re fired. 而是你多久会被解雇的问题。
Host:Yeah, 是的,
——>> I mean I’ve just been in LA yesterday and my car drives itself, 我的意思是,我昨天就在洛杉矶,我的车可以自动驾驶,
——>> so I get in the car, I put in where I want to go, 所以我上车,输入目的地,
——>> and then I don’t touch the steering wheel or the brake pedals, 然后我就不碰方向盘或刹车踏板,
——>> and it takes me from A To B, 它把我从A点带到B点,
——>> even if it’s an hour long drive without any intervention at all, 即使是一小时的车程也完全不需要干预,
——>> I actually still park it, 实际上我还是自己停车,
——>> but other than that, 但除此之外,
——>> I’m not driving the car at all, and obviously in LA, 我根本不开车,而且显然在洛杉矶,
——>> we also have Waymo now, 我们现在也有Waymo,
“Waymo” 源自其使命口号 “A new way forward in mobility”(未来出行的新方式)。It is a leading autonomous [ɔː’tɒnəməs] driving technology company and service, originally launched as a project within Google before becoming a standalone subsidiary under Alphabet Inc. It operates a commercial, fully autonomous ride-hailing service known as “Waymo One,” primarily in major metropolitan areas like Phoenix, Arizona, and San Francisco. Unlike many competitors, Waymo’s vehicles operate without a human safety driver in the vehicle, representing a truly driverless experience. Their technology utilizes a sophisticated suite of sensors (LiDAR, cameras, radar), high-definition maps, and powerful artificial intelligence to navigate public roads safely.
(Waymo 是一家领先的自动驾驶技术公司及服务商,最初作为谷歌内部项目启动,后成为 Alphabet Inc. 旗下的独立子公司。它运营着一项名为 “Waymo One” 的商业化全自动驾驶网约车服务,主要覆盖亚利桑那州凤凰城、旧金山等大都市区。与许多竞争对手不同,Waymo 的车辆在行驶时车内无需人类安全驾驶员,提供了真正的无人驾驶体验。其技术依托包括激光雷达、摄像头、雷达在内的精密传感器套件、高精地图以及强大的人工智能系统,以实现在公共道路上的安全行驶。)
——>> which means you order it on your phone, 这意味着你可以用手机叫车,
——>> and it shows up with no driver in it and takes you to 它没有司机,来接你并带你去
——>> where you want to go. 你想去的地方。
——>> so it’s quite clear to see how that is potentially a matter of time for those people 所以很容易看出这对那些人来说只是时间问题
——>> because we do have some of those people listening to this conversation right now 因为我们确实有些听众正在听我们对话
——>> that their occupation is driving. 他们的职业正是驾驶。
——>> To offer them… 对他们来说……
——>> and I think driving is the biggest occupation in the world if I’m correct, 而且我认为驾驶是世界上最大的职业,如果我没记错的话,
——>> I’m pretty sure it is the biggest occupation in the world. 我很确定它是世界上最大的职业。
Dr. Roman Yampolskiy:One of the top ones, yeah. 是规模最大的职业之一,是的。
Host:What would you say to those people, 你会对那些正在听我们对话的从业者说些什么?
——>> what should they be doing with their lives 他们应该如何规划自己的生活?
——>> what should they… should they be retraining in something 他们应该……他们应该接受再培训从事其他工作吗?
——>> or what time frame? 或者时间框架是怎样的?
Dr. Roman Yampolskiy:So that’s the paradigm [‘pærədaɪm] shift here, 这就是这里的范式转变,
“范式转移”指的是在某个特定领域或整个社会中,那些定义其如何运作的基本假设、理论和实践发生了根本性和变革性的改变。它不仅仅是一种改进或更新,更是一场革命性的飞跃,彻底改变了理解现实的框架,并使旧有的模式变得过时。它的主要特性有以下三点:
革命性,而非进化性: 它用一个全新的、不可通约的范式(主导世界观)取代了旧的范式。
对变革的阻力: 既定范式根深蒂固,因此新范式在被接受之前常常会遇到巨大的阻力。
改变一切: 一旦被接受,它将重新定义哪些问题是重要的、使用哪些方法、以及什么被接受为“真理”。
E.G.1:”Switching from a flip phone to a smartphone was a real paradigm shift for my grandma; it completely changed how she communicates, gets news, and even sees the world.” (对我奶奶来说,从翻盖手机换成智能手机是一次真正的范式转移;这完全改变了她沟通、获取信息乃至看待世界的方式。)
E.G.2:The adoption of cloud computing represented a paradigm shift for the IT industry, moving companies away from owning expensive hardware to buying computing power as a flexible service. (云计算的采用对IT行业而言是一次范式转移,它让企业从购买昂贵的自有硬件,转向购买灵活的计算服务。)
——>> before we always said this job is going to be automated, 以前我们总是说,这个工作会被自动化,
——>> retrained to do this other job, 去接受再培训做那个工作,
——>> but if I’m telling you that all jobs will be automated, 但如果我告诉你所有工作都将被自动化,
——>> then there is no plan B, you cannot retrain. 那就没有B计划了,你无法接受再培训。
——>> Look at computer science, two years ago 看看计算机科学,两年前
——>> we told people learn to code, you are an artist, 我们告诉人们:学习编程,你是个艺术家,
——>> you cannot make money, learn to code, then we realized oh, 你赚不到钱,学习编程吧,然后我们意识到,哦,
——>> AI kinda knows how to code AI好像会编程了
——>> and getting better, become a prompt engineer, 而且越来越好,那就成为提示工程师吧,
——>> You can engineer prompts for AIs, 你可以为AI设计提示,
——>> it’s going to be a great job, get a 4 year degree in it, 这会是个很棒的职业,去拿个四年的学位,
——>> but then we’re like AI is way 但后来我们发现AI
——>> better at designing prompts for our AIs than any human, 在为它们自己设计提示方面比任何人类都强得多,
——>> so that’s gone. 所以这个工作也没了。
——>> So I can’t really tell you right now 所以我现在真的无法告诉你
——>> the hardest thing is designing AI agents for practical applications, 目前最难的是为实际应用设计AI智能体,
——>> I guarantee you in a year or two, 我敢保证一两年后,
——>> it’s going to be gone just as well. 这个工作也会消失。
——>> So I don’t think there is a 所以我不认为存在
——>> this occupation needs to learn to do this, instead, “这个职业需要学习做那个”的解决方案,
——>> I think it’s more like we as a humanity, 我认为更应该是,我们作为人类,
——>> then we all lose our jobs, what do we do? 都失去了工作,我们该怎么办?
——>> What do we do financially, who’s paying for us? 我们财务上怎么办?谁为我们支付?
——>> And what do we do in terms of meaning? 我们在意义层面做什么?
——>> What do I do with my extra 60,80 hours a week? 我每周多出来的60、80小时空闲时间做什么?
Host:You’ve thought around this corner, haven’t you? 你已经思考过这个局面了,对吧?
Dr. Roman Yampolskiy:A little bit. 一点点。
Host:What is around that corner in your view? 在你看来,那个局面是怎样的?
Dr. Roman Yampolskiy:So the economic part seems easy, 经济部分似乎很容易,
——>> if you create a lot of free labor, 如果你创造了大量的免费劳动力,
——>> you have a lot of free wealth 你就拥有了大量的免费财富,
——>> abundance things which are right now not very affordable 那些目前不太负担得起的东西
——>> become dirt cheap, 会变得极其便宜,
——>> and so you can provide for everyone basic needs, 因此你可以为每个人提供基本生活保障,
——>> some people say you can provide beyond basic needs, 有些人说你可以提供超出基本需求的,
——>> you can provide very good existence for everyone. 为每个人提供非常好的生活。
——>> The hard problem is what do you do with all that free time? 困难的问题是你如何处理所有这些空闲时间?
——>> For a lot of people, 对很多人来说,
——>> their jobs are what gives them meaning in their life, 工作是他们生活的意义所在,
——>> so they would be kind of lost, 所以他们可能会感到迷失,
——>> we see it with people who retire or do early retirement, 我们看到有些人退休或提前退休后就是这样,
——>> and for so many people who hate their jobs, 对于那些讨厌自己工作的人来说,
——>> they’ll be very happy not working, 他们会很高兴不用工作,
——>> but now you have people who are chilling all day, 但现在你有了整天无所事事的人,
Chilling:it means relaxing, doing nothing serious, hanging out in a very casual way. It doesn’t carry the literal sense of being cold here, but rather of being laid-back and carefree. ( 放松、闲着没事、轻松消磨时光。这里并不是字面上的“冷”,而是指一种 很随意、悠闲的状态。)
——>> what happens to society? 这对社会有什么影响?
——>> how does that impact crime rate, pregnancy rate, 这如何影响犯罪率、生育率、
——>> all sorts of issues? 各种问题?
——>> Nobody thinks about it: 没有人思考过:
——>> Governments don’t have programs prepared to deal 政府没有准备好应对
——>> with 99 percent unemployment. 99%失业率的计划。
Host:What do you think that world looks like? 你认为那个世界会是什么样子?
Dr. Roman Yampolskiy:Again I think… you’re gonna be 再次我认为……这里非常重要的一点是理解
——>> the very important part to understand here is the unpredictability of it. 其不可预测性。
——>> We cannot predict what a smarter-than-us system will do, 我们无法预测一个比我们更聪明的系统会做什么,
——>> and the point when we get to that is often called singularity [ˌsɪŋɡju’lærəti] by analogy [ə’nælədʒi], 我们达到那个点的时刻,通常被类比为奇点,
——>> with physical singularity, you cannot see beyond the event horizon 就像物理上的奇点,你无法看到事件视界之外
——>> I can tell you what I think might happen, 我可以告诉你我认为可能会发生什么,
——>> but that’s my prediction, 但那是我的预测,
——>> it is not what actually is going to happen, 不是实际会发生的事情,
——>> because I just don’t have cognitive ability to predict 因为我根本没有认知能力去预测
——>> a much smarter agent impacting this world. 一个更聪明的智能体如何影响这个世界。
——>> Then you read science fiction, 你读科幻小说,
——>> there is never a super intelligence in it actually doing anything, 里面从来没有一个真正做事的超级智能,
——>> because nobody can write believable science fiction at that level, 因为没人能写出那个层面可信的科幻小说,
——>> they either banned AI like Dune, 他们要么像《沙丘》里那样禁止AI,
——>> because this way you can avoid writing about it, 这样就能避免去写它,
——>> or it’s like Starwars, you have this really dumb bots, 要么像《星球大战》那样,只有非常愚蠢的机器人,
——>> but nothing super intelligent ever, because by definition, 但从来没有超级智能的东西,因为根据定义,
——>> you cannot predict at that level. 你无法预测那个层面。
——>> Because by definition of it, being super intelligent, 因为根据定义,它是超级智能的,
——>> it will make its own mind up. 它会自己决定。
——>> By definition, if it was something you could predict, 根据定义,如果它是你可以预测的东西,
——>> you would be operating at the same level of intelligence 你就是在以相同的智能水平运作,
——>> violating our assumption that it is smarter than you. 这就违背了它比你更聪明的假设。
——>> If I’m playing chess with superintelligence, 如果我和一个超级智能下棋,
——>> and I can predict every move, 并且我能预测它的每一步,
——>> I’m playing at that level. 那我就处于和它相同的水平。
——>> It’s kind of like my French bulldog trying to predict exactly 这就像我的法国斗牛犬试图准确预测
——>> what I’m thinking and what I’m gonna do. 我在想什么以及我要做什么。
——>> That’s a good cognitive gap, 这是一个很好的认知差距的例子,
——>> and it’s not just he can predict you’re going to work, 它或许能预测你去工作、
——>> you’re coming back, but he cannot understand why you’re doing a podcast, 你会回来,但它无法理解你为什么做播客,
——>> that is something completely outside of his model of the world. 这完全超出了它对世界的理解模型。
Host:Yeah he doesn’t even know that I go to work, 是的,它甚至不知道我去工作,
——>> he just sees that I leave the house, 它只看到我离开房子,
——>> and doesn’t know where I go. 不知道我去哪里。
——>> Buy food for him. (只是希望我)给它买食物。
Host:What’s the most persuasive argument against your own perspective here? 针对你自己的观点,最有说服力的反对论据是什么?
Dr. Roman Yampolskiy:That we will not have unemployment due to advanced technology. 反对观点认为,我们不会因为先进技术而失业。
——>> That there won’t be this French bulldog human gap in understanding and 认为不会出现这种法国斗牛犬与人类之间的理解差距,以及
——>> I guess like power and control. 我猜还有权力和控制上的差距。



