Artificial Intelligence vs. Nuclear Weapons: Which Is More Dangerous?

Ebola sounds like the stuff of nightmares. Bird flu and SARS also send shivers down my spine. But I’ll tell you what scares me most: artificial intelligence.

The first three, with enough resources, humans can stop. The last, which humans are creating, could soon become unstoppable.

Before we get into what could possibly go wrong, let me first explain what artificial intelligence is. Actually, skip that. I’ll let someone else explain it: Grab an iPhone and ask Siri about the weather or stocks. Or tell her “I’m drunk.” Her answers are artificially intelligent.

Right now these artificially intelligent machines are pretty cute and innocent, but as they are given more power in society, these machines may not take long to spiral out of control.
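To make the “pretty cute and innocent” point concrete, here is a deliberately crude sketch of the simplest possible Siri-style behavior: canned responses triggered by keywords. Everything in it (the keywords, the replies, the function name) is invented for illustration; real assistants rely on large speech and language models, not lookup tables.

```python
# Toy illustration only: a keyword-matching "assistant" in the spirit of
# asking Siri about the weather or stocks. Every rule and reply here is
# hypothetical; real assistants use far more sophisticated models.

CANNED_RESPONSES = {
    "weather": "It's 72°F and sunny.",          # a real assistant would query a weather service
    "stocks": "The market is up 0.4% today.",   # ...or a market-data feed
    "drunk": "I can't be your designated driver.",
}

def reply(utterance: str) -> str:
    """Return the first canned response whose keyword appears in the input."""
    lowered = utterance.lower()
    for keyword, response in CANNED_RESPONSES.items():
        if keyword in lowered:
            return response
    return "Sorry, I don't understand."

if __name__ == "__main__":
    print(reply("What's the weather like?"))  # It's 72°F and sunny.
    print(reply("I'm drunk."))                # I can't be your designated driver.
```

The distance between this kind of scripted pattern matching and genuine open-ended decision-making is the distance the rest of the column worries about machines closing.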

In the beginning, the glitches will be small but eventful. Maybe a rogue computer momentarily derails the stock market, causing billions in damage. Or a driverless car freezes on the highway because a software update goes awry.

But the upheavals can escalate quickly and become scarier and even cataclysmic. Imagine how a medical robot, originally programmed to rid cancer, could conclude that the best way to obliterate cancer is to exterminate humans who are genetically prone to the disease.

Nick Bostrom, author of the book “Superintelligence,” lays out a number of petrifying doomsday settings. One envisions self-replicating nanobots, which are microscopic robots designed to make copies of themselves. In a positive situation, these bots could fight diseases in the human body or eat radioactive material on the planet. But, Mr. Bostrom says, a “person of malicious intent in possession of this technology might cause the extinction of intelligent life on Earth.”

Artificial-intelligence proponents argue that these things would never happen and that programmers are going to build safeguards. But let’s be realistic: It took nearly a half-century for programmers to stop computers from crashing every time you wanted to check your email. What makes them think they can manage armies of quasi-intelligent robots?

I’m not alone in my fear. Silicon Valley’s resident futurist, Elon Musk, recently said artificial intelligence is “potentially more dangerous than nukes.” And Stephen Hawking, one of the smartest people on earth, wrote that successful A.I. “would be the biggest event in human history. Unfortunately, it might also be the last.” There is a long list of computer experts and science fiction writers also fearful of a rogue robot-infested future.

Two main problems with artificial intelligence lead people like Mr. Musk and Mr. Hawking to worry. The first, more near-future fear, is that we are starting to create machines that can make decisions like humans, but these machines don’t have morality and likely never will.

The second, which is a longer way off, is that once we build systems that are as intelligent as humans, these intelligent machines will be able to build smarter machines, often referred to as superintelligence. That, experts say, is when things could really spiral out of control as the rate of growth and expansion of machines would increase exponentially. We can’t build safeguards into something that we haven’t built ourselves.
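The exponential claim is easy to make concrete with a toy model. Suppose each generation of machine designs a successor some fixed factor more capable; capability then compounds geometrically. The 10% per-generation gain and the “human baseline” unit below are arbitrary assumptions chosen only to show the shape of the curve, not a prediction.

```python
# Toy model of recursive self-improvement: each generation of machine
# builds a successor a fixed factor smarter. The starting level and the
# 10% improvement factor are arbitrary, illustrative assumptions.

def capability_after(generations: int,
                     start: float = 1.0,
                     improvement: float = 1.10) -> float:
    """Capability after n generations of compounding self-improvement."""
    return start * improvement ** generations

if __name__ == "__main__":
    for n in (0, 10, 50, 100):
        print(f"generation {n:3d}: {capability_after(n):10.1f}x human baseline")
    # generation   0:        1.0x human baseline
    # generation  10:        2.6x human baseline
    # generation  50:      117.4x human baseline
    # generation 100:    13780.6x human baseline
```

Whether real systems would compound this way is precisely what is contested; the point of the sketch is only that even a modest per-generation gain produces a curve that quickly leaves human-scale oversight behind.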

“We humans steer the future not because we’re the strongest beings on the planet, or the fastest, but because we are the smartest,” said James Barrat, author of “Our Final Invention: Artificial Intelligence and the End of the Human Era.” “So when there is something smarter than us on the planet, it will rule over us on the planet.”

What makes it harder to comprehend is that we don’t actually know what superintelligent machines will look or act like. “Can a submarine swim? Yes, but it doesn’t swim like a fish,” Mr. Barrat said. “Does an airplane fly? Yes, but not like a bird. Artificial intelligence won’t be like us, but it will be the ultimate intellectual version of us.”

Perhaps the scariest setting is how these technologies will be used by the military. It’s not hard to imagine countries engaged in an arms race to build machines that can kill.

Bonnie Docherty, a lecturer on law at Harvard University and a senior researcher at Human Rights Watch, said that the race to build autonomous weapons with artificial intelligence — which is already underway — is reminiscent of the early days of the race to build nuclear weapons, and that treaties should be put in place now before we get to a point where machines are killing people on the battlefield.

“If this type of technology is not stopped now, it will lead to an arms race,” said Ms. Docherty, who has written several reports on the dangers of killer robots. “If one state develops it, then another state will develop it. And machines that lack morality and mortality should not be given power to kill.”

So how do we ensure that all these doomsday situations don’t come to fruition? In some instances, we likely won’t be able to stop them.

But we can hinder some of the potential chaos by following the lead of Google. Earlier this year when the search-engine giant acquired DeepMind, a neuroscience-inspired, artificial intelligence company based in London, the two companies put together an artificial intelligence safety and ethics board that aims to ensure these technologies are developed safely.

Demis Hassabis, founder and chief executive of DeepMind, said in a video interview that anyone building artificial intelligence, including governments and companies, should do the same thing. “They should definitely be thinking about the ethical consequences of what they do,” Dr. Hassabis said. “Way ahead of time.”

