Hawking's Latest Speech: Let Artificial Intelligence Benefit Humanity and the Home It Depends On | Video
Full video of Hawking's speech at GMIC Beijing 2017. Source: GWC (Great Wall Club, 長城會)
Preface:
Regarded as "the most brilliant mind in the world," the Cambridge physicist Stephen William Hawking has never stopped exploring and thinking about frontier technology. At the end of April, Hawking recorded a video speech for the GMIC Beijing 2017 conference entitled "Let Artificial Intelligence Benefit Humanity and the Home It Depends On," and answered eight questions from Chinese technology leaders, scientists, investors, and internet users.
● ● ●
Over my lifetime, I have seen very significant societal changes. Probably one of the most significant, and one that is increasingly concerning people today, is the rise of artificial intelligence. In short, I believe that the rise of powerful AI will be either the best thing, or the worst, ever to happen to humanity. I have to say now that we do not yet know which. But we should do all we can to ensure that its future development benefits us, and our environment. We have no other option. I see the development of AI as a trend with its own problems that we know must be dealt with, now and into the future.
The progress in AI research and development is swift. And perhaps we should all stop for a moment, and focus our research not only on making AI more capable, but on maximizing its societal benefit. Such considerations motivated the American Association for Artificial Intelligence's 2008–09 Presidential Panel on Long-Term AI Futures, which until recently had focused largely on techniques that are neutral with respect to purpose. But our AI systems must do what we want them to do. Inter-disciplinary research can be a way forward: ranging from economics, law, and philosophy, to computer security, formal methods, and of course various branches of AI itself.
Everything that civilization has to offer, is a product of human intelligence, and I believe there is no real difference between what can be achieved by a biological brain, and what can be achieved by a computer. It therefore follows that computers can, in theory, emulate human intelligence, and exceed it. But we don’t know. So we can not know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it. Indeed, we have concerns that clever machines will be capable of undertaking work currently done by humans, and swiftly destroy millions of jobs.
While primitive forms of artificial intelligence developed so far, have proved very useful, I fear the consequences of creating something that can match or surpass humans. AI would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded. It will bring great disruption to our economy. And in the future, AI could develop a will of its own, a will that is in conflict with ours. Although I am well-known as an optimist regarding the human race, others believe that humans can command the rate of technology for a decently long time, and that the potential of AI to solve many of the world's problems will be realized. I am not so sure.
In January 2015, I, along with the technological entrepreneur Elon Musk and many other AI experts, signed an open letter on artificial intelligence, calling for serious research on its impact on society. In the past, Elon Musk has warned that super-human artificial intelligence is capable of providing incalculable benefits, but if deployed incautiously, will have an adverse effect on the human race. He and I sit on the scientific advisory board for the Future of Life Institute, an organization working to mitigate existential risks facing humanity, and which drafted the open letter. This called for concrete research on how we could prevent potential problems, while also reaping the potential benefits AI offers us, and is designed to get AI researchers and developers to pay more attention to AI safety. In addition, for policymakers and the general public, the letter is meant to be informative, but not alarmist. We think it is very important that everybody knows that AI researchers are seriously thinking about these concerns and ethical issues. For example, AI has the potential to eradicate disease and poverty, but researchers must work to create AI that can be controlled. The four-paragraph letter, titled 'Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter', lays out detailed research priorities in the accompanying twelve-page document.
For the last 20 years or so, AI has been focused on the problems surrounding the construction of intelligent agents, systems that perceive and act in some environment. In this context, intelligence is related to statistical and economic notions of rationality. Colloquially, the ability to make good decisions, plans, or inferences. As a result of this recent work, there has been a large degree of integration and cross-fertilisation among AI, machine learning, statistics, control theory, neuroscience, and other fields. The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks, such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems.
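To make the intelligent-agent framing above concrete, here is a minimal, illustrative sketch, not taken from the speech, of an agent that perceives a state and chooses the action with the highest expected utility, the statistical and economic notion of rationality just described. The scenario, probabilities, and function names are all hypothetical.

```python
from typing import Dict, Tuple

# The agent's (hypothetical) model of its environment: P(outcome | percept, action).
OUTCOME_MODEL: Dict[Tuple[str, str], Dict[str, float]] = {
    ("cloudy", "take_umbrella"):  {"dry": 1.0},
    ("cloudy", "leave_umbrella"): {"dry": 0.4, "wet": 0.6},
    ("sunny",  "take_umbrella"):  {"dry": 1.0},
    ("sunny",  "leave_umbrella"): {"dry": 0.9, "wet": 0.1},
}

def utility(outcome: str, action: str) -> float:
    """How much the agent values an outcome; carrying the umbrella has a small cost."""
    base = {"dry": 1.0, "wet": -2.0}[outcome]
    carry_cost = 0.5 if action == "take_umbrella" else 0.0
    return base - carry_cost

def choose_action(percept: str) -> str:
    """Rational choice in the decision-theoretic sense: maximise expected utility."""
    actions = {a for (p, a) in OUTCOME_MODEL if p == percept}

    def expected_utility(action: str) -> float:
        return sum(prob * utility(outcome, action)
                   for outcome, prob in OUTCOME_MODEL[(percept, action)].items())

    return max(actions, key=expected_utility)

if __name__ == "__main__":
    for percept in ("cloudy", "sunny"):
        print(percept, "->", choose_action(percept))  # cloudy -> take_umbrella, sunny -> leave_umbrella
```

Real systems learn the environment model and the objective from data rather than hard-coding them, but this perceive-model-decide loop is the shared abstraction that the fields listed above build on.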
As development in these areas and others moves from laboratory research to economically valuable technologies, a virtuous cycle evolves, whereby even small improvements in performance are worth large sums of money, prompting further and greater investments in research. There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we can not predict what we might achieve, when this intelligence is magnified by the tools AI may provide. But, and as I have said, the eradication of disease and poverty is not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits, while avoiding potential pitfalls.
Artificial intelligence research is now progressing rapidly. And this research can be discussed as short-term and long-term. Some short-term concerns relate to autonomous vehicles, from civilian drones to self-driving cars. For example, a self-driving car may, in an emergency, have to decide between a small risk of a major accident and a large probability of a small accident. Other concerns relate to lethal intelligent autonomous weapons. Should they be banned? If so, how should autonomy be precisely defined? If not, how should culpability for any misuse or malfunction be apportioned? Other issues include privacy concerns, as AI becomes increasingly able to interpret large surveillance datasets, and how to best manage the economic impact of jobs displaced by AI.
Long-term concerns comprise primarily the potential loss of control of AI systems, via the rise of super-intelligences that do not act in accordance with human wishes, and the fear that such powerful systems would threaten humanity. Are such dystopic outcomes possible? If so, how might these situations arise? What kind of investments in research should be made, to better understand and to address the possibility of the rise of a dangerous super-intelligence, or the occurrence of an intelligence explosion?
Existing tools for harnessing AI, such as reinforcement learning and simple utility functions, are inadequate to solve this. Therefore more research is necessary to find and validate a robust solution to the control problem.
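As a rough illustration of why a simple utility function is a weak control tool, the sketch below, which is not Hawking's example and uses an entirely hypothetical inbox and scoring rule, rewards an agent only for removing spam. Several very different behaviours, including deleting every message, maximise that score equally well, because the stated objective says nothing about the other things the designer cares about.

```python
from itertools import product
from typing import FrozenSet, List

# Designer's intent: filter spam. Stated utility: count of spam messages removed.
INBOX = {"m1": "spam", "m2": "ham", "m3": "spam", "m4": "ham"}

def stated_utility(deleted: FrozenSet[str]) -> float:
    """Reward only counts removed spam; preserving legitimate mail is never scored."""
    return sum(1.0 for m in deleted if INBOX[m] == "spam")

def maximisers() -> List[FrozenSet[str]]:
    """Enumerate every deletion policy and return all that achieve the top score."""
    scored = []
    for mask in product([False, True], repeat=len(INBOX)):
        deleted = frozenset(m for m, d in zip(INBOX, mask) if d)
        scored.append((stated_utility(deleted), deleted))
    best = max(score for score, _ in scored)
    return [d for score, d in scored if score == best]

if __name__ == "__main__":
    for policy in maximisers():
        print(sorted(policy))  # includes both {m1, m3} and the whole inbox
```

The gap between the objective we write down and the behaviour we actually want is, loosely, the control problem that the paragraph above says still needs a robust solution.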
Recent landmarks, such as the self-driving cars already mentioned, or a computer winning at the game of Go, are signs of what is to come. Enormous levels of investment are pouring into this technology. The achievements we have seen so far will surely pale against what the coming decades will bring, and we can not predict what we might achieve when our own minds are amplified by AI. Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one, industrialisation. Every aspect of our lives will be transformed. In short, success in creating AI could be the biggest event in the history of our civilisation.
But it could also be the last, unless we learn how to avoid the risks. I have said in the past that the development of full AI could spell the end of the human race, such as the ultimate use of powerful autonomous weapons. Earlier this year, I, along with other international scientists, supported the United Nations convention to negotiate a ban on nuclear weapons. These negotiations started last week, and we await the outcome with nervous anticipation. Currently, nine nuclear powers have access to roughly 14,000 nuclear weapons, any one of which can obliterate cities, contaminate wide swathes of land with radioactive fall-out, and, the most horrible hazard of all, cause a nuclear-induced winter, in which the fires and smoke might trigger a global mini-ice age. The result is a complete collapse of the global food system, and apocalyptic unrest, potentially killing most people on earth. We scientists bear a special responsibility for nuclear weapons, since it was scientists who invented them, and discovered that their effects are even more horrific than first thought.
At this stage, I may have possibly frightened you all here today with talk of doom. I apologize. But it is important that you, as attendees of today's conference, recognize the position you hold in influencing the future research and development of today's technology. I believe that we should join together to call for support of international treaties, or the signing of letters presented to individual governmental powers. Technology leaders and scientists are doing what they can to obviate the rise of uncontrollable AI.
In October last year, I opened a new center in Cambridge, England, which will attempt to tackle some of the open-ended questions raised by the rapid pace of development in AI research. The Leverhulme Centre for the Future of Intelligence is a multi-disciplinary institute, dedicated to researching the future of intelligence, as crucial to the future of our civilisation and our species. We spend a great deal of time studying history, which, let's face it, is mostly the history of stupidity. So it's a welcome change that people are studying instead the future of intelligence. We are aware of the potential dangers, but I am at heart an optimist, and believe that the potential benefits of creating intelligence are huge. Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by industrialisation.
Every aspect of our lives will be transformed. My colleague at the institute, Huw Price, has acknowledged that the center came about partially as a result of the university's Centre for Existential Risk. That institute examines a wider range of potential problems for humanity, while the Leverhulme Centre has a narrower focus.
Recent developments in the advancement of AI include a call by the European Parliament for drafting a set of regulations to govern the use and creation of robots and AI. Somewhat surprisingly, this includes a form of electronic personhood, to ensure the rights and responsibilities for the most capable and advanced AI. A European Parliament spokesman has commented that, as a growing number of areas in our daily lives are increasingly affected by robots, we need to ensure that robots are, and will remain, in the service of humans. The report as presented to MEPs makes it clear that it believes the world is on the cusp of a new industrial robot revolution. It examines whether or not providing legal rights for robots as electronic persons, on a par with the legal definition of corporate personhood, would be permissible. But it stresses that at all times, researchers and designers should ensure all robotic design incorporates a kill switch. This didn't help the scientists on board the spaceship with Hal, the malfunctioning robotic computer in Kubrick's 2001: A Space Odyssey, but that was fiction. We deal with fact. Lorna Brazell, a partner at the multinational law firm Osborne Clarke, says in the report that we don't give whales and gorillas personhood, so there is no need to jump at robotic personhood. But the wariness is there. The report acknowledges the possibility that within the space of a few decades, AI could surpass human intellectual capacity, and challenge the human-robot relationship. Finally, the report calls for the creation of a European agency for robotics and AI that can provide technical, ethical, and regulatory expertise. If MEPs vote in favor of legislation, the report will go to the European Commission, which has three months to decide what legislative steps it will take.
We, too, have a role to play in making sure the next generation has not just the opportunity, but the determination, to engage fully with the study of science at an early level, so that they can go on to fulfil their potential, and create a better world for the whole human race. This is what I meant, when I was talking to you just now about the importance of learning and education. We need to take this beyond a theoretical discussion of how things should be, and take action, to make sure they have the opportunity to get on board. We stand on the threshold of a brave new world. It is an exciting, if precarious, place to be, and you are the pioneers. I wish you well.
Thank you for listening.
Professor Hawking, we have learned so much from your insight.
Next I’m going to ask some questions. These are from Chinese scientists and entrepreneurs.
Q
Kai-Fu Lee, CEO of Sinovation Ventures:
"The large internet companies have access to massive databases, which allows them to make huge strides in AI by violating users' privacy. These companies can't truly discipline themselves as they are lured by huge economic interests. This vastly disproportionate access to data could cause small companies and startups to fail to innovate. You have mentioned numerous times that we should restrain artificial intelligence, but it's much harder to restrain humans. What do you think we can do to restrain the large internet companies?"
A
As I understand it, the companies are using the data only for statistical purposes, but use of any personal information should be banned. It would help privacy if all material on the internet were encrypted by quantum cryptography with a code that the internet companies could not break in a reasonable time. But the security services would object.
Q
Professor, the second question is from Fu Sheng, CEO, Cheetah Mobile:
“Does the human soul exist as a quantum state, or in some other form in higher-dimensional space?"
A
I believe that recent advances in AI, such as computers winning at chess and Go, show that there is no essential difference between the human brain and a computer, contrary to the opinion of my colleague Roger Penrose. Would one say a computer has a soul? In my opinion, the notion of an individual human soul is a Christian concept, linked to the afterlife, which I consider to be a fairy story.
Q
Professor, the third question is from Ya-Qin Zhang, President, Baidu:
“The way human beings observe and abstract the universe is constantly evolving, from observation and estimation to Newton's law and Einstein’s equation, and now data-driven computation and AI. What is next?”
A
We need a new quantum theory, which unifies gravity with the other forces of nature. Many people claim that it is string theory, but I have my doubts. So far about the only prediction is that space-time has ten dimensions.
Q
Professor, the fourth question is from Zhang Shoucheng, Professor of Physics, Stanford University:
“If you were to tell aliens about the highest achievements of our human civilization on the back of one envelope, what would you write?”
A
It is no good telling aliens about beauty or any other possible art form that we might consider to be the highest artistic achievement, because these are very human-specific. Instead I would write about Gödel's Incompleteness Theorems and Fermat's Last Theorem. These are things aliens would understand.
Q
The next question is from myself:
“We wish to promote the scientific spirit at all 9 GMIC conferences globally. What three books do you recommend technology leaders read to better understand the coming future and the science that is driving it?”
A
They should be writing books not reading them. One fully understands something only when one has written a book about it.
Q
The next question is from a Weibo user:
"What is the one thing we should never do in life, and the one thing we should all do?"
A
We should never give up, and we should all strive to understand as much as we can.
Q
The next question is also from a Weibo user:
“Human beings have experienced many revolutions, for example from the Stone Age and the age of steam to the age of electricity. What do you think will drive the next revolution?”
A
Advances in computer science, including artificial intelligence and quantum computing. Technology already forms a major part of our lives, but in the coming decades it will permeate every aspect of our society, intelligently supporting and advising us in many areas, including healthcare, work, education, and science. But we must make sure we control AI, not it us.
Q
Professor Hawking, the last question is from Hai Quan, Musician and VC:
“If the technology for interstellar migration is not yet mature, do human beings have unsolvable challenges that could lead to human extinction, apart from external catastrophes like an asteroid hitting Earth?”
A
Yes. Over-population, disease, war, famine, climate change, and lack of water. It is within the power of man to solve these crises, but unfortunately these remain serious threats to our continued presence on earth. These are all solvable, but so far have not been.
This article was originally published by the WeChat public account of GWC (長城會); The Intellectual (《知識分子》) republishes it with authorization.