MICHAEL POLLAN

THE BIG STORY

FEB 24, 2026



AI Will Never Be Conscious

In his new book, A World Appears, Michael Pollan argues that artificial intelligence can do many things—it just can’t be a person.



THE BLAKE LEMOINE incident is remembered today as a high-water mark of AI hype. It thrust the whole idea of conscious AI into public awareness for a news cycle or two, but it also launched a conversation, among both computer scientists and consciousness researchers, that has only intensified in the years since. While the tech community continues to publicly belittle the whole idea (and poor Lemoine), in private it has begun to take the possibility much more seriously. A conscious AI might lack a clear commercial rationale (how do you monetize the thing?) and create sticky moral dilemmas (how should we treat a machine capable of suffering?). Yet some AI engineers have come to think that the holy grail of artificial general intelligence—a machine that is not only supersmart but also endowed with a human level of understanding, creativity, and common sense—might require something like consciousness to attain. In the tech community, what had been an informal taboo surrounding conscious AI—as a prospect that the public would find creepy—suddenly began to crumble.



The turning point came in the summer of 2023, when a group of 19 leading computer scientists and philosophers posted an 88-page report titled “Consciousness in Artificial Intelligence,” informally known as the Butlin report. Within days, it seemed, everyone in the AI and consciousness science community had read it. The draft report’s abstract offered this arresting sentence: “Our analysis suggests that no current AI systems are conscious, but also suggests that there are no obvious barriers to building conscious AI systems.”


The authors acknowledged that part of the inspiration behind convening the group and writing the report was “the case of Blake Lemoine.” “If AIs can give the impression of consciousness,” a coauthor told Science magazine, “that makes it an urgent priority for scientists and philosophers to weigh in.”


But what caught everyone’s attention was that single statement in the abstract of the preprint: “no obvious barriers to building conscious AI systems.” When I read those words for the first time, I felt like some important threshold had been crossed, and it was not just a technological one. No, this had to do with our very identity as a species.

What would it mean for humanity to discover one day in the not-so-distant future that a fully conscious machine had come into the world? I’m guessing it would be a Copernican moment, abruptly dislodging our sense of centrality and specialness. We humans have spent a few thousand years defining ourselves in opposition to the “lesser” animals. This has entailed denying animals such supposedly uniquely human traits as feelings (one of Descartes’s most flagrant errors), language, reason, and consciousness. In the last few years, most of these distinctions have disintegrated as scientists have demonstrated that plenty of species are intelligent and conscious, have feelings, and use language and tools, in the process challenging centuries of human exceptionalism. This shift, still underway, has raised thorny questions about our identity, as well as about our moral obligations to other species.

With AI, the threat to our exalted self-conception comes from another quarter entirely. Now we humans will have to define ourselves in relation to AIs rather than other animals. As computer algorithms surpass us in sheer brainpower—handily beating us at games like chess and Go and various forms of “higher” thought like mathematics—we can at least take solace in the fact that we (and many other animal species) still have to ourselves the blessings and burdens of consciousness, the ability to feel and have subjective experiences. In this sense, AI may serve as a common adversary, drawing humans and other animals closer together: us against it, the living versus the machines. This new solidarity would make for a heartwarming story and might be good news for the animals invited to join Team Conscious. But what happens if AI begins to challenge the human—or animal, I should say—monopoly on consciousness? Who will we be then?


I find this a deeply unsettling prospect, though I’m not entirely sure why. I’m getting comfortable with the idea of sharing consciousness with other animals (and possibly even with plants, in my case) and I’d be happy to admit them into an expanding circle of moral consideration. But machines?

It could be that my discomfort with the idea stems from my background and education. I have been slow-cooked in the warm broth of the humanities, especially literature and history and the arts, and these have always held up human consciousness as something exceptional that is worth defending. Just about everything we value about civilization is the product of human consciousness: the arts and the sciences, high culture and low, architecture, philosophy, religion, government, law, and ethics and morality, not to mention the very idea of value itself. I suppose it is possible that conscious computers could add something new and as yet unimagined to the stock of these glories. We can hope so. To date, poetry written by AIs isn’t much better than doggerel; the absence of consciousness might explain why it lacks even a spark of originality or fresh insight. But how will we feel if (when?) conscious AIs start producing really good poetry?


As a humanist, I struggle with the possibility that the animal monopoly on consciousness might fall. But I have now met other types of humans (some of whom call themselves transhumanists) who are more sanguine about this future. Some AI researchers endorse the effort to build conscious machines because, as entities with feelings of their own, conscious machines are more likely to develop empathy than computers that are merely intelligent. Building a conscious AI is a moral imperative, as both a neuroscientist and an AI researcher sought to convince me. Why? Because the alternative is the blazingly smart but unfeeling AI that will be ruthless in pursuit of its objectives, because it will lack all of the moral constraints that have arisen from our consciousness and shared vulnerabilities. Only a conscious AI is apt to develop empathy and therefore spare us. I am not exaggerating; this is the argument.

One has to wonder if these people have ever read Frankenstein! Dr. Frankenstein gives his creation the gift of not only life but also consciousness, and therein lies the rub. Mary Shelley’s novel chronicles “the creation of a sensitive and rational animal,” and it is the combination of those two qualities that determines the monster’s fate. It is not the monster’s rationality but his emotional injury that spurs him to seek revenge and turn homicidal.


“Everywhere I see bliss, from which I alone am irrevocably excluded,” the monster complains to Dr. Frankenstein after being driven out of human society. “I was benevolent and good; misery made me a fiend.” The monster’s ability to reason surely helped him realize his demonic scheme, but it was his consciousness—his feelings—that supplied the motive. Why should we assume that conscious machines would be any more virtuous than conscious humans?

REMARKABLY ENOUGH, THE Butlin report on artificial consciousness represents something of a consensus view in the field; most of the computer scientists I interviewed endorsed its conclusions. Yet the more time I spent reading it (and interviewing one of its coauthors), the more I began to question its conclusion that artificial consciousness is right around the corner. To their credit, the authors are scrupulous about setting forth their assumptions and methods, both of which make me wonder if they haven’t erected their bold conclusion atop a dubious foundation.

Right on page one, these computer scientists and philosophers set forth their guiding assumption: “We adopt computational functionalism, the thesis that performing computations of the right kind is necessary and sufficient for consciousness, as a working hypothesis.” Computational functionalism takes as its starting point the idea that consciousness is essentially a kind of software running on the hardware of what could be a brain or a computer—the theory is completely agnostic. But is computational functionalism true? The authors aren’t quite prepared to nail themselves to that claim, only to say that it is “mainstream—although disputed.” Even so, they will proceed on the assumption that it is true for “pragmatic reasons.”


The candor is admirable, but the approach demands a tremendous leap of faith that I’m not sure we should make.

For the purposes of the report, the “material substrate” of the system—that is, whether it is a brain or a computer—“does not matter for consciousness … It can exist in multiple substrates, not just in biological brains.” Any substrate that can run the necessary algorithm will do. “We tentatively assume that computers as we know them are in principle capable of implementing algorithms sufficient for consciousness,” the authors state, “but we do not claim that this is certain.” The acknowledgment of uncertainty doesn’t go nearly far enough. Unquestioned in the report is the metaphor that brains are computers—the hardware on which the software of consciousness is run. Here, we meet a metaphor parading as fact. Indeed, the whole paper and its conclusions hinge on the validity of this metaphor.

Metaphors can be powerful tools for thinking, but only as long as we don’t forget they are metaphors—imperfect or partial analogies likening one thing to another. The differences between the two things are as important as the similarities, but these differences seem to have gotten lost in the enthusiasm surrounding AI. As cyberneticists Arturo Rosenblueth and Norbert Wiener noted years ago, “The price of metaphor is eternal vigilance.” Beyond the authors of this report, the whole field of AI appears to have let down its guard on this one.


Consider the sharp distinction between hardware and software. The beauty of separating hardware from software in computers is that a great many different programs can run on the same machine; the software and the knowledge it encodes survive the “death” of the hardware. The separation also speaks to our folk intuition that dualism is true—that, following Descartes, we can draw a bright line between mental stuff and physical stuff. But the distinction between hardware and software simply doesn’t exist in brains; there, software is hardware and vice versa. A memory is a physical pattern of connection among neurons in the brain, neither hardware nor software but both.

Indeed, everything that happens to you—everything you experience or learn or remember—changes the physical structure of your brain, permanently rewiring its connections. (In this sense, there is no dualism in the brain; mental stuff can never be completely disentangled from physical stuff.) The idea that the same consciousness algorithm can be run on a variety of different substrates makes no sense when the substrate in question—a brain—is continually being physically reconfigured by whatever information (or “algorithm of consciousness”) is run on it. Your brain is materially different from mine precisely because it has been shaped, literally, by different life experiences—that is, by consciousness itself. Brains are simply not interchangeable, neither with computers nor with other brains.
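The entanglement Pollan describes—a substrate that is physically rewritten by whatever runs on it—can be made concrete with a toy sketch. Everything below is my own illustration, not a model from the book or the Butlin report: a conventional function leaves its "machine" unchanged, while a Hebbian-style toy network permanently alters its weights with every input it processes.

```python
import random

def run_on_fixed_substrate(program, data):
    # A conventional computer: running the program leaves the machine unchanged.
    return [program(x) for x in data]

class ToyBrain:
    """A toy Hebbian network: processing an input permanently changes
    the connection weights -- the 'software' rewires the 'hardware'."""
    def __init__(self, n=4, seed=0):
        rng = random.Random(seed)
        self.w = [[rng.random() for _ in range(n)] for _ in range(n)]

    def process(self, pattern):
        # Activity is a weighted sum of the input pattern...
        activity = [sum(wi * x for wi, x in zip(row, pattern)) for row in self.w]
        # ...and co-active units strengthen their connection (a crude Hebb rule),
        # so the substrate after this call differs from the substrate before it.
        for i in range(len(self.w)):
            for j in range(len(self.w)):
                self.w[i][j] += 0.01 * activity[i] * pattern[j]
        return activity

# The fixed substrate is untouched by what it computes:
assert run_on_fixed_substrate(lambda x: x * 2, [1, 2]) == [2, 4]

# Two initially identical "brains" given different experiences
# end up materially different:
brain_a, brain_b = ToyBrain(seed=0), ToyBrain(seed=0)
brain_a.process([1, 0, 1, 0])
brain_b.process([0, 1, 0, 1])
assert brain_a.w != brain_b.w
```

The point of the sketch is only the asymmetry: for `ToyBrain` there is no stable "hardware" on which a fixed "consciousness algorithm" could be re-run, because each run changes the thing that runs it.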


Just about anyplace you push on it, the computer-as-brain metaphor breaks down. Computer scientists treat neurons in a brain as though they are transistors on a chip, switched on or off by pulses of electricity. That analogy has some truth to it, but it is complicated by the fact that electricity is not the only factor influencing the firing of neurons. Brains are also awash in chemicals, including neuromodulators and hormones that powerfully influence the behavior of neurons, not just whether or not they fire but how strongly. This is why psychoactive drugs can profoundly alter consciousness (and have no discernible effect on computers). The activity of neurons is also influenced by oscillations that traverse the brain in wavelike patterns; the different frequencies of these oscillations correlate with different mental operations, such as consciousness and its absence, focused attention and dreaming (as well as other stages of sleep).

To liken neurons to transistors is to grossly underestimate their complexity. Compared with transistors on a chip, neurons in the brain are massively interconnected, each one communicating directly with as many as 10,000 others in a network so intricate that we are still decades away from being able to draw even the crudest map of its connections. In computer science, much has been made about the advent of “deep artificial neural networks”—a type of machine-learning architecture, supposedly modeled on the brain’s, that layers a mind-boggling number of processors in such a way that the network can process and learn from vast troves of data. Impressive, for sure, yet a recent study demonstrated that a single cortical neuron can do everything an entire deep artificial neural network can.


Yes, there are plenty of ways in which computers do resemble brains, and computer science has made great strides by simulating various aspects and operations of the brain. But the idea that brains and computers are in any way interchangeable—the premise of computational functionalism—is surely a stretch. And yet this is the premise upon which stands not only the Butlin report but also most of the field. It’s not hard to see why. If brains are computers, then sufficiently powerful computers should be able to do whatever brains do, including becoming conscious. The premise all but guarantees the conclusion. Put another way, it is the authors themselves who have removed the biggest “barrier” to building a conscious AI—the barrier that says brains differ from computers in crucial ways.


There is a second aspect of the report that makes me wonder how seriously to take its conclusion, and that is the standard it proposes for deciding if an AI is actually conscious or not. This is a serious challenge. Citing the Lemoine incident (fairly or not), the authors point out that AIs can easily dupe humans into believing they are conscious when they are not. (It’s probably more accurate to say that we dupe ourselves into this belief, thanks to our weakness for anthropomorphism and magic.) “Reportability” (philosophical jargon for just asking the AI itself) won’t work when the AI has been trained on pretty much everything that’s been said and written about consciousness. One approach to this dilemma would be to remove all references to consciousness (and presumably feeling and emotion as well) from the dataset on which the AI has been trained and then see if it can still speak convincingly about being conscious.

Instead, the authors propose that we look for “indicators” of AI consciousness that match the predictions of the various theories of consciousness in play. So, for example, if the design of an AI included a workspace that brought together various streams of information, but only after those streams had competed to enter it, that would look a lot like global workspace theory and so might qualify as conscious. The report reviewed a half-dozen theories of consciousness, identifying the “indicators” that an AI would have to exhibit to satisfy each of them and, by doing so, be deemed potentially conscious.
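The global-workspace "indicator" described above—parallel streams competing for a shared workspace, whose winner is then broadcast to the rest of the system—can be sketched in a few lines. This is a hypothetical toy of my own, not an architecture from the report; the class and function names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Stream:
    name: str
    content: str
    salience: float  # how strongly this stream competes for the workspace

def global_workspace_step(streams, modules):
    """One cycle: streams compete, the most salient one enters the
    workspace, and its content is broadcast to every consumer module."""
    winner = max(streams, key=lambda s: s.salience)   # competition
    for module in modules:                            # broadcast
        module.append(winner.content)
    return winner

# Two modules "listen" to the workspace; only the winning stream reaches them.
log_a, log_b = [], []
streams = [
    Stream("vision", "red light ahead", salience=0.9),
    Stream("audition", "background hum", salience=0.2),
]
winner = global_workspace_step(streams, [log_a, log_b])
assert winner.name == "vision"
assert log_a == log_b == ["red light ahead"]
```

The sketch also makes Pollan's later objection easy to see: an AI can exhibit this "indicator" trivially, since the indicator is itself just a computation.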


The problem here (well, one of them) is this: None of the theories of consciousness that it proposes we measure AIs against are even remotely close to being proved to anyone’s satisfaction. So what kind of standard of proof is that? What’s more, many of these theories can be simulated in the design of an AI, which should come as no surprise, because they’re all based on the idea that consciousness is a matter of computation. Round and round we go.

By the time I finished digesting the Butlin report, the Copernican moment I’d worried about seemed more distant than the report’s bold conclusion had led me to believe. After reviewing the half-dozen or so theories of consciousness covered by the report, it seemed clear that all of them stacked the deck by taking for granted that consciousness could be reduced to some kind of algorithm.

I was also struck by what was missing from the theories under consideration. None of them had anything to say about embodiment—the idea that consciousness might depend on having both a body and a brain—or, for that matter, anything remotely biological. Nor did the theories have anything to say about the conscious subject. Who or what, exactly, is the recipient of the information that is broadcast in the global workspace? Or the information that is integrated in integrated information theory (IIT)? And what about the role of feelings in rendering experience conscious?


This last point was not lost on the authors, who noted the absence of “affect” from most current theories and recommended that the field pay more attention to the issue of whether conscious machines would have “real” feelings, because if it turns out they do, we will have a moral and ethical crisis on our hands. “Any entity which is capable of conscious suffering deserves moral consideration,” the report states. (But isn’t suffering always conscious?) “This means that if we fail to recognize the consciousness of conscious AI systems,” the report continued, “we may risk causing or allowing morally significant harms.” What would we owe machines that can suffer? And do we really want to bring any more suffering into the world?

Apart from this sort of highly speculative discussion of feeling (as a troublesome by-product of making machines conscious), in the AI community, the conversation about consciousness is as relentlessly abstract—as bloodless, bodiless, and utterly oblivious to biology—as one would expect. When I posed the suffering-computer conundrum to a researcher seeking to build a conscious AI, he waved away the problem, explaining it could be offset with a simple fix to the algorithm: “There’s no reason we couldn’t just turn up the dial on joy.”

Adapted from A World Appears: A Journey into Consciousness by Michael Pollan. Copyright ?2026 by Michael Pollan. Published by arrangement with Penguin Press, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC.

