2021 Postgraduate Entrance Exam English (I) Reading Comprehension, Text 3, Detailed Analysis — The Question-Setter's Approach



【1】this year marks exactly two centuries since the publication of frankenstein; or, the modern prometheus, by mary shelley. even before the invention of the electric light bulb, the author produced a remarkable work of speculative fiction that would foreshadow many ethical questions to be raised by technologies yet to come.
Long, difficult sentence: the author produced a remarkable work of speculative fiction that would foreshadow many ethical questions to be raised by technologies yet to come.
the author produced a remarkable work of speculative fiction — subject + verb + object.
The that-clause is a relative clause modifying a remarkable work of speculative fiction. to be raised by technologies is an infinitive phrase serving as a post-modifier of many ethical questions.
foreshadow [ˈfɔːʃədəʊ] vt. to foreshadow, be a sign of what is to come (fore- = pro-; b, p, f often interchange across cognates)
forecast [ˈfɔːkɑːst] to forecast
predict [prɪˈdɪkt] v. to predict
foretell: to foretell, predict
foreteller: one who foretells
Translation: This year marks exactly the two-hundredth anniversary of the publication of Mary Shelley's Frankenstein; or, The Modern Prometheus. (31) Even before the invention of the electric light bulb, the author produced this remarkable work of speculative fiction, which foreshadowed many of the ethical questions that future technologies would raise.
Background: Victor Frankenstein, a Swiss aristocrat who had studied in Germany, researched electrochemistry and life itself; having discovered the secret of death, he resolved to create a living being. He gathered materials from corpses, assembled them, and finally animated the creature by electrochemical means. Yet the eight-foot creature, though built entirely from good materials, turned out hideously ugly once given life; Frankenstein fainted from fright, and when he awoke the monster had vanished.
Prometheus belonged to the third generation of gods, the son of Iapetus and of Themis, daughter of Gaia and Uranus. In those days divine morals were murky — the earth mother married and bore children with her own descendants, an ethical question in itself, and the earth mother could hardly escape blame. (As a memory aid only: the eth- of ethical can be linked to earth, with -al meaning "of"; the actual root is Greek ethos, "character".) Prometheus means "forethought" — the prefix pro-/pre- = before. The Prometheus invoked in Frankenstein; or, The Modern Prometheus is a "culture code": he created humankind and foresaw everything.
precise [prɪˈsaɪs] adj. exact; accurate (pre- before + cise cut → cut out in advance → precise)
prospect [ˈprɒspekt] n. prospect; outlook; possibility; hope (pro = before; spect = look)
propose [prəˈpəʊz] v. to suggest; propose; plan; propose marriage (pro = before; pose = put → put forward → propose)
pronouncement [prəˈnaʊnsmənt] n. declaration; announcement; ruling (pro = before; nounce = speak)
progressive [prəˈɡresɪv] adj. progressive; advanced; reforming; gradual (pro = before; gress = go)
promote [prəˈməʊt] vt. to promote; advance (pro = before; mot = move → move forward → promote)
★provision [prəˈvɪʒn] n. provision, clause, stipulation (pro = before; vis = see, look + -ion noun suffix)
★productivity [ˌprɒdʌkˈtɪvəti] n. productivity (pro = before; duct = lead, bring; -ivity noun suffix)
profession n. occupation, profession; the members of a profession; professed belief; faith (pro = before; fess = speak)
★★predict [prɪˈdɪkt] vt. to predict, foretell (pre-; dict = speak)
★★predominantly [prɪˈdɒmɪnəntli] adv. predominantly; noticeably (pre- in front + dominant ruling)
presuppose [ˌpriːsəˈpəʊz] vt. to presuppose; assume (pre = before)
yet to come: not yet arrived; still in the future
mark [mɑːk] n. stain, spot; sign; grade; v. to mark (as in this year marks…)
electric: electrical, powered by electricity; electronic: of electronics, of electronic devices; noun form: electricity
【2】today the rapid growth of artificial intelligence (ai) raises fundamental questions: "what is intelligence, identity, or consciousness? what makes humans humans?"
Translation: (31) Today, the rapid growth of artificial intelligence (AI) raises some fundamental questions: "What is intelligence, identity, or consciousness? What makes humans humans?"
【3】what is being called artificial general intelligence, machines that would imitate the way humans think, continues to evade scientists. yet humans remain fascinated by the idea of robots that would look, move, and respond like humans, similar to those recently depicted on popular sci-fi tv series such as "westworld" and "humans".
Long, difficult sentence: what is being called artificial general intelligence is a subject clause introduced by what; the predicate is continues to evade, with scientists as its object. The so-called artificial general intelligence still eludes scientists.
imitate [ˈɪmɪteɪt] vt. to imitate, copy (imit = image; -ate verb suffix)
evade [ɪˈveɪd] vt. to evade, elude (e = out + vade = go → go out → evade); cf. keep away from
fascinate [ˈfæsɪneɪt] v. to fascinate; captivate; enchant; be fascinated by; remain fascinated by
depict [dɪˈpɪkt] to depict, portray (de- intensive + pict paint → turn into a picture → depict)
In machines that…think, the relative clause that would imitate the way modifies machines; that stands for the antecedent machines and serves as the subject of the clause, and would signals something imagined. Nested inside is another relative clause, humans think, modifying the way, with the relative word (in which, or that) omitted.
Tip: in the way humans think, a noun is followed directly by noun + verb; that noun + verb is in fact a relative clause with its relative pronoun (or adverb) omitted — never an appositive clause, because an appositive clause is structurally complete and never omits its connective.
Translation: What is being called artificial general intelligence — machines that would imitate the way humans think — continues to elude scientists. Yet humans remain fascinated by the idea of robots that would look, move, and respond like humans, much like those recently depicted in popular sci-fi TV series such as Westworld and Humans.
【4】just how people think is still far too complex to be understood, let alone reproduced, says david eagleman, a stanford university neuroscientist. "we are just in a situation where there are no good theories explaining what consciousness actually is and how you could ever build a machine to get there."
Long, difficult sentence: we are just in a situation where there are no good theories explaining what consciousness actually is and how you could ever build a machine to get there.
The where-clause is a relative clause modifying a situation.
The present-participle phrase explaining what consciousness actually is and how you could ever build a machine to get there modifies theories. Within it, and links two parallel object clauses. In the what-clause the subject is consciousness, the linking verb is is, and what is the predicative; in the how-clause the subject is you, the predicate is could build, the object is a machine, and the adverb ever is an adverbial.
Translation: (32) Just how people think is still far too complex to be understood, let alone reproduced, says David Eagleman, a Stanford University neuroscientist. "We are just in a situation where (32) there are no good theories explaining what consciousness actually is and how you could ever build a machine to get there."
【5】but that doesn’t mean crucial ethical issues involving ai aren’t at hand. the coming use of autonomous vehicles, for example, poses thorny ethical questions. human drivers sometimes must make split-second decisions. their reactions may be a complex combination of instant reflexes, input from past driving experiences, and what their eyes and ears tell them in that moment. ai “vision” today is not nearly as sophisticated as that of humans. and to anticipate every imaginable driving situation is a difficult programming problem.
Long, difficult sentence: but that doesn't mean crucial ethical issues involving ai aren't at hand. = but that doesn't mean (that) crucial ethical issues involving ai aren't at hand. After mean, the object clause has its that omitted. Within the clause, crucial ethical issues is the subject and at hand is the predicative after aren't.
Translation: But that doesn't mean crucial ethical issues involving AI aren't at hand. The coming use of autonomous vehicles, for example, poses thorny ethical questions. Human drivers sometimes must make split-second decisions. Their reactions may be a complex combination of instant reflexes, input from past driving experiences, and what their eyes and ears tell them in that moment. (33) AI "vision" today is nowhere near as sophisticated as that of humans. And to anticipate every imaginable driving situation is a difficult programming problem.
reflex [ˈriːfleks] n. reflex
【6】whenever decisions are based on masses of data, “you quickly get into a lot of ethical questions,” notes tan kiat how, chief executive of a singapore-based agency that is helping the government develop a voluntary code for the ethical use of ai. along with singapore, other governments and mega-corporations are beginning to establish their own guidelines. britain is setting up a data ethics center. india released its ai ethics strategy this spring.
Translation: Whenever decisions are based on masses of data, "you quickly get into a lot of ethical questions," notes Tan Kiat How, chief executive of a Singapore-based agency that is helping the government develop a voluntary code for the ethical use of AI. Along with Singapore, other governments and mega-corporations are beginning to establish their own guidelines. Britain is setting up a data ethics center. India released its AI ethics strategy this spring.
【7】on june 7 google pledged not to "design or deploy ai" that would cause "overall harm," or to develop ai-directed weapons or use ai for surveillance that would violate international norms. it also pledged not to deploy ai whose use would violate international laws or human rights.
Long, difficult sentence: on june 7 google pledged not to "design or deploy ai" that would cause "overall harm," or to develop ai-directed weapons or use ai for surveillance that would violate international norms.
In the main clause, not negates all three parallel infinitive phrases at once, and all three serve as the object of pledged.
Translation: On June 7, Google pledged not to "design or deploy AI" that would cause "overall harm," or to develop AI-directed weapons, or to use AI for surveillance that would violate international norms. It also pledged not to deploy AI whose use would violate international laws or human rights.
【8】while the statement is vague, it represents one starting point. so does the idea that decisions made by ai systems should be explainable, transparent, and fair.
Translation: (34) While the statement is vague, it represents one starting point. So does the idea that decisions made by AI systems should be explainable, transparent, and fair.
【9】to put it another way: how can we make sure that the thinking of intelligent machines reflects humanity’s highest values? only then will they be useful servants and not frankenstein’s out-of-control monster.
Sentence analysis: only then will they be useful servants and not frankenstein's out-of-control monster.
Because only modifies the adverbial then and stands at the head of the sentence, the sentence uses partial inversion.
only then will they be useful servants (inversion: auxiliary + subject + linking verb + predicative); and links a parallel linking-verb structure: and (will) not (be) frankenstein's out-of-control monster.
Translation: To put it another way: how can we make sure that the thinking of intelligent machines reflects humanity's highest values? Only then will they be useful servants and not Frankenstein's out-of-control monster.
1. Count how many paragraphs make up the passage and number them.
2. Hear what is really being asked (read the stems: what is the question-setter asking you? What is the passage's central idea, and what is its structure? In particular, study the last question carefully first). This is top-down logic — look at the questions from the question-setter's point of view, and everything becomes manageable.
The Question-Setter's Approach: English (I) puts more weight on the central idea of the whole passage, its logic, and macro-level understanding. There is no shortcut in exam English, but there is a shortest path; this approach tries to make the distance you travel shortest and your displacement greatest.
31. mary shelley's novel frankenstein is mentioned because it____. (example question; detail)
【The question-setter's approach】: is mentioned tells you this is an example question — the example itself is unimportant; it serves the central idea. because tells you there is a cause-effect element, so you can go straight to the reason.
"mary shelley's novel frankenstein" lets you locate the spot quickly and directly. The novel is mentioned because it introduces the topic (the center) of the passage — it casts a brick to draw out the jade: the brick is Frankenstein, the jade is the passage's topic (center).
32. in david eagleman's opinion, our current knowledge of consciousness____. (example question)
【The question-setter's approach】: on the surface this asks for David Eagleman's opinion; in reality the author borrows Eagleman's mouth to state the problem with our current knowledge.
33. the solution to the ethical issues brought by autonomous vehicles____. (detail)
vehicle [ˈviːəkl] (veh = carry, from Latin vehere, + -icle suffix; cf. circle)
【The question-setter's approach】: the ethical issues raised by self-driving cars — and ethical issues are the center of the passage. Self-driving cars are today's leading form of AI, Tesla for example. The center of the passage is the ethics of AI.
For question 32, "our current knowledge of consciousness" shows that our current understanding of "the ethics of AI" is insufficient — if we understood everything, self-driving cars would raise no ethical issues at all.
31. The novel Frankenstein by Mary Shelley is mentioned because it introduces the topic (the center) of the passage — "the ethics of AI".
ethical [ˈeθɪkl] issues; synonym: moral [ˈmɒrəl]
34. the author's attitude toward google's pledge is one of____. (attitude; one = attitude)
【Difficulty】: is one of here does not mean "one of the…". pledge is a countable noun; if the sense were "one of the pledges," pledge would have to be plural, but here it is singular. So one is a pronoun: one = attitude.
The author's attitude toward Google's pledge is ____. On the surface an attitude question, in reality a detail question.
【Tip】: an attitude question on the last paragraph is usually about the central idea; on the second-to-last, possibly a partial center. Here it concerns only the attitude toward Google's pledge, so the topic sentence of the relevant paragraph is the answer.
pledge [pledʒ] n. pledge, vow
【Question-stem translations】
31. The novel Frankenstein by Mary Shelley is mentioned because it ____.
32. In David Eagleman's view, our current knowledge of consciousness ____.
33. The solution to the ethical issues brought by autonomous vehicles ____.
34. The author's attitude toward Google's pledge is one of ____.
35. which of the following would be the best title for the text? (main idea)
a. ai's future: in the hands of tech giants
b. frankenstein, the novel predicting the age of ai
c. the conscience of ai: complex but inevitable
d. ai shall be killers once out of control
From the stem of 33 ("the ethical issues brought by autonomous vehicles") and the stem of 32 ("our current knowledge of consciousness"), we see that our current understanding of "the ethics of AI" is insufficient — if we understood everything, self-driving cars would raise no ethical issues at all.
In brief: questions 31 and 32 introduce the topic "ethical questions raised by technology"; 33 is the ethical issues brought by self-driving cars; 34 is the author's attitude toward Google's pledge (a pledge about ethical issues, itself raising no ethical issues); 35 is the main idea.
The answer is c. It contains both AI (artificial intelligence) and conscience (con = together; sci = know; -ence noun suffix). What follows the colon is explanatory and can be set aside.
Which of the following would be the best title for the text?
a. AI's future: in the hands of the tech giants
b. Frankenstein, the novel that predicted the age of AI
c. The conscience of AI: complex but inevitable
d. AI will become a killer once out of control
conscience [ˈkɒnʃəns] (con = together; sci = know)
ethical issues; the near-synonym of ethical is moral
3. Follow the leader (the center: the conscience of AI), follow the vine (the relevant cues — go back to the passage to locate) to find the melon (the answer). This is top-down logic, starting from the center: the center decides the details.
31. mary shelley's novel frankenstein is mentioned because it____.
a. fascinates ai scientists all over the world.
b. has remained popular for as long as 200 years.
c. involves some concerns raised by ai today.
d. has sparked serious ethical controversies.
The novel Frankenstein by Mary Shelley is mentioned because it
a. fascinates AI scientists all over the world
b. has remained popular for as long as 200 years
c. involves some concerns raised by AI today
d. has sparked serious ethical controversies
Answer. Method 1, from the details: novel = work; ethical questions = concerns; raised by technologies yet to come = raised by AI today (AI is named in paragraph two: today the rapid growth of artificial intelligence (ai) raises fundamental questions). Combining paragraphs one and two, the answer is c.
Method 2, around the center: "mary shelley's novel frankenstein" lets you locate the spot quickly and directly. The novel is mentioned because it introduces the passage's topic (the center) — the brick that draws out the jade: the brick is Frankenstein, the jade is the topic. This was already noted when reading the stems.
Distractor: d. has sparked serious ethical controversies — a two-hundred-year-old novel cannot have "sparked serious ethical controversies"; the predicate is wrong.
Summary: the center of a passage usually lies in the first paragraph; if the first and second paragraphs are about the same thing, the center lies at the end of paragraph one and the opening sentence of paragraph two.
【1】this year marks exactly two centuries since the publication of frankenstein; or, the modern prometheus, by mary shelley. even before the invention of the electric light bulb, the author produced a remarkable work of speculative fiction that would foreshadow many ethical questions to be raised by technologies yet to come.
Translation: This year marks exactly the two-hundredth anniversary of the publication of Mary Shelley's Frankenstein; or, The Modern Prometheus. (31) Even before the invention of the electric light bulb, the author produced this remarkable work of speculative fiction, which foreshadowed many of the ethical questions that future technologies would raise.
【2】today the rapid growth of artificial intelligence (ai) raises fundamental questions: "what is intelligence, identity, or consciousness? what makes humans humans?"
Long, difficult sentence: what makes humans humans — the first humans is the object of makes; the second is the object complement.
Translation: (31) Today, the rapid growth of artificial intelligence (AI) raises some fundamental questions: "What is intelligence, identity, or consciousness? What makes humans humans?"
【3】what is being called artificial general intelligence, machines that would imitate the way humans think, continues to evade scientists. yet humans remain fascinated by the idea of robots that would look, move, and respond like humans, similar to those recently depicted on popular sci-fi tv series such as "westworld" and "humans".
Translation: What is being called artificial general intelligence — machines that would imitate the way humans think — continues to elude scientists. Yet humans remain fascinated by the idea of robots that would look, move, and respond like humans, much like those recently depicted in popular sci-fi TV series such as Westworld and Humans.
【4】just how people think is still far too complex to be understood, let alone reproduced, says david eagleman, a stanford university neuroscientist. "we are just in a situation where there are no good theories explaining what consciousness actually is and how you could ever build a machine to get there."
32. in david eagleman's opinion, our current knowledge of consciousness____. (citation question)
a. helps explain artificial intelligence.
b. can be misleading to robot making.
c. inspires popular sci-fi tv series.
d. is too limited for us to reproduce it.
In David Eagleman's view, our current knowledge of consciousness
a. helps explain artificial intelligence
b. can be misleading to robot making
c. inspires popular sci-fi TV series
d. is too limited for us to reproduce it
【Analysis】: d. is too limited for us to reproduce it is a faithful paraphrase of just how people think is still far too complex to be understood, let alone reproduced. David Eagleman's opinion is used to state the main idea of paragraph four, and it agrees with paragraph three ("continues to evade scientists").
In other words, paragraph three ("still eludes scientists") and paragraph four ("how people think is still far too complex to be understood, let alone reproduced") express one and the same idea: our current knowledge of consciousness is too limited for us to reproduce it. The lesson: don't skip between paragraphs and don't hunt for the answer in isolation; grasp the main idea of the passage. Work "top-down" over the whole text rather than "bottom-up" toward an answer.
Translation: (32) Just how people think is still far too complex to be understood, let alone reproduced, says David Eagleman, a Stanford University neuroscientist. "We are just in a situation where (32) there are no good theories explaining what consciousness actually is and how you could ever build a machine to get there."
Note: don't skip paragraphs when answering, especially when several paragraphs say one thing. Most teachers use keyword location and jump straight to a paragraph — for instance the three-step method in some exam guides: 1. the answer is in the located sentence; 2. the sentences before and after; 3. the main idea. This can help weaker students, but looking at a question in isolation has real limits.
【5】but that doesn't mean crucial ethical issues involving ai aren't at hand. the coming use of autonomous vehicles, for example, poses thorny ethical questions. human drivers sometimes must make split-second decisions. their reactions may be a complex combination of instant reflexes, input from past driving experiences, and what their eyes and ears tell them in that moment. ai "vision" today is not nearly as sophisticated as that of humans. and to anticipate every imaginable driving situation is a difficult programming problem.
33. the solution to the ethical issues brought by autonomous vehicles____.
a. can hardly ever be found. (too absolute)
b. is still beyond our capacity.
c. causes little public concern.
d. has aroused much curiosity.
The solution to the ethical issues brought by autonomous vehicles
a. can hardly ever be found
b. is still beyond our capacity
c. causes little public concern
d. has aroused much curiosity
33. [b] is still beyond our capacity.
【Analysis】: this is an example question, and the example serves the main idea. Paragraph four: "how people think is still far too complex to be understood, let alone reproduced." Paragraph five: but that doesn't mean crucial ethical issues involving ai aren't at hand — the topic sentence of paragraph five. the coming use of autonomous vehicles, for example, poses thorny ethical questions is the example.
The example of autonomous vehicles suggests something not easily achieved. It follows the main idea, consistent across paragraphs four and five. That is the approach: it lets you bypass the complex sentences that follow and see farther from higher ground.
The closing sentence, and to anticipate every imaginable driving situation is a difficult programming problem, is the conclusion drawn from the example — glance at it to verify, and you're done.
【Tip】: once the stem locates you in a paragraph, don't dwell on the incident itself; connect the context, look at the opening and the end of the paragraph, and that is enough. Think like a scientist; don't get overly trapped in details.
Comparative structures with as…as are favorite places for questions.
sophisticated [səˈfɪstɪkeɪtɪd] adj. complex; subtle; worldly-wise; experienced (soph = wise)
sophomore [ˈsɒfəmɔː(r)] n. a second-year student (of a secondary or specialized school, or a university)
Translation: But that doesn't mean crucial ethical issues involving AI aren't at hand. The coming use of autonomous vehicles, for example, poses thorny ethical questions. Human drivers sometimes must make split-second decisions. Their reactions may be a complex combination of instant reflexes, input from past driving experiences, and what their eyes and ears tell them in that moment. (33) AI "vision" today is nowhere near as sophisticated as that of humans. And to anticipate every imaginable driving situation is a difficult programming problem.
【6】whenever decisions are based on masses of data, "you quickly get into a lot of ethical questions," notes tan kiat how, chief executive of a singapore-based agency that is helping the government develop a voluntary code for the ethical use of ai. along with singapore, other governments and mega-corporations are beginning to establish their own guidelines. britain is setting up a data ethics center. india released its ai ethics strategy this spring.
release [rɪˈliːs] to release; let go; vent; dismiss; relax; publish
Key point: the main idea of this paragraph is developing a voluntary code so that the use of AI stays ethical — other governments and mega-corporations are beginning to establish their own guidelines. This paragraph cites two governments, so the next paragraph should open with corporations; that is the logic of the writing, worth remembering for the "new question type" section.
Translation: Whenever decisions are based on masses of data, "you quickly get into a lot of ethical questions," notes Tan Kiat How, chief executive of a Singapore-based agency that is helping the government develop a voluntary code for the ethical use of AI. Along with Singapore, other governments and mega-corporations are beginning to establish their own guidelines. Britain is setting up a data ethics center. India released its AI ethics strategy this spring.
【7】on june 7 google pledged not to "design or deploy ai" that would cause "overall harm," or to develop ai-directed weapons or use ai for surveillance that would violate international norms. it also pledged not to deploy ai whose use would violate international laws or human rights.
surveillance [sɜːˈveɪləns] n. surveillance, monitoring; (law) supervision (sur = over + veill = watch)
Translation: On June 7, Google pledged not to "design or deploy AI" that would cause "overall harm," or to develop AI-directed weapons, or to use AI for surveillance that would violate international norms. It also pledged not to deploy AI whose use would violate international laws or human rights.
【8】while the statement is vague, it represents one starting point. so does the idea that decisions made by ai systems should be explainable, transparent, and fair.
Translation: (34) While the statement is vague, it represents one starting point. So does the idea that decisions made by AI systems should be explainable, transparent, and fair.
34. the author's attitude toward google's pledge is one of____.
a. affirmation.  b. skepticism.  c. contempt.  d. respect.
The author's attitude toward Google's pledge is ____.
a. affirmation  b. skepticism  c. contempt  d. respect
34. [a] affirmation
【Analysis】: on the surface an attitude question, in reality a detail question. The stem's Google's pledge locates paragraph seven, but since the question asks for the author's view, we can go one step further to paragraph eight: while the statement is vague, it represents one starting point — although the statement is somewhat vague, it represents a start; "so does the idea that decisions made by AI systems should be explainable, transparent, and fair." From this sentence we can determine that the author approves of Google's pledge, matching the option affirmation. The author sees positive significance in the pledge,
i.e., holds an affirmative attitude, so option a is correct. Options b and c both carry negative coloring, inconsistent with the author's attitude toward the pledge; and the word "vague" shows the author does not endorse the pledge wholesale — it does not rise to the level of "respect" — which rules out option d.
【9】to put it another way: how can we make sure that the thinking of intelligent machines reflects humanity’s highest values? only then will they be useful servants and not frankenstein’s out-of-control monster.
Translation: To put it another way: how can we make sure that the thinking of intelligent machines reflects humanity's highest values? Only then will they be useful servants and not Frankenstein's out-of-control monster.
35. which of the following would be the best title for the text?
a. ai's future: in the hands of tech giants
b. frankenstein, the novel predicting the age of ai
c. the conscience of ai: complex but inevitable
d. ai shall be killers once out of control
Which of the following would be the best title for the text?
a. AI's future: in the hands of the tech giants
b. Frankenstein, the novel predicting the age of AI
c. The conscience of AI: complex but inevitable
d. AI shall be killers once out of control
35. [c] the conscience of ai: complex but inevitable
【Analysis】: a title question, judged by the central idea of the whole passage. The first step is to fix the central terms: conscience, the ethical issues of AI. By this theme word we can select c. the conscience of ai: complex but inevitable — the ethical question of AI, complex but unavoidable — which runs through the whole passage and matches ethical issue.
Distractors: a. ai's future: in the hands of tech giants — the passage says we do not yet have the ability to resolve the ethical and consciousness problems of AI, so this contradicts the main idea.
b. frankenstein, the novel predicting the age of ai — Frankenstein is only the passage's lead-in and does not touch the central issue (ethical issue); partial information cannot serve as the main idea. d. ai shall be killers once out of control — this is never stated in the passage.

|京ICP备2022015867号-3