【131125】❤EFY❤【Episode 10】奇猎英闻 (Curious English News): The Unwinnable Game

Posted by: 绝望罗勒 (布丁小葵), Senior Member
Posted on: 2013-11-25 07:05



(Left-click and drag to highlight any word you don't know, then click to look it up and see its phonetics and definition. Those who reply with their thoughts earn an extra 150 Hujiang coins.)
A certain dummy once said that knowledge brings her joy; that same dummy also said she would be happy as long as the other dummy was happy;
So here's hoping that the dummy who studies and spreads knowledge can bring a smile back to the dummy who has been having a hard time lately;
— but which dummy is which, anyway? 8(>3<)8

——START——


The unwinnable game


Two humans - one Norwegian and one Indian - have been competing for the World Chess Championship. Neither of them would fancy their chances against the best computers. The machines have come a long way and their progress has taken us closer to achieving artificial intelligence.

In 1968 chess master David Levy made a bet that by 1978 no computer could beat him in a series of games. He won the bet.

In fact, it took most of the 1980s before he was finally beaten. "After I won the first bout, I made a second bet for a period of five years. I stopped betting after that. At that point I could see what was coming."

In 1997, the best player in the world, Garry Kasparov, was beaten by the IBM computer Deep Blue in a controversial series.

Today, the world's best player, Magnus Carlsen, would be foolish to make a Levy-style bet. The best computers would beat him.

But the progress that computers have made against one task - beating the best humans at chess - offers a lesson for the whole way people think about the future of artificial intelligence.


The man who coined the term "artificial intelligence" - the American scientist John McCarthy - identified early on that chess matches, and other complex games, were a good way of testing the progress of machines.

"One has an absolute measure and target to beat," says Levy. "In many games, there are rating systems - we can have an object measure. For all these reasons, games are a very good vehicle for AI. Playing a game requires a combination of skills, including intelligence."


McCarthy oversaw the creation of the first chess programme to play convincingly. By 1962 the programme - Kotok-McCarthy - was as good as a mediocre human. But it later lost the first match between computers when pitted against a Soviet rival.

"Computers didn't win by learning to play chess like humans - they won because their calculating power increased exponentially”

That match spawned a tradition of computer v computer battles that eventually led to the World Computer Chess Championship. For 40 years, programmers have been doing battle against other programmers. A film comedy released in the UK this week, Computer Chess, uses these singular contests as its backdrop.

It's not just chess. In 2007, a team led by Jonathan Schaeffer at the University of Alberta "solved" draughts. That is, it was worked out that if both sides played perfectly, the result had to be a draw. It had taken 18 years of computer calculation.
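Here "solved" means the outcome under perfect play is known, which in principle can be established by exhausting the game tree. The sketch below is not Schaeffer's method (the draughts proof combined endgame databases with years of distributed search); it simply shows the idea on the much smaller game of noughts and crosses, which, like draughts, turns out to be a draw when both sides play perfectly.

```python
# Exhaustively "solving" noughts and crosses: recurse over every legal move
# and report the game value for the side to move under perfect play.

from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if a line is completed, else ''."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return ""

@lru_cache(maxsize=None)
def solve(board, player):
    """Value for the side to move: +1 win, 0 draw, -1 loss, assuming perfect play."""
    if winner(board):
        return -1                    # the previous player just completed a line
    if "." not in board:
        return 0                     # board full with no winner: a draw
    other = "O" if player == "X" else "X"
    return max(-solve(board[:i] + player + board[i + 1:], other)
               for i, cell in enumerate(board) if cell == ".")

print(solve("." * 9, "X"))           # 0 -> a draw with perfect play on both sides
```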

For fans of the games that have been mastered by computers there can be an occasional nostalgic longing for the pre-machine age.


A commentator in the current Carlsen-Anand series used the phrase: "A very human move." The point is that humans make mistakes. The subtlest of mistakes, the "sub-optimal" moves, can create beautifully poised situations.

And the bad mistakes committed by the best humans - as with missed open goals in football - provide light relief for lesser mortals. Former world champion Vladimir Kramnik committed one of the most famous blunders in chess, while playing a series against a computer in 2006, missing an obvious checkmate. It was the kind of mistake that a parent would have been disappointed at their novice eight-year-old committing.

Computers offer no such fun.

Their triumph led programmer Omar Syed to try and come up with a game where computers would be at an inherent disadvantage against humans.

"When Deep Blue won, I felt sorry for Kasparov. I knew what an incredible mind he had. But he was not able to outdo a computer."

Syed created Arimaa, a game using pieces similar to those used in chess but moving in a simpler fashion and set up on the board in a pattern decided by the player. Its creation wasn't easy.

"Whenever you make the game more difficult for computers, it gets more difficult for humans," notes Syed.

But he eventually cracked it. "If you use simple movements, we could increase the branching factor and still keep the game easy enough for humans."
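The branching factor is the number of legal moves available on each turn, and the work a brute-force search must do grows roughly as the branching factor raised to the search depth. The toy calculation below makes that concrete; the chess and Arimaa figures are commonly cited rough estimates rather than numbers from the article.

```python
# Rough cost of brute-force look-ahead: about b ** d positions, where b is the
# branching factor and d is the depth in turns. The b values are rough,
# commonly cited estimates and purely illustrative.

for game, b in [("chess", 35), ("Arimaa", 17_000)]:
    for depth in (2, 4):
        print(f"{game}: about {b ** depth:,} positions at depth {depth}")
```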

In keeping with the traditions of AI buffs, he made a bet that a computer could not beat the best Arimaa players. January will see the 10th year of the competition and computers haven't come near to winning. But by 2020, Syed predicts the machines will have the upper hand.

The reason is the same as their triumph in chess - "brute-force calculation".

Computers approach a game in a different way to a human. A typical chess - or draughts, or Reversi, or Go, or Shogi - master may "feel" a particular move is right and then test it in his head, looking several moves ahead.

But the human can do nothing against a computer that has the power to look at even "wrong" moves and then test what would happen dozens of further moves into the future.
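In practice that kind of exhaustive look-ahead is usually a depth-limited search that falls back on a numeric evaluation when it runs out of depth. The sketch below shows generic negamax search with alpha-beta pruning as one plausible shape of such "brute-force calculation"; real engines add move ordering, transposition tables and much more, and the pile-of-stones game in the usage example is purely illustrative.

```python
# Depth-limited negamax with alpha-beta pruning: try every legal move,
# recurse to a fixed depth, and score the leaves with an evaluation function.

def negamax(pos, depth, moves, evaluate, alpha=float("-inf"), beta=float("inf")):
    """Best score the side to move can force, searching `depth` plies ahead."""
    children = list(moves(pos))
    if depth == 0 or not children:
        return evaluate(pos)                     # out of depth, or the game is over
    best = float("-inf")
    for child in children:
        best = max(best, -negamax(child, depth - 1, moves, evaluate, -beta, -alpha))
        alpha = max(alpha, best)
        if alpha >= beta:                        # the opponent already has a better option
            break                                # elsewhere, so this branch can be pruned
    return best

# Toy usage: a pile of stones, take 1 or 2 per turn, whoever cannot move loses.
moves = lambda n: [n - k for k in (1, 2) if n - k >= 0]
evaluate = lambda n: -1.0 if n == 0 else 0.0     # empty pile: the side to move has lost
print(negamax(5, 10, moves, evaluate))           # 1.0 -> the side to move can force a win
```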

"Humans don't like to look very deeply forward but they have this innate sense without working out the fine detail," says Syed, who now works at Hedgechatter, creating software that uses AI to gauge social media sentiment about share prices.

Computers didn't win by "learning" to play chess like humans. They won because their calculating power increased exponentially.

This was something that pleased McCarthy, remembers Levy.

"If one can solve a problem like chess in a completely different way to humans, one has achieved something. It doesn't matter how the computer does it."


Now there is a greater target than winning at board games. The Loebner Prize is awarded to the programme that can best have a conversation with a human. David Levy, also an AI expert, has won twice.

"That's the most difficult task remaining in AI for all sorts of reasons. There is so much that's involved in understanding what we say to each other."

Most humans aren't consciously aware of it, but the average conversation is an extraordinary verbal and non-verbal symphony comprising body language, tone, emotion, double meaning, humour, historical references and sundry other intricacies.

How would a computer know when it's generally polite to interrupt?

But Levy - author of Love and Sex with Robots - thinks it will be mastered.

We will reach a point where computers can have convincing conversations with humans, he believes. In certain circumstances, that could even lead to cheating.


Just as there have been cases where humans have been accused of communicating with computers to win at chess, so humans might one day conceivably cheat in an interview. "That's something that will happen in all sorts of walks of life," says Levy.

With hindsight, getting a computer to win at chess was easy. But getting one to make up a funny anecdote will be a mindboggling achievement.





OK, did you follow the gist of the article? Now let's try a few questions~




             


Stealing images is shameful! — 熊吉

Dears, did you get them all right? The questions are deliberately designed to trip you up~~
If you really didn't get them right, make sure to read the explanations carefully~ And if there are any mistakes or shortcomings in this episode, you're welcome to point them out~ Thank you!
If you like this programme, please subscribe~ Let's improve together~ Take down English news reading!

That's all for this episode; see you next time~

Stealing text is shameful too! — 熊吉

~~O(∩_∩)O~~ Click here to subscribe to the programme~~~
PS. Dears who haven't done the earlier episodes yet, click me for the episode list; rewards will still be given out~~~\(≧▽≦)/~~~


Quiz Section

1、When was Levy finally beaten?
2、According to the article, which game below can be won by computers just using their calculation ability?
3、Who plans to make a bet on playing chess against AI?
4、Which is the most difficult task remaining in AI?
5、According to the article, which of the following is NOT a part of human communication?

