How to measure the ‘I’ in AI

Last week, Lee Se-dol, the South Korean Go champion who lost a historic matchup against DeepMind’s artificial intelligence algorithm AlphaGo in 2016, declared his retirement from professional play.

“With the debut of AI in Go games, I’ve realized that I’m not at the top even if I become the number one through frantic efforts,” Lee told the Yonhap news agency. “Even if I become the number one, there is an entity that cannot be defeated.”

Predictably, Lee’s comments quickly made the rounds across prominent tech publications, some of which ran sensational headlines about AI dominance.

Since the dawn of AI, games have been one of the main benchmarks for evaluating the capabilities of algorithms. And thanks to advances in deep learning and reinforcement learning, AI researchers are creating programs that can master very complicated games and beat the most seasoned players in the world. Uninformed analysts have seized on these successes to suggest that AI is becoming smarter than humans.

But at the same time, contemporary AI fails miserably at some of the most basic tasks that every human can perform.

This raises the question: does mastering a game prove anything? And if not, how can you measure the level of intelligence of an AI system?

Take the following example. In the picture below, you’re presented with three problems and their solutions. There’s also a fourth problem that hasn’t been solved. Can you guess its solution?

[Image: An Abstraction and Reasoning Corpus (ARC) problem]
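To make the structure of such a puzzle concrete, here is a minimal sketch in the JSON-like format used by the public Abstraction and Reasoning Corpus repository (github.com/fchollet/ARC): a task consists of a few demonstration input/output grid pairs plus an unsolved test input, where grids are lists of lists of integers 0–9 standing for colors. The specific grids and the “mirror horizontally” rule below are invented for illustration; they are not the task shown in the image.

```python
import json

# Hypothetical ARC-style task: three demonstration pairs and one
# unsolved test input, in the corpus's train/test grid format.
task = {
    "train": [
        {"input": [[1, 0], [0, 0]], "output": [[0, 1], [0, 0]]},
        {"input": [[0, 2], [0, 0]], "output": [[2, 0], [0, 0]]},
        {"input": [[0, 0], [3, 0]], "output": [[0, 0], [0, 3]]},
    ],
    "test": [
        {"input": [[0, 0], [0, 4]]}   # the solver must infer the output grid
    ],
}

def mirror(grid):
    """Candidate rule inferred from the three demonstrations: flip each row."""
    return [list(reversed(row)) for row in grid]

# Verify the candidate rule against every demonstration pair,
# then apply it to the unsolved test input.
assert all(mirror(pair["input"]) == pair["output"] for pair in task["train"])
print(json.dumps(mirror(task["test"][0]["input"])))  # [[0, 0], [4, 0]]
```

The point of the format is that the rule is never stated; a solver, human or machine, has to abstract it from just a handful of examples and apply it to a novel grid.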