Prepare for Artificial Intelligence to Produce Less Wizardry

Early last year, a large European supermarket chain deployed artificial intelligence to predict what customers would buy each day at different stores, to help keep shelves stocked while reducing costly spoilage of goods.

The company already used purchasing data and a simple statistical method to predict sales. With deep learning, a technique that has helped produce spectacular AI advances in recent years—as well as additional data, including local weather, traffic conditions, and competitors’ actions—the company cut the number of errors by three-quarters.

It was precisely the kind of high-impact, cost-saving effect that people expect from AI. But there was a huge catch: The new algorithm required so much computation that the company chose not to use it.

“They were like, ‘Well, it’s not worth it to us to roll it out in a big way, unless cloud computing costs come down or the algorithms become more efficient,’” says Neil Thompson, a research scientist at MIT, who is assembling a case study on the project. (He declined to name the company involved.)

The story highlights a looming problem for AI and its users, Thompson says. Progress has been both rapid and dazzling in recent years, giving us clever game-playing programs, attentive personal assistants, and cars that navigate busy roads for themselves. But such advances have hinged on throwing ever-more computing resources at the problems.

In a new research paper, Thompson and colleagues argue that it is, or soon will be, impossible to keep increasing computing power at the rate needed to sustain these advances. This could jeopardize further progress in areas like computer vision, translation, and language understanding.

AI’s appetite for computation has risen remarkably over the past decade. In 2012, at the beginning of the deep-learning boom, a team at the University of Toronto created a breakthrough image-recognition algorithm using two GPUs (a specialized kind of computer chip) over five days. Fast-forward to 2019, and it took six days and roughly 1,000 special chips (each many times more powerful than the earlier GPUs) for researchers at Google and Carnegie Mellon to develop a more modern image-recognition algorithm. A translation algorithm, developed last year by a team at Google, required the rough equivalent of 12,000 specialized chips running for a week. By some estimates, it would cost up to $3 million to rent this much computer power through the cloud.

“Deep neural networks are very computationally expensive,” says Song Han, an assistant professor at MIT who specializes in developing more efficient forms of deep learning and is not an author on Thompson’s paper. “This is a critical issue.”

Han’s group has created more efficient versions of popular AI algorithms using novel neural network architectures and specialized chip architectures, among other things. But he says there is “still a long way to go” to make deep learning less compute-hungry.

Other researchers have noted the soaring computational demands. The head of Facebook’s AI research lab, Jerome Pesenti, told WIRED last year that AI researchers were starting to feel the effects of this computation crunch.

Thompson believes that, without clever new algorithms, the limits of deep learning could slow advances in multiple fields, affecting the rate at which computers replace human tasks. “The automation of jobs will probably happen more gradually than expected, since getting to human-level performance will be much more expensive than anticipated,” he says. Slower automation might sound good from a jobs perspective, he adds, but it will also slow gains in productivity, which are key to raising living standards.

In their study, Thompson and his coauthors looked at more than 1,000 AI research papers outlining new algorithms. Not all of the papers detailed the computational requirements, but enough did to map out the cost of progress. The history suggested that making further advances in the same way will be all but impossible.

Improving the performance of an English-to-French machine-translation algorithm so that it only makes mistakes 10 percent of the time instead of the current rate of 50 percent, for example, would require an extraordinary increase in computational power—a billion billion times as much—if it were to rely on more computation power alone. The paper was posted to arXiv, a preprint server. It has yet to be peer-reviewed or published in a journal.
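The arithmetic behind that headline number can be sketched with a simple power-law model. This is an illustration only, not the fitting procedure from Thompson et al.’s paper: the scaling exponent `alpha` below is a hypothetical value chosen so that cutting the error rate from 50 percent to 10 percent implies roughly a billion billion (10^18) times more computation.

```python
import math

# Hedged illustration: suppose translation error falls as a power law
# in training compute, error ≈ k * compute^(-alpha). A very small
# exponent makes modest error reductions astronomically expensive.
ALPHA = 0.039  # hypothetical scaling exponent, chosen for illustration


def compute_multiplier(err_now, err_target, alpha=ALPHA):
    """Factor by which compute must grow to move the error rate
    from err_now to err_target under the assumed power law."""
    return (err_now / err_target) ** (1.0 / alpha)


# Going from 50% error to 10% error under this assumed curve:
factor = compute_multiplier(0.50, 0.10)
print(f"roughly 10^{math.log10(factor):.0f} times as much compute")
```

The key point the sketch makes is that when the exponent is small, each further cut in the error rate multiplies the compute bill by an enormous factor — which is why relying on raw computation alone quickly becomes untenable.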
