It's been an incredible week, with three major milestones:
Microsoft won the ImageNet image-recognition challenge (see results) with an "extremely deep" network. The difficulty with such deep networks is facilitating information flow and avoiding so-called "vanishing gradients". Their approach, based on residual learning, is detailed in this paper.
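The residual idea is simple: instead of asking a stack of layers to learn a mapping H(x) directly, each block learns a residual F(x) and outputs F(x) + x, so the identity shortcut gives gradients a direct path through the network. A minimal NumPy sketch of one such block (shapes, weights, and activations are illustrative, not Microsoft's exact architecture):

```python
import numpy as np

def residual_block(x, W1, W2):
    """Toy residual block: output = ReLU(F(x) + x).

    F is two linear layers with a ReLU in between; the identity
    shortcut (+ x) lets the signal and its gradient bypass F.
    """
    f = np.maximum(0, x @ W1)    # first layer + ReLU
    f = f @ W2                   # second layer
    return np.maximum(0, f + x)  # add the shortcut, then activate

# Stack many blocks: even with small random weights, the input
# signal survives because each block only adds a perturbation.
rng = np.random.default_rng(0)
x = rng.normal(size=(1, 16))
for _ in range(10):
    W1 = rng.normal(scale=0.1, size=(16, 16))
    W2 = rng.normal(scale=0.1, size=(16, 16))
    x = residual_block(x, W1, W2)
print(x.shape)  # (1, 16)
```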
Progress in "one-shot" machine learning, where algorithms learn representations from very limited training sets. The approach, called Bayesian Program Learning, is detailed in this Science article.
Google announced groundbreaking achievements in quantum computing, including a 100-million-fold speed-up over traditional approaches on a specific problem. In the long term, this could shake up the entire AI/ML field.
In the News
An advance in Artificial Intelligence rivals human abilities
Great article covering both the ImageNet results and the progress on one-shot machine learning.
Why 2015 was a breakthrough year in Artificial Intelligence
A few interesting charts on the acceleration in the number of AI projects, systems and usage.
Also in the news this week...
- The NIPS conference took place in Montreal this week. This year's hot topics and accepted papers can be browsed with this tool.
- TensorFlow gains traction outside of Google
- $15M grant given to Cambridge to create a new interdisciplinary institution on AI and its implications for humanity
Managing AI in a multiplayer game
Great piece by the game's developers on the intelligence baked into the Gigantic video game and how it is implemented.
How much memory does a Data Scientist need?
Over the past few years, the memory available in AWS instances and laptops has grown faster than the datasets most data scientists actually work with. So where is the big data?
The Tesla AutoPilot
Interesting look at the technology behind the engineering marvel.
Software tools & code
Evaluation of deep learning toolkits
Focuses on the big names: Caffe, CNTK, TensorFlow, Theano, and Torch.
Simplified interface for TensorFlow
In the spirit of scikit-learn.
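The draw of a scikit-learn-style layer is the familiar fit/predict contract. A hypothetical sketch of that pattern (not the actual library's API — the class name and internals are invented, and the stub trains no TensorFlow graph; it just memorizes per-class means as a stand-in):

```python
import numpy as np

class TFClassifier:
    """Hypothetical scikit-learn-style wrapper illustrating the
    fit/predict contract such interfaces expose."""

    def fit(self, X, y):
        # A real wrapper would build and train a TensorFlow graph here;
        # this stub stores one mean vector per class instead.
        self.classes_ = sorted(set(y))
        self._means = {
            c: np.mean([x for x, t in zip(X, y) if t == c], axis=0)
            for c in self.classes_
        }
        return self

    def predict(self, X):
        # Assign each point to the class with the nearest stored mean.
        return [
            min(self.classes_,
                key=lambda c: np.linalg.norm(np.asarray(x) - self._means[c]))
            for x in X
        ]

clf = TFClassifier().fit([[0, 0], [1, 1]], [0, 1])
print(clf.predict([[0.1, 0.1], [0.9, 0.8]]))  # [0, 1]
```

The point is the interface, not the model: any estimator offering `fit(X, y)` and `predict(X)` slots into pipelines and habits scikit-learn users already have.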
Facebook shares its ML server design
Named Big Sur, this server design packs eight Nvidia GPUs and was disclosed by Facebook through the Open Compute Project.
Emergent chip vastly accelerates Deep Neural Networks
A small chip, called EIE, maximizes the role of SRAM in processing the inference side of neural networks and yields impressive speed improvements.
This newsletter is a weekly collection of AI news and resources.
If you find it worthwhile, please forward to your friends and colleagues, or share on your favorite network!
Suggestions or comments are more than welcome, just reply to this email. Thanks!