Research into narratives as forms of sense-making about AI has shown that these can impede public understanding of AI, mask human agency, and reinforce damaging stereotypes. This article presents the results of a research project to imagine more helpful visual representations of AI and improve science communication.
This post shows that the form of Stochastic Gradient Descent (SGD) used to solve modern ML problems has rich, distinctive properties. In particular, we will emphasise the difference between the underparametrised and overparametrised regimes.
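As a point of reference for the discussion, here is a minimal sketch of minibatch SGD on a noiseless linear least-squares problem. The setup (problem, dimensions, learning rate) is illustrative, not the post's actual experiment; the regime is set by the ratio of samples `n` to parameters `d`.

```python
import numpy as np

# Illustrative minibatch SGD on noiseless linear least squares.
# With n samples and d parameters: n > d is the underparametrised
# regime, n < d the overparametrised one. Here n > d.
rng = np.random.default_rng(0)
n, d = 50, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true                     # noiseless targets, so a zero-loss solution exists

w = np.zeros(d)
lr, batch_size = 0.05, 8
for step in range(2000):
    idx = rng.choice(n, size=batch_size, replace=False)
    # gradient of the mean squared error on the sampled minibatch
    grad = X[idx].T @ (X[idx] @ w - y[idx]) / batch_size
    w -= lr * grad

print(np.linalg.norm(w - w_true))  # distance to the ground-truth weights
```

Because the targets are noiseless and consistent, constant-step-size SGD converges here; with label noise or in the overparametrised regime (`d > n`), the behaviour changes, which is the contrast the post explores.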
Over the past few years, advances in training large language models (LLMs) have moved natural language processing (NLP) from a bleeding-edge technology that few companies could access to a powerful component of many common applications.