The future of artificial intelligence applications won’t be powered by supercomputers in the cloud. Instead, efficiency is the new name of the game, and state-of-the-art research is now largely geared toward making the most of the one computer you already own: your smartphone. Especially if it’s Samsung-made.
As one of the largest technology companies on the planet, Samsung has been pursuing that vision, and constantly innovating in the space, for years now. Neural network compression and on-device computing are two notable disciplines that grew out of these efforts. Bin Dai, an AI engineer at Samsung R&D Institute China – Beijing (SRC-B), specializes in both fields and recently shared some insight into how such technologies translate into noticeable improvements in our day-to-day lives.
Here’s a rare behind-the-scenes look at Samsung’s AI lab
His perspective offered a rare behind-the-scenes look at Samsung’s China-based AI lab. Most notably, Dai confirmed that developing AI meant to run on modern smartphones remains an immense challenge, despite how much more powerful mobile devices have become in recent years. As a result, the compression models Samsung is developing need to be both theoretically ambitious and practically performant.
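To make the compression idea concrete, here is a minimal sketch of one widely used technique, post-training weight quantization, where float32 weights are mapped onto 8-bit integers to shrink a model roughly fourfold. This is a generic illustration, not Samsung's actual method; all names and shapes are hypothetical.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights onto 8-bit integers with a single scale factor."""
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float32 weights."""
    return q.astype(np.float32) * scale

# A hypothetical 256x256 weight matrix standing in for one model layer.
weights = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# The int8 copy occupies a quarter of the float32 original's memory,
# at the cost of a small, bounded reconstruction error per weight.
print(q.nbytes, weights.nbytes)  # 65536 262144
```

Production schemes add refinements (per-channel scales, calibration data, quantization-aware training), but the core trade of precision for footprint is the same.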
Of course, this is usually easier said than done. But advancements in computer vision and speech recognition, like the tech that powers Bixby, are a testament to how steady the progress has been in recent years. Accordingly, a lot of SRC-B’s research centers on miniaturization: developing tiny neural models optimized for on-device performance. These are usually single-purpose solutions built on modular design patterns; a feature like Bixby Vision consists of hundreds of such components.
For example, in order to deliver versatile functionality like being able to “see” the world around you, Bixby Vision actually leverages multiple categorization models. Equipped with that kind of layered data, it can zero in on the task at hand quite efficiently. So, while Samsung probably never developed a model that specifically knows how to differentiate between wine bottles and dogs, Bixby Vision can nowadays recognize both. And thanks to its neural network backend, the app both improves with everyday use and lends itself to incremental development, the kind you can micro-target and spread out across multiple teams if you happen to be an international conglomerate the size of Samsung.
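The modular pattern described above can be sketched as a coarse categorizer routing each input to a specialized recognizer, so no single model has to know every object class. Everything here is hypothetical: a real pipeline would wrap tiny on-device neural networks, not the keyword lookups used below for illustration.

```python
from typing import Callable, Dict

# Hypothetical single-purpose recognizers standing in for tiny neural models.
def recognize_container(image: str) -> str:
    return "wine bottle" if "bottle" in image else "unknown container"

def recognize_animal(image: str) -> str:
    return "dog" if "dog" in image else "unknown animal"

# Registry of specialists, keyed by coarse category. Teams can develop
# and ship each entry independently, which is the appeal of the pattern.
SPECIALISTS: Dict[str, Callable[[str], str]] = {
    "container": recognize_container,
    "animal": recognize_animal,
}

def coarse_category(image: str) -> str:
    """A cheap first-stage model narrows the input to one category."""
    return "animal" if "dog" in image else "container"

def identify(image: str) -> str:
    """Route the input to the matching specialist and return its label."""
    return SPECIALISTS[coarse_category(image)](image)

print(identify("photo_of_dog.jpg"))     # dog
print(identify("photo_of_bottle.jpg"))  # wine bottle
```

No single component differentiates wine bottles from dogs; the composition of a router plus two specialists does.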
Nowadays, that brand of hyper-optimized modularity is at the center of Samsung’s AI implementations. As for the academic side, Dai revealed that SRC-B is currently exploring two broad fields – equivariant networks and dynamic inference. Both could majorly improve the accuracy of the company’s AI algorithms, all without significantly adding to the computational overhead of its apps.
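Dynamic inference commonly means spending compute in proportion to how difficult an input is, for instance via early-exit networks that stop as soon as a prediction looks confident. The sketch below is a generic illustration of that idea, not a description of SRC-B's research; the stages, shapes, and threshold are all hypothetical.

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

def early_exit_predict(x, stages, threshold=0.9):
    """Run stages one at a time, exiting once the prediction is confident.

    Each stage adds its own contribution to the running logits; easy
    inputs cross the confidence threshold early and skip later stages.
    """
    logits = np.zeros(3)
    for depth, stage in enumerate(stages, start=1):
        logits = logits + stage @ x        # run one more network block
        probs = softmax(logits)
        if probs.max() >= threshold:       # confident enough: exit early
            return int(probs.argmax()), depth
    return int(probs.argmax()), depth      # hard input: full network used

# A hypothetical three-stage classifier over 4-dimensional inputs.
rng = np.random.default_rng(0)
stages = [rng.standard_normal((3, 4)) for _ in range(3)]
x = rng.standard_normal(4)
label, depth_used = early_exit_predict(x, stages)
print(label, depth_used)
```

Because most real-world inputs are easy, the average cost per prediction drops while the full network remains available for hard cases, which is how accuracy can improve without a matching rise in overhead.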
Note that this only pertains to SRC-B’s AI Lab, a division started in 2019, nearly two full decades after Samsung established its R&D foothold in China with the Beijing institute. Today, that unit is still just a small part of the company’s global AI taskforce. But Samsung has also been investing in local startups exploring machine learning tech, though details on the company’s China investments don’t surface too often. For example, one of its most recent ventures in the space that we’re pretty sure actually happened revolved around a company called DeePhi – in 2017.