Here’s how researchers are making machine learning more efficient and affordable for everyone

The research and development of neural networks is flourishing thanks to recent advancements in computational power, the discovery of new algorithms, and an increase in labelled data. Before the current explosion of activity in the space, the practical applications of neural networks were limited. 

While much of the recent research has broadened its applicability, the heavy computational requirements of machine learning models still keep the technology from truly entering the mainstream. Now, emerging algorithms are on the cusp of pushing neural networks into more conventional applications through dramatically increased efficiency.

Neural networks are a prominent focal point of current computer science research. They are inspired by the human brain, which, for all but the most niche use cases, still outperforms computers at a remarkable range of tasks.

Computers are excellent at storing information and processing it at speed, while humans are more adept at making efficient use of the limited computational power they have. A computer can perform millions of calculations per second, a rate no human can hope to match; where humans hold the advantage is efficiency, extracting far more from each unit of computational effort than any machine.

What computers lack in algorithmic efficiency, they make up for in sheer processing power, analyzing information at a rate that continues to grow.

That computational power comes with a catch: even though its cost keeps falling, machine learning remains an expensive affair, out of reach for most individuals, businesses and researchers, who must rely on costly third-party services to run experiments in a space that could have staggering ramifications across myriad verticals.

For example, a simple chatbot could cost anywhere from a few thousand dollars to upwards of $10,000, depending on its complexity.

Enter Neural Architecture Search (NAS)

To overcome this barrier, scientists have been investigating various techniques to reduce the cost and time associated with machine and deep learning applications.

The field is a mix of both software and hardware considerations. More efficient algorithms and better-designed hardware are both priorities, but the human development of the latter is enormously labor-intensive and time-consuming. This has spurred researchers to create design automation solutions for the field.

Advancements are being made on both the software and hardware sides. Currently, the most common technique for automating the design of neural networks is Neural Architecture Search (NAS), which, though effective, is computationally expensive. NAS can be considered a basic step towards automated machine learning: an algorithm proposes candidate network architectures, evaluates them, and keeps the best.
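The search loop behind NAS can be illustrated with a toy random-search version. The search space, the architecture encoding, and the `evaluate` scoring function below are all illustrative stand-ins (real NAS trains each candidate network and scores it on validation accuracy, which is exactly what makes it so expensive):

```python
import random

# Hypothetical search space: an architecture is (number of layers, units per layer).
SEARCH_SPACE = {
    "layers": [1, 2, 3, 4],
    "units": [16, 32, 64, 128],
}

def evaluate(arch):
    """Stand-in for the expensive train-then-validate step.
    This toy objective simply rewards a made-up sweet spot
    of 3 layers x 64 units."""
    layers, units = arch
    return -abs(layers - 3) - abs(units - 64) / 32

def random_search(trials=100, seed=0):
    """Sample random architectures and keep the best one found."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(trials):
        arch = (rng.choice(SEARCH_SPACE["layers"]),
                rng.choice(SEARCH_SPACE["units"]))
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch

print(random_search())
```

Every call to `evaluate` in a real system means training a full network, which is why naive NAS can consume thousands of GPU-hours and why more sample-efficient search strategies are an active research focus.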

MIT, where much of the research in the field has taken place, has published a paper describing a far more efficient NAS algorithm that can learn Convolutional Neural Networks (CNNs) tailored to specific hardware platforms.

The researchers who worked on the paper succeeded in increasing efficiency by “deleting unnecessary neural network design components” and by targeting specific hardware platforms, including mobile devices. Tests indicate that the resulting neural networks were almost twice as fast as traditional models.
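One generic way to remove unnecessary components is magnitude pruning: zeroing out the weights that contribute least to a layer's output. The sketch below is a minimal, list-based illustration of that general idea, not the paper's specific method:

```python
def prune_weights(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights.
    sparsity=0.5 drops the bottom half by absolute value."""
    k = int(len(weights) * sparsity)                 # number of weights to drop
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    dropped = set(order[:k])                          # indices of the k smallest
    return [0.0 if i in dropped else w for i, w in enumerate(weights)]

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.2]
print(prune_weights(w))  # → [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
```

Zeroed weights need neither storage nor multiplication at inference time, which is where the speed and memory savings come from.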

Co-author of the paper, Song Han, assistant professor at MIT’s Microsystems Technology Laboratory, has said that the goal is to “democratize AI”.

“We want to enable both AI experts and nonexperts to efficiently design neural network architectures with a push-button solution that runs fast on specific hardware,” he says. “The aim is to offload the repetitive and tedious work that comes with designing and refining neural network architectures.”

Image: Chelsea Turner, MIT

Other techniques have also been proposed. Rather than being executed in resource-heavy controlled environments, machine learning algorithms can be adapted to run on specially designed hardware that consumes far less power.

Researchers from the University of British Columbia have shown that Field-Programmable Gate Arrays (FPGAs) are faster and more power-efficient for implementing machine learning applications. In addition to making machine learning more affordable and less time-consuming through customized hardware, FPGAs can make Deep Neural Networks (DNNs) more accessible to those with less technical expertise.

FPGAs are used in conjunction with High-Level Synthesis (HLS) tools to “automatically design hardware”, eliminating the need to hand-design circuits for trialling machine learning inference solutions and, consequently, enabling faster implementation of applications across a variety of use cases.

Other researchers have considered FPGAs for a specific DNN subset, the CNN, a technique known for its application in analysing images and itself inspired by the visual cortex of animals. This approach likewise combines HLS with FPGAs.
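The operation that makes CNNs so well suited to both image analysis and hardware acceleration is the 2D convolution: sliding a small filter over the image and computing a weighted sum at each position. A minimal pure-Python sketch of that core operation (using a hypothetical tiny image and a simple vertical-edge filter as the example inputs):

```python
def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation over nested lists:
    slide the kernel over the image, summing elementwise products."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            row.append(sum(image[r + i][c + j] * kernel[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

# A tiny image whose right half is bright, and a vertical-edge filter.
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1]]
edge = [[-1, 1],
        [-1, 1]]
print(conv2d(img, edge))  # → [[0, 2, 0], [0, 2, 0]] — peaks at the edge
```

Because the same small multiply-accumulate pattern repeats at every position, convolutions map naturally onto the parallel arithmetic units an FPGA provides, which is why they are a favourite target for HLS-based accelerators.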

Further demonstrating the diversity of use cases, some research has looked into using DNNs to automate design work in engineering tasks.

Agent 001: The Machine Learning Agent

Still, there is a long road ahead for the field of machine learning.
