IBM Increases the Speed of Deep Learning

To experiment with deep learning, developers need powerful resources and very fast hardware. When working with large amounts of data, they need to speed up computation, and one way IBM aims to achieve this is by distributing the work across multiple servers.

IBM's engineers have found a way to speed up deep learning by splitting training jobs across multiple physical servers, each equipped with its own set of GPUs.
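The idea behind this kind of split is data parallelism: each server trains a replica of the model on its own shard of the data, and the servers periodically average their gradients so every replica applies the same update. The sketch below simulates that with a toy 1-D linear model; the model, data, and learning rate are illustrative, not IBM's actual implementation.

```python
# Illustrative sketch of data-parallel training: each simulated "server"
# computes a gradient on its own data shard, then the gradients are
# averaged (an all-reduce) so every replica applies the same update.
# The model, data, and learning rate are hypothetical.

def gradient(w, shard):
    # Gradient of mean squared error for a 1-D linear model y = w * x.
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def train_step(w, shards, lr=0.01):
    # 1. Each server computes a local gradient on its shard.
    local_grads = [gradient(w, shard) for shard in shards]
    # 2. All-reduce: average the gradients across servers.
    avg_grad = sum(local_grads) / len(local_grads)
    # 3. Every replica applies the identical averaged update.
    return w - lr * avg_grad

# Data follows y = 3 * x, split across 4 simulated servers.
data = [(x, 3 * x) for x in range(1, 9)]
shards = [data[i::4] for i in range(4)]

w = 0.0
for _ in range(200):
    w = train_step(w, shards)
print(round(w, 2))  # converges toward 3.0
```

Because the averaged gradient equals the gradient over the full dataset, the distributed run converges to the same answer as a single-machine run, just faster in wall-clock time.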

However, the capability is not available on every platform. Currently, only IBM's PowerAI 4.0 software package supports these features, and it runs only on IBM's own OpenPOWER systems, so you would need to buy and operate that company's hardware.

Comparing speed before and after this technology, a training job that previously took 16 days can now finish in a few hours. Results like these are likely to attract developers and users, so the technology may see rapid adoption.
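A rough back-of-the-envelope calculation shows how a 16-day job could shrink to a few hours; the node count and scaling efficiency below are illustrative assumptions, not figures from IBM.

```python
# Back-of-the-envelope speedup estimate (all numbers illustrative).
single_node_hours = 16 * 24       # a 16-day job expressed in hours
nodes = 64                        # assumed cluster size
scaling_efficiency = 0.95         # assumed fraction of ideal linear speedup

distributed_hours = single_node_hours / (nodes * scaling_efficiency)
print(round(distributed_hours, 1))  # roughly 6.3 hours
```

The closer the scaling efficiency stays to 1.0 as servers are added, the closer the cluster comes to an ideal linear speedup.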

The company says the system is not difficult to set up. However, taking advantage of this capability may prove expensive, given the high cost of running the software on IBM hardware.