Microsoft Spent Millions To Put Together A Supercomputer For OpenAI

Details on Microsoft’s new supercomputer for AI


originally posted to Flickr as Blue Gene / P From Argonne National Laboratory Uploaded using F2ComButton

By: Walker McCann, Journalist

What happened?

Nearly five years ago, Microsoft received a unique request from a then little-known startup: assemble computing power on a scale the company had never attempted before. Microsoft went on to invest millions of dollars linking tens of thousands of powerful chips into a supercomputer, which OpenAI used to train GPT, its large language model.

Microsoft has experience building artificial intelligence (AI) models that boost user productivity; its automatic spell checker, an AI model trained on language, has benefited millions of users. OpenAI, however, had no interest in building anything resembling a spell checker. The company needed something far larger, something that could fundamentally change how people interact with technology.

How Microsoft built a supercomputer for OpenAI

Once OpenAI disclosed the scale at which it wanted to compute, Microsoft realized it would need to grant the startup access to its cloud computing services for an extended period.

So Microsoft began connecting tens of thousands of NVIDIA A100 graphics chips, which offer enormous computing capacity and are widely used to build AI models. But the job required more than simply wiring the chips together.

When thousands of processing units fire up simultaneously to train an AI model, the work is distributed among them, which demands careful power management. The processors must also communicate with one another to share their results, and Microsoft had to create specialized software to make that possible.
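The data-parallel pattern described above can be sketched in miniature. This is a toy illustration, not Microsoft's actual software: each worker computes gradients on its own data shard, then an "all-reduce" step averages the gradients so every copy of the model stays in sync. The function names and shard values here are invented for the example.

```python
def local_gradients(shard):
    """Stand-in for a gradient computation: the mean of the worker's shard."""
    return sum(shard) / len(shard)

def all_reduce(values):
    """Average one value per worker and hand every worker the result."""
    avg = sum(values) / len(values)
    return [avg] * len(values)

if __name__ == "__main__":
    # Four hypothetical workers, each holding a different slice of the data.
    shards = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
    grads = [local_gradients(s) for s in shards]  # [1.5, 3.5, 5.5, 7.5]
    synced = all_reduce(grads)                    # every worker ends up with 4.5
    print(synced)  # → [4.5, 4.5, 4.5, 4.5]
```

In real training systems this averaging runs over high-speed interconnects between GPUs, and making it fast at the scale of tens of thousands of chips is a large part of the engineering work the article describes.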

Now that OpenAI has trained its model, the bigger challenge is getting it in front of large numbers of users. Microsoft launched a new version of Bing that leverages GPT-4, an improved OpenAI language model, and placed customers on a waitlist so it could scale up its servers before fully opening the service.

Beyond that, Microsoft is making the same computing infrastructure available through its Azure AI services so that other customers can build their own AI models, which requires scaling its resources even faster. The Azure team has to watch every detail, from chips to concrete and pipes to server racks, because a disruption in the supply of any one component can derail its plans for rapid expansion.

What Microsoft put together for OpenAI a few years ago has blown users away. What the company is building now is even bigger and more sophisticated. One can only imagine what exciting models it will produce.

Related Stories:

https://www.theverge.com/2023/3/13/23637675/microsoft-chatgpt-bing-millions-dollars-supercomputer-openai

https://siliconangle.com/2023/03/13/microsoft-spent-hundreds-millions-azure-infrastructure-make-chatgpt-happen/

https://gagadget.com/en/microsoft_corp/224673-microsoft-creates-chatgpt-supercomputer-for-hundreds-of-millions-of-dollars/

https://inshorts.com/en/news/microsoft-spent-millions-to-build-supercomputer-for-openai-report-1678787939941

https://www.msn.com/en-us/money/other/microsoft-spent-hundreds-of-millions-of-dollars-on-a-chatgpt-supercomputer/ar-AA18zHJY?ocid=FinanceShimLayer

Take Action: 

https://support.microsoft.com/en-us