Hugging Face commits to $10 million in free shared GPUs.

Hugging Face, one of the biggest names in machine learning, is committing $10 million in free shared GPUs to help developers build new AI technologies. The goal is to help small developers, academics, and startups counter the centralization of AI development.

“We are fortunate to be in a position where we can invest in the community,” Clem Delangue, CEO of Hugging Face, told The Verge. Delangue said the investment is possible because Hugging Face is “profitable, or close to profitable” and recently raised $235 million in funding, valuing the company at $4.5 billion.

Delangue is concerned about AI startups' ability to compete with tech giants. The most important advances in artificial intelligence, such as GPT-4, the algorithms behind Google Search, and Tesla's Full Self-Driving system, remain hidden within the confines of big tech companies. Not only are these corporations financially incentivized to keep their models proprietary, but with billions of dollars in computational resources at their disposal, they can compound those advantages and outrun competitors, making it all but impossible for startups to keep up.

“If you end up with a few organizations that are very dominant, it's going to be hard to fight that later.”

Hugging Face aims to make the latest AI technologies accessible to everyone, not just tech giants. I spoke with Delangue during Google I/O, the tech giant's flagship conference, where Google executives unveiled several AI features for its proprietary products and even a family of open models called Gemma. For Delangue, the proprietary approach is not the future he envisions.

“If you go the open source route, you go toward a world where most companies, most organizations, most nonprofits, policymakers, and regulators can actually do AI. So much more decentralized, without too much concentration of power, which, in my opinion, is a better world,” Delangue said.

How it works

Access to compute poses a significant challenge in building large language models. Companies such as OpenAI and Anthropic secure deals with cloud providers for substantial computing resources. Hugging Face aims to level the playing field by donating shared GPUs to the community through a new program called ZeroGPU.

Shared GPUs are accessible to multiple users or applications simultaneously, eliminating the need for a dedicated GPU per user or application. ZeroGPU will be available through Hugging Face's Spaces, a hosting platform for publishing apps, where, according to the company, developers have so far created more than 300,000 AI demos on CPU or paid GPU.

“It's very difficult to get enough GPUs from main cloud providers”

Access to shared GPUs is determined by usage: if a portion of a GPU's capacity is not actively being used, that capacity becomes available for someone else. This makes them cost-effective, energy-efficient, and ideal for community-wide use. ZeroGPU runs on Nvidia A100 GPUs, which offer roughly half the computation speed of the popular and more expensive H100s.
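The usage-based sharing described above can be illustrated with a minimal sketch. This is a hypothetical toy model, not Hugging Face's actual ZeroGPU scheduler: it simply divides one GPU into capacity slices and grants idle slices to whichever app asks next, returning them to the pool when an app finishes.

```python
# Hypothetical sketch of usage-based GPU sharing (not the real ZeroGPU
# scheduler): one GPU's capacity is split into slices, idle slices are
# granted on request, and a finished app's slices return to the pool.

class SharedGPU:
    def __init__(self, total_slices=10):
        self.total = total_slices
        self.allocations = {}  # app name -> slices currently held

    def used(self):
        # Total slices currently granted across all apps.
        return sum(self.allocations.values())

    def request(self, app, slices):
        """Grant up to `slices` from whatever capacity is idle right now."""
        free = self.total - self.used()
        granted = min(slices, free)
        if granted:
            self.allocations[app] = self.allocations.get(app, 0) + granted
        return granted

    def release(self, app):
        """The app goes idle: its slices become available to others."""
        return self.allocations.pop(app, 0)


gpu = SharedGPU(total_slices=10)
gpu.request("demo-a", 6)   # demo-a takes 6 of 10 slices
gpu.request("demo-b", 6)   # only 4 slices are idle, so demo-b gets 4
gpu.release("demo-a")      # demo-a finishes; its 6 slices free up
gpu.request("demo-c", 5)   # demo-c can now be granted all 5
```

The point of the sketch is the economics the article describes: no app pays for a dedicated GPU, because capacity that one demo is not using is immediately claimable by another.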

“It's very difficult to get enough GPUs from the primary cloud providers, and the way to get them — which is creating a high barrier to entry — is to do a lot of work over a long period of time,” Delangue said.

Typically, a company commits to a cloud provider such as Amazon Web Services for one or more years to secure GPU resources. The arrangement hurts small companies, indie developers, and academics who build on a small scale and can't predict whether their projects will catch on. Regardless of usage, they still have to pay for GPUs.

“It's also a nightmare to know how many GPUs you need and what kind of budget,” Delangue said.

Open source AI is catching up.

With AI advancing rapidly behind closed doors, Hugging Face aims to allow people to create more AI tech in the open.

“If you end up with a few organizations that are very dominant, it's going to be hard to fight that later,” Delangue said.

Andrew Reed, a machine learning engineer at Hugging Face, even created an app that visualizes the progress of proprietary and open-source LLMs over time, as rated by the LMSYS Chatbot Arena, showing the gap between the two closing.

According to the company, more than 35,000 variations of Meta's open-source AI model Llama have been shared on Hugging Face since Meta's first version a year ago, ranging from “quantized and integrated models to specialized models in biology and Mandarin.”

“AI shouldn't be kept in the hands of a few. With this commitment to open-source developers, we're excited to see what everyone will do next in the spirit of collaboration and transparency,” Delangue said in a press release.
