Intel and others commit to building open generative AI tools for the enterprise.


Image credit: hapabapa/Getty Images

Could generative AI designed for the enterprise (e.g., AI that automatically generates reports, spreadsheet formulas and so on) ever be viable? Along with several organizations including Cloudera and Intel, the Linux Foundation — a nonprofit organization that supports and maintains a growing number of open source efforts — aims to find out.

The Linux Foundation on Tuesday announced the launch of the Open Platform for Enterprise AI (OPEA), a project to promote the development of open, multi-provider and composable (i.e. modular) generative AI systems. Under the umbrella of the Linux Foundation’s LF AI and Data org, which focuses on AI- and data-related platform initiatives, OPEA aims to pave the way for the release of “rigorous,” “scalable” generative AI systems that draw on “the best open source innovation from the entire ecosystem,” said Ibrahim Haddad, executive director of LF AI and Data, in a press release.

“OPEA will unlock new possibilities in AI by creating a detailed, composable framework that is at the forefront of the technology stack,” Haddad said. “This move is a testament to our mission to advance open source innovation and collaboration within the AI and data communities under a neutral and open governance model.”

In addition to Cloudera and Intel, OPEA — one of the Linux Foundation’s sandbox projects, a sort of incubator program — counts among its members enterprise heavyweights such as IBM-owned Red Hat, Hugging Face, Domino Data Lab, MariaDB and VMware.

So what exactly might they build together? Haddad points to a few possibilities, such as “optimized” support for AI toolchains and compilers, which enables AI workloads to run across different hardware components, as well as “heterogeneous” pipelines for retrieval-augmented generation (RAG).

RAG is becoming increasingly popular in enterprise applications of generative AI, and it’s not hard to see why. The responses and actions of most generative AI models are limited by the data they were trained on. But with RAG, a model’s knowledge base can be extended to information beyond the original training data. RAG models consult this outside information — which can take the form of proprietary company data, a public database or some combination of the two — before generating a response or performing an action.
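The retrieve-then-generate pattern described above can be sketched in a few lines. This is purely illustrative and not OPEA or Intel code: a real RAG pipeline would use vector embeddings for retrieval and a language model for generation, whereas here retrieval is simple word-overlap scoring and the “generation” step is a stub that folds the retrieved context into a prompt.

```python
# Minimal sketch of the retrieval-augmented generation (RAG) pattern.
# Illustrative only: retrieval here is word-overlap scoring, and
# "generation" just builds the augmented prompt a real LLM would receive.

def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Return the k documents sharing the most words with the query."""
    q_words = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def generate(query: str, documents: list[str]) -> str:
    """Augment the prompt with retrieved context before generating."""
    context = " ".join(retrieve(query, documents))
    # A real RAG system would hand this prompt to a language model.
    return f"Context: {context}\nQuestion: {query}"

docs = [
    "Q3 revenue grew 12 percent year over year.",
    "The cafeteria menu changes every Monday.",
]
print(generate("What was revenue growth in Q3?", docs))
```

The key point the sketch captures is that the model’s input — not its weights — is extended with outside information at query time, which is why RAG works with proprietary data the model never saw during training.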

A diagram illustrating RAG models. Image credit: Intel

Intel offered some more details in its press release:

Enterprises are challenged with a do-it-yourself approach [to RAG] because there is no de facto standard across components that allows enterprises to choose and deploy RAG solutions that are open and interoperable and that help them get to market faster. OPEA plans to address these issues by collaborating with industry to standardize components, including frameworks, architecture blueprints and reference solutions.

Evaluation will also be an important part of what OPEA tackles.

In its GitHub repository, OPEA proposes a rubric for grading generative AI systems along four axes: performance, features, trustworthiness and “enterprise-grade” readiness. Performance, as OPEA explains it, covers “black box” benchmarks from real-world use cases. Features assesses a system’s interoperability, deployment options and ease of use. Trustworthiness looks at an AI model’s “robustness” and its ability to guarantee quality. And enterprise readiness focuses on the requirements to get a system up and running without major issues.
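The four axes above can be pictured as a simple scoring structure. The axis names below come from the rubric as described; the 0–10 scale, the equal weighting and the function names are invented here for illustration and are not taken from the OPEA repo.

```python
# Hypothetical sketch of applying a four-axis grading rubric to a
# generative AI deployment. Axis names follow OPEA's rubric; the
# scoring scale and equal weighting are assumptions for illustration.

AXES = ("performance", "features", "trustworthiness", "enterprise_readiness")

def grade(scores: dict[str, float]) -> float:
    """Average 0-10 scores across all four axes; every axis is required."""
    missing = [axis for axis in AXES if axis not in scores]
    if missing:
        raise ValueError(f"missing axes: {missing}")
    return sum(scores[axis] for axis in AXES) / len(AXES)

overall = grade({
    "performance": 8,
    "features": 7,
    "trustworthiness": 9,
    "enterprise_readiness": 6,
})
print(overall)  # 7.5
```

Requiring every axis before producing a grade mirrors the rubric’s intent: a deployment that benchmarks well but can’t be operated reliably shouldn’t score as “enterprise-grade.”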

Rachel Roumeliotis, director of open source strategy at Intel, says OPEA will work with the open source community to offer rubric-based tests, as well as provide assessments and ratings of generative AI deployments on request.

OPEA’s other efforts are a bit up in the air at the moment. But Haddad floated the potential of open model development along the lines of Meta’s expanding Llama family and Databricks’ DBRX. Toward that end, Intel has already contributed reference implementations to the OPEA repo for a generative-AI-powered chatbot, document summarizer and code generator optimized for its Xeon 6 and Gaudi 2 hardware.

Now, OPEA members are very clearly invested (and self-interested, for that matter) in building tooling for enterprise generative AI. Cloudera recently launched partnerships to build what it calls an “AI ecosystem” in the cloud. Domino offers a suite of apps for building and auditing business-forward generative AI. And VMware, which leans toward the infrastructure side of enterprise AI, rolled out new “private AI” compute products last August.

The question is whether these vendors will actually work together to build cross-compatible AI tools under OPEA.

There is a clear advantage to doing so. Customers will happily draw on multiple vendors depending on their needs, resources and budgets. But history has shown that vendor lock-in is all too easy to fall into. Let’s hope that’s not the end result here.



