NEW YORK--(BUSINESS WIRE)--Dec 7, 2022--
Comet, provider of the leading MLOps platform for machine learning (ML) teams from startup to enterprise, and Run:ai, the leader in compute orchestration for AI workloads, today announced a new partnership that will help ML practitioners accelerate their workflows and benefit from enhanced support throughout the ML lifecycle. Joint customers gain seamless access to a best-of-breed integrated solution that couples Comet’s experiment management and model production monitoring with Run:ai’s compute orchestration, while new customers can fully leverage the integration to get the most from their ML initiatives, from early experimentation all the way through production.
“Many of our customers rely on Run:ai and this integration enables teams to glean even more value from our respective solutions,” said Gideon Mendels, CEO and co-founder of Comet. “Comet is fully committed to working with companies across the ML ecosystem to hasten the maturation and adoption of ML. It is our view that integrations and collaboration are the community’s best path forward, and we are delighted to work with Run:ai in pursuit of this goal.”
Run:ai is the latest in a string of technology partnerships and integrations aimed at expanding Comet’s ecosystem and interoperability. The Comet platform’s flexibility makes it well suited to the changing AI landscape. Comet uniquely offers both experiment tracking and model production monitoring, and its platform can run on any infrastructure, whether cloud, on-premises, or virtual private cloud (VPC). Comet’s approach has not only won over an outstanding roster of customers; the company was also recently recognized as a Gartner Cool Vendor and a CRN Emerging Vendor.
The new partnership between Comet and Run:ai will streamline ML projects for data scientists, researchers, and IT teams, as well as extended team members in search of strategic business insights. In addition to Comet’s world-class experiment tracking and model production monitoring capabilities, joint customers can now easily operationalize cloud-native shared GPU clusters with Run:ai.
Run:ai’s Kubernetes-based software platform for orchestrating containerized AI workloads enables GPU clusters to be used dynamically for different deep learning workloads, from building AI models to training to inference. With Run:ai, jobs at any stage automatically get access to the compute power they need. Run:ai’s compute management platform speeds up data science initiatives by pooling available resources and then dynamically allocating them based on need, maximizing accessible compute.
Customers can now use Run:ai’s scheduling and orchestration capabilities to optimize resource use. Teams also gain a single ML system of record that keeps an accurate history of all experiments, model versions, and dataset versions.
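For readers who want a concrete picture of that system of record, the sketch below shows a minimal training script logging a run to Comet. The comet_ml calls follow Comet’s public Python SDK; the project name, metric values, and the note about running inside a Run:ai-scheduled container are illustrative assumptions for this example, not details taken from the announcement.

    from comet_ml import Experiment

    # Credentials are normally supplied via the COMET_API_KEY environment
    # variable, so no key needs to be hard-coded in the script.
    # The project name below is a placeholder chosen for this example.
    experiment = Experiment(project_name="gpu-cluster-demo")

    # Record the hyperparameters for this run (illustrative values).
    experiment.log_parameters({"learning_rate": 0.001, "batch_size": 64})

    # Stand-in for a real training loop; each epoch logs a metric to Comet.
    for epoch in range(10):
        train_loss = 1.0 / (epoch + 1)  # placeholder for an actual loss value
        experiment.log_metric("train_loss", train_loss, epoch=epoch)

    # Close the run so it appears as a completed experiment in Comet.
    experiment.end()

In a joint deployment, a script like this would typically be packaged into a container image and submitted to the shared GPU pool through Run:ai, with Comet recording the resulting run alongside the rest of the team’s experiment history; the exact submission workflow depends on how the Run:ai cluster is configured.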
"We are thrilled to be working with Comet to help ML practitioners accomplish more, faster and at a greater scale than ever before," said Omri Geller, CEO and co-founder of Run:ai. “Together, we will empower practitioners throughout the entire ML lifecycle with intelligent tools that make their work simpler and more efficient.”
About Comet
Comet provides an MLOps platform that data scientists and machine learning teams use to manage, optimize, and accelerate the development process across the entire ML lifecycle, from training runs to monitoring models in production. Comet’s platform is trusted by over 150 enterprise customers including Affirm, Cepsa, Etsy, Uber and Zappos. Individuals and academic teams use Comet’s platform to advance research in their fields of study. Founded in 2017, Comet is headquartered in New York, NY with a remote workforce in nine countries on four continents. Comet is free to individuals and academic teams. Startup, team, and enterprise licensing is also available. To learn more, visit www.comet.com or join our community at heartbeat.comet.ml.
About Run:ai
Run:ai’s Atlas Platform brings cloud-like simplicity to AI resource management, providing researchers with on-demand access to pooled resources for any AI workload. An innovative cloud-native operating system, which includes a workload-aware scheduler and an abstraction layer, helps IT simplify AI implementation, increase team productivity, and gain full utilization of expensive GPUs. Using Run:ai, companies streamline development, management, and scaling of AI applications across any infrastructure, including on-premises, edge and cloud. Learn more at www.run.ai.
View source version on businesswire.com: https://www.businesswire.com/news/home/20221207005270/en/
CONTACT: Editorial Contact:
Stephanie Gnibus
GMK Communications for Comet
stephanie@gmkcommunications.com
KEYWORD: UNITED STATES NORTH AMERICA ISRAEL MIDDLE EAST NEW YORK