Vultr introduces AMD Instinct MI300X accelerator

Vultr, a privately held cloud computing platform, has announced that the new AMD Instinct MI300X accelerator and ROCm open software will be made available within Vultr’s composable cloud infrastructure.

The collaboration between Vultr’s composable cloud infrastructure and AMD’s next-generation silicon architecture unlocks new frontiers for GPU-accelerated workloads from the data centre to the edge.

“Innovation thrives in an open ecosystem,” says J.J. Kardwell, CEO of Vultr. “The future of enterprise AI workloads is in open environments that allow for flexibility, scalability, and security. AMD accelerators give our customers unparalleled cost-to-performance. The balance of high memory with low power requirements furthers sustainability efforts and gives our customers the capabilities to efficiently drive innovation and growth through AI.”

With AMD ROCm open software and Vultr’s cloud platform, enterprises have access to an industry-leading environment for AI development and deployment. The open nature of AMD architecture and Vultr infrastructure gives enterprises access to thousands of open source, pre-trained models and frameworks with a drop-in code experience, creating an optimised environment for advancing AI projects at speed.
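
To illustrate the drop-in code experience described above, the sketch below loads an open source, pre-trained model with standard PyTorch tooling. On ROCm builds of PyTorch, the familiar torch.cuda device API targets AMD GPUs such as the Instinct MI300X, so existing GPU code typically runs unchanged; the model name and library setup here are illustrative assumptions rather than details from the announcement.

```python
# Minimal sketch of the "drop-in" workflow: the same PyTorch code path is used
# whether the wheel was built for CUDA or for ROCm, so a pre-trained model can
# run on an AMD Instinct GPU without code changes.
# Assumptions: a ROCm build of PyTorch and the Hugging Face transformers
# package are installed; the model name below is purely illustrative.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "distilbert-base-uncased-finetuned-sst-2-english"  # illustrative

# On ROCm builds, torch.cuda.* reports and targets AMD GPUs (via HIP),
# so this check and the "cuda" device string work unchanged on an MI300X.
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME).to(device)
model.eval()

inputs = tokenizer(
    "Open ecosystems make AI deployment easier.", return_tensors="pt"
).to(device)
with torch.no_grad():
    logits = model(**inputs).logits

print("running on:", device, "| predicted class:", logits.argmax(dim=-1).item())
```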

“We are proud of our close collaboration with Vultr, as its cloud platform is designed to manage high-performance AI training and inferencing tasks and provide improved overall efficiency,” notes Negin Oliver, Corporate Vice President of Business Development, Data Center GPU Business Unit, AMD. “With the adoption of AMD Instinct MI300X accelerators and ROCm open software for these latest deployments, Vultr’s customers will benefit from having a truly optimised system tasked to manage a wide range of AI-intensive workloads.”

Designed for next-generation workloads, AMD architecture on Vultr infrastructure allows for true cloud-native orchestration of all AI resources. AMD Instinct accelerators and ROCm software management tools integrate seamlessly with the Vultr Kubernetes Engine for Cloud GPU to create GPU-accelerated Kubernetes clusters that can power the most resource-intensive workloads anywhere in the world. These platform capabilities give developers and innovators the resources to build sophisticated AI and machine learning solutions to the most complex business challenges.
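
The Kubernetes integration described above can be pictured as a pod that requests an AMD GPU through the standard Kubernetes extended-resource mechanism. The following is a minimal sketch using the official Kubernetes Python client; it assumes the AMD GPU device plugin is deployed on the cluster (exposing the amd.com/gpu resource) and uses an illustrative container image and namespace, rather than reflecting Vultr-specific tooling.

```python
# Minimal sketch of scheduling a GPU workload on a Kubernetes cluster, assuming
# the AMD GPU device plugin is installed and advertises the "amd.com/gpu"
# resource. Image, namespace, and pod details are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()  # reads the kubeconfig for your cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="rocm-smoke-test"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="rocm",
                image="rocm/pytorch:latest",  # illustrative image tag
                command=[
                    "python3", "-c",
                    "import torch; print(torch.cuda.is_available())",
                ],
                resources=client.V1ResourceRequirements(
                    # Request one AMD GPU via the device plugin's resource name.
                    limits={"amd.com/gpu": "1"},
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
print("submitted pod rocm-smoke-test; inspect with: kubectl logs rocm-smoke-test")
```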

Further benefits of this partnership include:

• Improved price-to-performance: Vultr’s high-performance cloud compute, accelerated by AMD GPUs, offers exceptional processing power for demanding workloads while maintaining cost efficiency.
• Scalable compute and optimised workload management: Vultr’s scalable cloud infrastructure, combined with AMD’s advanced processing capabilities, allows businesses to seamlessly scale their compute resources as demand grows.
• Accelerated discovery and innovation in R&D: Vultr’s cloud infrastructure offers the computational power and scalability developers need to deploy AMD Instinct GPUs, AMD ROCm open software, and the broader partner ecosystem against complex problems, enabling faster discovery cycles and innovation.
• Optimised for AI inference: Vultr’s platform is optimised for AI inference, with AMD Instinct MI300X GPUs providing faster, scalable, and energy-efficient processing of AI models, enabling reduced latency and higher throughput.
• Sustainable computing: Vultr’s eco-friendly cloud infrastructure allows users to achieve energy-efficient and sustainable computing in large-scale operations with AMD’s efficient AI technologies.

For more from Vultr, click here.

The post Vultr introduces AMD Instinct MI300X accelerator appeared first on Data Centre & Network News.
