In today’s environment, IT organizations must support a growing number of artificial intelligence applications, along with new and emerging high-performance computing workloads. These challenges are not easily solved by traditional IT approaches and technologies: administrators typically set up separate systems for AI workloads and manage them with manual processes.
Today, there’s a better way to go — thanks to integration technologies from VMware and NVIDIA. The two companies collaborate on software so IT organizations can run both traditional and AI workloads within the same environment. This allows IT professionals to deliver the infrastructure needed to support new workloads, and AI applications can now be managed with the same VMware flexibility as other applications.
Thanks to the tight integration between VMware and NVIDIA, organizations can now virtualize GPU-accelerated workloads within their systems. They can share GPUs within servers so that multiple data scientists can accelerate deep learning workloads simultaneously. This improves utilization and saves money on hardware procurement and management.
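The GPU-sharing idea can be sketched in miniature. This is a toy model only: real NVIDIA vGPU profiles are configured in the hypervisor (for example, through vSphere), not in application code, and the `VirtualGpuPool` class and profile sizes below are hypothetical illustrations.

```python
# Toy sketch of vGPU-style sharing: one physical GPU's framebuffer is
# carved into equal slices, each assigned to a different data scientist.
# Everything here is illustrative, not NVIDIA's actual API.

class VirtualGpuPool:
    """Splits one physical GPU's framebuffer into equal vGPU slices."""

    def __init__(self, total_fb_gb: int, slice_gb: int):
        if total_fb_gb % slice_gb != 0:
            raise ValueError("framebuffer must divide evenly into slices")
        self.free_slices = total_fb_gb // slice_gb
        self.slice_gb = slice_gb
        self.assignments = {}  # user -> framebuffer GB

    def assign(self, user: str) -> bool:
        """Give one slice to a user; return False if fully subscribed."""
        if self.free_slices == 0:
            return False
        self.assignments[user] = self.slice_gb
        self.free_slices -= 1
        return True

# A 40 GB GPU carved into four 10 GB slices serves four users;
# a fifth request is refused until a slice is freed.
pool = VirtualGpuPool(total_fb_gb=40, slice_gb=10)
results = [pool.assign(u) for u in ["ada", "bob", "eve", "kim", "lee"]]
print(results)  # [True, True, True, True, False]
```

The point of the sketch is the utilization argument from the paragraph above: without sharing, each of the four users would need a dedicated GPU.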
This integration of technologies helps IT organizations to save time and reduce administrative steps. The best part is that they can centrally manage all their resources with VMware vCenter, and allocate them as required.
These are the benefits that the University of Pisa has realized through its use of Dell Technologies infrastructure and the NVIDIA AI Enterprise software suite.
Dell EMC VxRail provides a simple, cost-effective hyperconverged infrastructure that solves a wide range of IT challenges and supports almost any use case, including tier-one applications and mixed workloads. VxRail makes it easier to deliver VMware-virtualized apps faster and more efficiently. Jointly developed by Dell Technologies and VMware, VxRail provides a seamless, optimized, and curated HCI experience. VxRail has also been certified by NVIDIA, which means it can deliver exceptional performance, security and scalability for AI and data science workloads.
Dell EMC PowerScale storage is designed to serve as the foundation for data, building an integrated and optimized IT infrastructure for AI initiatives, from proof of concept (POC) to production. These all-flash, scale-out network-attached storage systems provide the data performance and extreme concurrency needed to feed deep learning algorithms. PowerScale storage includes OneFS data governance, enterprise features for data management and security, and data protection, helping IT organizations comply with regulatory and enterprise security policy requirements.
NVIDIA AI Enterprise is a software suite of enterprise-grade AI tools and frameworks that is optimized, certified and supported by NVIDIA on the latest VMware vSphere. This software allows IT professionals in the thousands of enterprises that use vSphere to support AI with the same tools they use for managing large-scale data centers and hybrid cloud environments. NVIDIA AI Enterprise delivers scale-out, multi-node AI performance on vSphere that rivals bare-metal servers.
VMware vSphere is the industry’s leading server virtualization software for applications using any combination of virtual machines, containers and Kubernetes. The 70+ million workloads on vSphere can now be modernized using native Kubernetes. With vSphere with Tanzu, modern containerized applications can run alongside enterprise applications.
A Center of Excellence
The University of Pisa is both a Dell Technologies and a VMware AI Center of Excellence. As part of this designation, the University’s IT department regularly tests and evaluates new technologies. That was the case with NVIDIA AI Enterprise on Dell Technologies infrastructure.
“We are running AI workloads using VMware and are using Dell EMC PowerStore to store data in the virtualized environment,” explains Maurizio Davini, chief technology officer at the University of Pisa. “And we have a Dell EMC PowerScale all-flash environment for AI/HPC as a kind of traditional scale-out in our fast systems.”
Davini points out that the university has both bare-metal systems and virtualized systems with NVIDIA GPUs or DPUs.
“We have traditional bare-metal GPUs that are used for research such as language processing and image processing. We are increasing our bare-metal capabilities on GPUs. We now have GPU clusters within our VMware production environment that match the performance of our bare-metal systems.”
Flexibility is key, Davini says: “VMware allows us to be flexible and use the infrastructure to support a lot more things, including enterprise workloads, VDI, remote workstations, smart working and scientific computing. It also gives us the ability to access the infrastructure in a very flexible manner. And this is the problem that VMware and Dell have helped us to solve.”
For the full story, see the Dell Technologies case study “Simplifying AI systems.”
Section D Digital World Tech News – dWeb.News