As data-intensive workloads move into the data center and outgrow traditional CPU performance, GPU vendors have introduced new devices and graphics cards designed specifically for data center use.
Recent trends in big data, artificial intelligence, and machine learning are creating a chain reaction across enterprise servers. Because traditional microprocessors cannot efficiently process the information these demanding workloads generate, data center GPUs have moved in to fill the resource gap.
Graphics processing units date from the 1970s, when they were first used to offload video and graphics processing tasks from the CPU. They have a fundamentally different underlying design from a typical CPU, which is built to maximize throughput on a single instruction stream. A CPU is also designed for fast context switching and for quickly moving information from one place to another, such as from main memory to a storage system. A GPU has a different structure: it handles many high-speed data streams in parallel. These processors contain many parallel data paths that perform large volumes of data processing at once, which maps well onto the requirements of graphics applications.
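The contrast above can be sketched in ordinary Python. This is an illustrative analogy, not real GPU code: the function names (`brighten`, `cpu_style`, `gpu_style`) and the four-lane thread pool are invented for the example, and Python threads only approximate the GPU's "one operation applied to many data elements at once" model.

```python
# Illustrative sketch only: a CPU walks one instruction stream over the
# data sequentially, while a GPU applies the same operation across many
# data elements in parallel. A thread pool stands in for the GPU's
# parallel data paths.
from concurrent.futures import ThreadPoolExecutor


def brighten(pixel: int) -> int:
    """A per-pixel clamp-add: the kind of uniform operation GPUs parallelize."""
    return min(pixel + 64, 255)


def cpu_style(pixels):
    # One stream of instructions processes each element in turn.
    return [brighten(p) for p in pixels]


def gpu_style(pixels, lanes: int = 4):
    # Many "lanes" each apply the identical operation to their own element.
    with ThreadPoolExecutor(max_workers=lanes) as pool:
        return list(pool.map(brighten, pixels))


if __name__ == "__main__":
    frame = [0, 100, 200, 250]
    # Both styles compute the same result; only the execution shape differs.
    print(cpu_style(frame))  # [64, 164, 255, 255]
    print(gpu_style(frame))  # [64, 164, 255, 255]
```

The point of the sketch is that a graphics workload is the same small operation repeated over a large, regular data set, which is why hardware with many parallel data paths suits it better than a fast single stream.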
GPUs expand their range of data center applications.
GPUs initially handled only a narrow set of tasks, but their role has gradually expanded as workload requirements have grown. Vendors such as Nvidia have worked to differentiate GPUs from other semiconductor products and to find broader uses for them.
First, these products began moving into high-performance computing. More recently, GPU vendors have designed devices and graphics card products specifically for data center servers. Server-optimized GPUs use high-bandwidth memory and are supplied either as modules integrated into dedicated server designs or as Peripheral Component Interconnect Express (PCIe) add-in cards. Unlike gaming graphics cards, however, these cards do not provide graphics output.
Server vendors pair the GPU with the CPU so that each plays to its strengths: when CPU performance alone is not sufficient for data-intensive tasks, the attached GPU boosts the system's overall processing capability.
Big data, machine learning, and artificial intelligence applications have heavy processing requirements: they must handle large volumes of information and many different data types. These characteristics map well onto the GPU's design.
Both AI and machine learning vendors use GPUs to handle the large amounts of data needed to train neural networks. Gartner analyst Alan Priestley said that in this market, GPU-equipped high-performance servers run the deployed applications, while the availability of GPU-equipped PCs helps software developers build and test their algorithms on desktop machines.
GPU applications in the data center.
Data center GPU use is likely to expand further in the future, and GPUs are becoming an important infrastructure feature for mission-critical workloads. Priestley says GPUs are becoming commoditized, and IT organizations can easily incorporate them into applications through standard libraries.
Therefore, server vendors offer dedicated servers with integrated GPU modules, or products that support GPU add-in cards. According to Gartner, server-optimized GPU cards and modules with the highest-performance processors typically cost between $1,000 and $5,000.
Established server suppliers are beginning to incorporate these GPU products into their product lines.
Dell supports AMD's FirePro series GPUs and Nvidia GPUs designed for virtual desktop infrastructure and computing applications, with support for up to 1,792 GPU cores. Hewlett Packard Enterprise (HPE) ProLiant systems work with Nvidia Tesla, Nvidia GRID, and Nvidia Quadro GPUs. The HPE Insight Cluster Management Utility installs and configures GPU drivers and monitors GPU operating conditions such as temperature.
To prepare for intensive data center GPU use, administrators need expertise in managing these processors. Finding staff familiar with the technology is not easy, because it differs from traditional microprocessor design, and although Nvidia provides some training materials, related courses remain scarce.