Over the past 5 years, data center infrastructure management (DCIM) has become an acknowledged, if somewhat inconsistently implemented, approach to control and oversight of IT facilities. DCIM offers a centralized approach to the monitoring and management of the critical systems in a data center.
Currently, DCIM implementations primarily focus on the physical and asset-level components of the data center facility, such as:
- For facilities monitoring only:
  - Building management systems (BMS)
  - Utility sources and dual power source systems
  - Generators
  - UPS systems
  - Power distribution units (PDUs)
  - Multi-source mechanical systems (chilled water, direct exchange, heat wheel)
  - Fire detection and suppression
  - Temperature
- For system monitoring and management:
  - Valve control
  - Power source control
  - Variable frequency drive (VFD) response to temperature changes
- For security integration:
  - CCTV monitoring
  - Access control systems logging and monitoring
  - Biometric reader logging and monitoring
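The subsystems above could, in principle, be brought behind a single collection interface rather than separate consoles. A minimal sketch of that idea follows; all class, field, and sensor names here are hypothetical illustrations, not an actual DCIM product API:

```python
from dataclasses import dataclass

# Hypothetical unified reading from any facility subsystem
# (BMS, UPS, PDU, CRAC, fire detection, etc.).
@dataclass
class Reading:
    subsystem: str   # e.g. "UPS-A", "CRAC-1"
    metric: str      # e.g. "load_kw", "inlet_temp_c"
    value: float

def collect(sensors):
    """Poll every registered sensor callback and return a flat
    list of readings suitable for one shared dashboard or store."""
    readings = []
    for name, poll in sensors.items():
        metric, value = poll()
        readings.append(Reading(name, metric, value))
    return readings

# Example: two stubbed sensors standing in for real UPS/cooling feeds.
sensors = {
    "UPS-A": lambda: ("load_kw", 42.5),
    "CRAC-1": lambda: ("inlet_temp_c", 24.1),
}
readings = collect(sensors)
print(readings)
```

The point of the sketch is the normalization step: once every subsystem emits the same record shape, correlation and analytics across facilities and IT data become tractable.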
In these implementations, telecommunication and data networks have typically remained independent. While a remote monitoring and management concept is usually in place, the application focus has clearly been on collecting and presenting systems data, not on interpreting that data to actually improve uptime.
In many respects, the current state of the market reflects the business and technical drivers behind these implementations: data center consolidation, the implications of increasing power and heat density in server racks, and energy efficiency and sustainability initiatives. With the rapid acceptance of virtualized environments and cloud computing, there is now increasing focus on delivering high-performance, ultra-reliable, efficient data center architectures.
To begin, consider cloud computing, which NIST defines as “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.” Inherent in this definition is an emphasis on automated provisioning and governance, along with a built-in focus on the core benefits that cloud is supposed to deliver: cost savings, energy savings, rapid deployment and customer empowerment.
This cloud-influenced perspective is putting traditional DCIM approaches under scrutiny. DCIM increasingly emphasizes automation capabilities that create a dynamic infrastructure able to adapt rapidly to workload demands and resource utilization conditions. At BRUNS-PAK, we refer to this emerging requirement as Data Center Infrastructure Management 2.0, or DCIM 2.0 for short.
DCIM 2.0 will integrate existing infrastructure management tools and systems with the telecommunication, data and networking feeds needed to create a true ‘internet of things’ for the data center. By bringing these pieces together, along with proactive visualization and predictive analytics applications, DCIM 2.0 can begin to drive systems that control the necessary infrastructure changes to maintain operations with the lowest possible energy utilization. For example, real-time computational fluid dynamics (CFD) modeling of workload-driven anticipated temperature changes can be used to control VFD cooling fans to maintain temperature.
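The VFD example can be sketched as a simple proportional control loop: a predicted inlet temperature (e.g., from a CFD model) is mapped to a fan speed command. The setpoint, gain, and speed limits below are illustrative assumptions, not vendor parameters, and a production system would use a more sophisticated controller:

```python
def vfd_speed(predicted_temp_c, setpoint_c=24.0,
              gain=8.0, min_pct=30.0, max_pct=100.0):
    """Map a predicted inlet temperature to a VFD fan speed (%)
    via proportional control. All constants are illustrative."""
    error = predicted_temp_c - setpoint_c
    speed = min_pct + gain * error
    # Clamp to the drive's allowed operating range.
    return max(min_pct, min(max_pct, speed))

# If the CFD model anticipates a rise to 27 °C, fans ramp up
# ahead of the heat load instead of reacting after the fact.
print(vfd_speed(27.0))  # 30 + 8 * 3 = 54.0 % speed
```

The design point is the word *anticipated*: the controller acts on the modeled future temperature rather than the current sensor reading, which is what distinguishes this from the reactive VFD response listed earlier.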
Given the increasing intelligence of both the physical and logical devices that need to be part of this environment, implementation of DCIM 2.0 is possible sooner than many IT professionals think. In fact, the largest barriers to initial implementations may be management focus and a conscious desire to avoid responsibility silos (facilities emphasis vs. IT emphasis). Current dashboard tools can unify much of the data needed to begin bringing DCIM 2.0 to life, and in so doing, help IT teams looking to combine ultra-reliability, scalability and efficiency under one data center vision.