Republic Bank Limited Statement of Requirements, Design, Engineering, Construction Management

Republic Bank Limited

Port of Spain, Trinidad

Statement of Requirements, Design, Engineering, Construction Management

IBM/BRUNS-PAK was selected by Republic Bank Ltd. to provide a statement of requirements, architectural design and engineering, construction management consulting, and commissioning services for a new data center and support infrastructure building to be developed on a greenfield site outside Port of Spain, Trinidad. Upon completion, the facility became the Bank's new primary data center.

The project scope included infrastructure development of the undeveloped site, including the provision of communications and utility services. The single-story structure, approximately 26,000 square feet in area, is raised roughly five feet above the surrounding grade to enhance security and protect against flooding. The new facility provides the Bank with over 10,000 square feet of raised access floor space, including approximately 5,000 square feet of “white space,” 2,500 square feet of future expansion “white space,” a Command Center, a War Room, a Staging area, and other raised-floor support functions. The structure, designed to withstand Category 3 hurricane winds, is a concrete frame with an insulated exterior metal wall panel system over concrete block exterior walls, covered by a ‘double roof’ system. The receiving area / loading dock supports both the IT technical functions and the mechanical / electrical infrastructure areas.

The mission-critical facility incorporates ‘2N’ segregated, independent, redundant electrical systems to provide concurrent maintainability in an “A/B” configuration, with the potential for a future “A/B/C” configuration. The mechanical system provides an ‘N+1’ redundant configuration and utilizes a ‘hot aisle’ containment system in the Data Center. A ‘double interlock’ pre-action sprinkler system protects the entire facility, while a full flooding gaseous suppression system provides primary protection for the Data Center and other critical areas. An Aspirating Smoke Detection (ASD) system provides early detection in critical areas.

Understanding Data Center Enterprise Transformation

Mark Evanko, CEO of BRUNS-PAK, moderated a panel on data center enterprise transformation at Data Center World Local, Washington, D.C.

by Danny Bradbury, Data Center Knowledge, April 10, 2018
 
Here is the text from the article above. Great read!

When you want to know what trends should be on your radar as a data center professional, Mark Evanko is a good source of knowledge. With nearly four decades of experience in the data center space, the co-founder and principal engineer at data center consulting firm BRUNS-PAK understands where today’s trends came from – and where they’re going.

He has his ear close to the ground, getting feedback directly from BRUNS-PAK’s target market. He hears what issues are top of mind for board executives and senior management when planning their data center developments and uses this to help maintain a multi-point strategic planning model.

In the past, that model has featured 19 elements that impact enterprise data center solutions. They have included cloud computing, colocation and disaster recovery.

Evanko will moderate a panel about trends in the data center space at Data Center World Local, Washington, D.C. on May 15. Attendees at this Data Center Enterprise Transformation presentation will learn the “elements” and fundamental concepts associated with the continued transformation. The panel will also explain what boards of directors, senior management, stockholders, trustees, and taxpayers are looking for in their data center solutions and provide a vendor-neutral overview of total cost of ownership vs. risk.

Security and the Cloud

One of the most significant factors associated with data center transformation is cybersecurity, especially as it pertains to cloud computing, Evanko warns.

“If data is ever taken, stolen or corrupted at a third-party venue, there is no recovery for the enterprise. They can’t sue the third-party provider because they absolve themselves of any liability,” he says. He adds that large third-party cloud and colocation service providers will generally not take liability for security breaches in client contracts because it would be prohibitively expensive for them.

“Some of these large enterprises are weighing that risk,” he explains. This manifests itself in decisions about which applications to take off-site and which to retain on a company’s premises. He has a process called ‘candidacy’, in which BRUNS-PAK reviews products and services against several tiers of mission-criticality. The cloud has a role in less critical applications, he says.

Evanko warns that there is still much educating to be done as companies struggle to keep track of a rapidly changing data center landscape. “Many of the customers that we have come across have not necessarily understood what the impacts are,” he says.

Therein lies his fundamental message. When it comes to managing transformational data center trends like edge and cloud computing while keeping your information safe, data center professionals must understand how to balance risk, total cost of ownership, and functionality.

That can be a difficult path to walk, which is why understanding the impacts of these new technologies is so critical.

Alternative Financing Strategies for Data Center Expansion

The rising reliance on real-time, data-informed decision making in the enterprise is placing new demands on the CIO to increase capacity and quality of service for knowledge workers throughout the enterprise. The challenge for the CIO in many organizations, however, is how to deliver that increased capability, capacity, and quality of service while dealing with rising pressure to cut costs or forego major capital expenditures.

Traditionally, this has meant a strategic decision between:

  • Renovation of existing data center facilities (dominantly OPEX)
  • Expansion of existing data center facilities (balance of OPEX and CAPEX)
  • Building new data center facilities (CAPEX program)

BRUNS-PAK Data Center Design/Build Leaseback Programs offer a secure way to finance new data center capacity through the allocation of operating dollars instead of capital dollars. Backed by one of the nation’s leading financial services institutions, BRUNS-PAK leaseback options are integrated with the BRUNS-PAK design/build methodology, which offers both the data center owner and the financing organization a clear, well-documented, fixed-price plan for data center construction projects. Financing options are available for both large-scale and moderate-scale programs.

GE and EMC Pivotal: Three Things Every CIO Can Learn From Them.

Recently, General Electric announced a $105 million investment in EMC Pivotal. The investment reflects the company’s growing commitment to smart systems and devices under its industrial Internet initiative. From locomotives to turbines to household appliances, GE sees a world where the ‘internet of things’ delivers measurable value to users of these increasingly intelligent systems.

They are not alone in their strategy. Apple expats Tony Fadell and Matt Rogers took their knowledge of design engineering and online connectivity to create Nest, which sells smart building thermostats. Nest is more than a programmable thermostat, however. This web-connected device learns from a homeowner’s behavioral patterns and creates a temperature-setting schedule from them. It is also a data-use giant…compiling data on its users to drive smarter energy utilization. More important, it shows how entrepreneurs are beginning to embrace technology to do to other common devices what Apple has done to our portable music devices (iPod) and phones (iPhone)—namely make them stylish, fun, and easy to use.

So, drawing on this trend, GE is rethinking how turbines can talk to their owners to drive smarter operation…or more reliable operation, and how locomotives can talk to controllers to ensure timely service and keep maintenance schedules on track. For IT teams at GE, this means tons of diverse data streams, structured, unstructured, and semi-structured, that need storage and interpretation. If this is your business, as GE increasingly deems it is, then the investment in EMC Pivotal makes sense.

But what can we all learn from GE? Here are three important takeaways from the GE investment for CIOs in all business, academic and government segments:

Data Volume Will Grow.

In conversations with IT executives, we still see a tendency to talk about data in traditional terms. That is, we think of applications in our traditional departments (HR, sales, finance, manufacturing, etc.) as being our data sources. However, overlooking the explosion in data volumes likely to come from marketing, social media, and customer devices like the Nest thermostat could leave IT teams scrambling for resources when the tsunami from these sources hits.

CIOs Must Drive Business Value…Not Just IT.

GE is slowly and methodically betting its business on data, and it is not alone. The key takeaway is the rapid shift from CIO as owner of IT services to broker of services supporting business value. This shift requires CIOs to rethink their facilities and infrastructure strategy to ensure nimble, scalable, secure, on-demand, affordable resources for the business.

Data Center Facilities Are Not What They Used To Be.

The Microsoft Azure cloud facility in Quincy, WA includes three distinct architectural approaches to data center design, from a traditional raised-floor integrated facility to a novel, open-air modular form factor that redefines what it means to be a data center. This one facility single-handedly demonstrates the complex decisions facing IT executives looking to plot data center facility strategy for the next decade. When building out data center resources to support consumer-grade data processing (i.e., Google- or Amazon-class price/performance), you need to consider groundbreaking concepts.

The BRUNS-PAK Data Center Methodology

Over the years, BRUNS-PAK has quietly assembled one of the most diverse, skilled teams of professionals focused on the strategies and implementation tactics required to craft durable data center strategies in this new era. From strategic planning to design/build support, construction and commissioning, BRUNS-PAK is helping clients craft solutions that balance the myriad decisions underpinning effective data center strategy, including:

  • Renovation vs. expansion options (CAPEX v. OPEX)
  • Build and own
  • Build and leaseback
  • Migration/relocation options
  • Co-Location
  • Cloud integration / Private cloud build out
  • Container/Pod deployment
  • Network optimization
  • Business impact analysis
  • Hybrid computing architecture

With over 6,000 customers in all industry, government and academic sectors, BRUNS-PAK has a proven process for designing, constructing, commissioning and managing data center facilities, including LEED-certified, high efficiency facilities in use by some of the world’s leading companies and institutions.

The New Normal in Data Center Infrastructure Strategy

Cloud computing is a top-of-mind initiative for organizations in all industries. The promise of scalable, on-demand infrastructure, consumption-based pricing that reduces capex demands, and faster time-to-market for new solutions constitutes an intoxicating potion for requirements-challenged, cash-strapped IT executives.

However, for many IT executives, the migration to the cloud is not a simple decision, for one big reason: security. When you own and manage your own infrastructure or employ traditional colo or managed hosting services, there are established policies, practices, and risk mitigation strategies that are widely accepted. In the murky waters of the cloud, entirely new risks emerge, including:

  • Less transparency on infrastructure security practices, especially in below-the-hypervisor assets
  • New multi-tenancy considerations that are not as well documented or understood
  • Greater delegation of governance, risk and compliance demands to the cloud services provider

Despite these considerations, the financial lure of the cloud is inescapable. Public cloud services providers (CSPs) like Amazon and Microsoft have created massive economies of scale and are increasingly focused on segmented private cloud services that set a new normal in terms of cost-effectiveness, scalability and the ability to deliver truly agile IT infrastructure.

This has forced many IT departments to begin to look at workload segmentation in a new light. Beyond the questions of transactional vs. archival or batch vs. real-time workloads, organizations now need to look at applications that are “cloud adaptable”, both in terms of performance/technical readiness and in terms of governance, risk and compliance. New, business-driven applications like social CRM, human capital management, collaborative procurement and predictive analytics are all strong candidates for migration to on-demand cloud architecture.

This leads to another ‘new normal’ in IT infrastructure: hybrid architectures. Hybrid IT infrastructure bridges public and private clouds, managed services providers, and on-premise data centers. This composite fabric needs to be secured and managed for optimized performance, compliance, and risk, opening up entirely new challenges and ushering in whole new classes of automation and management toolkits, such as internal cloud services brokers. It also forces greater emphasis on internal plans for virtualization or on-premise cloud deployments that can be integrated seamlessly into these complex architectures.

Making sense of this trend and its associated technologies can be confusing. BRUNS-PAK Consulting Services is a growing part of BRUNS-PAK’s comprehensive data center services offerings. Our consulting services team is expert at helping customers to plan and implement complex strategies for alternative infrastructures and dynamic IT deployment. By helping IT management understand and optimize the following critical infrastructure considerations, we can make it easier to align IT strategy with business needs, and reduce the rise of shadow IT initiatives:

  • Value of current facilities renovation/expansion (CAPEX vs. OPEX)
  • New data center build options (CAPEX)
  • Alternative financing options/leaseback (OPEX)
  • Co-location design and optimization
  • Cloud integration
  • Containers/Pods
  • Network/WiFi design and management
  • Migration/relocation options
  • Hybrid computing environment design and deployment

High Impact Measures to Boost Data Center Efficiency (Part 4)

Mechanical cooling, depending on the efficiencies of the systems being used, can consume as much as 50% of the total power used in a data center. Good engineering practice, equipment efficiencies, and solid operational understanding can all contribute to a lower cost of ownership and operations.

Mechanical Economization or “Free Cooling”

The advent of “green” data center practices has ushered in a heightened interest in reducing mechanical systems energy use. As part of these efforts and in conjunction with data center design “best practices”, a means of mechanical economization or “free cooling” has become a design standard rather than a luxury.

Mechanical economization utilizes the ambient temperature of the local climate to provide an alternative means of heat rejection from standard mechanical systems. Two means of creating this ambient usage are through waterside or airside systems:

Waterside economization utilizes a liquid medium that is run through an outdoor series of coils to be cooled to a lower temperature. If the ambient cooling meets the necessary set point required for the supply water temperature, the chiller barrel never needs to run, greatly reducing the power required for the chiller. During periods when the ambient temperature is not at a level to provide 100% economization, partial “free cooling” can still reduce the overall power needs of the chiller, with some mechanical cooling bringing the return water down to the proper supply temperature.

Airside economization utilizes an air exchange through either a cross-stream configuration (which mixes return air with outside air passing through a filter media to help create the supply air stream) or a heat wheel (also known as an enthalpy wheel, which nearly eliminates outside air mixing but typically requires a much larger footprint). This approach essentially eliminates the water-system medium requirements. These systems can outperform waterside economization in colder climates.
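
To make the economization decision concrete, here is a minimal sketch that classifies hourly ambient readings into full, partial, or no free cooling for a waterside system. The setpoints, approach temperature, and sample data are illustrative assumptions, not values from this article.

```python
# Classify ambient-temperature hours into free-cooling modes for a
# waterside economizer. Setpoints, approach, and sample data are
# illustrative assumptions.

CHW_SUPPLY_SETPOINT_F = 55.0  # chilled water supply setpoint (assumed)
CHW_RETURN_F = 65.0           # chilled water return temperature (assumed)
APPROACH_F = 7.0              # economizer coil/tower approach (assumed)

def economizer_mode(ambient_f: float) -> str:
    """Return 'full', 'partial', or 'none' for one ambient reading."""
    if ambient_f + APPROACH_F <= CHW_SUPPLY_SETPOINT_F:
        return "full"     # chiller barrel stays off entirely
    if ambient_f + APPROACH_F < CHW_RETURN_F:
        return "partial"  # economizer pre-cools; chiller trims the rest
    return "none"         # mechanical cooling carries the whole load

hourly_ambient = [30.0, 45.0, 52.0, 61.0, 72.0, 85.0]  # placeholder data
modes = [economizer_mode(t) for t in hourly_ambient]
for mode in ("full", "partial", "none"):
    print(f"{mode}: {modes.count(mode)} hours")
```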

Mechanical Systems Controls and Monitoring

With mechanical systems improving their efficiencies through the equipment improvements and added system designs noted above, controls and monitoring of these systems become more critical to maintaining efficient operation. CRAC unit manufacturers have added better controls that allow a more systematic approach to data center HVAC concerns. Units now communicate with one another throughout the data center and share individual operating conditions to ensure a more unified response to general room conditions.

Monitoring of these systems and trending the data also help operations and maintenance personnel better understand the effects of economization, maintenance, and other conditions that may affect the data center mechanical systems.

Motors and Drives

Because of reliability requirements in the data center, mechanical systems often run at 50% or less of their rated capacities during normal operation. This allows failover scenarios to carry design loads even when a component in the system is out of operation. Compounding the inefficiency, most data center loads run below their maximum design capacities in order to allow for growth in the space.

To reduce power consumption on equipment running at lower loads and help that equipment maintain better efficiency (as well as better life expectancy), Variable Frequency Drives (VFDs) provide a simple solution that allows better performance at lower loading while also reducing power consumption. A VFD is an electrical device that controls a motor by varying its frequency so that the motor consumes less power at lower speeds when loads are below rated capacity. A motor can consume as little as 25% of full power at 60% of full speed, compared to 100% loading and speed. Additional benefits include reduced wear at start-up and reduced overall motor wear from running below the single-speed maximum.
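
The ~25% figure is consistent with the fan/pump affinity laws, under which shaft power scales roughly with the cube of speed. A minimal sketch (the motor rating is an assumed example):

```python
# Fan/pump affinity laws: flow ~ speed, pressure ~ speed^2, power ~ speed^3.
# At 60% speed a motor ideally draws 0.6**3 = 21.6% of full-speed power,
# consistent with the ~25% cited once drive and motor part-load losses
# are allowed for.

def affinity_power_fraction(speed_fraction: float) -> float:
    """Ideal power draw as a fraction of full-speed power."""
    return speed_fraction ** 3

full_speed_kw = 30.0  # assumed motor rating
for speed in (1.0, 0.8, 0.6, 0.5):
    fraction = affinity_power_fraction(speed)
    print(f"{speed:.0%} speed -> {full_speed_kw * fraction:5.1f} kW "
          f"({fraction:.1%} of full power)")
```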

VFDs can be used in the chillers, pumps, and cooling towers of a central system, as well as on the air handler systems in the data center. VFDs can also provide more recordable information about power consumption for mechanical equipment. Improvements in VFD design technology over the past few years have been substantial. Operating HVAC equipment, especially pumps and CRAC unit fans, at reduced speed can produce cost savings of almost 20 percent.

High Impact Measures to Boost Data Center Efficiency (Part 3)

Energy efficiency in electrical systems can be achieved through measures that limit losses through the devices among these components. Power parity (the amount of power put into a device equaling the amount of power provided by the device) represents the most efficient use of power. Transformers and equipment that utilize transformers (such as UPS systems and PDUs) tend to lose some efficiency to resistive losses in the transformer windings. As equipment vendors apply more stringent manufacturing techniques to their products, the efficiency of this type of equipment improves. UPS vendors now provide UPS systems that operate at 95% or higher efficiency, meaning there is only a 5% difference between the power put into the device and the power supplied by the device. It should be noted that these efficiency ratings are generally based on a load no lower than around 30% of the device’s rated maximum, although some newer UPS systems can maintain their rated efficiency down to as low as 20% of maximum. As equipment is replaced due to changes in a system, end of life, or equipment failure, higher efficiency equipment should be specified and provided to improve the energy efficiency of these systems.
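
To see how individual device efficiencies compound along the power path, consider this short sketch; the device efficiencies and IT load are illustrative assumptions.

```python
# Compound efficiency through a UPS -> PDU transformer power path.
# Device efficiencies are illustrative assumptions.

chain = {
    "UPS (double conversion)": 0.95,  # ~5% loss, per the text
    "PDU transformer": 0.97,
}

it_load_kw = 500.0
overall = 1.0
for device, efficiency in chain.items():
    overall *= efficiency
    print(f"{device}: {efficiency:.0%} efficient")

input_kw = it_load_kw / overall
print(f"Overall path efficiency: {overall:.1%}")  # ~92.2%
print(f"Input power to deliver {it_load_kw:.0f} kW of IT load: {input_kw:.1f} kW")
print(f"Losses dissipated as heat: {input_kw - it_load_kw:.1f} kW")
```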

Measurement and Recording Data

We mentioned in Part 1 of this series that in order to understand the consumption of power related to the data center, metering of these systems needs to be provided. Further, trending this information is invaluable for understanding a baseline of energy use as well as the outcome of changes implemented to improve efficiency. The Power Usage Effectiveness (PUE) of the systems is an indicator of how efficiently the data center operates. It is very important to understand where your data center ranks for PUE in order to know what measures should be taken to improve efficiency. Recording power usage at the main switchgear supporting both the electrical and mechanical equipment supplying the data center, and at the distribution side of the UPS systems (preferably at the 120/208-volt level at the PDUs), is ideal for achieving the simplest means of calculating the PUE.

Lighting

Lighting systems have been moving toward more energy-efficient components in recent years. These systems have moved away from incandescent and T12 luminaires to compact fluorescent and LED fixtures. ENERGY STAR has reported savings of 42% from switching from T12 fluorescent luminaires with magnetic ballasts to high-efficiency T8 luminaires with electronic ballasts. It should be noted that these higher efficiency luminaires often produce higher lighting levels in addition to using less power. The more recent introduction of LED luminaires, which can be retrofitted into current fluorescent fixtures, is driving these efficiencies even higher.

Lighting Controls

Another energy-saving measure that can be implemented in the data center is lighting controls. The notion of “lights out” data center operations refers to personnel not being normally stationed in the data center space. As operational control of data processing applications becomes more network-driven and remotely accessed, less time is required in the data center to perform these activities. With less time spent in the data center, lighting becomes less necessary during non-manned periods. Lighting controls that use occupancy sensors offer a reasonable solution, taking responsibility for shutting off the lights away from the personnel entering and using the space. However, occupancy sensors alone do not account for personnel who remain in the space but are out of sensory contact with a motion or occupancy sensor, for example while working within or at the lower portions of equipment racks. To better accommodate these specialized circumstances in the data center, a combination of occupancy/motion sensors and card access systems allows for a highly effective and efficient lighting controls strategy.

The Bottom Line

Once the proper metering components are in place and baselines are established, it’s relatively simple to determine which electrical infrastructure equipment will benefit from an upgrade and what the payback for the investment will be. Also, paying attention to lighting controls can improve energy efficiency in the data center.  No matter what the situation is in your data center, a facility-wide energy audit from an experienced partner will help to identify the areas where the most immediate impact can be achieved.

High Impact Measures to Boost Data Center Efficiency (Part 2)

While typical energy audits focus on the mechanical and electrical infrastructure, in data centers the facility framework is only one factor in the cost equation. Oftentimes, improving other areas can be even more rewarding. For example, consideration of the actual kilowatts consumed by servers and other IT hardware is crucial when examining energy efficiency in a data center.

Data processing equipment accounts for most of the energy consumption in a data center, and because of this, facility executives really need to start by thinking ‘inside the box’. Best practices in equipment type, usage, and deployment configuration can all significantly reduce the overall energy needs for this equipment.

Pull the Plug on Idle Servers

It’s a simple concept, really: if it’s not doing anything, unplug it. However, in many data centers, up to 15% of the servers should be decommissioned and yet are left running for no other reason than a lack of drive to clean up outdated equipment. Some estimates indicate that the cost of each idle server can exceed $1,000 annually when considering total data center energy usage. That’s a lot of wasted capital! Addressing the issue can have an immediate impact on the bottom line.

The solution is to establish a rigorous program to decommission obsolete hardware.

Maintaining an asset management database is a necessity to help enterprises ensure that they are consuming resources efficiently. This database should contain accurate, up-to-date information on server location and configuration, enabling IT staff to easily identify variables of power, cooling, and available rack space when planning future server and storage deployments and identifying potential systems to retire.

Upgrade to Energy Efficient Servers

Another simple measure for reducing energy consumption is to buy more energy-efficient servers. The bulk of IT departments ignore energy efficiency ratings when selecting new hardware, focusing on performance and up-front costs rather than total cost of ownership. However, if just one server uses 50 watts less than another, that equates to savings of more than $250 over a three-year period, and even more profound savings of $1,500+ on facility infrastructure expenditures can be realized.
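
The arithmetic behind the estimate is easy to reproduce; in the sketch below, the electricity rate and PUE overhead multiplier are assumptions for illustration, not figures from the article.

```python
# Three-year cost of 50 W of extra server draw, including facility
# overhead (cooling and power losses) via PUE. The electricity rate and
# PUE are illustrative assumptions.

WATTS_SAVED = 50.0
YEARS = 3
RATE_PER_KWH = 0.10  # assumed utility rate, $/kWh
PUE = 2.0            # assumed facility overhead multiplier

kwh_at_server = WATTS_SAVED * 24 * 365 * YEARS / 1000.0
kwh_at_meter = kwh_at_server * PUE

print(f"Energy avoided at the server: {kwh_at_server:.0f} kWh")
print(f"Energy avoided at the meter:  {kwh_at_meter:.0f} kWh")
print(f"Three-year savings: ${kwh_at_meter * RATE_PER_KWH:,.0f}")
```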

Data processing equipment relies on power supplies to take incoming power to the device and distribute it throughout its internal components as required. These power supplies are typically specified by the manufacturer for the worst-case conditions of the device under a maximized configuration. In the past, power supplies were typically rated far beyond the components’ capabilities to provide a “safety factor” in the device. As more pressure is brought to bear on energy efficiency in computing, manufacturers have been striving to match their power supplies more closely to the components’ capabilities, or power parity.

Among the more power-consuming components in most IT processing equipment are the fans required to provide proper cooling air internally within the equipment. These fans run continuously as long as the device is running. Both equipment and chip manufacturers have been making strides to better pair fan use with actual equipment needs. As chip development continues, heat tolerance is increasing. Also, fans are being designed to run in stages depending on the processing load of the equipment. This means that fans can run at lower speeds when processing is in a lower state, consuming less power.

Processing equipment developed within the last 3-5 years (depending on the manufacturer) is likely to be relatively energy efficient; anything older than that certainly should be evaluated.

Consolidate and Virtualize

Another piece of “low-hanging fruit” in many data centers is server consolidation and virtualization. Typical utilization rates for non-virtualized servers are measured between 5 and 10 percent of total physical capacity, wasting hardware, space, and electricity. By moving to virtualized servers, data centers can be fully supported with less hardware, resulting in lower equipment costs, lower electrical consumption (thanks to reduced server power and cooling), and less physical space required to house the server farm.
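
As a back-of-the-envelope illustration of the consolidation opportunity (server counts and utilization targets are assumed, and real sizing must also account for memory, I/O, and licensing):

```python
# Estimate a virtualization consolidation ratio from average physical
# utilization and a target host utilization. All figures are illustrative.
import math

physical_servers = 200
avg_utilization = 0.07          # typical 5-10% for non-virtualized servers
target_host_utilization = 0.60  # assumed safe ceiling for virtualized hosts

hosts_needed = math.ceil(physical_servers * avg_utilization
                         / target_host_utilization)
print(f"{physical_servers} physical servers -> ~{hosts_needed} virtualized "
      f"hosts ({physical_servers / hosts_needed:.0f}:1 consolidation)")
```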

It is important to remember that not all applications and servers are good candidates for virtualization, which adds complexity to the endeavor.

Along with consolidated server applications, the associated storage for these systems is becoming more consolidated as well. Storage Area Network (SAN) and Network Attached Storage (NAS) solutions are becoming the norm in data center topologies. Virtualized tape systems are also replacing the larger tape storage devices of the past. As these systems become more standardized, they have also been increasing in density. This allows more storage in the same footprint with only marginal increases in energy consumption. The advent of solid state storage devices (SSDs) will likely create even higher densities with lower overall power consumption in the future. Although these devices are not yet in production on central storage equipment, it will only be a matter of time before they are utilized.

The Bottom Line

A comprehensive efficiency strategy that targets IT processing equipment in addition to other tactics can substantially reduce energy consumption and net large savings. A facility-wide energy audit from an experienced partner will help to identify the areas where the most immediate impact can be achieved.

High Impact Measures to Boost Data Center Efficiency (Part 1)

With Data Center energy consumption at an all-time high, maintaining the lowest possible total cost of ownership has become increasingly difficult. We’ve detailed some high-impact measures to help improve efficiency and reduce power and cooling requirements to create a greener, more cost-effective Data Center.

The first step in energy-efficiency planning is measuring current energy usage. The power system is a critical element in the facilities infrastructure, and knowing where that energy is used and by which specific equipment is essential when creating, expanding, or optimizing a Data Center.

In order to understand how energy efficiency measures affect energy consumption in the Data Center, a baseline needs to be established for current energy use. There are currently two primary metrics being used by organizations such as The Green Grid to promote the measurement of Data Center energy efficiency. The first is Power Usage Effectiveness (PUE), defined as the total facility power consumed divided by the IT equipment power consumption. The second metric is PUE’s reciprocal, known as Data Center Infrastructure Efficiency (DCiE), defined as the IT equipment power consumed divided by the total facility power consumption.
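
As a quick illustration of both metrics, here is a minimal sketch; the meter readings are hypothetical.

```python
# Compute PUE and DCiE from metered readings. Values are hypothetical.

total_facility_kw = 1200.0  # utility/switchgear metering for the Data Center
it_equipment_kw = 480.0     # metered at the UPS output or at the PDUs

pue = total_facility_kw / it_equipment_kw   # total / IT
dcie = it_equipment_kw / total_facility_kw  # reciprocal of PUE

print(f"PUE:  {pue:.2f}")   # 2.50 here; unmanaged sites often exceed 3
print(f"DCiE: {dcie:.0%}")  # 40%
```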

Total facility power is defined as power measured from the utility meter or switchgear solely dedicated to the operation of the Data Center infrastructure, if the building is a shared facility with other functions. This includes power consumed by electrical equipment such as switchgear, UPSs (uninterruptible power systems) and batteries, PDUs (power distribution units), and stand-by generators. It also includes mechanical equipment dedicated to the HVAC needs of the Data Center, such as CRACs (computer room air conditioning units), chillers, DX (direct expansion) air handler units, drycoolers, pumps, and cooling towers. IT equipment power includes the loads associated with IT processes, including server, storage, network, tape, and other processing equipment fed through Data Center infrastructure support equipment such as PDUs, RPPs (remote power panels), or other distribution means fed from a UPS.

To collect the information noted above, an effective building management system (BMS) should be employed to help collect, categorize, and trend the data gathered. Most systems offered by BMS providers such as Johnson Controls, Andover, Automated Logic, Honeywell, Siemens, and others allow monitoring of energy consumption for both the IT equipment and the associated infrastructure equipment serving the Data Center. Metering and other DCPs (data collection points) should be provided at all switchgear relating to the power and mechanical needs of the Data Center. Metering should also be provided at the output side of the UPS modules or, better yet, at the PDUs. This will provide the energy consumption rates of both the facility power and the IT equipment power.

The types of electrical monitoring which can be employed to measure this type of information can be broken down into three basic forms:

  • Amperage-only monitoring
  • Estimated Wattage monitoring
  • True RMS Wattage monitoring

Amperage-only and Estimated Wattage monitoring can be flawed in the information they provide because of inaccuracies in measuring the sine wave and its form. Should a sine wave be produced inaccurately, as many double conversion UPS systems produce them, averaging methods of formulating power consumption can prove flawed. True RMS Wattage monitoring provides a much more accurate means of understanding the idiosyncrasies of power consumption relating to data processing power sources. BMS systems that employ measures such as waveform capture sampling with real-time updating provide a very high degree of accuracy. It should be pointed out that this type of monitoring can be expensive to implement, depending on the number of locations where it is used. Should the decision be made to measure power at the distribution level of PDUs and CRAC units, the cost can be higher than if monitoring were placed at the distribution panel boards feeding these types of devices. As long as all the IT equipment and associated infrastructure equipment is fed from a single (or dual) location, this monitoring may be far less expensive while still providing nearly the same information for the distributed systems out in the Data Center.
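
The distinction between averaged and true RMS measurement can be illustrated directly: true power is the mean of the instantaneous voltage × current samples, which stays accurate even on a distorted waveform. A minimal sketch with a synthetic (assumed) third-harmonic distortion:

```python
# Contrast true power (mean of instantaneous v*i) with a naive
# Vrms * Irms estimate on a distorted current waveform of the kind a
# double-conversion UPS can produce. The harmonic content is assumed.
import math

SAMPLES = 1000  # samples over one full cycle

def v(t: float) -> float:
    """Sinusoidal voltage, 120 V RMS."""
    return 120 * math.sqrt(2) * math.sin(2 * math.pi * t)

def i(t: float) -> float:
    """Distorted current: 10 A RMS fundamental plus a third harmonic."""
    return 10 * math.sqrt(2) * (math.sin(2 * math.pi * t)
                                + 0.3 * math.sin(6 * math.pi * t))

ts = [k / SAMPLES for k in range(SAMPLES)]
true_power = sum(v(t) * i(t) for t in ts) / SAMPLES  # ~1200 W
v_rms = math.sqrt(sum(v(t) ** 2 for t in ts) / SAMPLES)
i_rms = math.sqrt(sum(i(t) ** 2 for t in ts) / SAMPLES)
apparent = v_rms * i_rms                             # ~1253 VA

print(f"True power:  {true_power:,.0f} W")
print(f"Vrms x Irms: {apparent:,.0f} VA (overstates real power)")
```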

Traditional Data Centers that are not currently enacting any type of energy efficiency measures operate with an average PUE of over 3. A Data Center that is actively pursuing energy efficiency measures can achieve much lower ratings and, in return, realize substantial energy savings.

Why CFD for Energy Efficiency?

While winter temperatures make it a little easier to distract yourself from the costs of data center cooling, the reality is that for many companies, data center cooling remains a topic of high importance. At BRUNS-PAK, we have long championed design options that can make a significant difference in your data center HVAC costs, including:

  • Airside Economization: the use of “free” outside air in your cooling plan
  • Heat Wheel Integration: integration of heat wheel exchange systems for optimizing energy efficiency
  • Higher Data Center Ambient Temperature: following the guidelines in ASHRAE 9.9 means real savings
  • Hot Aisle/Cold Aisle Configuration: reducing hot/cold mixing can produce measurable improvements in cooling efficiency

However, one item that companies do not take regular advantage of is CFD modeling. Computational Fluid Dynamics (CFD) is often used in data center design projects, but its use in understanding airflow and cooling efficiency in existing data centers can yield measurable improvements in the optimized configuration of your data center assets, along with recommendations for HVAC improvements.

As leaders in the use of CFD modeling, BRUNS-PAK can provide expert consultation on ways to leverage this technique to support both short-term energy efficiency optimization modifications, and long-term strategic options for improving your data center sustainability profile.

Fire Detection and Suppression Technology

Sometimes the unimaginable happens. A fire can threaten to destroy a data center. To protect the valuable equipment and information housed in the facility, it is critical to install a fire suppression system adequate to the size, type and operational responsibilities of the complex. By definition, a fire suppression system is a combination of fire detection and extinguishing devices designed to circumvent catastrophic business loss as a result of a fire. This loss includes not only the cost of equipment replacement, but also the cost of recovering lost data or business-specific applications.

Detection Systems: The First Line of Defense

A critical component of any suppression system is its smoke detectors. Depending on the application, they can be of the photoelectric or ionization type. Detectors perform several vital functions:

  • Warn facility occupants of possible fire.
  • Shut down all electrical service to the equipment so as not to “fuel the fire.”
  • Activate the suppression medium.

If it is properly designed, the detection system can also be used to limit business loss due to power-off interfaces by detecting a system failure rather than an actual smoke condition.

A highly effective detection system is one we call an “intelligent” system. It uses a software-based early warning system to provide an accurate means of detection and verification at the ceiling plane and underfloor plenum.

Water and Clean Agent Gas: Common Suppression Media

A suppression medium is activated if a true emergency is detected. The two most commonly used media to put out a fire are water and clean agent gases such as FM-200, Inergen, and NAF S-III.

Determining which type of suppression medium to use depends in large part on the requirements of local code enforcement authorities, building and/or landlord stipulations, and input from insurance underwriters. It also depends on user preference, which is influenced by such factors as cost, business risk relative to data recovery, existing systems, and so forth.

Water sprinkler systems

Water sprinkler systems are found in most buildings regardless of the presence of a data center. As a general rule, where sprinkler systems exist, it is less expensive to convert to a pre-action sprinkler system than to install a clean agent system. Pre-action sprinklers are the water-based choice for data centers and refer to systems that control the flow of water to pipes in the ceiling plane. Smoke and heat activate a valve that advances the water to the ceiling plane. That way, inadvertent damage to equipment from leakage or accidental discharge is prevented. (By comparison, with an ordinary sprinkler system, water is contained in pipes in the ceiling plane at all times.)

Water is highly effective at putting out fires and is well suited for areas like printer rooms that contain combustible materials like paper and toner. The downside of water-based systems is the messy and lengthy clean up and recovery time after a water discharge.

Clean agents

There are primarily three clean agents presently vying for acceptance in the marketplace: FM-200, NAF S-III, and Inergen. These agents were developed in response to the phase-out of Halon and the development of NFPA 2001, which was adopted in the fall of 1994.

Consideration of these agents as alternatives to CO2 in underfloor applications is viable. The costs of these systems have dropped in recent years due to more competition in the marketplace, with competing vendors offering these various gas options.

  1. FM-200 (Heptafluoropropane – HFC-227EA) is a colorless, liquefied compressed gas. It is stored as a liquid and dispensed into the hazard as a colorless, electrically non-conductive vapor. It leaves no residue. It has acceptable toxicity for use in occupied spaces when used as specified in the United States Environmental Protection Agency (EPA) proposed Significant New Alternatives Policy (SNAP) program rules. FM-200 extinguishes a fire by a combination of chemical and physical mechanisms.

    FM-200 is an effective fire-extinguishing agent that can be used on many types of fires. It is effective for use on Class A Surface-Burning Fires, Class B Flammable Liquid, and Class C Electrical Fires.

    On a weight of agent basis, FM-200 is a very effective gaseous extinguishing agent. The minimum design concentration for total flood applications in accordance with NFPA 2001 shall be 7.0%.

  2. NAF S-III is a clean, non-conductive media used for the protection of a variety of potential fire hazards, including electrical and electronic equipment. NAF S-III is a clean gaseous agent at atmospheric pressure and does not leave a residue. It is colorless and non-corrosive.

    NAF S-III acts as a fire-extinguishing agent by breaking the free radical chain reaction that occurs in the flame during combustion and pyrolysis. Like Halon 1301, NAF S-III has a better efficiency with flaming liquids than with deep-seated Class A fires.

    NAF S-III fire extinguishing systems have the capability to rapidly suppress surface-burning fires within enclosures. The extinguishing agent is a specially developed chemical that is a gas at atmospheric pressure and is effective in an enclosed risk area. NAF S-III extinguishes most normal fires at the design concentration by volume of 8.60% at 20° C.

    NAF S-III is stored in high-pressure containers and super-pressurized by dry nitrogen to provide additional energy to ensure rapid discharge. At the normal operating pressure of 360 psi (24.8 bar) or 600 psi (42 bar), NAF is in liquid form in the container.

    Once the system is activated, the container valves are opened and the nitrogen propels the liquid under pressure through the pipework to the nozzles, where it vaporizes. The high rate of discharge through the nozzles ensures a homogeneous mixture with the air. Sufficient quantities of NAF S-III should be discharged to meet the required concentration, and each nozzle must be located to achieve uniform mixing.

  3. Inergen is composed of naturally occurring gases already found in Earth’s atmosphere (nitrogen, argon, and CO2). Inergen suppresses fire by displacing the oxygen in the environment. Inergen, however, is not toxic to the occupants because of the way it interacts with the human body. The level of CO2 in Inergen stimulates the rate of respiration and increases the body’s use of oxygen. This compensates for the lower oxygen levels that are present when Inergen is discharged.

    Inergen is stored as a dry, compressed gas and is released through piping systems similar to those utilized in other gaseous suppression systems.

  4. FE-25 fire suppression agent is an environmentally acceptable replacement for Halon 1301. FE-25 is an odorless, colorless, liquefied compressed gas. It is stored as a liquid and dispensed into the hazard as a colorless, electrically non-conductive vapor that is clear and does not obscure vision. It leaves no residue and has acceptable toxicity for use in occupied spaces at design concentrations. FE-25 extinguishes a fire by a combination of chemical and physical mechanisms. FE-25 does not displace oxygen and therefore is safe for use in occupied spaces without fear of oxygen deprivation.

    FE-25 has zero ozone depleting potential, a low global warming potential, and a short atmospheric lifetime.

    FE-25 closely matches Halon 1301 in terms of physical properties such as flow characteristics and vapor pressure. The pressure traces, vaporization, and spray patterns for FE-25 nearly duplicate those of Halon 1301. The minimum design concentration for FE-25 systems is 8.0%, meaning that about 25% more FE-25 agent will be required; FE-25 requires about 1.3 times the storage area of Halon.

    When retrofitting an existing Halon 1301 system, the nozzles and cylinder assembly will need to be upgraded; however, the piping system likely will not need to be changed, making this a cost-effective retrofit that minimizes business interruption.

  5. FE-13 is a clean, high-pressure agent that leaves no residue when discharged. FE-13 efficiently suppresses fire through physicochemical thermal transfer: it absorbs heat from the fire as a sponge absorbs liquid. FE-13 is safe for use in occupied spaces up to a 24% concentration. Design concentration for total flooding applications is 16%.
  6. Novec 1230 is the newest clean-agent gas available on the market. It is marketed as a long-term sustainable alternative to FM-200 and Halon. Novec 1230 has a 0.0 ozone depletion potential (equivalent to FM-200), but it has an atmospheric lifetime of only five days, compared to FM-200’s atmospheric half-life of over 20 years. Novec 1230 has a zero global warming potential. Novec 1230 is designed to a concentration level of 4-6%, which requires less gas than other clean agents. Novec 1230 extinguishes fire by heat absorption and is heavier than air, so the gas will sink in the room. Novec 1230 is also safe for electronic equipment, so the data center may not have to be shut down in the event of a gas discharge.

    Novec 1230 requires the same number of tanks as FM-200 and is stored as a liquid under pressure. Under normal atmospheric conditions, it exists as a gas. The system is approximately 5-7% more expensive than FM-200.

Table – Relative Cost Comparison of Extinguishing Methods
Scenario Characteristics:

  • Occupied Room
  • Housing electrical equipment
  • 10,000 cu-ft room volume
  • Room fully enclosed and building is fully sprinklered
Design Basis:

(1) Total flooding.

(2) Does not include the cost of fire alarm and detection system. Probable cost < $4,000.

(3) Assumes a fully sprinklered building.

(4) Includes the cost of the extinguishing agent.

| Extinguishing Agent | Design Concentration, Density | Agent Quantity | Installation Cost (4) | Recharge Cost | Design Basis |
| --- | --- | --- | --- | --- | --- |
| FM-200 | 7.44% by volume | 364 lbs | 20% more than Inergen | Almost twice the cost of Inergen | (1) + (2) |
| FE-25 | 9.6% by volume | 335 lbs | Parallel to FM-200 | Less gas; 20%-25% less than FM-200 | (1) + (2) |
| Inergen | 37.5% by volume | 4,780 cu-ft | | | (1) + (2) |
| NAF S-III | 8.60% by volume | ___ | ___ | ___ | (1) + (2) |
| Pre-Action Sprinklers | 0.1 gpm/sq ft water | N/A | 1/4 the cost of Halon or Inergen | N/A | (2) + (3) |

Note: NAF S-III does not appear to have the market presence to be a viable alternative.
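
For reference, agent quantities like the FM-200 row above follow from the NFPA 2001 total flooding equation, W = (V / s) × C / (100 − C), where V is the protected volume, C the design concentration, and s the agent's specific vapor volume at the design temperature. The sketch below reproduces the FM-200 figure; the linear specific-volume fit is an assumption based on commonly cited values for HFC-227ea and should be verified against the current standard.

```python
# NFPA 2001 total flooding quantity: W = (V / s) * C / (100 - C), where
# V = protected volume (cu-ft), C = design concentration (% by volume),
# and s = agent specific vapor volume (cu-ft/lb) at temperature t (deg F).
# The linear fit for HFC-227ea (FM-200) below is an assumption based on
# commonly cited NFPA 2001 values; verify against the current standard.

def fm200_specific_volume(t_f: float) -> float:
    """Approximate specific vapor volume of HFC-227ea, cu-ft/lb."""
    return 1.885 + 0.0046 * t_f

def flooding_quantity_lbs(volume_cuft: float, conc_pct: float, t_f: float) -> float:
    s = fm200_specific_volume(t_f)
    return (volume_cuft / s) * conc_pct / (100.0 - conc_pct)

w = flooding_quantity_lbs(10_000, 7.44, 70.0)
print(f"FM-200 required: {w:.0f} lbs")  # ~364 lbs, matching the table row
```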

A Four-Part Framework for Resilient Data Center Architecture

Cornerstone concepts to support cybersecurity

While working on a recent project, we came across a newsletter authored by Deb Frincke, then Chief Scientist of Cybersecurity Research for the National Security Division at the Pacific Northwest National Lab in Seattle, which outlined her team’s initiatives for “innovative and proactive science and technology to prevent and counter acts of terror, or malice intended to disrupt the nation’s digital infrastructures.” In cybersecurity, the acknowledged wisdom is that there is no “perfect defense” to prevent a successful cyberattack. Dr. Frincke’s framework defined four cornerstone concepts for architecting effective cybersecurity practices:

  • Predictive Defense through use of models, simulations, and behavior analyses to better understand potential threats
  • Adaptive Systems that support a scalable, self-defending infrastructure
  • Trustworthy Engineering that acknowledges the risks of “weakest links” in complex architecture, the challenges of conflicting stakeholder goals, and the process requirements of sequential buildouts
  • Cyber Analytics to provide advanced insights and support for iterative improvement

In this framework, the four cornerstones operate interactively to support a cybersecurity fabric that can address the continuously changing face of cyber threats in today’s world.

If you are a CIO with responsibility for an enterprise data center, you may quickly see that these same cornerstone principles provide an exceptional starting point for planning a resilient data center environment, especially with current-generation hybrid architectures. Historically, the IT community has looked at data center reliability through the lens of preventive defense…in the data center, often measured through redundancy parameters like 2N and 2N+1.

However, as the definition of the data center expands beyond the scope of internally managed hardware/software into the integration of modular platforms and cloud services, simple redundancy calculations become only one factor in defining resilience. In this world, Dr. Frincke’s four-part framework provides a valuable starting point for defining a more comprehensive approach to resilience in the modern data center. Let’s look at how these principles can be applied.

Predictive Defense: We believe the starting point for any resilient architecture is comprehensive planning that incorporates modeling (including spatial, CFD, and network traffic) and dynamic utilization simulations for both current and future growth projections to help visualize operations before initiating a project. Current generation software supports extremely rich exploration of data center dynamics to minimize future risks and operational limitations.

Adaptive Systems: Recently, Netflix has earned recognition for its novel use of resilience tools to test the company’s ability to survive failures and operating abnormalities. The company’s Simian Army consists of services (“monkeys”) that unleash failures on its systems to test how adaptive the environment actually is. These tools, including Chaos Monkey, Janitor Monkey, and Conformity Monkey, demonstrate the importance of adaptivity in a world where no team can accurately predict all possible occurrences, and where the unanticipated consequences of a failure anywhere in a complex network of hardware fabrics can lead to cascading failures. The data center community needs to challenge itself to find similar means of testing adaptivity in modern hybrid architectures if it is to rise to the challenge of ultrareliability at current scale.
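
In the same spirit, and only as an illustrative sketch rather than Netflix's actual tooling, a failure-injection harness can be as simple as randomly disabling components and checking whether the service still answers:

```python
# Minimal failure-injection sketch in the spirit of Chaos Monkey: randomly
# kill one healthy instance per round and confirm the service survives.
# The instance pool and health check are hypothetical stand-ins.
import random

instances = {"web-1": True, "web-2": True, "web-3": True}  # True = healthy

def service_healthy() -> bool:
    """Stand-in health check: the service survives if any instance is up."""
    return any(instances.values())

for round_no in range(1, len(instances) + 1):
    victim = random.choice([name for name, up in instances.items() if up])
    instances[victim] = False  # inject the failure
    status = "OK" if service_healthy() else "OUTAGE"
    print(f"round {round_no}: killed {victim} -> service {status}")
```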

Trustworthy Engineering: Another hallmark of cybersecurity is the understanding that the greatest threats often lie inside the enterprise, with disgruntled employees or simply human error. Similarly, in modern data center design, tracking a careful path that iteratively builds out the environment while checking off compliance benchmarks and ‘trustworthiness’ at each decision point becomes a critical step in avoiding the creation of a hybrid house of cards.

Analytics: With data center infrastructure management (DCIM) tools becoming more sophisticated, and with advancing integration between facilities measurement and IT systems measurement platforms, the availability of robust data for informing ongoing decision-making in the data center is now possible. No longer is resilient data center architecture just about the building and infrastructure. So, operating by ‘feel’ or ‘experience’ is inadequate. Big data now really must be part of the data center management protocol.

By leveraging these four cornerstone concepts, we believe IT management can begin to frame a more complete, and by extension, robust plan for resiliency when developing data center architectures that bridge the wide array of deployment options in use today. This introduction provides a starting point for ways to use the framework, but we believe that further exploration by data center teams from various industries will create a richer pool of data and ideas that can advance the process for all teams.

REFERENCES

Frincke, Deborah, “I4 Newsletter”, Pacific Northwest National Laboratory, Spring-Summer 2009.

Six Factors Influencing Data Center Efficiency Design

In rapidly evolving markets, bigger is not always better. Is your data center designed for efficiency?

The aggressive efforts of DISA, the Defense Information Systems Agency, to rationalize and consolidate mission-critical data center facilities have put a spotlight on the challenges of planning a data center infrastructure that is reliable, resilient, responsive, secure, and efficient all at once, from both an energy utilization and a financial perspective. It is easy to criticize DISA’s efforts as emblematic of government inefficiency, but that would be an unfair assessment, as there are plenty of equally egregious commercial examples of overbuilding (and underbuilding) in the data center space. Especially in the current hybrid architecture marketplace, designing a data center facility to effectively and efficiently meet both current and anticipated needs takes careful planning and expert engineering.

At BRUNS-PAK, we believe that part of the reason so many projects end up misaligned with the demand profile is that both the customer and vendor design/build teams fail to account for the six critical factors that influence efficiency when working at the design phase of the project:

  • Reliability
  • Redundancy
  • Fault Tolerance
  • Maintainability
  • Right Sizing
  • Expandability

How you balance these individual priorities can make all the difference between a cost-effective design and one that eats away at both CAPEX and OPEX budgets with equal ferocity. Here is a quick review of each critical consideration.

Reliability

The data center design community has increasingly acknowledged that workloads, and their attendant service level and security requirements, are potentially the most critical driver in defining data center demands. Workloads dictate the specifics of the IT architecture that the data center must support, and with that, the applicability of cloud/colo services, pod designs, and other design/build options. Before initiating a data center project, having a clear picture of the workloads that the site must support will facilitate accurate definition of reliability for the project.

Redundancy

The goal of redundancy is increased reliability: the ability to maintain operation despite the loss of use of one or more critical resources in the data center. Recognizing that all systems eventually fail, how you balance component vs. system-wide redundancy (N+1 vs. 2N, 2N+1, etc.) will significantly reshape the cost/benefit curve. Here, it is important to design for logical and reasonable incident forecasts while balancing mean-time-to-failure and customary mean-time-to-recover considerations.
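
The component-vs-system trade can be framed quantitatively. Here is a minimal sketch comparing N, N+1, and 2N arrangements under the simplifying assumption of independent failures; the per-unit availability figure is illustrative, not a design value.

```python
# Compare system availability for N, N+1, and 2N arrangements of a
# component, assuming independent failures. The per-unit availability
# is an illustrative assumption, not a design value.
from math import comb

UNIT_AVAILABILITY = 0.99  # assumed availability of a single unit
N = 4                     # units required to carry the load

def availability(installed: int, required: int, a: float = UNIT_AVAILABILITY) -> float:
    """P(at least `required` of `installed` units are up)."""
    return sum(comb(installed, k) * a**k * (1 - a) ** (installed - k)
               for k in range(required, installed + 1))

for label, installed in (("N", N), ("N+1", N + 1), ("2N", 2 * N)):
    print(f"{label:>3}: {availability(installed, N):.6f}")
```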

Fault Tolerance

While major system failures constitute worst-case scenarios that ultrareliable data centers must plan for, far more common are point failures/faults. In order to achieve fault tolerance, data centers must have the ability to withstand a single point-of-failure incident for any single component that could curtail data processing operations. Typically, design for fault tolerance emphasizes large electrical/mechanical components like HVAC or power distribution, as well as IT hardware/software assets and network or telecommunications services, all of which will experience periodic failures. Design for fault tolerance should involve more than simple redundancy. Rather, effective design must balance failover capacities, mean-time-to-repair, repair vs. replace strategies, and seasonal workflow variances to ensure that the data center is able to support service level demands without requiring the installation of excess offline capacity.

Maintainability

When designing a data center facility, a common mistake is failing to account for maintainability. Excess complexity can rapidly add to costs since even redundant systems must be exercised and subjected to preventive maintenance. In fact, planning a consistent preventive maintenance schedule can be one of the most effective contributors to long-term efficiency by reducing the need for overcapacity on many key infrastructure components.

Right-Sizing/Expandability

When properly accounted for, these final two factors work in tandem to help design/build teams create an effective plan for near-term and long-term requirements. Modern design strategies include the use of techniques like modular/pod design or cloud integration that engineer in long-term capacity growth or peak demand response. This means that the team can better ensure that near-term buildout does not deliver excess capacity simply as a buffer against future demand. Engineering teams can readily design modern infrastructure to smoothly scale to meet even the most aggressive growth forecasts.

Treated as a portfolio, these six factors offer the data center design team diverse levers to balance service delivery against cost while ensuring that the final infrastructure can meet demand without breaking the bank, either through initial capital investment, or long-term operating cost.

How BRUNS-PAK Can Help

Over the past two years, BRUNS-PAK has evolved its proprietary design/build approach to incorporate the expanding array of strategies and tools available to data center planning teams, resulting in the BRUNS-PAK Hybrid Efficient Data Center Design program. Through an interactive process that addresses both an organization's IT requirements and the associated facilities infrastructure needs, this program delivers a strategic approach to the six critical factors influencing efficient data center design while retaining the performance, resilience, and reliability needed in enterprise computing environments. Through our expanded consulting services group and well-established design/build services team, BRUNS-PAK is uniquely positioned to assist customers seeking to create a long-term strategic direction for their data center that satisfies all stakeholders, including end users, IT, and finance.

OpEx Solutions for Financing Data Center Renovation/Construction

Funding a data center build, renovation or expansion does not have to mean draining capital resources.

Big Data. Mobile enablement. The knowledge economy. The reasons are myriad, but the impact is singular…data center demand continues to grow in the enterprise, regardless of industry or corporate maturity. Today’s CIO must figure out how to satisfy an increasingly demanding audience of users, seeking access to data across a diversifying array of applications, and do so with continually stretched IT budgets.

In fact, many legacy data center assets are being stressed by power density and distribution constraints, rising cooling costs, complex networking demands, and peak-load demand curves. However, retrofitting, upgrading, or consolidating multiple lower-performing legacy assets into a newly designed and constructed facility, or constructing large new data center facilities to support enterprise growth, can require significant capital.

At BRUNS-PAK, our proprietary Synthesis2 methodology brings a structured approach to data center planning and construction, including rigorous estimation and disciplined adherence to budget guidelines throughout the project. This discipline has helped us define breakthrough approaches to data center financing driven by operating cash flow instead of capital reserves, which can dramatically expand an organization's ability to support required IT expansion in the face of rising end-user demand.

The basic concept behind OpEx financing is the use of long-established structured finance techniques that leverage the credit rating of investment grade companies (BBB or better) to finance new assets or improvements on a long-term basis. In a retrofit or upgrade scenario where energy savings are anticipated as a result of the project, the financing for the capital improvements can be secured by the cash flow generated by reduced energy usage. For a new build scenario, the financing to construct and use the facility can be secured by a well-structured, bondable, long-term lease.

To illustrate how this can work, two scenarios are outlined below: a retrofit and a new build.

Scenario 1: Energy Saving Retrofit/Upgrade Financing

Financing an energy efficient retrofit or upgrade to a data center requires a few key considerations:

  • The amount of capital required to complete the retrofit or upgrade
  • The energy savings that will be generated
  • The term of those energy savings, which often coincides with the obsolescence life of the assets being deployed

Baseline anticipated energy savings are first established through an energy audit, which determines the as-is energy costs and the target cost profile. The difference between current and future costs is then applied to the debt service on the financing. If the actual annual energy savings exceed the annual debt service on the underlying financing, the owner or user keeps the positive spread between those streams. For example, suppose an organization invests in a $50 million upgrade that yields $12.5 million in energy savings per year, financed over 84 months (7 years) at a 7% interest rate. A fully amortizing loan on those terms carries an annual debt service of roughly $9.3 million, which is paid from the energy savings; the organization retains the remaining savings of roughly $3.2 million per year. After the financing is repaid, the full energy savings flow to the organization's bottom line.
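The arithmetic above follows the standard level-payment (fully amortizing) loan formula. The sketch below reproduces it so the inputs can be varied; all figures are the hypothetical ones from the example, not actual project numbers:

    # Retrofit example: annual debt service on a fully amortizing loan.
    # All figures are the hypothetical ones used in the text.
    def annual_debt_service(principal, rate, years):
        # Level annual payment that retires the loan over the term.
        return principal * rate / (1 - (1 + rate) ** -years)

    principal = 50_000_000   # capital required for the retrofit
    rate = 0.07              # annual interest rate
    years = 7                # 84-month term
    savings = 12_500_000     # audited annual energy savings

    payment = annual_debt_service(principal, rate, years)
    print(f"Annual debt service: ${payment:,.0f}")            # ~$9.3 million
    print(f"Retained savings:    ${savings - payment:,.0f}")  # ~$3.2 million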

An important note in this example…the organization has made no cash outlay for the construction.

Scenario 2: New Build Financing

For new facility financing, we will take into account a different set of considerations, including:

  • The amount of capital required and the construction schedule for the facility
  • The credit rating of the user
  • The desired term of occupancy, which is used to establish the lease term

In this scenario, the user executes what is known as a bondable net lease, with sufficient duration to completely pay back the financing provided. Once again, the user is not required to lay out capital for the construction. Instead, they pay for the facility through lease payments that factor in the term, total construction cost, construction-period interest, and the interest rate applied to the project.

For example, assume an investment-grade company wants to consolidate three existing legacy data centers into a new, state-of-the-art facility that will cost approximately $50 million, but does not want to tap its capital budget. It is, however, prepared to occupy and pay for annual use of the facility over a 15-year period. If we apply a 6% interest rate to the project and assume the hypothetical loan is repaid ratably over the 15-year lease, the company would pay approximately $5.5 million annually over the lease term, with an option to buy the facility at term end.
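The lease payment works the same way: a level annuity over the lease term, applied to the total amount financed (construction cost plus capitalized construction-period interest). In the sketch below, the construction-period interest figure is an assumption chosen to illustrate how the roughly $5.5 million payment in the example can arise:

    # New-build example: level annual lease payment over a 15-year term.
    # The construction-period interest figure is a hypothetical assumption.
    def level_payment(principal, rate, years):
        # Level annual payment that retires the financing over the term.
        return principal * rate / (1 - (1 + rate) ** -years)

    construction_cost = 50_000_000
    construction_period_interest = 3_400_000  # assumed capitalized interest
    rate = 0.06
    term_years = 15

    financed = construction_cost + construction_period_interest
    payment = level_payment(financed, rate, term_years)
    print(f"Annual lease payment: ${payment:,.0f}")  # ~$5.5 million per year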

The BRUNS-PAK Advantage

Using structured finance techniques to fund long-term assets is not limited to the two scenarios discussed here. In fact, for organizations with strong credit ratings, there are practically endless ways to structure a capital-efficient transaction for data center facilities. As noted earlier, BRUNS-PAK's track record of accurate facility construction cost estimation and long-standing history of on-budget project completion are powerful assets when discussing OpEx solutions.

With over 6,000 customers across industry, government, and academic sectors, BRUNS-PAK's proven process has helped us line up multiple sources of structured financing that we can introduce into project plans, ensuring that you can plan and implement a program that effectively supports your current and future IT infrastructure demands.

12 Days of Merry Mark Evanko…isms

In festive holiday spirit, we'd like to share with all of you the 12 Days of Mark Evanko-isms: twelve days of personal attributes, quotes, and one-liners from our one-of-a-kind Principal, Mark Evanko!

Day 1: Today we highlight the way Mark introduces himself in meetings, conferences, presentations, etc.: “Mark Evanko, a Paranoid Schizophrenic Conservative Engineer”

Day 2: Mark Evanko is a gentleman and does not use profanity. Here are some of our favorites:
“We’re gonna be in deep sneakers!”
“He’s a Cone Head.”
“Tell him to go jump in a lake!”

Day 3: How Mark expresses the chaos and uncertainty of the day-to-day:

“It’s a Goat Rodeo”
“It’s a Violent Doo Loop”
“I’m Confused”
 
Day 4: I need to find out in what context these phrases have been used! lol:
 
“If he fell off a turnip truck….”
“Strike Up The Band!”
“The dog married the cat.”
 
Day 5: This one made the list twice:
“We want it cheap!”
“We like it cheap!”
 
Day 6: He’s not kidding when he says he’s paranoid:
“They’ll eat our lunch.”
“They will assassinate us.”
“I’m going to be in a firefight. It’s going to be a war!”
 
Day 7: He steers his ship with positivity!
“Go BRUNS-PAK!”
“Congratulations! Thank You!”
“I’m having a great time.”
 
Day 8: Again, some creative language to avoid using profanity:
“Lord love a duck!”
“Horse Feathers!”
 
Day 9: A few more gems…
“Share your vegetables!”
“This is a Torture Test.”
“We don’t want our guys upstairs hatching Walnuts…”

Day 10: You would hear these quite often in meetings:
“I’m going to cancel all of my meetings…”
“The bottom line is…”
“I can run with this.”
 
Day 11: This was #1 on the list:
“I’m not from Harvard but…”
 
Day 12: My favorites:
“What’s up?”
“He’s Wounded.”
“He has a knot in his shorts.”