ADDIS ABABA (HAN) August 30, 2016. Public Diplomacy & Regional Security News. OPINION By Bahru Mossaan, IT Consultant. Technology will continue to evolve, and data centre infrastructure, operations and management are evolving along with the changing roles of IT professionals. In view of this, banks need to plan for emerging technologies in their data centres in order to adapt to changing needs.
Almost all banks in Ethiopia have their own data centres for running Core Banking Systems (CBS), as well as other mission-critical applications to support their businesses. The infrastructure for these data centres depends, for the most part, on traditional, proprietary and purpose-built hardware. In a legacy data centre, scaling out is expensive due to the proprietary nature of each individual piece of hardware. Moreover, older infrastructure is less secure and more vulnerable to increasingly sophisticated attacks.
Virtualisation technology is perhaps the single most important trend impacting IT. Yet only a few banks in Ethiopia have experimented with server virtualisation technologies with the aim of maximising resource utilisation.
However, the server platform and its virtualisation address only one part of the problem of data centre infrastructure consolidation. Two other critical resources need to be addressed as well: networking and storage, which remain infrastructure silos that are slow, expensive and ill suited to the needs of the highly dynamic applications being deployed today.
For this reason, it is time to re-evaluate fundamental infrastructure choices – do I want to be poised for growth and change or continue to be crippled by inefficient hardware silos?
IT professionals in many banks are cautious by nature. Some look at data centre technology trends and wonder about change management issues, risks associated with adapting to new technologies fast enough, security and other unknowns. And yet the most common challenges faced by IT today are the following: How to forecast IT budget for the next three to five years; how to manage multiple components; how to deal with multiple vendors; how to move to the cloud; how to make the right sizing to meet requirements; how to scale IT infrastructure fast enough, and how to make IT infrastructure agile enough.
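The budget-forecasting challenge listed above can be reduced to simple arithmetic once a growth assumption is chosen. The sketch below is illustrative only; the base budget and growth rate are hypothetical assumptions, not figures from any bank:

```python
# Minimal sketch of a three-to-five-year IT budget projection under
# compound growth. All input figures are hypothetical.

def project_budget(base_budget: float, growth_rate: float, years: int) -> list:
    """Return projected annual budgets, one figure per year."""
    return [round(base_budget * (1 + growth_rate) ** y, 2)
            for y in range(1, years + 1)]

# Example: a 10m birr base budget with an assumed 8% annual growth rate,
# projected over a five-year planning horizon.
print(project_budget(10_000_000, 0.08, 5))
```

A real forecast would, of course, also model hardware refresh cycles, licensing and staffing, but the compounding structure is the same.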
All data centres contain the same fundamental building blocks – compute, storage and networking. How you choose to equip your data centre with those elements will affect your day-to-day workflow, operating costs, short-term and long-term costs, capital costs, ability to scale and more. As many banks in Ethiopia build and upgrade data centres (DC) and disaster recovery (DR) sites, infrastructure consolidation is an important strategy that needs to be seriously considered.
From traditional specialty hardware to new Hyper Converged Infrastructure (HCI) solutions, the options are abundant. The first decision to be made is which model to pursue.
The traditional silo-based model relies on proprietary and purpose built hardware. It typically has its own management software and works best when optimised and managed by dedicated specialists. Furthermore, because performance is set at the hardware layer, resources are not properly optimised and overprovisioning often occurs.
It’s an expensive solution to a general-purpose IT need, and results in an increased footprint, increased complexity, and increased staffing and specialisation. Worse, today’s dynamic applications and virtualised workloads require provisioning flexibility that hardware-centric approaches aren’t designed to deliver.
One method of cutting unnecessary costs and maximising the return on investment (ROI) in the data centre is server consolidation. The key objective of server consolidation is to increase business agility and operational efficiency by eliminating over-provisioning and increasing server utilisation. Other benefits of server consolidation include greater computing efficiency, lower power and cooling costs and the flexibility to migrate workloads between physical servers.
There are a variety of ways to undertake server consolidation. One is the use of blade servers to maximise the efficient use of space. Another is server virtualisation, which allows optimal sharing of server resources among different applications – that is, effective utilisation of server hardware by allowing one physical server to host multiple virtual machine (VM) instances.
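The consolidation gain from virtualisation can be estimated with a simple packing exercise: treat each existing server's average load as a VM demand and pack those demands onto virtualised hosts. The sketch below is a minimal model; the utilisation figures and the 80% host ceiling are illustrative assumptions:

```python
# Sketch: estimate how many physical hosts are needed after
# consolidating lightly loaded servers as VMs.
# Uses first-fit-decreasing packing; all figures are illustrative.

def hosts_needed(vm_cpu_demands, host_capacity):
    """Pack VM CPU demands (fractions of one host) onto hosts; return host count."""
    hosts = []  # remaining capacity of each host already opened
    for demand in sorted(vm_cpu_demands, reverse=True):
        for i, free in enumerate(hosts):
            if demand <= free:
                hosts[i] = free - demand  # place VM on an existing host
                break
        else:
            hosts.append(host_capacity - demand)  # open a new host
    return len(hosts)

# Ten servers each averaging 15% CPU load, consolidated onto hosts
# kept under an assumed 80% utilisation ceiling: two hosts suffice
# in this model, a 5:1 consolidation ratio.
print(hosts_needed([0.15] * 10, 0.80))
```

Real sizing exercises must also account for memory, storage I/O and peak (not average) load, but the principle – many underutilised boxes collapsing onto a few well-utilised hosts – is the same.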
While server consolidation and virtualisation bring the required efficiency in terms of space and resource utilisation, they do not bring about improved infrastructure management. With virtualisation alone, virtual machine sprawl and multiple hypervisors from various vendors can become IT management headaches.
The key to improved and simplified infrastructure management is leveraging the software-defined approach, which tightly integrates compute, storage, networking resources and management in a commodity hardware box.
There are basically two types of system architecture that greatly simplify the management and operation of a data centre – converged and hyper converged.
In converged infrastructures, the building blocks are pre-configured bundles of hardware and software placed in a single chassis, with the aim of minimising compatibility issues and simplifying management. The technology components in a converged infrastructure can still be separated and used independently: the server can be pulled out and used as a server, just as the storage can be pulled out and used as functional storage.
An example of such a system is VCE's Vblock, a hardware-based, building-block approach offering preconfigured and integrated solutions with predictable units of power, weight, cooling and geometry for data centre planning purposes. Vblock systems combine storage and provisioning from EMC, switches and servers from Cisco, and VMware virtualisation software running on the servers. In these systems, flash storage is used for high-performance applications and for caching data from the attached disk-based storage. Falling prices and rising performance requirements are moving flash-based enterprise storage into the data centre.
While the physical boundaries may have been eliminated in converged systems, provisioning and operational challenges remain.
In hyper-converged infrastructure (HCI), the technology is software-defined: the components are, in essence, fully integrated and cannot be broken out for separate use. The software-defined solutions offered by Nutanix and VMware are examples of hyper-converged infrastructure.
HCI converges compute, networking and storage onto industry-standard x86 servers, enabling a building-block approach with scale-out capabilities. It delivers data centre infrastructure with cloud-like economics and agility, but with the security and reliability of on-premises solutions.
One of the fastest-growing solutions for evolving to the next-generation data centre is HCI. Since the resources in HCI are software-based and managed from a single management console, a well-architected HCI solution can dramatically increase efficiency, improve flexibility and reduce TCO across the data centre — without any trade-offs on performance or availability.
All the key data centre functions – compute, networking and storage – run as software, enabling efficient operations, streamlined and speedy provisioning and cost-effective growth.
HCI delivers the essential capabilities required of a data centre infrastructure: operational simplicity; low CapEx and OpEx; high performance and availability; future-proofing; deployment flexibility; and scalability with affordable growth.
With HCI, IT departments can scale their data centre infrastructure without relying on expensive proprietary components, leveraging commodity hardware. While commodity hardware might not be cheap, it is much cheaper than proprietary hardware.
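The economic difference between the two approaches comes down to purchase increments: commodity scale-out lets you buy capacity in small steps, while proprietary scale-up forces large fixed increments and over-provisioning. The sketch below illustrates this; all prices and capacities are hypothetical assumptions, not vendor figures:

```python
# Sketch: scale-out (commodity HCI nodes) vs scale-up (proprietary array).
# All prices and capacities are illustrative assumptions.
import math

def purchase_cost(required_tb, unit_tb, unit_price):
    """Cost of buying enough whole units to cover the required capacity."""
    return math.ceil(required_tb / unit_tb) * unit_price

# A 30 TB requirement, met two ways:
#  - commodity 10 TB nodes at an assumed $25k each -> buy exactly three
#  - a proprietary 100 TB array at an assumed $400k -> buy one, mostly idle
print(purchase_cost(30, 10, 25_000))
print(purchase_cost(30, 100, 400_000))
```

The scale-out path also spreads spending over time: the next node is bought only when growth demands it, which is precisely the "affordable growth" property claimed for HCI above.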
Tech giants, like Facebook, Amazon and Google, build their servers themselves from commodity components, without relying on expensive proprietary components. Their agile and efficient data centres are built for the Cloud on virtualised or software-defined compute, software-defined storage and software-defined networks that constitute the Software-Defined Data Centre (SDDC).
It is these cloud principles that have been adapted for use in smaller settings and packaged in hyper converged products that smaller companies can afford. That is why HCI accelerates a bank’s path to the Cloud with predictable performance and economics, turning the data centre into a flexible and scalable asset.
In summary, flexible, cost-conscious companies are discovering that moving toward hyper-converged infrastructure does not require a radical shift in thinking. Rather, it’s part of the natural evolution of the virtualised data centre. Now that server and application virtualisation are part of the fabric in all data centres, it simply makes sense that the next big thing is HCI.
No IT decision-maker chooses a new technology or adopts a new strategy just because it is trending. The technology adopted must support the business, and deliver clear benefits and rewards.
It is becoming apparent to IT decision-makers that HCI offers the performance, flexibility and efficiency to suit any budget-conscious IT department. While we cannot predict what the future holds, banks in Ethiopia can plan for emerging technologies in the data centre by embracing HCI now.
By leveraging the simplified operations and management, flexibility and easy scalability of HCI, instead of maintaining traditional silo-based infrastructure that ties staff up in mundane administrative and support tasks, IT departments can focus on innovative projects that help deliver competitive value for their bank.