Reference Architectures: Choice and scale in data centre operations

October 20, 2022, by David Hirst | Category: Data Centres

One of the foundations of successful colocation operations is the reference architecture that defines the implementation.

When defining a reference architecture, either for the first time or when upgrading as part of an expansion of operations in support of business growth, it’s important to understand the parameters that are under the direct control of the organisation and those that can be the responsibility of colocation partners.

The value of reference architectures is traditionally defined as creating infrastructures that can be replicated. I think their value also sits in several other areas: defining resilience, setting minimum service standards, shaping how the data centre operations themselves are implemented, and establishing compliance standards across multiple availability zones that can be deployed at speed and to defined levels of serviceability.

As such, reference architectures go beyond the data centre itself, to embrace every aspect of its operation.

They rationalise the way businesses operate around tasks such as patch and fix, and they define the levels of risk and uptime required for the application software being run in each data centre.

Reference architectures assume increased importance when expanding operations overseas, for a number of reasons. At the application level, a company will want to replicate operations as homogeneously as possible in new markets. Ideally, businesses want to roll out the same reference architecture around the world.

The reality is different. Countries and markets are subject to different jurisdictional requirements, around the management of customer data, for example. Reference architectures should therefore be capable of being optimised for different applications and for how these are operated and run in different countries.
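One way to picture this is a base reference architecture with per-jurisdiction overrides layered on top. The sketch below is purely illustrative: the field names, values and merge logic are assumptions for the sake of the example, not drawn from any real standard or product.

```python
# Illustrative only: a base reference architecture plus
# per-jurisdiction overrides. Field names and values are
# hypothetical, not from any real standard.

BASE_ARCHITECTURE = {
    "redundancy": "N+1",
    "min_uptime_sla": 0.9999,
    "patch_window_hours": 4,
    "data_residency": None,  # set per jurisdiction below
}

JURISDICTION_OVERRIDES = {
    "AU": {"data_residency": "in-country", "compliance": ["IRAP"]},
    "EU": {"data_residency": "in-region", "compliance": ["GDPR"]},
}

def architecture_for(region: str) -> dict:
    """Merge the base reference architecture with any regional overrides."""
    config = dict(BASE_ARCHITECTURE)
    config.update(JURISDICTION_OVERRIDES.get(region, {}))
    return config
```

The point of the pattern is that the core of the architecture stays homogeneous while only the jurisdiction-specific fields vary by market.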

Hyperscale vs Colocation: The Capacity Challenge.

This also points to one of the main differences between choosing a hyperscale and a colocation data centre partner. Hyperscalers tend to expect customers to accommodate their operating environments, rather than being flexible enough to adopt and adapt to the reference architecture of the customer. This may not be a problem for the customer, until either capacity becomes constrained or costs become unsustainable.

That brings us to capacity planning. Capacity planning is challenging, even for hyperscalers. The most obvious recent challenge, of course, has been the pandemic, though that was an extreme example. Even as data centres and businesses sought to adjust to the very rapid move to hybrid work and the resulting increase in demand for cloud services, they were also trying to work out what their future capacity and demand might be. What many realised was the risk that, in the short term, they would have insufficient capacity, and that, post-pandemic, they would be left with too much.

Capacity planning is rarely linear, and the effect is compounded if a business operates across multiple regions, each with its own set of dynamics and requirements.

Colocation and Compliance.

Though these challenges are common to hyperscalers and to smaller SaaS companies alike, companies using colocation partners can implement their reference architecture relatively quickly, and certainly without having to redesign from scratch for multiple locations or regions.

In these scenarios, the role of colocation data centre partners is to focus on making sure that the compliance needs of end customers are met by the base data centre infrastructure. An important example is the compliance standards required to do business with governments. It's essential that colocation partners have these service levels in place before taking on customers who need to operate to those standards.

Another area is data centre uptime. Switching data centres in the event of an outage is difficult, usually expensive, and never an option to take unless absolutely necessary. Having mirror sites and guaranteed uptime service levels are options that need to be factored into reference architectures, and colocation data centre partners need to be assessed on having the required uptime levels.
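The arithmetic behind mirror sites is worth making explicit. If two sites fail independently (a simplifying assumption; correlated failures such as regional power events break it), an outage requires both to be down at once, so availability compounds. A minimal sketch:

```python
# Minimal sketch: combined availability of mirrored sites,
# assuming independent failures (a simplifying assumption).

def combined_availability(*site_availabilities: float) -> float:
    """Probability that at least one mirrored site is up."""
    p_all_down = 1.0
    for a in site_availabilities:
        p_all_down *= (1 - a)
    return 1 - p_all_down

single = 0.999                                  # one site: "three nines"
mirrored = combined_availability(0.999, 0.999)  # two mirrored sites
```

Two sites at 99.9% each yield roughly 99.9999% combined, which is why guaranteed uptime levels and mirroring belong in the reference architecture rather than being bolted on after an outage.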

The decision about whether to go with a public cloud, a private cloud, on-prem data centres or colocation data centre partners is one of the biggest any company can make. Once made, it's difficult to change mid-operations, like changing an engine on an aircraft while it's in the air. And as companies grow, they may wish to migrate away from a hyperscaler as they look for ways to improve profitability, or as they need to address compliance challenges.

The role of colocation data centre partners is, therefore, to support the reference architecture. That means ensuring the reference architecture sits in a compliant environment now and is positioned as well as possible for the future: working with the customer to provide guidance, support and services that are compliant in the location under consideration, and understanding how policy drivers change over time.

That creates new inputs and features that can be baked into future iterations of the customer’s reference architecture, which in turn creates the infrastructure they need to win in the marketplace.


About the author.

David Hirst is a true tech enthusiast with an inherent talent for staying ahead of the curve in the fast-moving technology industry. As the Group Executive for Macquarie Data Centres, he has successfully led the team for over 14 years, growing the company, through a combination of innovation, resilience and captaincy, into one of Australia's leading data centre providers. Always eager to explore the potential of emerging technologies, such as AI, he is a true industry trendsetter.


