The Classic Sales and Marketing Dilemma: Results Today vs Planning for Tomorrow

My great friend and fellow marketing traveller, Owen Ashby, recently had a crisis of confidence brought about by a fairly brutal discussion that appeared to relegate marketing to the minor supporting roles of lead generation and messaging. In order to, as he put it, “ensure he hadn’t lost his marbles”, he posed a question on a LinkedIn sales and marketing discussion group about the role of marketing in product development.

Happily for his sanity, plenty of people took a much broader and more strategic view of marketing. But the thing that intrigued me was how quickly the discussion turned to who leads the offer/proposition/product: sales, product management or marketing? In particular, sales people can be pretty vehement in relegating marketing to a subservient role.

As a marketer, I could be expected to promote the view that marketing has a much wider, strategic role than it is often given credit for. In the mid 80s I was working in sales for Burroughs, which was a very sales and product led company. There was a great story doing the rounds about the then Burroughs EMEA head, Graham Murphy, whose reputation for bawling out people he thought were bullshitting him or underperforming was legendary. The story itself may be apocryphal, but it is quite fun and highlights that everything isn’t quite as it seems.

At a quarterly sales review meeting with all the EMEA country managers the Italian country manager was trying to justify to Graham Murphy why he wasn’t on target. “But Graham, I was planning for tomorrow.” To which his response was “I don’t pay you to think about tomorrow, I pay you to get results today.” Having got the hapless country manager to accept he was paid to get results today, Graham Murphy started going around the table asking all the other country managers what they were paid for. Given his fearsome reputation they all dutifully chimed in that they were paid to get results today.

Finally, he got around to the person sitting directly on his right and asked the question again. Like a rabbit caught in the headlights the man came back with the same answer as all the rest. “NO!!” exploded Graham Murphy “you are my strategic marketing manager. I pay YOU to plan for tomorrow.”

Even someone as sales focused as Graham Murphy, working for a notoriously sales and product led company, could not ignore the need for some part of the business to think more broadly, about what customers and markets need and how the company uses the full marketing mix to deliver a competitive, differentiated offer.

I concede that marketing may be partly responsible for the subservient, reactive position it finds itself in. It may also be that the term “marketing” is now so badly misunderstood and poorly valued that we will need to coin a different term to cover the development and deployment of market offers using the whole marketing mix. However, while sales are critical in the customer feedback loop, sales drivers and behaviours preclude sales from owning and driving offer development.

How marketing needs to step up to deliver an agile, continuous offer development environment is something I will return to…if sales haven’t driven me into hiding first!

Posted in IT Marketing, Marketing, propositions, Uncategorized

Why Enterprise CIOs need to justify running their own data centres

Picture: Christopher Browns

If your CIO comes to you on Monday and says you need to replace your old data centre, you should tell him or her that the primary strategy should be to outsource as much of the data centre environment as possible. Therefore, he or she needs to make a very strong case for building and managing a new data centre, as opposed to having to make a very strong case to outsource. There may well be compelling reasons for having your own facility, but I believe we have reached that tipping point where building and managing your own data centre facility rarely makes sense.

The hyper-scale cloud operators have changed the economics of building and running data centres. They have reduced the costs of managing servers by a factor of 10, 50 or 100, depending on which stories you read. They have perfected the art of squeezing the last drop of cost savings from energy utilisation through innovative cooling technologies, attention to detail in layout and customisation of servers that strips out unnecessary power utilisation. They are building or leasing data centres at a huge rate. According to Research and Markets, the mega-scale data centre market is forecast to grow from $86.97 billion in 2016 to $359.7 billion in 2023.

The growth in cloud deployment and the advances made by the likes of AWS, Microsoft Azure and Google have also had a galvanising effect on the co-location data centre sector. Their data centres have become far more efficient, and operators like Equinix, Interxion, NTT et al have built interconnectivity and direct public cloud access facilities that deliver hybrid cloud securely, quickly and without the cost and latency challenges of using the public internet.

Technology is changing at such a rapid pace that data centres need to be built with flexibility and agility in mind. Data centres are 15-20 year investments, so there is a real need to factor in modularity. Form factors, power requirements, cooling and networking considerations will all be impacted, not only by new technologies in themselves, but also by the way DevOps, software defined infrastructure, Big Data, analytics and the Internet of Things are changing. Do you, as a CIO, want to be responsible for building and maintaining your own data centres when skills are scarce and, frankly, you won’t be able to get anywhere near the operating price points of the cloud and service providers?

But what about hybrid-cloud? For sure, analysts like Gartner are pointing to Enterprises majoring on hybrid-cloud in the next few years. But hybrid-cloud does not necessarily equate to a combination of on-premise and off-premise. The private part of a hybrid-cloud can run quite happily in a co-location or outsourced data centre. Given the growth of off-net direct connects to the public cloud, co-location provided interconnectivity platforms and metropolitan area networks, I would suggest that the in-house, Enterprise data centre is the last place to run a hybrid cloud.

Security and data privacy have often been big factors in enterprise decisions to keep data centres in-house. The failure of the Safe Harbor provisions has prompted a surge in the building of local data centres to help deal with data sovereignty concerns and legislation. If security is a big issue, then you can’t get more physically secure than the nuclear bunkers and data centres built into mountains that are proliferating across Europe. Combine that with managed infrastructure and software security services that are the equal of, and often superior to, in-house security technology and processes, and there seems little reason to discount outsourcing on these grounds, unless of course you are GCHQ.

Then of course, there is the mainframe. I won’t reprise the reasons for the refusal of the old dears to fade away and die. They are often cited as a reason for maintaining your own data centres. I can’t talk for IBM, but I know that Unisys mainframes now come in industry standard 19” cabinets and use absolutely standard disk sub-systems. So, they are hardly a physical problem. I now understand that, much to the horror of many in the IT world, whole mainframe applications are being containerised which will radically reduce operations and maintenance headaches.

So, there are no generic reasons why you can’t ditch your ageing data centre. As an Enterprise, there are certainly very strong reasons why you shouldn’t contemplate building and owning your own new data centre. If your CIO knocks on your door and suggests you do need to build and run a new one, then make sure you set the bar very high for him or her to get approval.

Posted in Data Centre Insights, Technology and Innovation

Opportunities for data centre operators in 2017

Source: Bob Mical

Based on CBRE’s quarterly reports covering the major European co-location markets of London, Frankfurt, Amsterdam and Paris up to the end of Q3, 2016 has been a bumper year for data centre operators. Nothing in any of the predictions and trends for 2017 points to a slowing in the appetite for data centre space. But it would be wrong to suggest that there are no pitfalls and challenges to be navigated.

While cloud computing has grabbed the headlines, it is the explosive growth in internet traffic driven by Subscription Video on Demand (SVOD) and other Over the Top (OTT) streaming services that has been a key driver in growth. With Amazon Prime and Netflix locked in a battle for supremacy, don’t expect to see this demand falling.

Surveys amongst Gartner’s customers, presented at their Data Centre Summit in London in November, pointed to increased demand for hybrid cloud services and backed up CBRE’s findings that Enterprise customers were strong contributors to the growth in data centre space take-up. They also show that cloud remains a key driver of growth.

These trends are good news for those data centre operators with global coverage and excellent network and interconnection options. Equinix, Digital Realty and NTT have been strengthening their positions through acquisition and partnerships and we are likely to see further moves as some telcos retreat from owning and operating data centres. Companies with good regional coverage and revenue streams, like Interxion, may well be the target of larger players or investors looking to move into the market.

Much has been made of the potential decline of in-house enterprise data centres. Indeed, Enterprises are making increased use of co-location for new apps and services that require connectivity to public cloud services. But, like mainframes, Enterprise data centres will take a long time to die. The Gartner survey revealed that only 6% of respondents plan to completely retire their in-house data centres in the next 3 years. There will be opportunities for wholesale Enterprise deals, but sunk costs, risk and a desire for control will continue to be a brake on this business for some time to come.

Three technologies, data analytics, machine learning and IoT, will provide new opportunities for data centre operators in 2017. The sheer volume of data generated by sensor networks will challenge where and how data centres are located and operated. This could be to the advantage of smaller local data centres and operators who have a strong track record in data storage and backup.

Data analytics and machine learning will also generate large amounts of data. But here the focus will be on High Performance Computing (HPC) that requires high power densities. Newer, more agile data centres, built with this need in mind and whose connectivity is “good enough” could be well placed to take advantage of these new opportunities.

Data centre operators have long since moved away from a “build it and they will come” approach. The market leaders have developed very strong propositions for specific market sectors and business challenges. The best ones have also recognised the importance of building close customer relationships and strong reputations. Markets rarely conform to old stereotypes and the data centre market is an unusual mix of maturity and dynamic new opportunities.  Failing to understand your customers or clearly define compelling differentiated propositions will condemn you to fight in mature, commoditised segments. Meanwhile others will take advantage of the growth and exciting new opportunities that clearly exist and offer much greater returns.

For further insights and advice on how to take advantage of these opportunities in 2017, please contact me at paulbevan@owenbeg.com

Posted in Business model, Data Centre Marketing, Uncategorized

Social media is not a numbers game

In a previous blog we highlighted the importance of social media for data centres and looked at some examples of companies doing Twitter right. Now we have taken our research further to assess the way in which data centre operators use LinkedIn and how they present themselves via their websites. We have used The Cluetrain Manifesto and Clayton Christensen’s Jobs To Be Done framework to help us understand whether data centre operators are engaged in real conversations with their customers and are helping those same people find solutions to their jobs to be done.

We have used desk-based research to assess company, as opposed to individual employee, social media usage across 76 UK and Continental European data centre operators. Here are some of our initial observations.

There is no conversation. Company LinkedIn pages are, without exception, broadcast channels for messages, opinion and product information. Twitter feeds, apart from a few dedicated support feeds, are not much better. Where there are ‘likes’ or ‘comments’, these often come from a reasonably small group of followers, and a brief scan of those followers suggests that many are existing employees or channel partners. At best, awareness and understanding levels among customers and prospects may be improved, but it is unlikely that acceptance or advocacy will be enhanced.

Volume doesn’t grow followers…necessarily. We analysed the impact of posting and tweeting frequency on the number of followers and found little pattern. Sure, if you are part of a large, well-known organisation, particularly a telco-oriented one, then your number of followers can look quite impressive. But by applying ratios, such as followers per employee, that minimise the natural impact of being big, we found little correlation between the volume of posts and tweets and the number of followers.
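
To make that normalisation concrete, here is a minimal sketch of the kind of calculation involved. The operator names and figures below are entirely hypothetical (they are not from our data set); the point is simply that dividing followers by headcount before testing for a relationship with posting volume strips out the advantage of just being big.

```python
# Illustrative sketch with hypothetical data: does posting volume
# correlate with reach once you normalise for company size?
from math import sqrt

# Hypothetical operators: (posts per month, followers, employees)
operators = {
    "BigTelcoDC":   (120, 40000, 8000),
    "RegionalColo": (30,  2500,  300),
    "NicheHoster":  (8,   900,   60),
    "QuietGiant":   (5,   15000, 5000),
}

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

posts = [v[0] for v in operators.values()]
# Followers per employee removes the raw advantage of being big.
followers_per_employee = [v[1] / v[2] for v in operators.values()]

r = pearson(posts, followers_per_employee)
print(f"correlation(posts, followers per employee) = {r:.2f}")
```

On this made-up sample the correlation is actually weakly negative; on our real data the picture was similarly inconclusive, which is the point: volume alone doesn’t buy you an audience.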

Websites don’t make it easy for customers to find solutions for the jobs to be done. A few operators stand out by trying to focus on the problems of the customer and how they help solve those problems. Equinix and Interxion are good at doing this. Among the smaller operators Virtus stood out in linking their solutions to the challenges their customers faced. AQL has a very smart new website that opens with a bold customer focus. When they get sufficient content to back up the orientation of the website we wouldn’t hesitate to put them near the top of the table. Before others cry foul, many operators do focus on business issues. However, you often have to navigate to vertical industry solution pages or case studies to get a sense about the jobs to be done. The best operators do then make the connection to how their offers help, but many don’t. There is a wealth of good content that needs to be brought forward in the website and those connections made. Your customers are notoriously short on patience when navigating websites for information.

There are more statistics we could throw out, but we want to move on. We will be trying to understand whether the individuals who are either used to front tweets and posts for the company, or who post independently have an impact on social media effectiveness. We would love to hear your views and opinions on this. After all, social media is supposed to be a conversation.

Let me leave you with a contentious observation. Donald Trump had a huge impact on the US Presidential campaign in general, and on social media in particular, because people believed they were hearing his genuine voice. Not that I am suggesting you should be like Donald Trump, but how can you connect with your audiences in a way that a genuine voice comes across as opposed to one pushing out the same corporate messages? What’s your opinion?

Posted in Data Centre Marketing

Microsoft, Oracle and the merits and pitfalls of fast following for data centre operators

The way in which Microsoft and Oracle have responded, very differently, to the challenge posed by public cloud in general and AWS in particular offers real lessons for data centre operators faced with changing market dynamics, emerging technologies and new business models.

AWS has built an impressive market lead in public cloud provision which it is not going to lose anytime soon. This has enabled AWS to threaten the traditional enterprise IT market, forcing a response from all the established vendors. Truly disruptive new technologies and business models don’t emerge every day, which means competitors often have to find defensive strategies to nullify the threat of the dominant player in the sector. Fast following is a genuine strategy that can deliver survival and success. How you embrace and implement it will determine the extent of that success.

Microsoft is, arguably, the master of fast following. It was slow to embrace the internet and the cloud, yet it is prospering in both areas. They are able to say, with some conviction, “we do what AWS does”. They don’t spend a long time dismissing AWS capabilities, or claiming to do things that are AWS core competencies better than AWS. Rather, having minimised or nullified any perceived AWS advantage, they focus their efforts on convincing their target audiences of what they do best and where they have inbuilt competitive advantage.

Oracle, on the other hand, seems intent on trying to convince the world that it has a better technical solution than AWS, that, coming late to the party, it has a second mover advantage. Quite apart from the fact that a second mover advantage only occurs if the first mover stumbles and gets it wrong, which AWS clearly hasn’t, whingeing that your technology should be more widely adopted because it is better than the market leader’s rarely works as a strategy. You just sound bitter and twisted.

In the Data Centre Operator world Equinix has a clear lead in the interconnection platform space, with Digital Realty doing a fair impression of a fast follower. EdgeConneX looks as though it may well take the leadership position in the provision of genuine Edge computing facilities, although it may lose the war over what Edge computing really means. Significant markets with clear leaders might yet emerge in modular, high density or green data centres. The point is, if you need to fast follow, don’t waste time arguing that you do what you are following better than the leader. If you are clear about what you do well and differently, do enough to nullify their advantage. Make sure your data centre is connected enough, energy efficient enough and flexible enough to stop customers making these areas an issue. Then, importantly, stress the benefits of what you do differently to the right set of target customers at the right time.

For example, don’t argue that PUE is not an effective way of measuring energy efficiency just because your competitor has a rating of 1.08 and yours is only 1.2. Don’t counter your competitor’s latency story by sniping about ease of physical access. Leave those debates to the analysts. The point is that your PUE, latency and access will be good enough for most customers. Be positive. “And I do this very well” is so much better than “but they aren’t very good”. In the end, in the absence of a genuine disruptor, it comes back to a laser focus on what you can do better than anyone else and on the customers who need that capability more than anyone else.
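
To see what a PUE gap actually amounts to, remember that PUE is simply total facility energy divided by the energy consumed by the IT equipment. A rough back-of-envelope sketch, using assumed figures for load and energy price (a 1MW IT load and €0.10/kWh, both hypothetical, not drawn from any operator), shows the scale involved:

```python
# Back-of-envelope sketch (assumed figures): what a PUE gap of
# 1.08 vs 1.20 means for the annual energy bill of a 1MW IT load.
HOURS_PER_YEAR = 8760
PRICE_PER_KWH = 0.10   # assumed energy price, EUR/kWh
IT_LOAD_KW = 1000      # assumed IT load: 1 MW

def annual_energy_cost(pue, it_load_kw=IT_LOAD_KW):
    # PUE = total facility energy / IT equipment energy,
    # so total facility power = IT load * PUE.
    return it_load_kw * pue * HOURS_PER_YEAR * PRICE_PER_KWH

cost_leader = annual_energy_cost(1.08)
cost_follower = annual_energy_cost(1.20)
print(f"leader (PUE 1.08):   EUR {cost_leader:,.0f}/year")
print(f"follower (PUE 1.20): EUR {cost_follower:,.0f}/year")
print(f"gap:                 EUR {cost_follower - cost_leader:,.0f}/year")
```

The gap is real money, but set against the full cost of occupancy (space, connectivity, staff, migration risk) it is rarely what wins or loses the customer, which is exactly why “good enough” plus clear differentiation elsewhere is the stronger argument.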

Posted in Business model

The way out for old corporate data centres

In August, data centre start-up ST2 announced the proposed construction of a large new data centre on the old ICI site at Redcar in North East England. Last week I caught up with Kevin Timms and David Gilpin from ST2 to understand more about the reasoning behind the move and plans for this site and beyond.

Building new data centres is a 15-20 year financial commitment. While the market appears very robust, with plenty of new capacity being brought on stream to meet our apparently insatiable urge for more and more online activity, the building and operating of the data centres themselves is in danger of becoming a highly competitive, commoditised market. At first sight, the building of a large data centre on Teesside with what might be considered a wholesale co-location offering seems a risky move. But initial impressions can often be misleading.

One thing that strikes you straight away about ST2 is the make-up of the small start-up management team.  CEO Anne Stokes is on the Board of Governors of the Data Centre Alliance and she has a background in real estate management. COO Kevin Timms was, until recently, IT Director at Ford Motor Company and David Gilpin has held senior product development and operational roles at Sungard. This makes for a very balanced team that understands not only the core building and service provider operational issues, but also the motivations and drivers of enterprise CIOs. That’s not a mix you often see in this sort of data centre start-up.

The basic premise for this initiative is that most major corporate, in-house data centres are no longer fit for purpose. They are old, which makes them very expensive to run, and they weren’t built with the flexibility to handle the variety of different computing tasks and configurations demanded by new systems of engagement running across public, hybrid and private clouds. Nor do most corporates have the desire to build and run new data centres…it just isn’t their business. ST2 are very clear that their primary market will be those large enterprises looking to get out of old, uneconomic data centres. They accept that some new applications might be located in traditional hub locations with good, direct access to the public cloud and other trading partners. But they are equally clear that there remains a large requirement to re-house and handle existing and future transaction systems that don’t require the sort of low latency and high interconnectivity driving some applications to key regional hubs.

A number of smaller, regional data centre operators have been integrating vertically to deliver a range of added-value systems integration, hosting and infrastructure management capabilities. ST2 don’t plan to go down this route, given the channel conflict it might create with any potential cloud or hosting tenants, but they haven’t ruled out investigating how, and if, their sister organisation Streamwire might be able to provide elements of consulting or infrastructure services on a project basis.

As to the data centre itself, a lot of thought has gone into its specification and design. It will be designed and operated as a Tier 4 facility. That is unusual in the UK, but not unique. However, ST2 claim that they will be able to price this at “very competitive” Tier 3 levels. This should be made possible by off-grid, renewable power located right next door, supplemented by a dedicated 50MVA on-grid supply for resilience, the latest building and cooling technologies to deliver a best-in-class target PUE of 1.04, and the highest levels of security.

ST2 claim to have an anchor tenant identified. Given the specifications of the data centre, I would suspect that major financial institutions and other enterprises with highly regulated, mission critical applications are the most likely customers. With a capacity of 2,700 racks, ST2 believe the space will be taken up by a small number of large customers. They haven’t ruled out a major cloud or infrastructure player becoming a tenant, but such players would need to be engaged now, in the planning and design stage. Kevin Timms remarked that the demands of the mega-scale players would make it difficult for the likes of Google, Microsoft or Amazon to be seen as valid prospects once the data centre is built.

With Digital Realty focusing more on retail co-location and interconnectivity markets, Virtus remaining close to London and Infinity SDC running into funding issues there is then perhaps a strong logic for a high quality wholesale co-location proposition based away from London and the South East. I look forward to reviewing the new data centre formally when it opens for business in Q2 2017, and keeping an eye on their plans for further data centres in the North of England.

Posted in Data Centre Operations

Dealing with Darwin helps resolve Bi-Modal IT Issues

My attention was caught last week by an article in ComputerWeekly.com. It highlighted disagreements between two IT analyst heavyweights, Gartner and Forrester, about Gartner’s view on what it calls “Bi-modal IT”.

If you haven’t heard of this, then in simple terms, it highlights the very different skills, behaviours and organisation needed to develop and deploy new cloud based systems of engagement, from those required to develop and deploy older, more traditional enterprise systems of record. Without going into detail, Gartner suggest that two teams be set up in IT organisations; Mode 1 teams to look after the old stuff and Mode 2 teams to get involved in the new, exciting stuff.

To be fair to Gartner, this is not new. As early as 2011 Deutsche Bank was reorganising its IT resources and structures around two distinct groupings… “Develop the Bank” and “Run the Bank”. Gartner, it seems, was merely responding to what it saw out in the market. However, the argument now is that this approach, which effectively creates a divide between the old and the new, is detrimental to building collaboration and co-operation and could actively disenfranchise and demoralise Mode 1 teams.

As I read the Computer Weekly article I was reminded more and more of Geoffrey Moore’s book “Dealing with Darwin”. His focus is on how great companies innovate in every phase of their evolution. In particular, it is the chapter on what Moore calls “Re-purposing for the Core” that seemed most relevant to the debate around Bi-modal IT. Try imagining for a minute taking Cobol programmers in their 60s and even 70s, still working today, and trying to retrain them as coders in the world of Java, Containers and a hundred and one other technologies I haven’t even heard of. Despite the fact that you and I probably know one or two who could, and would love to, do this, evidence shows that, practically, the gap in knowledge and culture is too big for most to bridge.

As products or services move clockwise through a cycle of innovation, deployment, optimisation and finally decommissioning, Geoffrey Moore shows why it has become hard to release staff tied up in the deployment and optimisation phases back into innovation. By the time decommissioning comes along there appear to be just two options for the staff tied to those solutions, or in this case systems…retraining to move to new innovations, or redundancy. Unfortunately the gap is usually too large, so redundancy and the loss of skilled resources is often the outcome.

The answer, Geoffrey Moore believes, is to continually look to shift the people resources anti-clockwise, for example from operations about to be decommissioned back into operations in an optimisation phase; from optimisation back into deployment and from deployment back into innovation. He shows how this makes continued use of the people’s key skills and frees up resources to innovate.

Just as Bi-modal suggests an either/or answer, so the argument seems to be drifting the same way, with both sides digging in with opposing views. If you are challenged with resolving the resourcing issues around new systems of engagement and older systems of record, I urge you to find a copy of Geoffrey Moore’s book and, if nothing else, read Chapter 10 on “Re-purposing for the Core”. You may not end up with 60 year old Cobol programmers in your cloud based development teams, but you are less likely to lose their skills completely and more likely to keep them fully engaged. At the same time you will be able to keep recycling the innovators back into new developments, where they want to be.

Posted in Business model

Gartner’s latest IaaS Magic Quadrant: more than meets the eye

When was the last time you read the whole of a Gartner Magic Quadrant report? If you are like me you will look at the picture of the Quadrant, perhaps read the blurb on one or two of the leaders and the report introduction. It’s also quite fashionable to dismiss the Magic Quadrant as something you have to pay to get on, through subscriptions and consultancy, and which is therefore not worthy of consideration.

For start-ups and smaller technology vendors it can be very frustrating trying to get onto a Magic Quadrant. Failure to do so, however, is not down to how much money you pay Gartner, but whether you meet their criteria for inclusion, which tend to favour larger organisations with scale and geographic spread. To be fair to Gartner, their target audiences are large and medium sized enterprises, most of whom work on an international scale.

Those smaller and start-up organisations might get mentioned in Hype-Cycle or Ones to Watch reports, but this always seemed like a poor second prize.  So it was good to see Gartner commenting on a number of smaller and interesting cloud providers, such as Cloud Sigma, Skytap, ProfitBricks, Navisite and Blulock that did not meet the criteria for inclusion.

If you just looked at the Quadrant picture and saw AWS way out in front, Microsoft in a well placed second and the rest a long way behind, your opinion might well be, “same old same old”. But this report is full of gems and insights that should be compulsory reading for senior management in both vendors and enterprises.

In covering pricing mechanisms, how SLAs are handled, the availability of multi-lingual support and a range of different and specific use cases Gartner has provided a clear guide on where IaaS is most applicable. The report highlights the key things CIOs should be considering and evaluating, both technically and from a business perspective, before making a decision.

For me, the implications for vendors, either in the IaaS market or thinking of getting in, were fascinating. For large, long established vendors like IBM, Fujitsu, NTT and CenturyLink the biggest challenge is keeping up with the pace of development at AWS and Microsoft. This is partly about having pockets deep enough, but also about having a corporate culture that encourages and facilitates change and innovation at speed. Other vendors like Virtustream and VMware have found particular niches, or appeal to a significant installed base of existing technologies, that will provide a good, consistent business model without ever coming anywhere near the scale of the market leaders.

In my last blog I noted the fact that the platforms above the infrastructure are already looking quite proprietary despite using many OpenStack components. In the IaaS Magic Quadrant, Gartner opines that the future appears to point to a greater integration of infrastructure (IaaS) and platform (PaaS) capabilities. Gartner also highlights that the use of open standards and components does not necessarily mean that there will be less vendor lock-in than with the more proprietary platforms; that, due to the amount of customisation that occurs, easy portability across different IaaS vendors is not a given.

If you get a chance, go and find a copy of the Gartner Magic Quadrant for Cloud Infrastructure as a Service, Worldwide. It is much more than a vanity exercise for technology CEOs trying to get into the leaders’ quadrant. It shows that true utility computing is some way off yet; that vendors trying to take AWS and Azure head on are likely to fail; that sustainable, profitable niches do exist; and that new, interesting start-ups can get recognition. As a marketer, that warms the cockles of my heart.

Posted in Business model | Leave a comment

Primetime for Cloud and DevOps

The quarterly reporting season is upon us again. IBM and Microsoft have already announced impressive cloud growth figures in the last week, with Microsoft showing 102% quarterly growth for Azure. In a couple of days AWS will share its latest quarterly earnings, with analysts expecting to see significant revenue growth and improved margins.

Meanwhile Gartner is predicting that, this year, around $111 billion will be redirected to cloud away from traditional in-house IT spend. By 2020 it believes this figure will hit $216 billion.
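Taken at face value, those two forecasts imply a compound annual growth rate of roughly 18%. A quick sketch of the arithmetic, assuming a four-year span between the two figures:

```python
# Implied compound annual growth rate (CAGR) from Gartner's two forecasts.
# The four-year span between the ~$111bn and $216bn figures is an assumption.
start_bn, end_bn, years = 111.0, 216.0, 4

cagr = (end_bn / start_bn) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 18% a year
```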

Over at Puppet Labs, the annual State of DevOps report highlights how the highest-performing IT organisations deploy 200 times more frequently than low performers, with 2,555 times faster lead times. This doesn’t come at the expense of quality either: the high performers have one third the change failure rate of the low performers and recover from failures 24 times faster.
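Those headline numbers are ratios between cohorts. A toy illustration, with the raw figures invented purely so that the ratios match the ones the report quotes:

```python
# Hypothetical raw figures, chosen only so the ratios match the ones the
# State of DevOps report quotes; the report itself publishes ratios, not these numbers.
high_deploys, low_deploys = 1460.0, 7.3    # deployments per year
high_lead_h, low_lead_h = 1.0, 2555.0      # change lead time, hours
high_cfr, low_cfr = 0.05, 0.15             # change failure rate
high_mttr_h, low_mttr_h = 1.0, 24.0        # mean time to recover, hours

print(high_deploys / low_deploys)  # 200x more frequent deployments
print(low_lead_h / high_lead_h)    # 2,555x faster lead times
print(low_cfr / high_cfr)          # 3x fewer change failures
print(low_mttr_h / high_mttr_h)    # 24x faster recovery
```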

This is telling us that these technological and cultural shifts are reaching a significant tipping point. The early days of Cloud and DevOps were about test and development use cases, followed by wider deployment of non-essential applications, accompanied by huge infrastructure land grabs and price wars. We are now reaching the point at which Cloud is becoming the preferred platform for core applications.

In such an environment the battle for market share and margin has moved from Infrastructure-as-a-Service to managed platforms, offering a complete stack that attracts significant developer support. AWS, Microsoft and IBM have all been busily building out broad suites of capabilities and platform services. At the same time they have redoubled their marketing efforts aimed at winning over the development community. This is where the added value and the higher margins reside.

For the CIO, Cloud now offers the opportunity to deliver real digital transformation and open up significant new business opportunities at a cost and speed unimaginable a few years ago. Legacy systems and in-house IT operations are not going to disappear overnight. However, Boeing shifting its aviation analytics apps to Microsoft Azure, Netflix closing down its in-house infrastructure in favour of AWS, and the U.S. Department of Defense awarding AWS a number of contracts all prove that Cloud is definitely ready for prime time.

That isn’t the end of the story either. Just as we were getting comfortable thinking about and using containers, AWS started talking about “serverless computing”. Its Lambda product, launched in late 2014, has a little way to go before it is truly serverless, but the direction and the implications are clear: infrastructure will be completely commoditised, if it isn’t already. The irony is that the platforms above the infrastructure are already looking quite proprietary, despite using many OpenStack components.
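To make “serverless” concrete: with a function-as-a-service product like Lambda you deploy a handler rather than a server. A minimal Python sketch, where the function body and event fields are illustrative assumptions, though the `(event, context)` signature is Lambda’s standard Python interface:

```python
import json

def handler(event, context):
    # The platform invokes this function per request; there are no servers
    # to provision, patch or scale. Event fields here are hypothetical.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

The commercial point is that everything below that function, including the operating system and the runtime, becomes the provider’s problem, which is exactly the commoditisation of infrastructure described above.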

We have said this before, but it warrants restating. There will continue to be strong demand for data centre compute facilities. Areas not touched on in this article, like the Internet of Things, Machine Learning and Analytics, will all add to that demand. But, as a data centre operator, if your focus is solely on the infrastructure layer you are likely to end up in a very aggressive, price-sensitive environment where only the most efficient will survive. You need a platform strategy that enables you to co-exist and co-operate with the likes of AWS, IBM, Microsoft and Google, but that also helps you find and integrate with the niche players who will end up servicing specific verticals or applications.


3 Post-Brexit Certainties

Given the tumultuous, barely believable string of political events in the last couple of weeks, it takes a brave person to try to forecast likely economic outcomes over the next couple of years. However, among all the conflicting views, there are at least three certainties that data centre management need to be aware of and be planning for. Two are very specific: the impact of the new privacy and security regulations implicit in GDPR, and the need to tackle the skills gap. The third is more general, but probably has deeper implications for the data centre market in general, and specific operators in particular: in any period of upheaval there are always as many opportunities as there are risks.

GDPR

A few weeks before the Referendum I was at Infosec in London, where I attended an excellent round-table on the new EU GDPR legislation that comes into full force on 25th May 2018. Even if we have managed to conclude exit negotiations before then, the implications are clear. If you are dealing with EU customers, or have operations in EU countries, you will need to be GDPR compliant. Indications from the Information Commissioner’s Office (ICO) also point strongly to the certainty that any UK-specific privacy and data protection regulations will have to mirror those of the EU very closely. The legislation puts very specific responsibilities on the data processor as well as the data owner, so it is likely that you will have to review and amend all of your contracts to take account of the new rules. Even if you are a co-location provider, you still need to review your data governance structures to ensure you are not caught by any of the new regulations.

The Skills Gap

Much has still to be decided about the free movement of labour between the UK and the rest of the EU, but whatever happens, a critical skills gap for data centre engineers, operators and good managers will still exist. We cannot rely on a continued supply of skilled continental Europeans, and others, coming here to fill the gaps. Most data centre operators point to their people as a key differentiator. There is therefore a critical requirement to understand the competencies and capabilities you need: assess the potential skills gaps you have, and put in place the recruitment, retention, training and mentoring policies that will let you show your people really do make the difference. We might expect Government to help with some incentives, but improving the underlying flow of the right skills and attitudes coming out of the education system will take a generation to fix. Doing nothing yourself is not an option.

Risks and Opportunities

I’m not going to try to point out all the potential risks and opportunities for data centre operators inherent in the Brexit decision. Plenty of other commentators have been down that route, and there will be varying opinions on almost every issue. The point is that, to avoid the risks and take advantage of the opportunities, you need to be able to move fast, and to be agile and lean enough to change direction at speed when needed. Your antennae should be minutely tuned to the changes going on around you. More than ever you need intelligence: market intelligence, customer intelligence and competitor intelligence. What better, at such a moment in our history, than to look to the driving principle of one of our greatest generals, the Duke of Wellington, who never lost a battle because, as he himself put it, he “knew what was going on on the other side of the hill better than most men.”
