The rise of the Internet of Things (IoT), analytics, artificial intelligence, and data science is changing forever how clients derive value from data.
Data accessibility is becoming an important driver of infrastructure decisions, with data architecture now the essential prerequisite to infrastructure architecture: it is the value of the data that opens up new business opportunities and differentiates the digital enterprise.
Enterprises are now architecting their data management capability to capture, store, move, and secure business data. The data architectures they create must afford them speedy access to data from their infrastructure, applications, and IoT implementations – all in order to extract maximum business value.
The desire for insight must be balanced by responsibility
Data security continues to grow in importance. In our own business, for example, we have a policy of never providing an infrastructure solution without security designed in.
We’ve all had to respond to the General Data Protection Regulation (GDPR) this year, but it’s not a one-off exercise, and we anticipate more industry-specific regulation around the world, continuing to raise the bar on best practices. As we collect new types of data, from new types of devices, we have to strike a fine balance between staying compliant with ever-changing regulations and still extracting the value we want from the data.
The final factor is the reliability of the data. As an organisation’s data is always under attack, it’s important to understand the accuracy of the collected data. Organisations will have to use every technique available to ensure the integrity of their data – validating it using machine learning, correlating it with blockchain, managing processes to protect critical data, and acting only on data from appropriate sources.
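One of the techniques listed above, validating incoming data against expectations, can be illustrated with a minimal sketch. A simple z-score outlier test stands in here for the richer machine-learning validation the article refers to; the sensor values are invented for illustration.

```python
# Flag incoming values that deviate sharply from the recent baseline.
# A basic statistical check, standing in for an ML-based validator.

from statistics import mean, stdev

def flag_suspect(history: list[float], value: float, z_max: float = 3.0) -> bool:
    """Return True if `value` is an outlier relative to `history`."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > z_max

baseline = [20.1, 19.8, 20.3, 20.0, 19.9, 20.2]
print(flag_suspect(baseline, 20.1))   # consistent reading -> False
print(flag_suspect(baseline, 95.0))   # likely corrupt or spoofed -> True
```

A production pipeline would combine several such checks (source authentication, cross-correlation, model-based scoring) rather than rely on a single threshold.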
We’ve provided this type of digital infrastructure to the scientists at ALMA who rely on accurate, secure data to help them uncover the mysteries of the universe from their remote location in the Atacama Desert in Chile.
Increasingly we’ll see the use of policy automation around industry regulation and specific trends within industries around consistent management of data, as well as new tools that will allow clients to define, automate, and enforce their policies.
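Policy automation of the kind described here is often implemented as "policy as code": machine-readable rules evaluated automatically against data assets. The sketch below assumes a hypothetical policy format and dataset metadata; real tools (such as Open Policy Agent) are far richer.

```python
# Minimal policy-as-code sketch: evaluate dataset metadata against
# a small set of hypothetical compliance rules.

POLICIES = {
    "retention_days_max": 365,                     # e.g. a GDPR-driven limit
    "allowed_regions": {"eu-west", "eu-central"},  # data residency rule
    "pii_requires_encryption": True,
}

def check_dataset(meta: dict) -> list[str]:
    """Return a list of policy violations for one dataset's metadata."""
    violations = []
    if meta["retention_days"] > POLICIES["retention_days_max"]:
        violations.append("retention period exceeds policy limit")
    if meta["region"] not in POLICIES["allowed_regions"]:
        violations.append("data stored outside allowed regions: " + meta["region"])
    if meta["contains_pii"] and POLICIES["pii_requires_encryption"] and not meta["encrypted"]:
        violations.append("PII stored without encryption")
    return violations

dataset = {"retention_days": 730, "region": "us-east",
           "contains_pii": True, "encrypted": False}
for v in check_dataset(dataset):
    print("VIOLATION:", v)
```

Encoding rules this way is what makes them enforceable automatically: the same checks can run in CI pipelines, at provisioning time, or on a schedule.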
Trend 2: The edge is getting more intelligent
As organisations implement IoT, they're beginning to explore edge computing. But that intelligence can sit either at the edge of the cloud or at the edge of the network.
Data center players are coming out with ‘micro server’-type products to provide computing capabilities at the edge of the cloud. Networking companies are likewise releasing devices able to support compute services, making the edge of the network intelligent.
Wherever you put the added intelligence, the rationale is the same: you do it to deliver faster decision making while reducing the cost associated with carrying unnecessary data across your network.
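The rationale above, act locally and forward less, can be sketched in a few lines. Here an edge node reduces a window of raw sensor readings to a compact summary and only flags it for immediate forwarding when a threshold is breached; the threshold and data are illustrative assumptions.

```python
# Edge-side filtering sketch: summarise raw telemetry locally so only
# compact, actionable data crosses the WAN.

def summarise_window(readings: list[float], threshold: float = 80.0) -> dict:
    """Reduce one window of sensor readings to a summary, flagging it
    for immediate forwarding only if the threshold is breached."""
    return {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "max": max(readings),
        "alert": any(r > threshold for r in readings),
    }

window = [71.2, 69.8, 84.5, 70.1]   # e.g. one minute of temperature samples
summary = summarise_window(window)
if summary["alert"]:
    print("forward immediately:", summary)
# non-alert summaries can be batched and sent off-peak, saving bandwidth
```

The decision (alert or not) is made at the edge in milliseconds, while the raw samples never leave the site, which is exactly the cost-and-latency trade described above.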
Advances in machine learning and hyperscale clouds are enabling these new technologies and driving the insights that clients are looking for from IoT. Companies are evolving their data architectures to enable the rapid processing of aggregated data and so extract more business value faster.
Edge processing is driving hybrid WAN
But no one wants the benefits of edge processing to be eroded by an unsustainable telco bill. We see edge processing driving the need for a hybrid WAN that offers the optimum blend of connectivity options to properly connect clouds, branches, and edge locations.
Enterprises will gravitate towards providers who can software-define their entire network, carrier connectivity included, to improve efficiency and reduce operational risk.
Trend 3: More business intelligence is coming from the network itself
The network is becoming a source of business intelligence, especially in cloud networking and the wireless network. Data is being harvested to develop new services in the workplace and a variety of industry use cases.
In the coming year, more organisations will want access to analytics capabilities, so they can leverage the intelligence coming from their networks. For maximum benefit, they will want an integrated view that combines network data with intelligence from their applications and infrastructure.
They’ll use this to provide location-based services and improve the user experience for both employees and customers. For example, more organisations will use Wi-Fi data to analyze employee and customer behavior.
Internally, this will help them measure occupancy, reduce real estate costs by reconfiguring the workspace, and better control building energy costs.
Externally, in healthcare and manufacturing they’ll use network insight to track assets, and in retail, it will improve their visibility of footfall, hotspots, and dwell time.
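The retail metrics mentioned above, footfall and dwell time, can be derived from Wi-Fi association logs. The sketch below assumes a simplified log format (device, zone, connect/disconnect timestamps); real WLAN controllers export richer, messier data.

```python
# Derive per-zone visit counts and average dwell time from simplified
# Wi-Fi association logs. The log format is an assumed simplification.

from collections import defaultdict

# (device_id, zone, connect_ts, disconnect_ts) in epoch seconds
sessions = [
    ("aa:01", "entrance",    1000, 1120),
    ("aa:01", "electronics", 1120, 2320),
    ("bb:02", "electronics", 1500, 1800),
]

dwell = defaultdict(list)
for _device, zone, start, end in sessions:
    dwell[zone].append(end - start)

for zone, times in dwell.items():
    print(zone, "visits:", len(times), "avg dwell (s):", sum(times) / len(times))
```

The same aggregation, run over employee devices and meeting-room zones, yields the occupancy and workspace-utilisation figures discussed for internal use.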
Data from the wired and wireless LAN will increasingly be used to understand application performance across all infrastructure and connection points. Companies can use this data to optimise Wi-Fi density and bandwidth to give a better user experience or improve application performance in the cloud by better understanding traffic flows.
We anticipate that future integration will go further. Beyond today’s supply chain collaboration linked to ERP, we’ll see integration with CRM systems featuring artificial intelligence and robotic process automation. These will allow consumers to reach right back to your supplier’s inventory through their mobile apps. That’s true digital business.
Trend 4: Infrastructure is becoming programmable end to end
The trend towards programmable infrastructure isn't new, but it's now more important that your entire infrastructure is programmable from end to end, and that's increasingly difficult given the rate of technology change.
Clients are subscribing to multiple cloud platforms and intensifying their use of Software-as-a-Service (SaaS) providers. Hyperscale cloud providers are expanding into emerging markets, equipment manufacturers are bringing out more hyper-converged solutions, and on-premise private cloud is growing strongly.
Orchestrating infrastructure to meet the needs of business applications and data requires integration with compute, storage, security, and networking environments – in the client’s data center, in the cloud, and increasingly, at the edge.
The main driver of end-to-end programmability is the need to ensure that infrastructure can quickly adapt to meet the changing business needs of applications and data. It also helps to enforce governance and facilitate customisation.
Companies are also modernising their infrastructure to improve performance. This requires integration with new technologies, such as flash storage, hyperconverged infrastructure, and edge computing.
End-to-end programmability is essential to enable the automatic orchestration of infrastructure by applications via open APIs, wherever the applications are placed and whatever they require from their underlying infrastructure.
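What "orchestration by applications via open APIs" looks like in practice is an application declaring the resources it needs and submitting that spec to the infrastructure layer. The endpoint, payload fields, and token below are hypothetical; real platforms (cloud provider APIs, SDN controllers) differ in detail.

```python
# Sketch of application-driven provisioning via a REST API.
# Endpoint path and spec fields are illustrative assumptions.

import json
import urllib.request

def provision(api_base: str, token: str, spec: dict) -> urllib.request.Request:
    """Build the request an application would send to ask the
    infrastructure layer for the resources it needs."""
    return urllib.request.Request(
        api_base + "/v1/workloads",
        data=json.dumps(spec).encode(),
        headers={"Authorization": "Bearer " + token,
                 "Content-Type": "application/json"},
        method="POST",
    )

spec = {"cpu": 4, "memory_gb": 16, "storage": "flash", "placement": "edge"}
req = provision("https://infra.example.com", "TOKEN", spec)
print(req.method, req.full_url)
# the caller would then submit it with urllib.request.urlopen(req)
```

The point of the abstraction is that the same spec can be satisfied in the data center, in a cloud, or at the edge; the application states requirements and the programmable layer decides placement.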
To reap the benefits of end-to-end programmability, clients are becoming more mature in choosing levels of abstraction, understanding APIs, and getting acquainted with the programming toolset required to maximise flexibility going forward.
Managed services will help balance speed and cost
Today all the platforms in a hybrid cloud environment need to be programmable to allow automation and avoid escalating network costs.
Companies are looking for the most appropriate platforms for their applications, but without end-to-end programmability, operational costs can rise due to complexity, and telco costs can rise from the inefficient connection of distributed environments.
Last year we said ‘speed trumps cost’ – the main priority was to get something out the door. Now those digital experiments have begun to scale, so cost is returning as an issue.
We anticipate that managed services will emerge as the preferred route to ensure that all the components of an enterprise’s infrastructure are programmable, to drive innovation and control costs.
Trend 5: Applications are defining infrastructure
The latest generation of applications dictates where it runs and how it operates, so tighter integration between application outcomes and infrastructure management is becoming critical.
We’re spending more and more of our time advising clients on optimum application placement to deliver maximum value to the business. Conversations start with applications and how to drive business outcomes from them, with infrastructure being seen more as the delivery vehicle for the application.
The conversation has shifted to how an application can exploit the infrastructure to deliver a faster outcome: What is the business need? What applications and processes are needed to deliver it? Then how do you build an infrastructure to deliver the desired outcome with the best performance, cost, and user experience?
The whole purpose of the process is to deliver an application outcome and the infrastructure is one key element in accomplishing that. The principle is always to abstract out all the underlying infrastructure – network, cloud, and on-premise – to deliver that application outcome.
Containers and microservices are making new demands on infrastructure
We’ve moved from an application migration conversation to looking at an organisation’s whole end-view of their applications, including their plans for containers, and from there, constructing an environment that will deliver the maximum value from their application portfolio.
New application models like containers drive new infrastructure requirements and challenge most existing infrastructure environments. There’s an overwhelming trend towards microservices as the application model to build next-generation, cloud-native applications.
These approaches have infrastructure requirements that most of today’s infrastructure (with the exception of some of the software-driven components and the major clouds) aren’t able to meet. They also drive the need to have the application and infrastructure work together to define those requirements.
Article by: Rob Lopez, Group Executive – Digital Infrastructure, Dimension Data