Whether you are a network administrator, security manager, or CIO, how would you feel if you were unable to see and manage major parts of your network environment? This is the problem that organizations migrating their applications and workloads to public clouds are wrestling with today. Because public cloud infrastructure is owned by the provider, your organization’s access to application and network data is typically limited.
Public clouds need to handle hyperscale deployments, resource pooling, and continuous configuration changes based on demand, which brings unique challenges to ensuring visibility, security, and compliance. In February 2017, Ixia surveyed over 220 senior IT staff at enterprise organizations on their cloud security concerns, and 76 percent of respondents said they were ‘very concerned’ or ‘concerned’ about security in their cloud environment. The top security concerns with cloud adoption were ‘loss of control over network data’ (56 percent) and the inability to achieve full visibility across their networks (47 percent).
The limitation lies in traditional visibility architectures, which cannot deliver the agility and insight required to ensure proper operation and security of cloud workloads. On-premises solutions depend on physical hardware, taps, and the fact that the organization’s network deployment is unlikely to grow or shrink dramatically overnight. In addition, while virtualization deployments enable more rapid changes than ever before, the physical server architecture does not fundamentally change. As a result, the same visibility architecture can be retained, with hardware giving way to software in the form of virtual taps and virtual packet brokers.
Clouds obscure visibility
But this all changes with the move to the public cloud. The benefits that it offers – flexibility, agility, elasticity and rapid scaling both horizontally and vertically – present significant challenges in terms of gaining visibility and monitoring the performance and security of public cloud environments. There is a lack of independent application-level monitoring and analytics of workload behavior, and the tools offered by public cloud providers to monitor the performance of your environment do not include packet data, which is critical for network visibility.
Without special tools to see into your providers' data centers, your network and security teams are working blind, unable to diagnose problems or quickly remediate threats and attacks on critical business applications.
But simply tapping into cloud data can be dangerous if not done thoughtfully. To support a distributed visibility architecture that can use the full power of the public cloud and deliver full visibility of server workloads, you face two primary limitations:
How to capture and filter traffic – In a conventional data center, physical network taps and network packet brokers can be inserted with full control over the network domain. But in the public cloud, there is no way to insert physical devices. In addition, control of the network domain is limited.
How to scale without extracting too much, or too little, data – The public cloud is built to scale to meet peak demand. As applications scale to meet demand, new instances are created. As a result, your cloud-based network visibility solution needs to fully accommodate this scalability to be effective.
Visibility solutions that rely on a single, dedicated software agent to handle the inspection of packets can introduce a single point of failure, as well as limited scalability. So instead of adapting physical network visibility techniques to the cloud environment, what is really needed is a true, cloud-native visibility architecture. Further, this architecture needs to deploy simply without requiring complex configuration and adjustments by your IT team. Therefore, scalable cloud visibility needs to be implemented as Visibility-as-a-Service (VaaS).
Seeing into the cloud
The first stage in building the VaaS architecture is creating an orchestration layer, accessible via a Software-as-a-Service (SaaS)-based web interface. Optimally, it would use a cloud-native service provider database, identity management, APIs, and other services. This means that the enterprise is no longer required to install or manage any part of the offering. It would work much as a cloud storage provider offers storage space, management, and maintenance for your files.
This orchestration layer would then connect to cloud-based sensors in the source instances, and to connectors in the various security and monitoring tools. The most efficient, scalable way to deploy these sensors and connectors is within containers, embedded in the same instances as your organization’s microservice-based workloads and tools. Because they are embedded directly in the instances, the sensors filter for relevant visibility traffic at the applications’ source.
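To make the idea of source-side filtering concrete, here is a minimal sketch of what a container-embedded sensor might do. All names here (Sensor, Packet, watch_ports) are illustrative assumptions, not a real Ixia or cloud-provider API: the point is that filtering happens inside the workload instance, so only relevant packets ever leave it.

```python
# Hypothetical sketch of a container-embedded visibility sensor.
# Not a real API -- the class and field names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Packet:
    src_port: int
    dst_port: int
    payload: bytes

@dataclass
class Sensor:
    """Runs alongside the workload; forwards only traffic matching its filter."""
    workload_type: str                        # e.g. "web" or "database"
    watch_ports: set = field(default_factory=set)
    forwarded: list = field(default_factory=list)

    def observe(self, pkt: Packet) -> None:
        # Filter at the source: irrelevant traffic is dropped here,
        # inside the instance, rather than being shipped to the tools.
        if pkt.dst_port in self.watch_ports or pkt.src_port in self.watch_ports:
            self.forwarded.append(pkt)

# A "web" sensor only forwards HTTP/HTTPS traffic.
web_sensor = Sensor(workload_type="web", watch_ports={80, 443})
web_sensor.observe(Packet(src_port=51514, dst_port=443, payload=b"GET /"))
web_sensor.observe(Packet(src_port=51515, dst_port=5432, payload=b"SELECT 1"))
print(len(web_sensor.forwarded))  # only the HTTPS packet is forwarded -> 1
```

In a real deployment the filter rules would come from the orchestration layer rather than being hard-coded, but the principle is the same: the sensor decides relevance where the traffic originates.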
Embedding sensors into the source instances is not just efficient, it delivers another key advantage: minimizing inter-cloud bandwidth usage. This saves you money, as only relevant data is sent to the tools. The sensor can report its workload type, such as database or web, to the orchestration layer. Using this metadata, your organization can associate tools with the different workload types, and create ‘groups’ that comprise the sensors and the relevant tools.
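The grouping step above can be sketched as a simple registry in the orchestration layer. This is an assumed design, not a documented product interface: sensors report workload-type metadata, and the layer returns the tools associated with that type.

```python
# Hypothetical sketch of the orchestration layer's grouping logic.
# Workload types and tool names are invented for illustration.
from collections import defaultdict

class Orchestrator:
    def __init__(self):
        self.tool_groups = {}             # workload type -> list of tool names
        self.groups = defaultdict(list)   # workload type -> sensor ids

    def associate_tools(self, workload_type: str, tools: list) -> None:
        # The organization maps each workload type to its relevant tools.
        self.tool_groups[workload_type] = list(tools)

    def register_sensor(self, sensor_id: str, workload_type: str) -> list:
        # A sensor reports its workload metadata; it joins the matching
        # group and learns which tool connectors to send traffic to.
        self.groups[workload_type].append(sensor_id)
        return self.tool_groups.get(workload_type, [])

orch = Orchestrator()
orch.associate_tools("database", ["db-activity-monitor", "ids"])
tools = orch.register_sensor("sensor-42", "database")
print(tools)  # ['db-activity-monitor', 'ids']
```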
As additional instances of a given service are spun up, they immediately trigger the creation of additional sensors, which then connect to the relevant connectors in the security and monitoring tools. And, as these extra sensors are brought online within a group, the connectors to the tools are scaled automatically, since they too reside within a scalable container environment. This delivers true cloud-native elasticity, with no need for manual intervention.
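A toy model of that elasticity, under an assumed (and purely illustrative) capacity of ten sensors per connector, might look like this:

```python
# Hypothetical sketch of cloud-native elasticity: one sensor per new
# workload instance, with connector capacity scaling to match.
import math

class Group:
    SENSORS_PER_CONNECTOR = 10   # illustrative capacity assumption

    def __init__(self, workload_type: str):
        self.workload_type = workload_type
        self.sensors = []

    def on_instance_launched(self, instance_id: int) -> None:
        # Each new workload instance gets its own embedded sensor.
        self.sensors.append(f"sensor-{instance_id}")

    def connectors_needed(self) -> int:
        # Tool-side connectors scale automatically with sensor count;
        # no operator intervention is required.
        return max(1, math.ceil(len(self.sensors) / self.SENSORS_PER_CONNECTOR))

web = Group("web")
for i in range(25):               # a demand spike launches 25 instances
    web.on_instance_launched(i)
print(web.connectors_needed())    # 25 sensors -> 3 connectors
```

In practice the scaling would be driven by the container platform's own autoscaling rather than an explicit formula, but the invariant is the same: connector capacity tracks the sensor population without manual intervention.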
The ability to scale both the sensors and the connectors on demand, and to leverage cloud-native services, is critical to Visibility-as-a-Service – giving an OpEx-based consumption model for intelligent visibility that aligns with the other SaaS services enterprises use. Simple, scalable cloud visibility results in better security and compliance, while still delivering the cost advantages of public cloud deployments.
About the Author
Jeff Harris leads solutions marketing for Ixia’s security and visibility portfolio of products and capabilities. As a former product development leader of advanced networking, communications, and surveillance products for commercial and military applications, Jeff has a deep appreciation for the security implications that arise in development and the importance of unobstructed visibility in operation. Jeff has led first-to-market product teams in personal area networks and mobile ad hoc networks, as well as a wide range of microelectronics and advanced sensor systems.