Effective asset discovery and vulnerability assessment are two of the most important first steps in improving IT security.
Before you can protect your environment, you need to understand what assets you have across your cloud and on-premises environments, and be able to identify and prioritize vulnerabilities.
Below are some essential capabilities you need within a solid IT security program.
Discover all assets across your cloud and on-premises environments
The cloud adds incredible value to the business: scalability, flexibility, cost-effectiveness, the list goes on. But while we talk about cloud as a singular entity, these days it’s anything but. Most businesses run a wide variety of services and applications on multiple types of cloud – including public, private, and hybrid – alongside and in concert with their traditional on-premises environments. This new, complex ecosystem using several different types of cloud services from different providers in combination with on-premises infrastructure is called “multi-cloud”, and it comes with its own set of best practices and requirements.
The first step to a successful multi-cloud deployment is understanding what your environment includes so you can manage it effectively. This, like most things worth doing, is easier said than done. Multi-cloud discovery is critical to securing your IT infrastructure, but demands a sophisticated set of capabilities to do right.
7 key requirements for multi-cloud discovery
Out-of-the-box multi-cloud content library. The foundation of any discovery tool is the ability to quickly and accurately discover asset inventory and relationships. This applies in the data center, and certainly for multi-cloud deployments. A modern discovery solution should include a deep understanding of cloud services from multiple vendors as well as data center assets (servers, software, network, storage) for fast, dependable multi-cloud application dependency mapping out-of-the-box.
To achieve this core functionality, a multi-cloud discovery solution requires not only an extensive content library but also a powerful reasoning engine behind it. The reasoning engine governs the discovery process. Acting as the brain of the system, it is responsible for choosing and orchestrating discovery actions and constructing the model of the environment.
The reasoning engine’s intelligence enables it to determine the best approach for discovering each IT element and the relationships among them using cloud APIs, observed communications, configuration files, and command outputs. The more intelligent the engine is, the more comprehensively and accurately it can identify and map the relationships among applications and infrastructure elements – a critical capability in complex multi-cloud ecosystems.
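As a simplified illustration of what such a reasoning engine does, the sketch below (all names and data are hypothetical) merges an asset inventory with observed network communications to infer named asset-to-asset relationships:

```python
# Hypothetical sketch: inferring relationships from observed communications.
# The inventory maps an IP address to the asset discovered there.
inventory = {
    "10.0.0.5": {"name": "web-frontend", "type": "vm", "cloud": "aws"},
    "10.0.0.9": {"name": "orders-db", "type": "database", "cloud": "on-prem"},
}

# Observed flows, e.g. from netstat output or VPC flow logs: (src_ip, dst_ip).
observed_flows = [("10.0.0.5", "10.0.0.9")]

def infer_relationships(inventory, flows):
    """Turn raw network flows into named asset-to-asset dependency edges."""
    edges = []
    for src, dst in flows:
        if src in inventory and dst in inventory:
            edges.append((inventory[src]["name"], inventory[dst]["name"]))
    return edges

print(infer_relationships(inventory, observed_flows))
# → [('web-frontend', 'orders-db')]
```

A real engine would combine many more evidence sources (cloud APIs, configuration files, command output) and resolve conflicts between them, but the core job is the same: convert raw observations into a model of who depends on whom.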
Simple administration. Discovery solutions exist to help IT, not bog you down more. Auto-discovery tools should be just that – automatic – and deliver maximum value with minimal administrative overhead, even under time and staffing constraints.
Look for capabilities like these to ensure that your multi-cloud discovery solution is more help than hindrance:
- A robust set of tools to simplify common administrative functions, such as user-friendly interfaces to manage user access and security, schedule scans, upgrade to new versions, and back up and restore the data store.
- Easy setup of cloud API access credentials and keys.
- Integration with SSO and credential brokering technologies to shorten implementation time.
- Out-of-the-box integration with common CMDBs.
- RESTful API and export options that make it simple to share discovery data and application maps with other IT management systems.
- Integrations with automation tools that allow IT to automate many administrative tasks.
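To make the RESTful export point above concrete, here is a minimal sketch (with hypothetical asset fields) of packaging discovery data into a JSON payload that a downstream CMDB import job could consume:

```python
import json

# Hypothetical sketch: exporting discovery data in a CMDB-friendly JSON shape.
discovered_assets = [
    {"name": "web-frontend", "type": "vm", "cloud": "aws", "region": "us-east-1"},
    {"name": "orders-db", "type": "database", "cloud": "on-prem", "region": None},
]

def to_cmdb_payload(assets, source="auto-discovery"):
    """Wrap assets in an envelope with provenance and a record count."""
    return json.dumps({"source": source, "count": len(assets), "items": assets})

payload = to_cmdb_payload(discovered_assets)
```

The envelope carries the source and count so the receiving system can validate the import; an actual integration would POST this payload to the CMDB's API endpoint.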
Open access and extensibility tools. An effective auto-discovery solution for multi-cloud deployments must help IT embrace and support fast-changing environments, custom software, and extended cloud services by making it as simple to discover custom configurations as more generic elements.
This calls for functionality such as:
- Extensibility tools built into the user interface to add discovery of custom software, custom network devices, and other custom elements with little or no programming.
- Access to web services and APIs supplied by cloud providers.
- The ability to extend out-of-the-box discovery to capture additional asset attributes.
- Tools to manage custom content, including versioning, categorization, and the retirement of outdated content.
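One common extensibility mechanism is a pattern library: a declarative rule that teaches discovery to recognize in-house software from evidence such as process command lines, without modifying the core engine. A sketch (product and pattern names are invented):

```python
import re

# Hypothetical sketch: a "custom pattern" registry for in-house software.
CUSTOM_PATTERNS = [
    # (software name, regex matched against the process command line)
    ("Acme Billing Service", re.compile(r"acme-billing\b")),
]

def classify_process(cmdline):
    """Return the custom software name a process belongs to, if any."""
    for name, pattern in CUSTOM_PATTERNS:
        if pattern.search(cmdline):
            return name
    return None

print(classify_process("/opt/acme/bin/acme-billing --port 8443"))
# → Acme Billing Service
```

Because patterns are data rather than code, they can be versioned, categorized, and retired exactly as the custom-content management bullet above requires.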
Powerful map visualizations. Auto-discovery solutions are only as useful as what they reveal to their users. Application maps enable Business-IT alignment by providing clear visibility into which parts of the IT infrastructure support each business service.
Multi-cloud discovery tools should visually expose application components, their relationships, and the context in which they run (cloud vendor, data center, region, etc.). Users can leverage those visualizations to foster collaboration between application owners and configuration managers, provide the business with application context, define tiered service models that enable service-aware performance and availability management, and support change impact analysis and other functions that ensure optimal business support.
These maps should update automatically to reflect changes in the IT environment, and you should be able to generate different output formats to accommodate a variety of documentation needs.
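One widely supported output format for such maps is Graphviz DOT; a sketch of emitting discovered dependency edges (hypothetical data) as a DOT digraph:

```python
# Hypothetical sketch: emitting an application map as Graphviz DOT, one of
# several output formats a discovery tool might generate for documentation.
edges = [("web-frontend", "orders-db"), ("web-frontend", "cache")]

def to_dot(edges, title="app-map"):
    """Render dependency edges as a DOT digraph string."""
    lines = [f'digraph "{title}" {{']
    for src, dst in edges:
        lines.append(f'  "{src}" -> "{dst}";')
    lines.append("}")
    return "\n".join(lines)

print(to_dot(edges))
```

The resulting text can be rendered by any Graphviz-compatible tool, embedded in wikis, or regenerated on every scan so documentation never drifts from reality.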
Transparency and security. User confidence is essential to the value of an auto-discovery solution. If people don’t have faith in the accuracy of the data provided, they simply won’t use it, and the organization will continue to suffer from decisions made without the needed insight or business context. Building user trust and acceptance is especially important—and challenging—given that the data captured through auto-discovery often differs from that gathered through error-prone manual sources, especially given the complexity of a multi-cloud environment.
To build user confidence, your multi-cloud discovery solution must avoid a “black-box” approach to auto-discovery and provide full transparency into how its data is obtained. Ideally, people should be able to locate this information easily, right in the user interface, without having to comb through log files.
A discovery solution also needs access to critical systems. Teams in charge of IT security will need the discovery solution to be proven through standard security certifications, and to offer granular control over access rights, encryption, and the depth of discovery actions.
Powerful search and analytics. Robust search and analytics capabilities make multi-cloud discovery data immediately actionable. They empower IT to make insight-driven decisions, improve service delivery, and reduce mean time to repair.
To unlock the full value of your auto-discovery data, the solution must provide simple ways for people to put it to work. Users need to be able to search quickly and easily for any kind of information in the data store, such as the configuration items that power a digital service, servers missing security patches, or the impact of a potential infrastructure change.
To meet these requirements, your multi-cloud discovery solution should include:
- Out-of-the-box reports and dashboards that answer common IT questions.
- Custom reports and dashboards to adapt the solution to the IT organization’s needs.
- Single search box to quickly perform basic searches.
- Robust query language to support complex queries that zero in on the exact data users need.
- Rich tools to sort, select, and visualize data in multiple ways.
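As a toy example of the kind of query these capabilities enable, the sketch below (asset fields are illustrative, not from any particular product) answers "which servers are missing security patch KB123?":

```python
# Hypothetical sketch: a tiny query over the discovery data store.
assets = [
    {"name": "srv-01", "type": "server", "patches": ["KB123", "KB456"]},
    {"name": "srv-02", "type": "server", "patches": ["KB456"]},
    {"name": "db-01", "type": "database", "patches": []},
]

def query(assets, asset_type=None, missing_patch=None):
    """Filter assets by type and/or by the absence of a given patch."""
    results = assets
    if asset_type:
        results = [a for a in results if a["type"] == asset_type]
    if missing_patch:
        results = [a for a in results if missing_patch not in a["patches"]]
    return [a["name"] for a in results]

print(query(assets, asset_type="server", missing_patch="KB123"))
# → ['srv-02']
```

A real query language layers the same idea (composable filters over the data store) behind a search box and a reporting engine.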
Speed and scalability. High-performance discovery is essential to complete scans and infer relationships as quickly and frequently as needed in a dynamic multi-cloud environment.
Today’s ever-increasing pace of change and the increased size and complexity of enterprise IT environments demand that discovery solutions support:
- Nearly limitless scale—for example, the ability to scan more than 100,000 servers each day.
- High frequency of scans. Most organizations want to scan several times a day to ensure complete and timely information.
- The ability to trigger automatic scans or data updates when changes occur, because out-of-date information can be worse than no information at all.
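The change-triggered scanning in the last bullet can be sketched with configuration fingerprints: hash each asset's configuration and rescan only when the hash moves (a minimal illustration with invented data):

```python
import hashlib

# Hypothetical sketch: triggering a rescan only when an asset's configuration
# fingerprint changes, rather than on a fixed schedule.
def fingerprint(config_text):
    return hashlib.sha256(config_text.encode()).hexdigest()

last_seen = {"web-frontend": fingerprint("listen 80;")}

def needs_rescan(asset, config_text):
    """Compare the current config hash against the last recorded one."""
    current = fingerprint(config_text)
    changed = last_seen.get(asset) != current
    last_seen[asset] = current
    return changed

print(needs_rescan("web-frontend", "listen 80;"))   # unchanged → False
print(needs_rescan("web-frontend", "listen 443;"))  # changed   → True
```

This keeps scan frequency proportional to the rate of change, which matters when the environment holds tens of thousands of assets.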
Discovery solutions can no longer stop at the data center. The scale and complexity of current and future multi-cloud environments require auto-discovery tools that are powerful, easy to use, secure, feature-rich, and, most importantly, designed to support your infrastructure no matter how it grows and evolves.
Get alerted when new assets connect to the network
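At its core, new-asset alerting is an inventory diff: compare the latest scan against everything seen before and flag the difference. A minimal sketch with hypothetical addresses:

```python
# Hypothetical sketch: alerting on assets seen in the latest scan but not before.
previous_scan = {"10.0.0.5", "10.0.0.9"}
latest_scan = {"10.0.0.5", "10.0.0.9", "10.0.0.42"}

def new_assets(previous, latest):
    """Set difference: anything in the latest scan never seen before."""
    return sorted(latest - previous)

for ip in new_assets(previous_scan, latest_scan):
    print(f"ALERT: new asset on network: {ip}")
# → ALERT: new asset on network: 10.0.0.42
```

In practice the alert would feed a ticketing or SIEM pipeline so that unknown devices are triaged before they can become blind spots.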
Schedule vulnerability scans of individual assets, asset groups or entire networks
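Scan scheduling usually starts by expanding a target specification – individual addresses, named asset groups, or whole networks – into a concrete scan list. A sketch (group names and ranges are invented):

```python
import ipaddress

# Hypothetical sketch: expanding scan targets (IPs, named asset groups,
# or CIDR networks) into the individual addresses a scanner will visit.
asset_groups = {"web-tier": ["10.0.0.5", "10.0.0.6"]}

def expand_targets(targets):
    """Accept IPs, group names, or CIDR networks; return individual addresses."""
    out = []
    for t in targets:
        if t in asset_groups:
            out.extend(asset_groups[t])
        elif "/" in t:
            out.extend(str(ip) for ip in ipaddress.ip_network(t).hosts())
        else:
            out.append(t)
    return out

print(expand_targets(["web-tier", "192.168.1.0/30"]))
# → ['10.0.0.5', '10.0.0.6', '192.168.1.1', '192.168.1.2']
```

A scheduler then runs this expansion on each trigger (cron, change event, on-demand), so the same target definition stays valid as group membership changes.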
Prioritize vulnerabilities by severity and likelihood of exploit
As an organization’s attack surface grows, so too do the volume and severity of vulnerabilities. Given the burgeoning complexity of IT infrastructure – with DevOps practices, cloud, containers and microservices becoming more mainstream, and IoT devices on the rise – vulnerability management can feel akin to working inside a pressure cooker.
Organizations are facing shortages in resources and talent: 58% say shortages in skilled staff affect their ability to scan for vulnerabilities in a timely manner, and 51% are bogged down by manual processes and insurmountable backlogs.
With an insufficient picture of your organization’s vulnerability landscape and a scarcity of resources, how can you adequately scan for vulnerabilities and assess cyber risk, let alone satisfy C-suite and board members who need to understand cyber risk in relatable business terms? (It’s enough to make anyone’s head explode.)
Given this landscape, prioritization has become the key challenge for security professionals – it’s what sets apart mature IT organizations, and gives you the competitive edge you need to effectively mitigate risk in today’s era of digital transformation.
A successful prioritization plan will help you answer: Where should we prioritize based on risk? Which vulnerabilities are likeliest to be exploited? What should we fix first?
We’ve pulled together this three-step approach to help drive better decision-making, reduce complexity, and ultimately mitigate cyber risks.
1. Start with vulnerabilities that are being actively exploited
All vulnerabilities represent weaknesses, but exploitable vulnerabilities reflect real risk. Use a vulnerability management tool that incorporates threat intelligence, so you can address vulnerabilities that have exploits known to be available in the wild.
2. Remediate vulnerabilities most likely to be exploited in the next few weeks
Predictive models provide insight into the likelihood that a given vulnerability will be exploited based on certain characteristics (e.g., past threat patterns, NVD data, and threat intelligence).
3. Address assets tagged as critical
Critical assets are worth attending to since an attack on them could have broad-scale impacts on the business. Assets open to the internet should be of particular concern.
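The three steps above can be sketched as a simple bucketed triage. Everything here is illustrative: the fields, threshold, and scores are hypothetical stand-ins for what a vulnerability management tool with threat intelligence would supply:

```python
# Hypothetical sketch of the three-step triage: actively exploited first,
# then high predicted likelihood, then vulnerabilities on critical assets.
vulns = [
    {"cve": "CVE-A", "exploited": True,  "likelihood": 0.4, "critical_asset": False},
    {"cve": "CVE-B", "exploited": False, "likelihood": 0.9, "critical_asset": False},
    {"cve": "CVE-C", "exploited": False, "likelihood": 0.1, "critical_asset": True},
    {"cve": "CVE-D", "exploited": False, "likelihood": 0.1, "critical_asset": False},
]

def triage(vulns, likelihood_threshold=0.7):
    """Order vulnerabilities by the three-step rules; defer the long tail."""
    def bucket(v):
        if v["exploited"]:
            return 0                      # step 1: exploits known in the wild
        if v["likelihood"] >= likelihood_threshold:
            return 1                      # step 2: likely to be exploited soon
        if v["critical_asset"]:
            return 2                      # step 3: critical / exposed assets
        return 3                          # everything else waits
    ranked = sorted(vulns, key=bucket)
    return [v["cve"] for v in ranked if bucket(v) < 3]

print(triage(vulns))
# → ['CVE-A', 'CVE-B', 'CVE-C']
```

Note how CVE-D drops out entirely: the point of prioritization is not just ordering the list but shrinking what demands immediate attention.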
Next steps
The above approach will dramatically reduce the list of vulnerabilities you need to remediate, enabling you to gain structure and control in a pursuit that’s otherwise charged with unknowns.
And when you’re able to prioritize vulnerabilities, you’ll have time to focus on even more strategic initiatives, like evolving toward a comprehensive Cyber Exposure program – across technologies, systems, and departments.
With more predictability, less guesswork, and fewer ad hoc practices, your organization is better protected – and your job is saner, more rewarding, and dare we say it, fun again.
Quickly identify availability of patches
NIST Cyber Security Framework (CSF)
The NIST CSF is recognized by many as a resource to help improve security operations and governance for public and private organizations. While the NIST CSF is a terrific guideline for transforming an organization’s security posture and risk management from a reactive to a proactive approach, it can be a difficult framework to actually dive into and implement.
If you’re struggling to get through the NIST Cybersecurity Framework, a quick overview and summary of the framework can help you accelerate your security transformation.
Here’s a quick NIST Cybersecurity Framework Summary and detailed breakdown:
The NIST CSF consists of four core elements: Functions, Categories, Subcategories, and Informative References. Below is a brief explanation of the terminology for each.
The NIST CSF is organized into five core Functions, also known as the Framework Core. The Functions operate concurrently and continuously to represent the security lifecycle. Each Function is essential to a well-operating security posture and successful management of cybersecurity risk. Definitions for each Function are as follows:
- Identify: Develop the organizational understanding to manage cybersecurity risk to systems, assets, data, and capabilities.
- Protect: Develop and implement the appropriate safeguards to ensure delivery of critical infrastructure services.
- Detect: Develop and implement the appropriate activities to identify the occurrence of a security event.
- Respond: Develop and implement the appropriate activities when facing a detected security event.
- Recover: Develop and implement the appropriate activities for resilience and to restore any capabilities or services that were impaired due to a security event.
Categories & Subcategories
Within the five Functions noted in the Figure above, there are more than twenty Categories and over a hundred Subcategories. The Subcategories break each Category into specific outcomes, and Informative References map those outcomes to other frameworks such as COBIT, ISO, and ISA.
The Figure below is a small view of the Identify Function with its categories, subcategories, and references:
The NIST CSF Tiers represent how an organization views cybersecurity risk and the processes it has in place to mitigate that risk. This gives organizations a benchmark for their current operations.
- Tier 1 – Partial: Organizational cybersecurity risk management is not formalized; risk is managed in an ad hoc and sometimes reactive manner, and awareness of cybersecurity risk is limited.
- Tier 2 – Risk-Informed: Risk management practices are approved by management but may not be established as organization-wide policy; cybersecurity risk is handled as issues arise.
- Tier 3 – Repeatable: Risk management practices are formally approved and expressed as policy, backed by a defined, organization-wide risk management process.
- Tier 4 – Adaptive: The organization adapts its cybersecurity practices based on lessons learned and predictive indicators, constantly learning from the security events that do occur and sharing that information with a larger network.
You can use the NIST CSF to benchmark your current security posture. Working through each Category and Subcategory within the core Functions will help you determine where you stand on the NIST CSF Tier scale.
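A self-assessment like this can be rolled up mechanically. The sketch below is one possible scoring scheme, not part of the NIST CSF itself: score each Category from 1 to 4 (mirroring the Tiers) and take a conservative roll-up:

```python
# Hypothetical sketch: rolling per-category self-assessment scores (1-4,
# mirroring Tiers 1-4) into an overall benchmark. The scoring scheme is
# illustrative and not defined by the NIST CSF.
scores = {
    "ID.AM (Asset Management)": 2,
    "PR.AC (Access Control)": 3,
    "DE.CM (Continuous Monitoring)": 1,
}

TIER_NAMES = {1: "Partial", 2: "Risk-Informed", 3: "Repeatable", 4: "Adaptive"}

def overall_tier(scores):
    """A conservative roll-up: you are only as mature as your weakest category."""
    tier = min(scores.values())
    return f"Tier {tier} - {TIER_NAMES[tier]}"

print(overall_tier(scores))
# → Tier 1 - Partial
```

Taking the minimum rather than the average is a design choice: an average can hide a single neglected Category, which is usually exactly where an attacker looks.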
Using the NIST Cybersecurity Framework is a great way to standardize your cybersecurity and risk management, and to benchmark your current security operations. For a quick self-assessment, try out this NIST Self-Assessment, which will guide you through each Function, Category, and Subcategory of the Framework.