“WASHINGTON TECHNOLOGY” By Egon Rinderer
“Many agencies end up with tool sprawl – adopting too many one-off specialized solutions that complicate risk decision making, fail to scale, and fall apart in a borderless environment.
This approach impacts productivity, complicates management workflows, and dramatically inflates costs as a byproduct.”
________________________________________________________________________________
Agencies have long relied on reactive security (compensating security controls) rather than preventive security (baseline security controls) to protect their information systems.
As an industry, we have largely ignored implementing baseline controls. They’ve proven very difficult to implement and manage at scale, and even more difficult to retrofit into an environment in which poor baseline practices around access credentials and code execution restrictions have persisted over time. Instead, the industry has favored the myriad compensating controls that promise to atone for the sins of these poor baseline practices and protect us from the inevitable.
This problem is greater today than ever with the dramatic shift to a primarily remote workforce. The resulting rise in cyber attacks on government employees, particularly ransomware, makes the reactive security status quo increasingly untenable.
Tools rationalization – taking stock of the tools currently employed across the enterprise and evaluating each – is the first step. This means identifying the applications in use across an organization and determining which to keep, replace, retire, or merge. The process allows IT teams to reevaluate priorities, cut down on tools, and modernize those that remain, freeing up funds for strategic IT priorities and modernization.
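As a purely illustrative sketch of that first pass (the tool names, capability tags, and thresholds below are invented, not drawn from any agency inventory), rationalization can start by grouping the inventory by the capability each tool provides and flagging stale or overlapping tools as candidates to retire or merge:

```python
from collections import defaultdict

# Invented inventory for illustration only; a real rationalization effort would
# pull this from asset-management or software-metering data.
inventory = {
    "AVScanner-A":   {"capability": "endpoint-protection", "last_used_days": 12},
    "AVScanner-B":   {"capability": "endpoint-protection", "last_used_days": 400},
    "PatchTool-X":   {"capability": "patch-management",    "last_used_days": 3},
    "LegacyAgent-Y": {"capability": "inventory",           "last_used_days": 900},
}

# Group tools by the capability they provide to surface overlaps.
by_capability = defaultdict(list)
for tool, meta in inventory.items():
    by_capability[meta["capability"]].append(tool)

# Toy decision rule: stale tools are retirement candidates, overlapping tools
# are merge/replace candidates, everything else is kept.
for tool, meta in inventory.items():
    stale = meta["last_used_days"] > 365
    overlapping = len(by_capability[meta["capability"]]) > 1
    if stale:
        verdict = "retire"
    elif overlapping:
        verdict = "merge or replace"
    else:
        verdict = "keep"
    print(f"{tool:14s} -> {verdict}")
```

The real exercise adds cost, contract, and mission criteria, but the mechanics are the same: inventory, find overlap, decide.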
The Reactive Security Reality
Compensating controls are mechanisms engineered to respond after a threat lands, at the point of discovery or execution. This type of control intervenes in normal execution and seeks to determine the safety of the action being attempted at the time of the action. Too often, IT teams use compensating controls as a safety net, as they are easier to install and not nearly as complicated to manage as baseline controls. Furthermore, the sustainment of these controls is largely automated: new signatures, heuristics, models, and so on are released by the respective vendors, leaving little for the end user to do aside from investigating alerts.
It feels like a pretty good setup, but the news headlines show the reality. This approach fails. Often. In fact, the efficacy of these compensating controls falls off sharply when it comes to blocking new, never-before-encountered threats (versus known threats, for which vendors often claim efficacy in the high 90-percent range). Something will get through, and when it does, most organizations are poorly equipped to handle it.
Compensating controls should not be an agency’s primary defense. They should be treated as the name implies: compensating for the rare occasion in which proper baseline controls around privileged access and code execution don’t cover the threat. Research has shown time and again that restricting elevated privileges and disallowing code execution from risky locations writable by non-privileged users means, quite simply, that malware, ransomware, or any other malicious payload can land on an endpoint but will not function. Implement and preserve those controls on the baseline and these payloads are powerless.
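To make the execution-restriction idea concrete, here is a minimal, hypothetical sketch of the decision logic, not any vendor’s implementation; real deployments rely on OS policy engines such as AppLocker, Windows Defender Application Control, or SELinux. The rule modeled is the one described above: refuse to run anything that sits in a location a non-privileged user could have written to.

```python
import os
import stat
import sys

def in_user_writable_location(path: str) -> bool:
    """Rough proxy check: is the directory containing `path` writable by
    group or other (i.e., potentially by a non-privileged user)?
    A real control would also examine ownership and ACLs; this is illustrative only."""
    directory = os.path.dirname(os.path.abspath(path)) or "."
    mode = os.stat(directory).st_mode
    return bool(mode & (stat.S_IWGRP | stat.S_IWOTH))

def allow_execution(path: str) -> bool:
    """Baseline-style policy: only allow execution from locations that a
    non-privileged user cannot modify (e.g., root-owned system directories)."""
    return not in_user_writable_location(path)

if __name__ == "__main__":
    for candidate in sys.argv[1:]:
        verdict = "ALLOW" if allow_execution(candidate) else "BLOCK"
        print(f"{verdict}  {candidate}")
```

Under a rule like this, a payload dropped into a downloads or temp folder can land on disk but never executes, which is precisely the point of the baseline control.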
Compensating controls are also extremely costly because there’s no finish line. Attackers simply tweak their code or their tactics, techniques, and procedures (TTPs) to circumvent detection. AI and machine learning seek to close this gap, and while they will help tremendously, they can only narrow it, not close it completely.
Another consideration agencies must account for is a reliance on legacy tools incapable of full functionality in a borderless environment. An endpoint’s context (physical, virtual, cloud, VDI, local, remote, VPN-connected, etc.) should not make a difference in the efficacy of the protection and management mechanisms in use. One of the beauties of baseline controls is that context makes no difference: they protect and secure endpoints regardless of where or how those endpoints run, because the machine is in a naturally secure state.
The challenge is that as adversaries evolve, they lean on increasingly advanced tactics to infiltrate federal systems. With compensating controls, IT teams won’t know about a breach until after it has occurred. Instead, agencies should re-evaluate their approach to implementing and managing proper baseline controls, as mandated by the National Institute of Standards and Technology (NIST), to maintain good cyber hygiene.
Moving Forward
If we’ve learned anything as an industry over the 30-plus years that IT has been a ubiquitous concern, it is that bad habits die hard. Chief among them is our propensity for continuing antiquated practices seemingly out of tradition. We build our processes, policies, and practices around the limitations of the tooling available at the time of authoring, and then proceed to impose those limitations on modernized technologies as they’re adopted.
Take the measurement of risk as a prime example. There is a pervasive idea that risk is something to be assessed with some periodicity. Time and money have been invested, and strides have been made, to increase the frequency and fidelity of such assessments, but the commonly held mindset still revolves around periodic, point-in-time measurement. Risk, in reality, is an ephemeral thing. It changes right along with the devices that make up the enterprise being measured.
While we certainly understand the ephemerality of things such as process execution, user activity, and network connections, we tend to gloss over what that implies at scale. In a large enough sample set, nearly everything about the IT estate becomes ephemeral, even factors such as location, hardware configuration, installed software, and account credentials.
Yet the real-time assessment of these billions upon billions of permutations, and the tracking of them over time, has been written off as impossible. The only way to approach it, the industry will tell you, is to harvest the data, store it in a central location, and run static analysis against it. This is self-defeating by nature: one is simply taking a snapshot of ephemeral data and pretending it is static for the purposes of analysis. While not completely without value, the result lacks the fidelity and timeliness, and therefore the accuracy, to be meaningful for real-time risk assessment and mitigation. That guarantees a door left open, a crack in the defenses, and an opportunity the adversary will most assuredly exploit.
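The contrast can be sketched in a few lines of code. Everything below is hypothetical: the attribute names, weights, and scoring are invented solely to illustrate the difference between scoring risk on a collection schedule and recomputing it the moment something on the endpoint changes.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: attribute names and weights are invented to
# illustrate the two assessment models, not to model real risk.

@dataclass
class Endpoint:
    name: str
    attributes: dict = field(default_factory=dict)

def risk_score(ep: Endpoint) -> int:
    """Toy scoring over a few ephemeral attributes."""
    return (10 * ep.attributes.get("unpatched_cves", 0)
            + 50 * int(ep.attributes.get("local_admin_granted", False))
            + 25 * int(ep.attributes.get("exec_from_writable_dir", False)))

def periodic_assessment(fleet):
    """Snapshot model: risk is recomputed only when the scheduled collection
    runs, so anything that changes between collections stays invisible
    until the next pass."""
    return {ep.name: risk_score(ep) for ep in fleet}

def on_attribute_change(ep: Endpoint, key, value):
    """Real-time model: risk is recomputed the moment an attribute changes,
    with no central harvest-and-analyze step in between."""
    ep.attributes[key] = value
    return risk_score(ep)

if __name__ == "__main__":
    laptop = Endpoint("laptop-042", {"unpatched_cves": 2})
    print(periodic_assessment([laptop]))                             # {'laptop-042': 20}
    print(on_attribute_change(laptop, "local_admin_granted", True))  # 70
```

In the snapshot model, a privilege change made after the scheduled collection goes unscored until the next pass; in the event-driven model it is reflected immediately.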
As agencies strengthen preventive security with baseline controls, they should adopt a holistic risk management approach that uses complete, accurate, real-time data to reduce risk and improve security. The two go hand in hand; one without the other is not a partial solution, it is no solution at all. As an added benefit, doing so reduces reliance on an ever-growing collection of point products and allows agencies to reallocate budget and scarce resources to efforts that are guaranteed effective. It also aids in justifying future budget requests for critical security activities – all while providing a more comprehensive view of the security landscape that enables more strategic business decisions.
Leveraging a single, ubiquitous, real-time platform that integrates endpoint management and security unifies teams, breaks down data silos, and closes the accountability, visibility, and resilience gaps that often exist between IT operations and security teams.
A truly unified endpoint management platform also gives agencies end-to-end visibility across end users, servers, and cloud endpoints, along with the ability to identify assets, protect systems, detect threats, respond to attacks, and recover at scale. When agencies achieve complete visibility and control, they significantly reduce cyber attack risk and improve their ability to make good business decisions.
About the Author
With 30 years of federal and private sector industry experience, Egon Rinderer leads Tanium’s technology efforts as global vice president of technology, as well as chief technology officer of Tanium Federal. Joining Tanium when the company numbered fewer than 20 employees, he has held roles ranging from technical account manager to federal pod lead to vice president of the global TAM organization. Prior to joining Tanium, Egon was with Intel Corporation and served throughout the U.S. military and intelligence community, in the United States and abroad, in an operational capacity.