It was recently reported that the so-called “Locked Shields 2013” NATO exercise has finished and, lo and behold, the blue team won. Of course, this leaves us to question how “real” these exercises truly are, given the stark reality that security teams in the real world are taking a beating. Blue team victories in these types of events are not uncommon, and neither are blue team victories uncommon in the real world. However, from a simple finger-in-the-wind measurement, it would seem that the blue team victory record in simulated exercises is beginning to approach that of the Harlem Globetrotters versus the Generals.
The problem is that the blue teams are real world, while the red teams are not. Furthermore, the simulated environments offer the blue teams far more control than they actually have in their enterprises, which makes the attacks less lethal and the security teams more powerful. Some may argue that the red teams employed in these exercises are real-world penetration testers, and indeed they are. However, penetration testers and government-funded red team members are night-and-day different.
Intelligence, in the context of information security, is defined as the analysis of current, previous, and potential malicious actors and their attributes. As such, intelligence gathering is a critical aspect of delivering industry-leading security products and services. Today, the market is primarily focused on developing these capabilities by leveraging so-called “big data analytics.” While the benefits of these efforts are high, they are less useful when the analytics are not enriched by contextual information. This contextual information comes directly from defined research processes and procedures.
The application of intelligence to information security strategies comes in the form of the ability to anticipate upcoming attacks, as well as to more fully understand attacks as they occur. Today, threat research within security organizations is primarily contingent on gathering news published by external sources, in combination with independent research typically defined by the researcher without specific business requirements (and without defined processes or reporting). The result is unreliable intelligence that is quite often not useful to daily operations.
However, with the introduction of proper processes, security teams can provide high-quality threat research that reaches these goals. This can be achieved by implementing a standard intelligence cycle that allows business stakeholders to define areas of research pertinent to customers and product capabilities. Specifically, this can be accomplished by developing actionable information from the analysis of all internal and external data available. The actionable information will be reported in order to provide insight that helps better predict and prevent successful attacks from threat actors.
Introduction to Threat Intelligence
The idea of threat intelligence is to provide clients and internal security operations with a clear picture of what they are up against. This is accomplished by disseminating information identifying actors, their skill sets, their motivations, their targets, and finally their methodologies. Information in these realms covers the primary pillars of “The Motivational Model of Cybercrime Classification.” (Ngafeeson, 2008) While it is well outside of the purview of a private entity to produce a known malicious actor list that identifies threat actors for closed-source or lawful monitoring, private entities can and should leverage OSINT in combination with HUMINT to better cultivate information from SIGINT.
Identification of Actors
Creating these capabilities begins by first identifying who the actors are within the threat theater. In common terms, this is often referred to as “who is attacking.” While from the outset creating a list of attackers may seem as simple as following blogs such as Krebs on Security and ThreatPost to learn who the attackers are, creating high-value operational intelligence requires far more effort.
While leveraging information from these sources is useful, it is not indicative of an organization attempting to get “ahead of the threat.” Imagine, for example, if the US Government identified potential threats by reading about who was attacked in the Wall Street Journal. Instead, those identifying actors should make every attempt to develop proactive information.
This intelligence can be derived from the analysis of collaborative information developed from external non-media Open-Source Intelligence (OSINT) assessments, internal forensics data, external group collaboration (HTCIA, FIRST, InfraGard, etc.), internal Signals Intelligence (SIGINT), and of course media analysis. Particulars should be set within the requirements phase of the intelligence cycle but should most often be inclusive of actor aliases and known affiliations, such as groups.
As actors, groups, and affiliations are identified, they should be classified based on their motivation, skill set, and level of sophistication. Unfortunately, due to the anonymity of the Internet, situations will likely exist in which researchers are unable to identify particular aliases of attackers. In these situations researchers will instead be forced to identify attackers based on their classification rather than their alias. The team responsible for research should define classifications for cybercriminals; however, a sample table from the academic community, which classifies groups based on their motivations, can be seen here. (Furnell, 2001)
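As a rough illustration of the idea, the classification scheme described above can be sketched as a simple data structure. This is a minimal, hypothetical sketch (the motivation categories and field names are illustrative, not taken from the Furnell table), showing how an actor with no attributable alias can still carry a usable classification label:

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional

class Motivation(Enum):
    FINANCIAL = "financial gain"
    POLITICAL = "political or ideological"
    REVENGE = "revenge or grievance"
    CURIOSITY = "curiosity or challenge"

class Sophistication(Enum):
    LOW = 1
    MODERATE = 2
    HIGH = 3

@dataclass
class ThreatActor:
    alias: Optional[str]        # None when no alias can be attributed
    affiliations: List[str]
    motivation: Motivation
    sophistication: Sophistication

    def label(self) -> str:
        # Fall back to the classification when the alias is unknown
        return self.alias or "unattributed/" + self.motivation.name.lower()

# An attacker we cannot name, but can still classify and track
actor = ThreatActor(None, ["ExampleGroup"], Motivation.FINANCIAL,
                    Sophistication.MODERATE)
print(actor.label())  # unattributed/financial
```

The point is only that an unnamed attacker remains trackable: the classification itself becomes the identifier for reporting purposes.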
Identification of Targets
Sadly, the attack environment is highly active and broad reaching. As a direct result, organizations conducting unorganized threat assessments, without delegated resources, are likely to become inundated with irrelevant information. It is therefore essential, once attackers have been identified, to determine who they are attacking or who they will likely attack. This process narrows the audience for intelligence dissemination to those most in need of any particular intelligence report. General classification of targets includes verticals, organizations, people, and/or the technology used throughout. Identification of targets is particularly useful for building proactive security protections based on external threats.
Identification of Attack Methodologies
Once the base information of who is attacking and what they are attacking has been discovered, it is then imperative to determine how they are attacking. In fact, identifying attack methodologies may be the single most important aspect of intelligence gathering within the purview of IBM. Understanding and granularly classifying the attack methodologies utilized by diverse attackers within the threat environment is invaluable from a defense perspective.
Attack methodologies can be broken down into a few simple areas, specifically what medium was utilized, any tools or services that were leveraged, and finally the process of how both were combined to attack.
Attackers are capable of leveraging a multitude of mediums to achieve their goals. These may include particular applications, telecommunications technologies, or even social interactions. As such, it is an imperative aspect of security intelligence to develop a full understanding of what mediums are being utilized by attackers to achieve their goals. These mediums have a key impact on the methodologies that will be utilized.
Identification of Software and Services Leveraged
In many attacks, externally acquired software packages and/or services are leveraged. Understanding these applications and services is imperative to understanding and addressing attacks in the most efficient way possible. Identification and classification will naturally be broken down between software and services, and then into smaller categories based on the focus of the tools and services. Identification of the applications and services utilized by attackers is particularly useful for proactive monitoring as well as advanced defense technology research.
Motivations and Skillset Classification
Even without an advanced intelligence gathering process, it is clear that attacker capabilities vary widely. In order to better shape an understanding of the variance in skill set, it is important to develop a granular classification of attackers. In addition, it is also important to monitor the progress of any particular group of attackers to understand an increase or decrease in the sophistication of the methodologies utilized by an individual group. Once capabilities are classified, it is important to understand and classify motivations.
The Intelligence Cycle
The intelligence cycle is a simple process utilized by the vast majority of the US intelligence community. Although the cycle in its academic form does not represent the complexity of the tasks necessary for gathering relevant, actionable information, it does serve as a model for implementation. Though some cycles may consolidate specific tasks, in general the cycle consists of six phases (Requirements, Planning and Direction, Collection, Processing, Analysis, Dissemination). These phases can be broken down between existing MSS teams for integration.
Furthermore, if implemented properly, this process could well serve as a foundation for everything we do in Security Services. The following is a brief explanation of the individual process areas.
Requirements
Intelligence requirements will be determined by decision makers such as team leaders, executives, or other stakeholders. The definition of specific intelligence requirements will initiate the intelligence cycle.
Planning and Direction
During the planning and direction phase of the intelligence cycle, an Intelligence Collection Plan (ICP) will be created. The ICP is a systematic process in which available resources are tasked to gather and provide pertinent information within a defined time frame. The systematic process will define specific sources from which intelligence will be collected. These sources may include but are not limited to:
1. Human intelligence (HUMINT) – The collection of intelligence via interpersonal contact, or rather information provided directly from human sources.
2. Signals intelligence (SIGINT) – The term used to describe communications intelligence and electronic intelligence.
3. Open-source intelligence (OSINT) – The collection of intelligence from publicly available information, as well as other unclassified information that has limited public distribution or access.
4. Measurement and signature intelligence (MASINT) – The scientific and technical intelligence derived from the analysis of data obtained from sensing instruments.
*Definitions from the NATO Glossary of Terms and Definitions, AAP-6 (2008)
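The tasking step of an ICP can be sketched in code. This is a minimal, hypothetical model (the field names and the two-week default deadline are my own assumptions, not part of any formal ICP standard), showing how a single intelligence requirement fans out into source-specific collection tasks with defined time frames:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta
from typing import List

@dataclass
class CollectionTask:
    source: str        # e.g. "OSINT", "SIGINT", "MASINT"
    requirement: str   # the intelligence requirement this task serves
    resource: str      # analyst or sensor tasked with collection
    due: date          # defined time frame for delivery

@dataclass
class IntelligenceCollectionPlan:
    requirement: str
    tasks: List[CollectionTask] = field(default_factory=list)

    def task(self, source: str, resource: str, days: int = 14) -> None:
        # Task an available resource against this plan's requirement
        self.tasks.append(CollectionTask(
            source, self.requirement, resource,
            date.today() + timedelta(days=days)))

# A requirement set by stakeholders drives the plan
icp = IntelligenceCollectionPlan("Identify actors targeting the retail vertical")
icp.task("OSINT", "analyst-1")
icp.task("SIGINT", "ids-sensor-cluster", days=7)
print(len(icp.tasks))  # 2
```

The value of modeling the plan explicitly is that every collected artifact can later be traced back to the requirement that initiated the cycle.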
Collection
During the collection phase, sources will be exploited for the actual collection of intelligence; this will include all sources identified in the planning and direction phase. However, the primary focus for IBM will be in the realm of SIGINT, OSINT, and MASINT, as HUMINT will be opportunistic or coincidental rather than specifically outlined and sought out (clandestine operations are not within the purview of this intelligence cycle).
Processing
Within the processing phase, collected intelligence artifacts, or raw intelligence materials, are assessed for reliability and relevance. Finally, these intelligence artifacts are put into a standard format in preparation for review. Intelligence artifacts in this format are said to be “vetted,” meaning that they have been properly verified.
Analysis
In the analysis phase of the intelligence cycle, vetted intelligence is analyzed and reviewed in order to determine its potential impact. Within the analysis phase, collateral information and patterns are identified in an effort to determine the overall significance of the vetted intelligence.
Dissemination
Dissemination is the reporting phase of the cycle, in which the intelligence consumers whose needs initiated the intelligence requirements are made aware of the findings of the analysis phase.
Intelligence is not an IP address. Let me say that again: intelligence is not an IP address. As of late there seems to be a fad for big data analytics and intelligence gathering; unfortunately, the output of most of these activities currently seems to be simply finding IP addresses. Creating the capability to build a blacklist is not the equivalent of gathering intelligence.
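The distinction can be made concrete with a sketch. The record fields and values below are entirely hypothetical; the point is only the structural difference between a bare blacklist entry and an intelligence record that carries actor, target, and methodology context:

```python
# A bare blacklist entry carries no capacity for understanding:
blacklist = {"198.51.100.23"}

# An intelligence record ties the same observable to context, so it
# supports analysis and prediction rather than just blocking.
indicator = {
    "observable": "198.51.100.23",
    "actor": "unattributed/financially-motivated",
    "targets": ["retail vertical"],
    "methodology": "SQL injection via compromised proxy",
    "first_seen": "2012-03-01",
    "confidence": "medium",
}

def is_intelligence(record: dict) -> bool:
    # A record only qualifies as intelligence when it carries context
    # beyond the raw observable itself.
    context = {"actor", "targets", "methodology"}
    return context.issubset(record)

print(is_intelligence(indicator))                         # True
print(is_intelligence({"observable": "198.51.100.23"}))   # False
```

The same IP address appears in both structures; only the second one tells an analyst anything about threat.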
Intelligence has been defined as the “capacity for learning, reasoning, understanding, and similar forms of mental activity; aptitude in grasping truths, relationships, facts, meanings, etc.” (dictionary.com). The key focus in this definition is the capacity for understanding. In terms of information security, the capacity for understanding stems primarily from knowledge of malicious actors within the threat environment, in combination with knowledge of one’s own internal environment. The combination of knowledge and understanding across these two disparate environments is an understanding of what is commonly referred to as threat.
An understanding of threat is a key component of the risk equation, yet unfortunately for many years developing that understanding has been very difficult, as intelligence processes were typically truncated if not altogether immature. Thus, most organizations were not creating a full “understanding” of the threat that they faced. Instead, most organizations were operating with a minor level of knowledge based on a cursory amount of information.
Last year, 2011, a year that is commonly referred to as the year of the security breach, enterprises and small to medium businesses began to realize exactly what the value of intelligence really is. As a direct result, many vendors began developing solutions for delivering “intelligence.” Unfortunately, the vast majority of these vendors are utilizing processes that, similarly to enterprises, are truncated in nature.
Several months ago I began researching whether or not IPS is up to the task of web application security. In summary, my initial findings were that IPS could very effectively tackle syntax-related web application attacks if IPS products could introduce context into alerting. Specifically, IPS products would have to create vulnerability-based rules by specifying the exact locations where applications are vulnerable. Sourcefire, in partnership with WhiteHat Security Inc., delivers exactly that capability through a technical partnership that integrates Sourcefire’s Snort IPS with WhiteHat’s Sentinel vulnerability scanning solution.
Much like WhiteHat’s partnerships with F5, Imperva, and Breach Security (which can be read about here), the integration of Sentinel and Snort technologies allows end users to correlate highly accurate vulnerability data with protection capabilities. The key is that WhiteHat’s vulnerability scan reports typically do not include any false positives. This allows users to leverage those reports to create traffic-blocking IPS rules with a high level of assurance that those rules will not block legitimate traffic to their web applications. Furthermore, those rules will not produce noisy false-positive alerts within the protection technology. Snort will benefit drastically from these capabilities, as detecting and blocking web application attacks has clearly not been its main focus area.
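To make the vulnerability-based-rule idea concrete, here is a small sketch of how a scanner finding might be translated into a Snort rule scoped to the exact vulnerable location. The finding's field names are purely illustrative (they are not the actual WhiteHat Sentinel API schema), and the rule is a simplified example, not production detection logic:

```python
# Hypothetical Sentinel-style finding; field names are illustrative,
# not the real WhiteHat API schema.
finding = {
    "id": 10042,
    "class": "SQL Injection",
    "url_path": "/account/lookup.php",
    "parameter": "id",
}

def snort_rule(f, sid_base=1000000):
    # Build a vulnerability-based rule that inspects only the exact
    # URI and parameter the scanner reported as vulnerable, rather
    # than matching generic attack syntax everywhere.
    return (
        'drop tcp $EXTERNAL_NET any -> $HOME_NET $HTTP_PORTS '
        '(msg:"WEB-APP {cls} attempt against {path}"; '
        'flow:to_server,established; '
        'uricontent:"{path}"; nocase; '
        'content:"{param}="; '
        'pcre:"/{param}=[^&]*(\\x27|union|select)/Ui"; '
        'sid:{sid}; rev:1;)'
    ).format(cls=f["class"], path=f["url_path"],
             param=f["parameter"], sid=sid_base + f["id"])

rule = snort_rule(finding)
print(rule)
```

Because the rule only fires on the known-vulnerable path and parameter, it can safely be set to drop traffic, which is exactly the assurance the Sentinel integration provides.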
These blocking capabilities are further complemented by the Denim Group, who, through a partnership with WhiteHat, is largely responsible for the integration between Sourcefire’s Snort and WhiteHat Sentinel. The Denim Group leverages Sentinel’s open XML Application Programming Interface (API) to deliver additional service offerings and enhanced Source Code Analysis (SCA) integration capabilities. This assists companies in integrating security at multiple levels of an application, particularly in development, assessment, and defense.
Of course, WhiteHat’s partnerships with the Denim Group and Sourcefire are not the only efforts to better address web application protection. Other leaders in IPS have also begun to better address web application security. However, it is my belief that, as of the time of this post, few vendors have as solid an offering as Sourcefire does when customers are also utilizing WhiteHat Sentinel. IBM ISS probably has the best argument against my previous statement, with its heavy focus on SQL injection, XSS, and file include attacks, and of course its integrations with IBM Rational (particularly in the AppScan group). However, while ISS does integrate with the AppScan web application vulnerability assessment product in order to enhance its IPS, the standard IBM Rational offering does not include manual testing on top of the AppScan product. This service can be purchased; unfortunately, however, it comes at a premium price to the customer. The end result is a higher possibility of false negatives and false positives in the actual scan, and thus less protection for customers.
TippingPoint also offers some web application protection in its Web Application Digital Vaccine product through a partnership with NTO. However, TippingPoint seems focused exclusively on delivering high-quality protection against SQL injection, XSS, and malicious PHP file includes. While this capability is highly beneficial and does cover the most common web application attacks, it does not offer the myriad protections that customers could gain by using Sourcefire and WhiteHat products in combination. This may shift as TippingPoint and HP’s Application Security Center become more integrated as part of HP’s acquisition of 3Com (TippingPoint’s parent company).