Guest Contributor: Alex Lozikoff, Business Development Manager
The purpose of this article is to share real-world experience of operating a new class of IDS solutions based on deception technology.
To keep this story logically coherent, I need to start with the premises. So, let's begin.
1. Targeted attacks are the most dangerous type of attack, even though they make up only a small share of the total number of threats. Why are they so important? Because each targeted attack is a business project, carried out by well-trained and motivated people who know very well how to monetize the captured assets. That is why the damage from one successful targeted attack can exceed the combined damage of all other cyber threats.
2. No tool (or set of tools) can guarantee 100% perimeter security.
3. Targeted attacks typically unfold in stages. Breaching the perimeter is only one of the initial stages, and (you can throw stones at me now) it does not by itself do much damage to the victim, unless, of course, it is a DeOS attack (destruction of services: ransomware, wipers, etc.). The real pain comes later, when the captured assets are used for pivoting and lateral movement and we fail to notice it.
4. Since we begin to suffer real losses only when the attackers reach their targets (application servers, DBMSs, data storage, repositories, critical infrastructure elements), it is logical that one of the main tasks of the security team is to interrupt attacks before this sad moment. But to interrupt something, we must first know about it. And the sooner, the better.
5. Accordingly, if we want to manage APT risk and reduce the damage from such attacks, we need tools that ensure a minimal TTD (time to detect): the time from the moment of initial intrusion to the moment the attack is detected. Depending on industry and region, this period averages from 99 days in the US to 172 days in the APAC region (M-Trends 2017: A View From the Front Lines, Mandiant). Sad but true: in most cases we detect not the attacks themselves but their results, some visible signs like corrupted or encrypted data, lost money, unavailable services, and so on. In other words, too late. Why?
6. There are many reasons for this. To me, the main one is a misunderstanding of the "defense in depth" concept. The common security toolset of "NGFW + mail gateway + domain policies + AV + DLP" is not "defense in depth". These are all the same type of security control, so-called "preventive" controls. Of course, they are important and should be up and running, but they are no longer enough. They can quickly and easily detect known threats and kick off "script kiddies", but they will not catch an APT or a 0-day exploit. Dedicated "detective" controls are needed for that.
7. What should this "detective" control look like?
- It should work effectively even when the perimeter is already compromised
- It should detect successful attacks in near real time, regardless of the tools and vulnerabilities used
- It should not depend on signatures, rules, profiles, or other static artifacts
- It should not require large datasets for analysis
- It should report an attack not as some risk score produced by "the best-in-the-world, patented, and therefore secret mathematics", but as a practically binary event: "Yes, we are under attack" or "No, everything is OK"
- It should be universal, effectively scalable, and deployable in any heterogeneous environment, regardless of the physical and logical network topology.
One such tool is a deception-based IDS. These solutions build on the good old honeypot concept, but at a completely different level of implementation. Modern deception platforms mislead attackers with specialized traps, lures, and other forms of active disinformation.
According to the Gartner Security & Risk Management Summit 2017, deception solutions are among the top 3 strategies and tools recommended for adoption.
According to the TAG Cybersecurity Annual 2017 report, deception is one of the main directions in the evolution of IDS (Intrusion Detection Systems).
The entire SCADA section of the latest Cisco report on the state of IT security was built with the help of one of the leaders in this market, TrapX Security (Israel), whose TrapX DeceptionGrid solution has been running in our test zone for a year.
We constantly study and test various IT and security solutions in our lab, where around 50 different virtual servers and desktops are deployed, including TrapX DeceptionGrid.
So, let’s go from the top down:
1. TSOC (TrapX Security Operation Console) – the "brain" of the system. This is the central management console, where we configure and deploy the solution and do all our daily work. Since it is a web service, it can be deployed anywhere: on-premises, in the cloud, or in an MSSP environment.
2. TrapX Appliance (TSA) – a virtual server where all our network sensors (traps) actually "live". It connects to the network via ordinary access ports, or via trunks if we want to monitor several subnets at once.
In our lab we have only one TSA deployed (mwsapp1), but in practice there can be many. This may be needed in large networks with no L2 connectivity between segments (a typical example is a holding company and its subsidiaries, or a bank's head office and branches), or in networks with isolated segments, such as SCADA subnets. In each such branch or segment you can deploy a separate TSA and connect all of them to the central TSOC over HTTPS, where all the information is collected and processed. This architecture lets us build distributed monitoring systems without redesigning the existing network or violating the existing segmentation scheme.
The Network Intelligence Sensor (NIS) is an additional TSA feature. We can feed the TSA a copy of all outgoing traffic via TAP/SPAN; if it detects connections to known botnets or C&C servers, or TOR sessions, we also receive an alert in the console. In our environment this functionality is implemented on the firewall, so we did not use it.
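Conceptually, this kind of sensor boils down to matching outbound connection destinations against a threat-intelligence indicator list. Here is a minimal, hypothetical sketch of that idea; the indicator addresses and flow records are made-up sample data (documentation-range IPs), not real intelligence and not TrapX's actual implementation:

```python
# Conceptual sketch of a Network Intelligence Sensor: flag outbound
# flows whose destination matches a known-bad indicator list.
# All addresses below are illustrative (RFC 5737 documentation ranges).

KNOWN_BAD = {
    "198.51.100.23",   # hypothetical C&C address
    "203.0.113.77",    # hypothetical botnet node
}

def check_flows(flows):
    """Return alerts for flows whose destination is a known-bad indicator."""
    alerts = []
    for src, dst, dport in flows:
        if dst in KNOWN_BAD:
            alerts.append(f"ALERT: {src} -> {dst}:{dport} matches C&C indicator")
    return alerts

sample_flows = [
    ("10.0.0.5", "93.184.216.34", 443),   # benign outbound HTTPS
    ("10.0.0.7", "198.51.100.23", 8080),  # matches an indicator
]
for alert in check_flows(sample_flows):
    print(alert)
```

A real sensor would of course parse mirrored traffic rather than a ready-made flow list, and would pull indicators from a continuously updated feed.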
3. Application Traps (Full OS) – traditional Windows-based honeypots. We don't need many of them, since their main job is either to provide IT services to the next layer of sensors or to attract attackers to fake application servers. We have one such server installed (FOS01).
4. Emulated traps – the main component of the solution, which allows us to create a very dense "minefield" for attackers and saturate the entire enterprise network with traps. The attacker sees such a trap as a real Windows PC or server, a Linux server, a switch, or any other device we decide to show him.
A very important point: each such host is not a full virtual machine requiring resources and licenses. It is a decoy, an emulation: every emulated trap is just a process on the TSA with a set of parameters and an IP address. Therefore, with even a single TrapX Appliance we can saturate the network with hundreds of "phantom" hosts that function like sensors in an alarm system. This technology makes it possible to scale the honeypot concept effectively across large distributed enterprise networks.
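To make the "just a process with an IP address" idea concrete, here is a toy sketch of the underlying principle: a lightweight listener that presents a plausible service banner and raises an alert on any interaction, because no legitimate host has business talking to a trap. Everything here (class name, banner, ports) is invented for illustration and has nothing to do with TrapX's real internals:

```python
# Toy illustration of an emulated trap: a cheap process that only
# listens, looks like a service, and alerts on ANY interaction.
import socket
import threading

class EmulatedTrap:
    """A minimal 'phantom host' sketch (hypothetical, not TrapX code)."""

    def __init__(self, host="127.0.0.1", port=0, banner=b"SSH-2.0-OpenSSH_7.4\r\n"):
        self.alerts = []
        self.banner = banner
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.sock.bind((host, port))   # port=0 -> OS picks a free port
        self.sock.listen(5)
        self.port = self.sock.getsockname()[1]

    def serve_once(self):
        conn, addr = self.sock.accept()
        # Any touch of a trap is suspicious by definition.
        self.alerts.append(f"interaction from {addr[0]}:{addr[1]}")
        conn.sendall(self.banner)      # show the attacker a plausible service
        conn.close()

trap = EmulatedTrap()
worker = threading.Thread(target=trap.serve_once)
worker.start()

# Simulated attacker probe against the trap
probe = socket.create_connection(("127.0.0.1", trap.port))
print(probe.recv(64))                  # attacker sees an SSH-style banner
probe.close()
worker.join()
print(trap.alerts)
```

The point of the sketch is the cost model: one such process needs no OS license and almost no resources, which is what makes hundreds of decoys per appliance feasible.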
We were quite curious, so we deployed almost everything: Windows PCs and servers of different versions, Linux servers, an ATM with Windows Embedded, SWIFT Web Access, a network printer, a Cisco switch, an Axis IP camera, a MacBook, a PLC device, and even a smart bulb. In general, you can deploy a trap on every free IP in your network, but it is enough to run sensors at 10-20% of the total number of real hosts.
From the attacker's point of view these hosts look attractive, because they contain vulnerabilities and appear to be relatively easy targets. The attacker sees the services on these hosts and can interact with them or attack them using standard or custom tools and protocols (SMB, WMI, SSH, Telnet, web, DNP3, Bonjour, Modbus, etc.). But it is impossible to actually compromise a trap and run arbitrary code inside it, so an attacker cannot turn our deception network against us.
5. The combination of these two technologies (Full OS and emulated traps) makes it statistically very likely that an attacker will sooner or later collide with some element of our alarm network. But how do we push this probability close to 100%?
This is where deception tokens enter the battle. With them, all our PCs and servers can take part in our distributed IDS. Tokens are placed on real users' PCs. It is important to understand that a token is not an agent that consumes resources and can cause conflicts. Tokens are passive information elements, "breadcrumbs" that lead the attacker into a trap: mapped network drives, bookmarks and saved passwords for fake web consoles, saved SSH/RDP/WinSCP sessions, new entries in the hosts file, fake credentials injected into memory, fake ODBC data sources, and so on. Tokens place the attacker in a distorted environment and force him to play Russian roulette, where every move may be fatal. He has no way to determine what is true and what is false.
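As a rough illustration of how passive such breadcrumbs are, here is a hypothetical sketch that simply writes lure artifacts to disk: a fake "saved SSH session" and a hosts-file fragment pointing at a trap address. All names, paths, hostnames, and credentials are invented; a real deployment would plant genuine artifacts (PuTTY sessions, mapped drives, registry entries) in the places attackers actually harvest:

```python
# Hypothetical sketch: deception tokens are just planted data, not agents.
import json
import tempfile
from pathlib import Path

def plant_tokens(workstation_dir, trap_host):
    """Drop passive lures that lead an intruder toward trap_host."""
    tokens = []

    # 1. A fake "saved SSH session" an attacker may harvest and replay.
    session = workstation_dir / "ssh_sessions.json"
    session.write_text(json.dumps({
        "prod-db-backup": {"host": trap_host, "user": "svc_backup", "port": 22}
    }))
    tokens.append(session)

    # 2. A fake hosts-file entry advertising a juicy-sounding server.
    hosts = workstation_dir / "hosts.append"
    hosts.write_text(f"{trap_host}  swift-gw.corp.local\n")
    tokens.append(hosts)
    return tokens

with tempfile.TemporaryDirectory() as d:
    for token in plant_tokens(Path(d), "10.0.0.250"):
        print(token.name, "->", token.read_text().strip())
```

The key property shown here: nothing runs on the endpoint afterwards. The token costs nothing until someone who shouldn't be reading it reads it.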
Creating a network trap mimicking Windows Server 2016 and setting up tokens. A user-friendly interface: no manual config editing, no scripts.
In our lab, we configured and placed a number of such tokens on FOS01 (Windows Server 2012 R2) and on a test PC (Windows 7). These machines have RDP enabled, and we periodically place them in the DMZ, where they are accessible from the Internet. We also have emulated traps mimicking SWIFT Web Access and several emulated Windows and Linux servers in the DMZ. Thus we always get a constant stream of real, not "synthetic", incidents.
Brief 1-year statistics:
56,208 – incidents recorded
2,912 – attack-source hosts detected
Interactive, clickable attack map
Despite the large number of events, handling them was quite easy: TrapX classifies events by severity and lets the security team focus first on the most dangerous ones, when the attacker tries to establish management sessions (interaction events) or when binary payloads appear in our traffic (infection events).
All event information is human-readable and understandable even to a user with only basic knowledge of information security. I think this is one of the most important things, as it helps partially close the existing qualification gap between "attackers" and "defenders".
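The severity-first triage described above can be sketched in a few lines. The event types and numeric ranks below are illustrative, not TrapX's actual taxonomy:

```python
# Illustrative severity-first triage: interaction and infection events
# outrank scans and single connections. Ranks are invented for the example.

SEVERITY = {"infection": 3, "interaction": 2, "connection": 1, "scan": 0}

def triage(events, min_severity=2):
    """Keep only events worth an analyst's immediate attention,
    highest severity first."""
    ranked = sorted(events, key=lambda e: SEVERITY[e["type"]], reverse=True)
    return [e for e in ranked if SEVERITY[e["type"]] >= min_severity]

events = [
    {"src": "192.0.2.10", "type": "scan"},
    {"src": "192.0.2.11", "type": "interaction"},  # management session attempt
    {"src": "192.0.2.12", "type": "infection"},    # binary payload observed
    {"src": "192.0.2.13", "type": "connection"},
]
for e in triage(events):
    print(e["type"], e["src"])
```

With tens of thousands of raw events per year, a cutoff like this is what keeps the alert stream reviewable by a small team.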
Most of the recorded incidents were scans of our hosts or single connections. Something like this:
Or RDP brute force:
One day we noticed several thousand such events with different logins:
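What makes "several thousand events with different logins" read instantly as brute force is the aggregation: many failed attempts against one trap from one source. A minimal sketch of that logic, with an invented threshold and sample log records:

```python
# Simple sketch of brute-force detection against a trap: count failed
# login attempts per source and flag sources over a threshold.
# Threshold and records below are illustrative.
from collections import Counter

def detect_bruteforce(attempts, threshold=5):
    """Flag source IPs exceeding `threshold` failed logins."""
    failed = Counter(src for src, user, success in attempts if not success)
    return {src for src, count in failed.items() if count >= threshold}

attempts = [("203.0.113.5", user, False) for user in
            ("admin", "administrator", "root", "user", "test", "backup")]
attempts.append(("10.0.0.9", "alice", True))  # an unrelated successful login
print(detect_bruteforce(attempts))
```

On a trap the threshold can be far more aggressive than on a production host, since even a single login attempt against a decoy is already suspicious.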
But there were also more interesting cases, when we let attackers successfully "capture" one of our Windows machines via RDP and begin lateral movement in our local network.
A nice try at executing arbitrary code using psexec:
The attacker found a token (a saved PuTTY session) that led him into an emulated Linux server trap. He immediately tried to destroy all the log files and corresponding system variables with a single pre-prepared set of commands.
The attacker attempts injections against a trap mimicking SWIFT Web Access:
In addition to these real attacks, we ran a number of our own tests. One of the most interesting was measuring the detection time for a network worm spreading across the network. We used a nice auditing tool from GuardiCore called Infection Monkey: a network worm that can capture Windows and Linux machines but carries no harmful payload.
We deployed a local C&C server, launched the first copy of the worm on one of the machines, and waited. We received the first notification in the TrapX console in less than 90 seconds.
A 90-second TTD against an average of 100+ days. Not bad!
OK, detection is nice, but what about mitigation?
Integration with other vendors makes it possible to build various automatic response and mitigation scenarios.
For example, integration with NAC (Network Access Control) systems or with McAfee, Cylance, or Carbon Black lets us automatically isolate compromised PCs.
Integration with sandboxes lets us automatically submit all binary payloads for analysis.
McAfee Sandbox integration
TrapX also has a built-in event correlation module, a sort of mini-SIEM.
It works fine, but it is more useful to integrate the solution with an existing SIEM. In our case it was HP ArcSight; we integrated via syslog, which was quite straightforward and took us about five minutes.
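Forwarding deception alerts to ArcSight over syslog typically means emitting CEF (Common Event Format) messages. As a hedged illustration of the general mechanism (not TrapX's actual output), here is a sketch that builds a CEF line and would ship it over syslog/UDP; the vendor/product strings, field values, and collector hostname are all invented:

```python
# Illustrative sketch: ship a deception alert to a SIEM as CEF over
# syslog/UDP. Field values and hosts are hypothetical examples.
import socket

def build_cef(src_ip, trap_ip, name, severity):
    # CEF header: Version|Device Vendor|Device Product|Device Version|
    #             Signature ID|Name|Severity|Extension
    return (f"CEF:0|ExampleDeception|DeceptionGrid|1.0|100|{name}|{severity}|"
            f"src={src_ip} dst={trap_ip}")

def send_syslog(message, host="127.0.0.1", port=514):
    pri = "<134>"  # facility local0 (16*8), severity informational (6)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto((pri + message).encode(), (host, port))

msg = build_cef("203.0.113.5", "10.0.0.250", "Trap interaction", 9)
print(msg)
# send_syslog(msg, "siem.corp.local")  # uncomment against a real collector
```

Since a deception alert is practically binary ("someone touched a trap"), even this minimal event shape carries almost everything an analyst needs: who touched what.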
The built-in ticketing module helps manage incident handling.
And, of course, there is a role-based access model, AD integration, an advanced system of reports and triggers (event-driven notifications), and orchestration for large holding structures and MSSP providers.
Instead of a conclusion
TrapX DeceptionGrid lets you build and operate a distributed IDS without significant additional software and hardware costs. In effect, TrapX turns your entire IT infrastructure into a "minefield" for attackers and creates a centralized, enterprise-wide detective control.
It is, of course, your choice whether to implement such controls. But with such a system covering your back, a perimeter compromise is just the start of the game, because early intrusion detection lets you deal with security incidents rather than with their sad consequences.
This article was originally posted on Peerlyst.