Your neighbors got ransomwared! Live report from the scene

Such stories are rarely made public: few people like to boast about getting ransomwared (even if there's a happy ending). But let's face it, such stories do happen—and in far greater numbers and much closer to home than we imagine. Ransomware still tops the list of attacks on organizations. One such attack was logged by PT Network Attack Discovery (PT NAD), our network traffic behavior analysis system, which we were piloting at the time. And if only the SOC operator had paid attention to the alerts in the new system interface... but let's leave the what-if and focus on the what-is.


Prologue

Moscow. April 16, 2023. It was a mild night, a relatively balmy +8°C. Spring was in the air, with a scent of the approaching summer—sweet and intoxicating. That same calm night, in the server room of one company, the lights of the servers were blinking and the air conditioner was humming rhythmically. Slowly but surely, the network domain was being encrypted.

The time and circumstances were not randomly chosen. When better to encrypt a domain than a Saturday night? Let's unpick the chain of errors in reverse. We'll walk through the series of dramatic events from end to start, and discover how the hackers infiltrated the network and what they did inside. We'll also see where the attackers got their fingers burned, thanks to which they may have been stopped just in time. And this whole fascinating journey will unfold through the eyes of PT NAD.

Beginning of the end

The final stage of the ransomware attack is to launch it on all network nodes

Everything has a beginning and an end. In the screenshot we see the end. PT NAD behavioral modules detect the mass creation of new tasks on nodes in the domain. The time is perfect: from 2:34 AM to 3:00 AM. Late night or early morning? It's irrelevant, because everyone who might catch the attack in time is fast asleep at home. There is no one to stop the ransomware.

Note not only that the hackers use ordinary Windows Task Scheduler tasks to run their commands, but also which nodes create them. The most tragic stories are usually about friendship and betrayal: here, a domain controller you trust implicitly turns out to be the source of infection (DC in the node name stands for domain controller).

Command lines for launching ransomware on nodes using Task Scheduler

The password in the line for launching shadow.exe creates only false hope that all will be well. It is the same everywhere, and the chances of using it for file decryption are close to zero. The task names are generated by a pseudo-random algorithm, so there's no need to invent real names to disguise them if you encrypt literally every node in the network. The command line strongly suggests the presence of the LockBit ransomware. The pass parameter is used to decrypt part of the code of the executable—that's the antivirus bypass method right there.
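To make the detection logic above a little less abstract, here is a rough sketch of how a burst of remote task creation can be flagged from nothing more than a stream of task-creation events. This is not PT NAD's actual rule set; the event format and thresholds are assumptions for illustration only.

```python
from datetime import datetime, timedelta

# Assumed event format: (timestamp, source_host, target_host, task_name),
# e.g. reconstructed from remote Task Scheduler calls (the ATSVC pipe /
# ITaskSchedulerService) or from Windows event 4698 on the endpoints.

def find_mass_task_creation(events, window=timedelta(minutes=30), min_targets=10):
    """Flag a burst in which one source registers scheduled tasks on many
    different hosts within a short window. Thresholds are purely illustrative."""
    events = sorted(events)
    start = 0
    for end in range(len(events)):
        # Keep the sliding window no wider than `window` in time.
        while events[end][0] - events[start][0] > window:
            start += 1
        burst = events[start:end + 1]
        targets_by_source = {}
        for _, src, dst, _ in burst:
            targets_by_source.setdefault(src, set()).add(dst)
        for src, targets in targets_by_source.items():
            if len(targets) >= min_targets:
                return src, sorted(targets)   # alert-worthy: one node tasking many others
    return None
```

In the incident described here, the "source" would be the domain controller, and the 2:34–3:00 AM burst would trip a check like this easily.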

How did it get there? Domain controllers are the most protected nodes in the network, with access granted solely to administrators. Hang on! PT NAD allows you to search for network sessions during which files with a certain name were transferred.

Transferring attack-related executables between two domain controllers

Half an hour before encryption began, the .exe file was transferred between two domain controllers, one of which subsequently launched the attack. But note the accounts. One of them has the prefix adm_, which usually indicates an administrator account with maximum privileges in the domain.

What would you do in the hacker's shoes before encrypting the data? If you consider yourself a decent hacker, you need to be able to restore the encrypted data after demanding a ransom, otherwise you'll stop being trusted—yes, attackers too have a reputation to uphold. Sure, you can still siphon off important data and threaten to post it on leak sites if the ransom is not paid. And we're not talking about a few archives from the desktop of the Chief Legal Officer—the volume of data can run to several terabytes. How to get it onto your own servers without anyone noticing? And where to copy the stolen data anyway?

An FTP offload of almost 11 gigabytes at 5 AM?
Ten FTP offloads at 5 AM over a week and a half?

It looks like in this particular case the hackers rented a VPS for a month specifically to offload the data onto it. FTP sends credentials in cleartext, so the username and password, although redacted in the screenshot, are right there in the session. In an ideal world, the PT NAD operator would use them to log in to the attackers' server and delete the stolen archives. Alternatively, the hackers could have uploaded the data to a cloud storage service such as Mega or Dropbox.
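By volume and timing alone, this kind of offload is easy to pick out of session metadata. A minimal sketch, assuming flow records of the form (start time, source, destination, protocol, bytes uploaded) and a deliberately naive "internal address" test; the thresholds are illustrative, not something any real product ships with:

```python
from datetime import datetime

GIB = 1024 ** 3

def suspicious_ftp_offloads(flows, min_bytes=1 * GIB,
                            night=(0, 7), internal_prefix="10."):
    """Return flows that look like off-hours bulk uploads over FTP:
    an internal host pushing gigabytes to an external address at night.
    flows: iterable of (start: datetime, src_ip, dst_ip, proto, bytes_out)."""
    hits = []
    for start, src, dst, proto, bytes_out in flows:
        off_hours = night[0] <= start.hour < night[1]
        outbound = src.startswith(internal_prefix) and not dst.startswith(internal_prefix)
        if proto == "ftp" and outbound and off_hours and bytes_out >= min_bytes:
            hits.append((start, src, dst, bytes_out))
    return hits
```

In practice you would key on volume and destination (an unfamiliar VPS, a cloud storage API) rather than the protocol name, since the same offload could just as well go over SFTP or HTTPS.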

How did the hackers get hold of an account with maximum privileges? How did they penetrate node .51 (DC), and what preceded these events? As much as we'd like to, we'll never know the answers. In any cybercrime investigation, the narrative thread can get lost and reemerge in unexpected places, while some questions remain forever unanswered. The same goes for our story. The hackers could have got into the user segment of the network and roamed its nodes in search of access.

Why don't we see any traffic from the user segment? These areas of the network tend to be hooked up for analysis last: the priority is the network core and the server segments, followed by external traffic, the DMZ, and the VPN and Wi-Fi segments. And no, however much we might want to, we can't analyze everything, but for security tools this is not a big problem. We can afford to be missing data on parts of the attack chain, because the hackers only need to reveal themselves once, and all their activity will be abruptly stopped.

Time now to move to the beginning of the story.

How it all began

Security experts like to quip that the safest server is one disconnected from the network (both the Internet and the 220-volt mains). The truth is that every large company has multiple entry points to its network: a website, a VPN for remote employees, a mail server, a handy SSH server on a non-standard port for authorized users only. Any of these might get hacked or have a password brute-forced; collectively, such Internet-facing servers are known as the perimeter. Incidentally, the employee Wi-Fi network is another such entry point.

Needless to say, the perimeter requires extra special protection, as do domain user passwords. The events of 2017 and the WannaCry epidemic demonstrated this clearly: for no apparent reason, port 445 on the servers of passport offices and hospitals was open to the whole Internet, which is why those organizations were paralyzed for days. An epidemic on the scale of WannaCry has not recurred since, but any open port (3389/RDP or 22/SSH) still gets its daily dose of password brute-forcing.

In any case, if you've exposed port 22 to the Internet, be prepared for a lot of incoming connections. Disable password authentication and allow only key-based logins, and your SSH server becomes virtually impregnable: the world hasn't seen a major OpenSSH exploit in a long time, and hosting providers rely on SSH access across the board.

But if you see such SSH network sessions from unfamiliar servers, the situation may be starting to spin out of control.

Long-duration, high-volume SSH connections


We can't take a peek under the hood of the SSH connections this time: they're encrypted. That said, packet lengths can be used as a side channel to determine whether authentication was by key or by password, and whether the SSH connection was used as a tunnel.

SSH connection parameters card
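As a rough illustration of what can still be squeezed out of an encrypted session, here is a sketch over hypothetical per-session metadata. The field names and thresholds are my own assumptions, and the authentication-method guess is a heuristic, not a rule PT NAD is known to use.

```python
from dataclasses import dataclass

@dataclass
class SshSession:
    # Assumed per-session metadata, e.g. taken from an NTA session card.
    src: str
    dst: str
    duration_s: float
    bytes_client: int            # client -> server
    bytes_server: int            # server -> client
    client_pkt_lens: list        # encrypted record lengths right after key exchange

def looks_like_tunnel(s, min_duration=3600, min_bytes=100 * 2**20):
    """Long-lived, high-volume SSH sessions are candidates for tunnels or bulk
    transfers rather than interactive logins. Thresholds are illustrative."""
    return s.duration_s >= min_duration and (s.bytes_client + s.bytes_server) >= min_bytes

def maybe_publickey_auth(s, big=512):
    """Very rough side-channel guess: a public-key userauth request carries a key
    blob and a signature, so the first encrypted client packets after the key
    exchange tend to be noticeably larger than for password authentication."""
    return any(length >= big for length in s.client_pkt_lens[:5])
```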


What's next? The MITRE ATT&CK matrix and common sense dictate that initial access is followed by network reconnaissance. The attackers probe and brute-force SMB (port 445) and SSH (port 22) on neighboring servers in the DMZ segment. Most likely they already hold credentials for some account, which they try everywhere they can, then move further through the network in search of new accounts. This tactic is called lateral movement, or moving inside the perimeter.

The same node began scanning ports in the DMZ segment after suspicious incoming SSH connections
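Spotting this kind of fan-out doesn't require anything exotic. A sketch, assuming a connection log of (source, destination, destination port, state) tuples; the port set and thresholds are assumptions for illustration:

```python
from collections import defaultdict

SCAN_PORTS = {22, 445, 3389}

def detect_fan_out(conns, min_targets=20):
    """Flag sources that touch SSH/SMB/RDP on many distinct hosts, especially
    when most attempts go unanswered - typical of reconnaissance inside a segment."""
    targets = defaultdict(set)
    failures = defaultdict(int)
    for src, dst, port, state in conns:
        if port in SCAN_PORTS:
            targets[src].add((dst, port))
            if state != "established":
                failures[src] += 1
    return {src: sorted(pairs) for src, pairs in targets.items()
            if len(pairs) >= min_targets and failures[src] >= min_targets // 2}
```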

A couple of hours later, the hackers switch to active offense. Using scripts from the Impacket framework, they begin guessing passwords and recombining the usernames and passwords they already know across other accounts, hoping to catch password reuse. This is a classic example of password spraying, where attackers try the same password against many accounts; it allows plenty of guesses without the risk of locking any single account. The activity was detected thanks to network detection rules written for specific artifacts in the Kerberos requests that the Impacket library generates. Attackers often don't realize that their network requests can give them away.

Network rule triggerings on Impacket activity with different usernames
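The spraying pattern itself (one source, many different usernames, a short time span) is also easy to express over authentication events. A sketch, assuming events of the form (timestamp, source IP, username, result), e.g. Kerberos AS-REQ outcomes reconstructed from traffic or Windows events 4768/4771; the thresholds are illustrative:

```python
from collections import defaultdict
from datetime import timedelta

def detect_password_spraying(events, window=timedelta(minutes=30), min_users=15):
    """One source authenticating as many *different* users in a short window is
    the signature of spraying (one password, many accounts), as opposed to
    classic brute force (many passwords, one account)."""
    events = sorted(events)
    alerts = {}
    start = 0
    for end in range(len(events)):
        while events[end][0] - events[start][0] > window:
            start += 1
        users_by_src = defaultdict(set)
        for _, src, user, result in events[start:end + 1]:
            if result != "success":          # failed or pre-auth-failed attempts only
                users_by_src[src].add(user)
        for src, users in users_by_src.items():
            if len(users) >= min_users:
                alerts[src] = sorted(users)
    return alerts
```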


Once discovered, hackers can no longer hide from the Network Traffic Analysis (NTA) system. Their actions can be monitored manually by viewing network sessions from compromised nodes, or by searching for sessions where compromised accounts were used. If nothing is done about it, the list of compromised assets will snowball, even though hackers prefer to keep reusing the same accounts.

Unusual attributes also crop up. For example, a session with NTLM authentication. NTLM is an outdated protocol, superseded by Kerberos; nevertheless, it remains widespread, and the two are currently used side by side. Inside an NTLM authentication message is a hostname field, which contains the name of the client node. Filling it in is optional, but by default the Kali Linux distribution (used by hackers of all stripes) puts its own hostname there, KALI, unless someone bothers to change it. Can all network sessions be found using this name? Sure 🤫

NTLM authentication with characteristic fields
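For the curious: the client host name sits at a fixed offset inside the NTLM AUTHENTICATE (Type 3) message, so pulling it out of a captured NTLMSSP blob takes a dozen lines. A minimal sketch; the blob is assumed to come from, say, an HTTP "Authorization: NTLM ..." header or an SMB session setup:

```python
import struct

def ntlm_workstation(blob: bytes):
    """Extract the workstation (client host) name from an NTLM AUTHENTICATE
    (Type 3) message. Per MS-NLMP, the WorkstationFields descriptor
    (Len, MaxLen, BufferOffset) sits at byte offset 44."""
    if not blob.startswith(b"NTLMSSP\x00") or len(blob) < 64:
        return None
    if struct.unpack_from("<I", blob, 8)[0] != 3:             # must be AUTHENTICATE
        return None
    ws_len, _, ws_off = struct.unpack_from("<HHI", blob, 44)
    flags = struct.unpack_from("<I", blob, 60)[0]
    encoding = "utf-16-le" if flags & 0x00000001 else "ascii"  # NEGOTIATE_UNICODE flag
    return blob[ws_off:ws_off + ws_len].decode(encoding, errors="replace")

# blob = base64.b64decode(ntlm_authorization_header_value)   # hypothetical input
# if ntlm_workstation(blob) == "KALI": raise an eyebrow, then an alert.
```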


The cybervillains of our tale didn't achieve massive success: their actions were limited to poking around nodes and trying out trendy exploits like PrinterBug or Zerologon. But March 28 was a turning point. From compromised node .98, the hackers began executing whoami and ipconfig reconnaissance commands on another node. In doing so they burned the account they were using, which turned out to be svc_sync, grabbed, it seems, from one of the servers.

Execution of whoami command on the node during reconnaissance

Svc_sync was first noticed at the end of March, which means it took the hackers about five days to find it. This time could have been used by the InfoSec team to isolate the compromised assets and investigate the incident. Through March 29–31, the hackers were still moving around inside the perimeter, probing both new and admin accounts, before leaving for the weekend. On returning, they apparently went to collect information from nodes inaccessible to us. They reappeared in April, offloading gigabytes of valuable data to an external server. But you already know that part of the story.

Epilogue

Once inside a new network, hackers, like new employees, don't know where to go or where to look for data they need. Except that hackers have no one to ask. It takes time—days, weeks, sometimes months—for them to feel at home. Luckily for the hackers in this case, it all happened quite quickly.

Besides the story above, there are dozens that never got this far: ransomware caught in the nick of time, data theft prevented. But people are less excited to read about non-incidents; they're not conversation material. Just as few people like making the difficult call to shut the network down for the weekend when there is even the slightest risk of spending Monday restoring from backups.

It took the hackers more than three weeks to get from the first SSH session to the encryption command, and during that time they gave themselves away at virtually every step. True, they didn't use this year's trending Cobalt Strike and Sliver frameworks, cloud storage services, or exploits for 1C-Bitrix vulnerabilities; with SSH access like theirs, no additional tools are needed. But even against such a minimal toolkit, an NTA system would have let us spot the intruders' actions in good time, and reconstructing the chain of events in reverse from a saved copy of the traffic and its metadata is then a purely technical exercise.

All names are fictitious, any coincidences accidental. Take good care of yourself and your passwords. That was Kirill Shipulin, Head of the Network Attack Detection Group, Positive Technologies Security Expert Center. Bye for now!
