
CISO Monthly Roundup, February 2023: Cloud (In)Security Report 2022, Havoc targets government, new Snip3 Crypter TTPs, Rhadamanthys’ obfuscation revealed, and large language model security.

Deepen Desai

Contributor

Zscaler

Mar 1, 2023

The February CISO Monthly Roundup shines a spotlight on cloud misconfigurations, Havoc C2 activity, the Snip3 Crypter RAT, and more.

The CISO Monthly Roundup provides the latest threat research from Deepen Desai and the ThreatLabz team, along with insights on other cyber-related subjects. Over the past month, ThreatLabz released its annual 2022 Cloud (In)Security Report, observed a Havoc campaign targeting a government organization, analyzed Snip3 Crypter, performed in-depth analysis of Rhadamanthys’ obfuscation techniques, and outlined large language model security considerations.

ThreatLabz 2022 Cloud (In)Security Report

ThreatLabz released its annual Cloud (In)Security Report, containing a year of summarized findings and discoveries from the Zscaler Zero Trust Exchange. The report is broken into two major themes: cloud threat insights and cloud security best practices. The cloud threat insights section contains information vital for your organization’s security awareness. Discoveries include:

  • 98.6% of organizations have misconfigurations in their cloud environments that cause critical risks to data and infrastructure
  • 97.1% of organizations use privileged user access controls without MFA enforcement
  • 59.4% of organizations do not apply basic ransomware controls for cloud storage like MFA delete and versioning

The cloud security best practices section offers general guidelines that businesses can easily implement to improve their security posture. These include advice for securing operations under a shared responsibility model, ways to use encryption while maintaining visibility, and running vulnerability scans. This brief report is packed with information and data that can help your organization improve its cloud security right now.
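To make the storage controls above more concrete, here is a minimal sketch, assuming an AWS S3 bucket and the boto3 SDK, of how versioning and MFA delete might be enabled; the bucket name and MFA serial/token are placeholders, and enabling MFA delete also requires root-account credentials, so treat the report and your provider's documentation as the authoritative guidance.

  # Illustrative sketch (not from the report): enabling versioning and MFA delete
  # on an S3 bucket with boto3. The bucket name and MFA serial/token are placeholders.
  import boto3

  s3 = boto3.client("s3")

  s3.put_bucket_versioning(
      Bucket="example-backup-bucket",  # placeholder bucket name
      # "serial-number token-code" from the root account's MFA device (placeholder values)
      MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
      VersioningConfiguration={
          "Status": "Enabled",     # keep prior object versions so overwritten or encrypted data can be restored
          "MFADelete": "Enabled",  # require MFA to permanently delete versions or change this setting
      },
  )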

Read the ThreatLabz 2022 Cloud (In)Security Report

Zscaler Zero Trust Exchange Coverage: Zscaler Posture Control, Advanced Threat Protection, SSL Inspection.

Threat actors unleash Havoc on government

ThreatLabz observed a new threat campaign using the Havoc command and control (C2) framework to target a government organization. Havoc is capable of bypassing Microsoft Defender on current versions of Windows 11 through various evasion and obfuscation techniques. During our analysis we examined the Havoc Demon, an implant generated by the framework. It contains a shellcode loader that disables Event Tracing for Windows (ETW) and uses CreateThreadpoolWait() to decrypt and execute the next stage. It also uses KaynLdr shellcode to load the Havoc Demon DLL and performs API hashing to resolve the addresses of various NTAPIs.
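To illustrate the API hashing technique mentioned above, here is a minimal sketch of how a loader can resolve an API by hash instead of by name; the ROR-13 hash and the toy export table are illustrative assumptions, not Havoc’s actual algorithm or data.

  # Illustrative sketch of API hashing (not Havoc's exact algorithm): rather than
  # storing a string like "NtAllocateVirtualMemory", the loader stores only a hash
  # and walks the export table at runtime, hashing each name until one matches.

  def ror13_hash(name: str) -> int:
      """Classic ROR-13 hash of an ASCII export name (32-bit)."""
      h = 0
      for ch in name.encode("ascii"):
          h = ((h >> 13) | (h << 19)) & 0xFFFFFFFF  # rotate right by 13 bits
          h = (h + ch) & 0xFFFFFFFF
      return h

  def resolve_by_hash(exports, wanted_hash):
      """Return the address whose hashed export name matches wanted_hash."""
      for name, address in exports.items():
          if ror13_hash(name) == wanted_hash:
              return address
      return None

  # Toy export table standing in for ntdll's real one (addresses are made up).
  fake_exports = {"NtAllocateVirtualMemory": 0x7FFB0001, "NtProtectVirtualMemory": 0x7FFB0002}
  print(hex(resolve_by_hash(fake_exports, ror13_hash("NtAllocateVirtualMemory"))))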

Havoc’s Demon DLL parses configuration files and uses sleep obfuscation techniques. It communicates with C2 infrastructure to perform CheckIn requests and execute commands. It can also perform indirect syscalls, spoof the return address stack, and carry out other malicious functions. ThreatLabz is actively tracking the threat actor infrastructure used in this campaign involving the Havoc C2 framework.

Read the complete Havoc threat analysis

Zscaler Zero Trust Exchange Coverage: Advanced Threat Protection, Advanced Cloud Sandbox, SSL Inspection.

Snip3 Crypter learns new tricks

ThreatLabz has continuously monitored Snip3 Crypter, a multi-stage RAT available through a crypter-as-a-service model, since 2021. During this time we have observed several campaigns using Snip3 Crypter with an evolving set of TTPs. Snip3 Crypter’s relative ease of use, back-end support, and wide availability as a crypter-as-a-service have made it popular with lower-skilled threat actors. Our latest technical analysis of this threat focuses on recent changes to the Snip3 Crypter infection chain.

The recent campaign involves spear phishing emails with tax statement-themed subject lines that lure victims into opening them. These emails primarily target victims in the healthcare, energy, and manufacturing sectors. If a recipient allows Snip3 Crypter to infect their machine, it downloads additional malware such as DcRAT and QuasarRAT to the system.

ThreatLabz observed Snip3 Crypter using several new TTPs in its most recent infection chain. For example, it uses ADODB connections to fetch malicious strings from a database, and it uses hardcoded keys with custom decryption routines for the in-memory stages of the attack. The malware’s database server changes periodically in an attempt to evade domain-based detection. The latest Snip3 Crypter infection chain also demonstrates new capabilities for bypassing AMSI, shortening URLs, and leveraging its user-agent string.
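As a rough illustration of the "hardcoded keys with custom decryption routines" point above, the sketch below shows the kind of simple keyed, position-dependent XOR routine an analyst might reimplement when unpacking an in-memory stage; the key, data, and algorithm are invented for illustration and are not Snip3 Crypter’s actual routine.

  # Illustrative only: a hardcoded-key, position-dependent XOR decryption routine of
  # the kind a crypter might apply to an in-memory stage. The key, data, and algorithm
  # are invented here and are NOT Snip3 Crypter's actual routine.

  HARDCODED_KEY = b"example-key"  # placeholder key embedded in the loader

  def decrypt_stage(data: bytes, key: bytes = HARDCODED_KEY) -> bytes:
      """XOR each byte with the repeating key and its position (mod 256)."""
      return bytes(b ^ key[i % len(key)] ^ (i & 0xFF) for i, b in enumerate(data))

  # Round-trip demo: for XOR schemes, encrypting and decrypting are the same operation.
  payload = b"next-stage shellcode placeholder"
  blob = decrypt_stage(payload)           # "encrypt"
  assert decrypt_stage(blob) == payload   # decrypting recovers the original bytes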

Read the full analysis of Snip3 Crypter

Zscaler Zero Trust Exchange Coverage:  Advanced Threat Protection, SSL Inspection, Data Loss Prevention.

Uncovering Rhadamanthys obfuscation techniques

Rhadamanthys is a new infostealer written in C++ that is often distributed through malicious Google ads. The malware consists of a loader and a main module that performs credential exfiltration. This infostealer can extract credentials from several applications including KeePass, VPN clients, email clients, and various cryptocurrency wallets. Rhadamanthys uses an open-source library to implement complex anti-analysis techniques.

Rhadamanthys has a custom, embedded file system. ThreatLabz discovered one of the infostealer’s loaders using virtual machine obfuscation techniques based on the Quake III virtual machine. During our analysis, we also discovered that the encrypted Rhadamanthys payload uses a variation of the Hidden Bee format. Fortunately, ThreatLabz discovered a weakness in Rhadamanthys’ encryption that allows researchers to break it if they capture the right information.
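To give a sense of what virtual machine obfuscation means in practice, here is a minimal conceptual sketch: the protected logic exists only as custom bytecode executed by a small dispatch loop, so static analysis of the native binary reveals little. The opcodes and program below are invented for illustration and do not reflect Rhadamanthys’ or Quake III’s actual virtual machine.

  # Conceptual sketch of VM-based obfuscation (invented opcodes, not Rhadamanthys'
  # or Quake III's actual VM): the real logic lives in bytecode, and the binary only
  # contains a generic interpreter loop.

  PUSH, ADD, XOR, HALT = 0, 1, 2, 3  # illustrative opcode numbering

  def run(bytecode):
      stack, pc = [], 0
      while True:
          op = bytecode[pc]
          if op == PUSH:    # push the immediate value that follows the opcode
              stack.append(bytecode[pc + 1]); pc += 2
          elif op == ADD:   # pop two values, push their sum
              b, a = stack.pop(), stack.pop(); stack.append(a + b); pc += 1
          elif op == XOR:   # pop two values, push their bitwise XOR
              b, a = stack.pop(), stack.pop(); stack.append(a ^ b); pc += 1
          elif op == HALT:  # stop and return the top of the stack
              return stack.pop()

  # (5 + 3) ^ 0xFF expressed as bytecode rather than as native instructions.
  program = [PUSH, 5, PUSH, 3, ADD, PUSH, 0xFF, XOR, HALT]
  print(hex(run(program)))  # 0xf7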

Read the full analysis of Rhadamanthys

Zscaler Zero Trust Exchange Coverage:  Advanced Threat Protection, SSL Inspection, Cloud Sandbox, Data Loss Prevention.

CISO’s thoughts: Large language model security risks

While large language models (LLMs) like ChatGPT have tremendous potential and are already driving considerable innovation at many large organizations (including my team at Zscaler), it is important to understand their security risks and adopt them safely. With that in mind, I thought it would be timely to outline some security concerns worth considering when thinking about LLMs.

First, organizations using LLMs risk unintentional data loss. This could happen through malicious (or accidental) prompt engineering. An LLM trained on a data set that includes sensitive information like PII, payment card data, or healthcare records risks having that information revealed. The same is true for intellectual property, such as proprietary code segments. It would be unfortunate if a developer asking an LLM to optimize their work later saw the same code exposed to the world.

On that note, it is worth mentioning that developers may be too trusting of LLM code. This means vulnerable code generated by the model might be eagerly accepted without proper security testing. Others may intentionally use LLMs to create dangerous code. Inexperienced adversaries may use the coding abilities of an LLM to create new malware and ransomware. More experienced attackers might use these models to create polymorphic code, so their existing malware can better evade detection.

Humans are also exploitable through attack vectors that LLMs can improve. For example, these models can create more believable phishing emails and SMS messages that make scams look more credible. They may also create and disseminate convincing fake news articles. LLMs easily eliminate the grammatical errors and spelling mistakes that often catch people’s attention and raise their suspicions.

While in-house security teams may use LLMs for vulnerability discovery, threat actors can do the same. In fact, threat actors can use LLMs extensively to uncover potentially serious flaws in webpages/websites worldwide. They might even use these large language models as advisors for training, or during an active attack. Consider them turning to an LLM for answers every time their efforts hit a roadblock. “I am faced with a penetration testing challenge. I am on a website with one button. How would I test its vulnerabilities?”

AI is a powerful tool that has the potential to make an incredible impact on the cybersecurity space. It is important that security professionals take time to understand the risks, benefits, and use cases LLMs possess. Learning and using this technology today will lead to having a more secure environment tomorrow. 

About ThreatLabz

ThreatLabz is the embedded research team at Zscaler. This global team includes security experts, researchers, and network engineers responsible for analyzing and eliminating threats across the Zscaler security cloud and investigating the global threat landscape. The team shares its research and cloud data with the industry at large to help promote a safer internet.

 

The Zscaler Zero Trust Exchange

Zscaler manages the world’s largest security cloud. Each day, Zscaler blocks over 150 million threats to its 6000+ customers, securing over 240 billion web transactions daily. The Zscaler ThreatLabz security research team uses state-of-the-art AI and machine-learning technology to analyze Zscaler Zero Trust Exchange traffic and share its findings.

 

What to read next: 

Job scams impersonate companies still hiring following tech layoffs

State of Encrypted Attacks 2022 Report

Fostering cooperation among enterprise CISOs and beyond

 
