Daily Blog #720: Spotlight on Zeltser Challenge participant - Chris Eng

Hello Readers,


This week, we’re excited to shine the spotlight on another Zeltser Challenge participant: Chris Eng!


Chris is a fellow digital forensics enthusiast who got his Master’s at Champlain College. He has been sharing his journey—and plenty of insightful research—over on his blog: ogmini.github.io. Whether you’re new to forensics or an experienced professional, his posts offer a glimpse into both his academic and practical experiences, including:

Transitioning into Digital Forensics: Chris discusses how the Champlain Master’s program shaped his approach to investigations, the tools he’s learning, and how he’s applying his newly acquired skills to real-world scenarios.

Notepad State Files: One of his standout research topics dives into the forensic artifacts left behind by Notepad, shedding light on how state files can reveal a surprising amount of information during investigations.


I’ve known Chris for a while, and it’s been fantastic to watch him grow as a practitioner—now the rest of the community gets to witness it as well.


So here’s to you, Chris! I hope everyone reading takes the time to check out his blog and see what he’s been working on. 

Daily Blog #719: Installing Project Adaz

Hello Reader,

Following up on our last post, I’m now testing the installation process for Project Adaz to see if it’s still functional. While the project is marked as "maintained," confirming it’s installable on a Windows 11 system is a different matter entirely.

Below are my updated installation instructions to ensure a smoother setup:


Updated Installation Instructions for Project Adaz

  1. Clone the Repository
    Assuming you already have Git installed, create a directory for the project, then run the following command:

    git clone https://github.com/christophetd/Adaz.git
    
  2. Set Up the Python Environment
    Navigate to the newly created Adaz directory and execute the following commands:

    python3 -m venv ansible/venv
    source ansible/venv/bin/activate
    pip install -r ./ansible/requirements.txt
    deactivate

    The source command applies to WSL or Git Bash; on native Windows, activate with ansible\venv\Scripts\activate (cmd) or .\ansible\venv\Scripts\Activate.ps1 (PowerShell) instead.
    
  3. Prepare Terraform
    Download Terraform and extract it to the terraform directory within the adaz project folder.

  4. Initialize Terraform
    Run the following commands:

    cd terraform
    terraform init
    
  5. Set Up Azure CLI
    Ensure the Azure CLI is installed and log in to your desired Azure account using:

    az login
    
  6. Generate an SSH Key (if needed)
    If you don’t already have an SSH key, generate one and store it in the .ssh directory. On Windows 11, run the following command in the terminal:

    ssh-keygen
    

    Make sure to name the key id_rsa rather than accepting whatever default name your version of ssh-keygen suggests (newer OpenSSH releases default to id_ed25519).

  7. Apply Terraform Configuration
    Navigate to the terraform directory and execute:

    terraform apply
    

Once these steps are complete, Terraform will build an Active Directory-enabled network with an ELK log forwarder to support your project needs.
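If the activation step in part 2 fails for you, it is usually because the venv layout differs by platform: the activation script lands in venv/bin/ on Linux, macOS, WSL, and Git Bash, but in venv\Scripts\ on native Windows. A throwaway sketch to see which layout your Python produces (the path is illustrative, and --without-pip just keeps the scratch venv quick):

```shell
# Create a scratch venv and print where its activation script landed:
# venv/bin/ on Linux, macOS, WSL, and Git Bash; venv\Scripts\ on native Windows.
python3 -m venv --without-pip /tmp/venv_layout_check
ls /tmp/venv_layout_check/bin/activate 2>/dev/null \
  || ls /tmp/venv_layout_check/Scripts/activate
```

Whichever path prints is the one to source (or dot-source in PowerShell) when activating the project's environment.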

Tomorrow we can see if it was successful.

Daily Blog #718: Building test environments in 2025

Hello Reader,

A while back, I shared a post on LinkedIn about building test environments for simulating attacks and creating better training datasets. This is something I’ve done extensively for both my coworkers and my SANS students. With the discontinuation of Detection Lab several years ago, I started exploring alternatives. After reviewing the issues section of Detection Lab and consulting ChatGPT O1, I’ve identified two promising replacements that are currently being maintained:


1. Project ADAZ

Four years ago, Christophe Tafani-Dereeper joined us on the Forensic Lunch to discuss his Azure-supported project for spinning up instrumented networks for testing. According to his GitHub page, it’s still being actively updated. I’ll be revisiting Project ADAZ in my upcoming blog posts to see how it performs today and whether it still meets my needs as it did back then.

Key Features:

  • ELK Backend: Provides a robust and widely-used stack for log aggregation, analysis, and visualization.
  • Azure Integration: Leverages Azure to create and manage the test environment, making it ideal for organizations already invested in Microsoft’s ecosystem.
  • Open Source: Free to use, with full access to the source code for customization.

Limitations:

  • Azure Costs: While the software is free, the resources used on Azure (e.g., VMs, storage, bandwidth) can add up quickly.
  • Azure Dependency: It’s tightly coupled with Azure, which may not be ideal for those working with other cloud providers or looking for multi-cloud solutions.
  • Complexity: Initial setup and configuration may require familiarity with Azure, ELK, and Terraform. It is well documented, though, and I found it easy to set up.

2. Splunk Attack Range

The Splunk Threat Research Team has developed an instrumented network-building script, specifically designed for collecting and analyzing logs with Splunk. It’s another compelling option for creating test environments.

Key Features:

  • Broad Platform Support: Works with VirtualBox, Azure, and AWS, offering flexibility across various deployment scenarios.
  • Splunk-Centric: Designed to send logs directly to Splunk, enabling quick analysis and visualization.
  • Actively Maintained: Updates and support from the Splunk Threat Research Team ensure compatibility with current Splunk releases and threat models.
  • Attack Simulations: Pre-configured to simulate adversary techniques using open-source tools like Atomic Red Team, enabling realistic threat scenarios.

Limitations:

  • Splunk Dependency: Works best with Splunk as the log receiver, making it less attractive for organizations using alternative log aggregation solutions like ELK.
  • Resource Requirements: Environments built with Splunk Attack Range can be resource-intensive, requiring significant compute and storage, especially for larger simulations.
  • Learning Curve: Requires familiarity with Splunk configurations and potential tuning for specific use cases.

What’s Next?

I’ll be deploying both of these solutions in my test environments to compare their performance, usability, and suitability for various scenarios. Additionally, I’m on the lookout for robust Terraform scripts to build similar environments with cloud-based identity providers (e.g., Azure AD or Google Cloud Identity) instead of traditional local Active Directory.
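I haven't found such scripts yet, but to make concrete what I mean: instead of domain-joining VMs to a local Active Directory, they would declare identity objects directly in the cloud directory. A minimal, purely illustrative fragment using Terraform's azuread provider (every name and value here is made up):

```hcl
# Illustrative only: a single lab user declared in Azure AD / Entra ID.
# A real lab-building script would add groups, app registrations, and
# instrumented VMs around building blocks like this.
provider "azuread" {}

resource "azuread_user" "lab_user" {
  user_principal_name = "labuser01@example.onmicrosoft.com"
  display_name        = "Lab User 01"
  password            = "ChangeMe-Immediately-1!"
}
```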

If you know of any such scripts or have experience with either of these projects, please share your thoughts in the comments below—I’d love to hear your insights!


Daily Blog #717: Getting free Azure credits for testing

Hello Reader,

I’m posting this from my phone while I fly back from a forensic inspection. I noticed that this week's Sunday Funday already has contest entries coming in (great job, Ogmini!) while last week was lighter on entries. I thought about the issue and wondered if part of it was people not knowing how to get free Azure credits for testing.


So I asked ChatGPT to list out the most common ways to get free Azure credits; let me know what I missed.


1. Azure Free Account

What You Get:

$200 Free Credit: Available for the first 30 days to explore any Azure service.

Free Services for 12 Months: Includes popular services like virtual machines, storage, and databases.

Always Free Services: More than 55 services are available for free with limited usage, such as Azure App Service and Functions.

Eligibility: Open to new Azure customers.


Sign up for a free account here.


2. Azure for Students

What You Get:

$100 Free Credit: No credit card required for verification.

Access to free developer tools such as Visual Studio Code, GitHub, and more.

Eligibility: Must be a verified student aged 18+ with a valid academic email address.

Additional Perks: Free access to select learning resources and Azure certifications.


Learn more and apply for Azure for Students.


There were other options presented, but they're much more specialized, such as programs for business startups.


So the next time you see an Azure challenge, sign up for free credits and give it a go!

Daily Blog #716: Sunday Funday 1/12/25

 

Hello Reader,

It's Sunday! That means it's time for another challenge. This week we're going back to our roots with some digital forensics artifact testing. SRUM is collected, parsed, and relied on by multiple types of investigations, but how many of us have ever validated the metrics it presents?


The Prize:

$100 Amazon Giftcard


The Rules:

  1. You must post your answer before Friday 1/17/25 7PM CST (GMT -6)
  2. The most complete answer wins
  3. You are allowed to edit your answer after posting
  4. If two answers are too similar for one to win, the one with the earlier posting time wins
  5. Be specific and be thoughtful
  6. Anonymous entries are allowed; please email them to dlcowen@gmail.com and state in your email whether you would like to remain anonymous if you win
  7. In order for an anonymous winner to receive a prize they must give their name to me, but I will not release it in a blog post
  8. AI assistance is welcome, but if a post is deemed to be entirely AI-written it will not qualify for a prize


The Challenge:
With so many of us relying on SRUM for so many different uses, it's time to do some validation of the counters so many people cite. For this challenge you will test and validate the following SRUM-collected metrics and document whether they accurately capture the data or whether a skew is present.

Use cases to test and validate on Windows 11 or Windows 10 (you must document which):
1. Copying data between two drives using copy and paste (look for disk read and write activity)
2. Uploading data to an online service of your choice (look for process network traffic)
3. Wiping files (look for disk read and write activity)

Bonus points for attempting different popular utilities/functions.
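To give one concrete approach for the disk-activity cases: generate a byte-for-byte known amount of write activity from a single process, then compare that figure with what SRUM later records for the process. A minimal sketch for a Unix-style shell (the file name and size are arbitrary choices of mine; on native Windows you would use something like fsutil or a PowerShell loop instead, and note that under WSL the I/O may be attributed to WSL's own processes rather than a tool you mean to test):

```shell
# Write exactly 100 MiB (104,857,600 bytes) of random data and flush it to
# disk, producing a known ground-truth figure to compare against the
# bytes-written counter SRUM records for this process.
dd if=/dev/urandom of=srum_write_test.bin bs=1M count=100 conv=fsync
stat -c %s srum_write_test.bin   # should print 104857600
```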

Daily Blog #715: Solution Saturday 1/11/25

Hello Reader,
This first week back of Sunday Fundays made me realize that I need to update the rules to account for our new times. David Nides, with the help of an AI friend, has won this week's challenge with the best entry submitted. However, for tomorrow's challenge, expect that while I appreciate the help of AI in your research, I will be expecting more human involvement in your submissions.

The Challenge:
What evidence is left behind in Azure when an attacker runs BloodHound or any derivative like SharpHound? You should document at least two scenarios:

  1. Default logging
  2. Turning on any optional logging you want to test

Your response can be a link to your own blog, an email, a document, etc. Bonus points if you point out specific indicators that can be searched for or alerted off of.

The Winning Answer:
David Nides

Here's a breakdown of the evidence left behind in Azure when an attacker runs BloodHound/SharpHound, covering both default and optional logging scenarios:

Understanding the Attack:

BloodHound and SharpHound work by querying Active Directory (AD) to map relationships between users, groups, computers, and other objects. In an Azure context, this typically means querying Azure Active Directory (Azure AD) via the Microsoft Graph API or, if AD Connect is in use, on-premises AD. The attack itself doesn't directly interact with Azure resources (like VMs or storage accounts) unless the attacker has already compromised credentials that grant them such access. The focus is on the queries made against the directory service.

Scenario 1: Default Logging

By default, Azure AD provides some logging, but it may not be granular enough to explicitly identify BloodHound/SharpHound activity. The primary logs of interest are:

  • Azure Resource Manager Activity Logs: These logs show any resource management operations, such as creation or modification of resources.
  • Azure AD Audit Logs: These logs record directory activities (sign-ins, group changes, user updates, application registrations, etc.). While they might show unusual patterns of queries (e.g., a large number of Get-AzureADUser or Get-AzureADGroupMember calls in a short timeframe), they won't specifically flag "BloodHound."
    • Limitations: Default audit logs often have limited retention and may not capture every low-level query.
  • Sign-in Logs: These detail user sign-ins/auth attempts, useful for identifying suspicious logins from unusual locations or with compromised credentials.
    • Limitations: These focus on authentication events, not subsequent data-gathering queries.

Indicators (Default Logging):

  • High volume of directory read operations: Look for a large number of Get-AzureADUser, Get-AzureADGroupMember, or Get-AzureADServicePrincipal calls from one source in a short time.
  • Unusual application access: If SharpHound uses a registered application (service principal), check logs for unexpected patterns by that application.
  • Sign-ins from unusual locations: Analyze sign-in logs for unexpected IPs or geographies.

Scenario 2: Optional Logging (Recommended)

For more detailed insights and detection, enable or use:

  • Diagnostic Settings for Azure AD: Configure these to send Azure AD audit logs and sign-in logs to a Log Analytics workspace, Event Hub, or storage for advanced analysis.
  • Microsoft Graph API Audit Logs: If supported by your license, these logs provide the most granular view of Graph API calls (ideal for detecting SharpHound).
  • Azure Advanced Threat Protection (ATP) / Microsoft Defender for Identity: Provides alerts/logs for suspicious activities like lateral movement or reconnaissance.
  • Azure Security Center (Defender for Cloud): Offers a unified view of security alerts and recommendations.
  • Azure Monitor / Sentinel: Aggregates logs and allows custom queries/detections for enumeration activities.

Indicators (Optional Logging):

  • Specific Graph API queries: Look for /users/{id}/memberOf, /groups/{id}/members, etc.
  • Large numbers of requests: A sudden spike to Graph API endpoints suggests enumeration.
  • User agent strings: Can reveal known SharpHound signatures (though attackers may spoof).
  • Unusual sign-in patterns: Sign-ins from unknown locations or devices deviating from normal user behavior.
  • Excessive directory queries: A high volume of read-based requests can indicate reconnaissance.
  • Changes to directory roles/groups: Any unexpected role or group membership changes might indicate privilege escalation attempts.
  • Alerts from Azure ATP/Security Center/Sentinel: Check these products for out-of-the-box or custom detection rules that spot enumeration behavior.

Sample KQL for Detection:

// Flag any source issuing more than 100 group-membership reads within an hour
AuditLogs
| where OperationName has "Get-AzureADGroupMember"
| summarize count() by CallerIpAddress, UserDisplayName, bin(TimeGenerated, 1h)
| where count_ > 100
| render table

Key Takeaways:

  • Default logging is limited. Enable diagnostic settings and centralize logs for better visibility.
  • Detect enumeration via patterns of directory queries rather than a specific "BloodHound" signature.
  • Graph API audit logs (when available) are your best bet for catching SharpHound usage.
  • Correlate logs with other security signals (threat intel, endpoint alerts, etc.) for a fuller defense.

Daily Blog #714: Forensic Lunch 1/10/25 with Wyatt Roersma talking about fine tuning AI models

The Forensic Lunch is Back! 🍴

Hello Readers,

I'm excited to announce that The Forensic Lunch is back with another episode! This week, we had the privilege of hosting Wyatt Roersma, who shared his insights on training open-source AI models for specialized tasks.

Wyatt has been exploring how to take open-source AI models, like Qwen-2.5, and train them using examples such as YARA rules and targeted prompts to enhance their usefulness for specific applications. In the episode, he walks us through the process step-by-step, empowering you to apply similar techniques to solve your unique challenges.

For instance, I'm currently experimenting with getting AI models to write dfvfs code. While the models are fairly accurate, I believe with a bit of fine-tuning and additional training, they could become even more precise and reliable.

Key Resources from Wyatt's Discussion

Here are some invaluable links to help you dive deeper into the topics discussed in the episode:

Watch the Episode

You can catch the full episode at the link below and learn how to start training your own open-source AI models to tackle specialized problems:

https://www.youtube.com/live/z6QkYHo97k0

Daily Blog #713: Developing an AWS Examination Tool Part 4

Hello Reader,

Development continues! What all did we do today? Well, here is the automated commit message the model made for me:

Enhance AWS Enumerator Tool with Lambda and Gateway Resource Support

- Added functionality to enumerate and display AWS Lambda functions, including details such as runtime, memory, timeout, and VPC configuration.
- Implemented scanning for Internet and NAT Gateways, capturing their state, type, and associated VPCs.
- Introduced a new Network Security tab in the GUI for analyzing security configurations across accounts, including security groups and network ACLs.
- Updated README.md to reflect new features and permissions required for Lambda and Gateway resource access.
- Improved error handling and progress tracking during resource scans.

This commit significantly enhances the AWS Enumerator Tool's capabilities for managing and analyzing AWS resources.


Tomorrow is the Forensic Lunch, make sure to tune in!