
Daily Blog #790: Is your new contractor from North Korea?

 

Hello Reader,

You may have seen alerts from the FBI like this:


Many of us working investigations have encountered one of these cases in the last year. A company typically finds out in one of several ways:

1. The North Korean IT worker's VPN drops and exposes a Chinese or North Korean IP

2. Someone appears on camera who does not match the original photos taken

3. The FBI reaches out to you

4. You notice suspicious activity on a new developer's system

In all of these examples, what you'll often find is a North Korean citizen who has been tasked with generating revenue for their government. Many organizations have even talked about how the North Korean IT worker was a model employee, maybe even one of their best. In other cases I've seen, the North Korean IT worker is just creating busy work and doing the bare minimum, like something out of the overemployed subreddit.

In either case it can be easy to lower your guard, especially when their actions appear to be aimed more at earning income than encrypting your systems. However, given the opportunity, the same model worker will steal all of your secrets and extort you.

 “To prop up its brutal regime, the North Korean government directs IT workers to gain employment through fraud, steal sensitive information from U.S. companies, and siphon money back to the DPRK,” said Deputy Attorney General Lisa Monaco.

 

So if you find yourself with employees who took work from home to a new level, make sure to carefully review their work, changes, and access. You may be lucky like some of my clients and find they were just collecting a paycheck, but you may also find a trail of stolen data or code modifications.
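If you need a starting point for that review and the contractor's work lives in git, here is a minimal sketch (the author address is a placeholder, not from any real case):

# List every commit made under the suspect identity, with the files touched
git log --author="contractor@example.com" --name-status --date=iso

# Rough measure of how much code they actually contributed
git log --author="contractor@example.com" --pretty=tformat: --numstat |
  awk '{added+=$1; deleted+=$2} END {print added " lines added, " deleted " deleted"}'

Pair this with a review of their VPN and repository access logs; commits alone won't show data they read but never changed.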

Daily Blog #789: Things not to do when creating test clouds part 3

 

Hello Reader,

I wanted to wrap up this series by sharing something surprising that actually worked.

While cloud providers are very cautious about IP ranges and phone numbers, they seem completely fine with someone else paying the bill. For example, after jumping through multiple hoops to set up my fictional company with cleverly named employees, the providers didn't mind that the credit card used for billing was under my real name.

I suppose this makes sense, since it's common for an IT professional to set things up while finance handles payment. Still, I found this amusing.

So, even if you encounter challenges creating a fake company on cloud platforms, you certainly won't have any trouble paying for it!


Daily Blog #788: Things not to do when creating test clouds part 2

Hello Reader,

In my previous post, I discussed the challenges encountered when signing up for cloud services using IP addresses originating from AWS EC2. Today, I'd like to focus on another common hurdle: receiving verification text messages once you've successfully started the sign-up process.

Here are a few key observations:

1. If you've created multiple accounts over the years, cloud providers eventually flag your phone number as overused, rendering it ineligible for verification purposes.

2. Attempts to circumvent this restriction by using text-to-email services typically fail because providers cross-reference phone number blocks against mobile carrier assignments.

3. Even if you initially created an account using your personal system and subsequently logged in from an AWS IP address to create another, you might face double scrutiny, resulting in providers rejecting your phone number entirely.

To navigate around these restrictions, I opted for a straightforward solution: purchasing a used Google Pixel smartphone from Amazon and subscribing to a $20/month Google Fi plan using an eSIM. This approach provided me with a reliable "burner" number accepted by cloud providers, simplifying the verification process considerably.



Daily Blog #787: Things not to do when creating test clouds part 1

 


Hello Reader,

Today I wanted to share an important lesson I learned while creating a test cloud environment. Whenever I need to generate a test dataset for my SANS class or other public events, I typically create a new fictional company to host my tests. This time, I thought I'd simplify my life by performing all cloud setups within an AWS VM, allowing me to conveniently store and save snapshots for future use.

However, I inadvertently discovered a detection rule shared by AWS, Microsoft, and Google:

"Never allow account sign-ups originating from an AWS EC2 IP—EVER."

Here's what happened when I attempted to create new accounts from an AWS EC2 instance:

  • Microsoft Azure: Allowed initial sign-up, but redirected me to a blank "unknown error" page.
  • Microsoft 365: Similarly allowed account creation attempts but ended in an error.
  • Outlook.com: Immediately displayed an error preventing account creation.
  • Google Cloud: Appeared to allow account creation initially, but consistently rejected every phone number provided for validation.

The key takeaway is clear: Due to extensive fraud originating from cloud IP ranges, you must use either a VPS or your personal IP for such activities.
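If you want to check in advance whether the IP you're about to sign up from will trip this rule, here's a minimal sketch against AWS's published range list (this helper is my own, assuming curl, jq, and python3 are installed; it only checks AWS's IPv4 ranges):

#!/bin/bash
# Check whether our current egress IP sits inside AWS's published ranges.
MY_IP=$(curl -s https://checkip.amazonaws.com)

curl -s https://ip-ranges.amazonaws.com/ip-ranges.json |
  jq -r '.prefixes[].ip_prefix' |
  python3 -c '
import ipaddress, sys
ip = ipaddress.ip_address(sys.argv[1])
for line in sys.stdin:
    if ip in ipaddress.ip_network(line.strip()):
        print(f"{ip} is inside AWS range {line.strip()} - expect sign-up blocks")
        break
else:
    print(f"{ip} is not in any published AWS range")
' "$MY_IP"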

Tomorrow, I'll discuss strategies for reliably receiving SMS verification codes.



Daily Blog #786: Sunday Funday 3/23/25

 


Hello Reader, 

Last week we focused on SSH logins and tunnels between two Linux systems. At the conclusion, Chris Eng of the ogmini blog proposed an interesting question: what would a Windows SSH server leave behind? So this week, let's find out!

The Prize:

$100 Amazon Giftcard


The Rules:

  1. You must post your answer before Friday 3/28/25 7PM CST (GMT -6)
  2. The most complete answer wins
  3. You are allowed to edit your answer after posting
  4. If two answers are too similar for one to win, the one with the earlier posting time wins
  5. Be specific and be thoughtful
  6. Anonymous entries are allowed; please email them to dlcowen@gmail.com and state in your email whether you would like to be named if you win.
  7. For an anonymous winner to receive a prize they must give their name to me, but I will not release it in a blog post.
  8. AI assistance is welcome, but if a post is deemed to be entirely AI written it will not qualify for a prize.


The Challenge:

 Test what artifacts are left behind from SSHing into a Windows 11 or 10 system using the native SSH server. Bonus points for tunnels. 
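If you want a quick way to generate the artifacts in question, here is a minimal sketch of the client side (hostname, user, and ports are placeholders; the Windows box needs the built-in OpenSSH Server feature enabled first):

# Plain interactive login against the native Windows OpenSSH server
ssh labuser@winbox

# Local forward: expose the Windows box's RDP port on the analyst machine
ssh -L 13389:localhost:3389 labuser@winbox

# Remote forward: let the Windows box reach a service on the client
ssh -R 18080:localhost:8080 labuser@winbox

Run each variant, then go hunting for what the server recorded.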




Daily Blog #785: Solution Saturday 3/22/25

 

Hello Reader,

This week's SSH challenge had several contenders. It's always interesting to see what does and does not get your attention and time! I think the winning answer should help many people figure out where to look, and it also opens the door to some more advanced scenarios we can explore!

 

The Challenge:

What are all of the artifacts left behind on a Linux system (both server and client) when someone authenticates via SSH and creates an SSH tunnel?
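Before you read the winning answer, a few hedged starting points for this kind of hunt (log locations vary by distro; Debian-style paths shown):

# Server side: accepted SSH authentications
grep -E 'sshd.*Accepted (password|publickey)' /var/log/auth.log

# Live tunnels: ssh/sshd processes holding listening sockets
ss -tlnp | grep ssh

# Client side: servers this machine has connected to
cat ~/.ssh/known_hosts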

 

The Winning Answer:

 Chris Eng with the OG Mini blog:

https://ogmini.github.io/2025/03/21/David-Cowen-Sunday-Funday-SSH.html

 



Daily Blog #784: Validating linux systems with Yum

Hello Reader,

In prior posts I've covered using rpm to validate packages, but there are other package managers out there. I've decided to look at each package manager individually, and then maybe we can build a conditional script to handle all of them. Here is the yum version:

 

#!/bin/bash

# Files to store results
VERIFIED="verified"
FAILURES="failures"
DEBUG="debug"

# Clean previous results
> "$VERIFIED"
> "$FAILURES"
> "$DEBUG"

# Iterate over installed packages managed by yum
for package in $(yum list installed | awk 'NR>1 {print $1}' | cut -d. -f1); do
  echo "Processing package: $package"

  # Find repository URL
  repo_url=$(yumdownloader --urls "$package" 2>/dev/null | head -n 1)

  if [[ -z "$repo_url" ]]; then
    echo "Repository URL not found for package: $package" | tee -a "$FAILURES"
    echo "$repo_url $package" | tee -a "$DEBUG"
    continue
  fi

  # Download RPM package temporarily
  tmp_rpm="/tmp/${package}.rpm"
  curl -s -L "$repo_url" -o "$tmp_rpm"

  if [[ ! -f "$tmp_rpm" ]]; then
    echo "Failed to download RPM - Package: $package" | tee -a "$FAILURES"
    echo "$repo_url $package" | tee -a "$DEBUG"
    continue
  fi

  # Extract the RPM into a scratch directory and hash every file it contains
  extract_dir=$(mktemp -d)
  repoquery_hashes=$(cd "$extract_dir" && rpm2cpio "$tmp_rpm" | cpio -idm --no-absolute-filenames 2>/dev/null; find "$extract_dir" -type f -exec sha256sum {} \;)

  # Verify files
  echo "$repoquery_hashes" | while read -r repo_hash repo_file; do
    # Map the extracted path back to its location on the live system
    local_file="${repo_file#"$extract_dir"}"

    # Check file existence and type
    if [[ ! -x "$local_file" ]] || [[ ! -f "$local_file" ]] || [[ -h "$local_file" ]]; then
      continue
    fi

    # Calculate local disk hash
    disk_hash=$(sha256sum "$local_file" 2>/dev/null | awk '{print $1}')

    if [[ "$disk_hash" == "$repo_hash" ]]; then
      echo "Verified - Package: $package, File: $local_file" >> "$VERIFIED"
    else
      echo "Hash mismatch (Repository) - Package: $package, File: $local_file" | tee -a "$FAILURES"
      echo "$disk_hash $repo_hash $package $local_file" | tee -a "$DEBUG"
    fi
  done

  # Clean up the scratch directory and downloaded RPM
  rm -rf "$extract_dir" "$tmp_rpm"
done

echo "Verification complete. Results are stored in '$VERIFIED' and '$FAILURES'."

 

Daily Blog #783: Automating rpm checks

 


Hello Reader,

I'm recreating my 24 year old perl script in bash to allow someone to validate all of the installed rpms on a system against both the local rpm DB and the repository they came from. This should provide a certain sense of comfort about whether any core system packages have been manipulated.

 

#!/bin/bash

# Files to store results
VERIFIED="verified"
FAILURES="failures"
DEBUG="debug"

# Clean previous results
> "$VERIFIED"
> "$FAILURES"
> "$DEBUG"

# Iterate over installed RPM packages
for package in $(rpm -qa); do
  echo "Processing package: $package"

  # Find repository URL
  repo_url=$(dnf repoquery -q --location "$package" 2>/dev/null | head -n 1)

  if [[ -z "$repo_url" ]]; then
    echo "Repository URL not found for package: $package" | tee -a "$FAILURES"
    echo "$repo_url $package" | tee -a "$DEBUG"
    continue
  fi

  # Get local file hashes from RPM database
  rpm -ql --dump "$package" | while read -r line; do
    file_path=$(echo "$line" | awk '{print $1}')
    rpm_hash=$(echo "$line" | awk '{print $4}')

    # Skip directories and non-executable files
    if [[ ! -x "$file_path" ]]; then
       continue
    fi
    
    if [[ ! -f "$file_path" ]]; then
       continue
    fi

    if [[ -h "$file_path" ]]; then
       continue
    fi
    # Calculate local disk hash
    disk_hash=$(sha256sum "$file_path" 2>/dev/null | awk '{print $1}')

    if [[ "$disk_hash" != "$rpm_hash" ]]; then
      echo "Hash mismatch (Local RPM DB) - Package: $package, File: $file_path" | tee -a "$FAILURES"
      echo "$dish_hash $rpm_hash $package $file_path" | tee -a "$DEBUG"
      continue
    fi

    # Get repository RPM hash
    repo_hash=$(rpm -qp --dump "$repo_url" 2>/dev/null | grep "^$file_path " | awk '{print $4}')

    if [[ -z "$repo_hash" ]]; then
      echo "File not found in repository RPM - Package: $package, File: $file_path" | tee -a "$FAILURES"
      echo "$repo_hash $repo_url $file_path" | tee -a "$DEBUG"
      continue
    fi

    if [[ "$disk_hash" == "$repo_hash" ]]; then
      echo "Verified - Package: $package, File: $file_path" >> "$VERIFIED"
    else
      echo "Hash mismatch (Repository) - Package: $package, File: $file_path" | tee -a "$FAILURES"
      echo "$disk_hash $repo_hash $package $file_path" | tee -a "$DEBUG"
    fi
  done
done

echo "Verification complete. Results are stored in '$VERIFIED' and '$FAILURES'."
