
Daily Blog #788: Things not to do when creating test clouds part 2



Hello Reader,

In my previous post, I discussed the challenges encountered when signing up for cloud services using IP addresses originating from AWS EC2. Today, I'd like to focus on another common hurdle: receiving verification text messages once you've successfully started the sign-up process.

Here are a few key observations:

1. If you've created multiple accounts over the years, cloud providers eventually flag your phone number as overused, rendering it ineligible for verification purposes.

2. Attempts to circumvent this restriction by using text-to-email services typically fail because providers cross-reference phone number blocks against mobile carrier assignments.

3. Even if you initially created an account using your personal system and subsequently logged in from an AWS IP address to create another, you might face double scrutiny, resulting in providers rejecting your phone number entirely.

To navigate around these restrictions, I opted for a straightforward solution: purchasing a used Google Pixel smartphone from Amazon and subscribing to a $20/month Google Fi plan using an eSIM. This approach provided me with a reliable "burner" number accepted by cloud providers, simplifying the verification process considerably.


Daily Blog #787: Things not to do when creating test clouds part 1

 


Hello Reader,

Today I wanted to share an important lesson I learned while creating a test cloud environment. Whenever I need to generate a test dataset for my SANS class or other public events, I typically create a new fictional company to host my tests. This time, I thought I'd simplify my life by performing all cloud setups within an AWS VM, allowing me to conveniently store and save snapshots for future use.

However, I inadvertently discovered a detection rule shared by AWS, Microsoft, and Google:

"Never allow account sign-ups originating from an AWS EC2 IP—EVER."

Here's what happened when I attempted to create new accounts from an AWS EC2 instance:

  • Microsoft Azure: Allowed initial sign-up, but redirected me to a blank "unknown error" page.
  • Microsoft 365: Similarly allowed account creation attempts but ended in an error.
  • Outlook.com: Immediately displayed an error preventing account creation.
  • Google Cloud: Appeared to allow account creation initially, but consistently rejected every phone number provided for validation.

The key takeaway is clear: Due to extensive fraud originating from cloud IP ranges, you must use either a VPS or your personal IP for such activities.

Tomorrow, I'll discuss strategies for reliably receiving SMS verification codes.

Daily Blog #786: Sunday Funday 3/23/25

 


Hello Reader, 

Last week we focused on SSH logins and tunnels between two Linux systems. At the conclusion of that challenge, Chris Eng of the ogmini blog proposed an interesting question: what would a Windows SSH server leave behind? So this week, let's find out!

The Prize:

$100 Amazon Giftcard


The Rules:

  1. You must post your answer before Friday 3/28/25 7PM CST (GMT -6)
  2. The most complete answer wins
  3. You are allowed to edit your answer after posting
  4. If two answers are too similar for one to win, the one with the earlier posting time wins
  5. Be specific and be thoughtful
  6. Anonymous entries are allowed; please email them to dlcowen@gmail.com and state in your email whether you would like to remain anonymous if you win.
  7. In order for an anonymous winner to receive a prize they must give their name to me, but I will not release it in a blog post
  8. AI assistance is welcomed, but if a post is deemed to be entirely AI written it will not qualify for a prize.


The Challenge:

Test what artifacts are left behind from SSHing into a Windows 11 or 10 system using the native SSH server. Bonus points for tunnels.
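If you want a starting point for building the test system, here's a minimal setup sketch. My assumptions: a Windows 10/11 target with the optional OpenSSH Server capability available, and placeholder user/host names throughout.

```shell
# On the Windows target, in an elevated PowerShell session, install and
# start the built-in OpenSSH server:
#   Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0
#   Start-Service sshd
#   Set-Service -Name sshd -StartupType 'Automatic'

# From the client, authenticate and open a local-forward tunnel for the
# bonus points (placeholder user/host; forwards local port 9999 to RDP
# on the target):
ssh -L 9999:localhost:3389 user@windows-host
```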



Daily Blog #785: Solution Saturday 3/22/25

 

Hello Reader,

This week's SSH challenge had several contenders. It's always interesting to see what does and does not get your attention and time! I think this answer should help many people know where to look, and it also opens the door to some more advanced scenarios we can explore!

 

The Challenge:

What are all of the artifacts left behind on a Linux system (both server and client) when someone authenticates via SSH and creates an SSH tunnel?

 

The Winning Answer:

 Chris Eng with the OG Mini blog:

https://ogmini.github.io/2025/03/21/David-Cowen-Sunday-Funday-SSH.html

 


Daily Blog #784: Validating linux systems with Yum

Hello Reader,

In prior posts I've written about using rpm to validate packages, but there are other package managers out there. I've decided to look into each package manager individually; then maybe we can make a conditional script to handle all of them. Here is the yum version:

 

#!/bin/bash

# Files to store results
VERIFIED="verified"
FAILURES="failures"
DEBUG="debug"

# Clean previous results
> "$VERIFIED"
> "$FAILURES"
> "$DEBUG"

# Iterate over installed packages managed by yum
# (package names can wrap across lines in "yum list" output, so use repoquery)
for package in $(repoquery --installed --qf '%{name}' 2>/dev/null | sort -u); do
  echo "Processing package: $package"

  # Find repository URL
  repo_url=$(yumdownloader --urls "$package" 2>/dev/null | head -n 1)

  if [[ -z "$repo_url" ]]; then
    echo "Repository URL not found for package: $package" | tee -a "$FAILURES"
    echo "$repo_url $package" | tee -a "$DEBUG"
    continue
  fi

  # Download the RPM temporarily (-f so HTTP errors don't leave a bogus file)
  tmp_rpm="/tmp/${package}.rpm"
  curl -sfL "$repo_url" -o "$tmp_rpm"

  if [[ ! -s "$tmp_rpm" ]]; then
    echo "Failed to download RPM - Package: $package" | tee -a "$FAILURES"
    echo "$repo_url $package" | tee -a "$DEBUG"
    continue
  fi

  # Extract the RPM into a scratch directory and hash every file it contains
  tmp_dir=$(mktemp -d)
  repoquery_hashes=$(cd "$tmp_dir" && { rpm2cpio "$tmp_rpm" | cpio -idm --no-absolute-filenames 2>/dev/null; find . -type f -exec sha256sum {} \; ; })

  # Verify files
  echo "$repoquery_hashes" | while read -r repo_hash repo_file; do
    local_file="${repo_file#.}"

    # Only check regular, executable, non-symlink files
    if [[ ! -x "$local_file" ]] || [[ ! -f "$local_file" ]] || [[ -h "$local_file" ]]; then
      continue
    fi

    # Calculate local disk hash
    disk_hash=$(sha256sum "$local_file" 2>/dev/null | awk '{print $1}')

    if [[ "$disk_hash" == "$repo_hash" ]]; then
      echo "Verified - Package: $package, File: $local_file" >> "$VERIFIED"
    else
      echo "Hash mismatch (Repository) - Package: $package, File: $local_file" | tee -a "$FAILURES"
      echo "$disk_hash $repo_hash $package $local_file" | tee -a "$DEBUG"
    fi
  done

  # Clean up the scratch directory and downloaded RPM (never the script's cwd)
  rm -rf "$tmp_dir" "$tmp_rpm"
done

echo "Verification complete. Results are stored in '$VERIFIED' and '$FAILURES'."


Daily Blog #783: Automating rpm checks

 


Hello Reader,

I'm recreating my 24-year-old Perl script in bash to allow someone to validate all of the installed RPMs on a system against both the local RPM DB and the repository each came from. This should provide a certain sense of comfort as to whether any core system packages have been manipulated.

 

#!/bin/bash

# Files to store results
VERIFIED="verified"
FAILURES="failures"
DEBUG="debug"

# Clean previous results
> "$VERIFIED"
> "$FAILURES"
> "$DEBUG"

# Iterate over installed RPM packages
for package in $(rpm -qa); do
  echo "Processing package: $package"

  # Find repository URL
  repo_url=$(dnf repoquery -q --location "$package" 2>/dev/null | head -n 1)

  if [[ -z "$repo_url" ]]; then
    echo "Repository URL not found for package: $package" | tee -a "$FAILURES"
    echo "$repo_url $package" | tee -a "$DEBUG"
    continue
  fi

  # Get local file hashes from RPM database
  rpm -ql --dump "$package" | while read -r line; do
    file_path=$(echo "$line" | awk '{print $1}')
    rpm_hash=$(echo "$line" | awk '{print $4}')

    # Skip directories and non-executable files
    if [[ ! -x "$file_path" ]]; then
       continue
    fi
    
    if [[ ! -f "$file_path" ]]; then
       continue
    fi

    if [[ -h "$file_path" ]]; then
       continue
    fi
    # Calculate local disk hash
    disk_hash=$(sha256sum "$file_path" 2>/dev/null | awk '{print $1}')

    if [[ "$disk_hash" != "$rpm_hash" ]]; then
      echo "Hash mismatch (Local RPM DB) - Package: $package, File: $file_path" | tee -a "$FAILURES"
      echo "$disk_hash $rpm_hash $package $file_path" | tee -a "$DEBUG"
      continue
    fi
    fi

    # Get repository RPM hash (the path is the first field of each --dump line)
    repo_hash=$(rpm -qp --dump "$repo_url" 2>/dev/null | awk -v f="$file_path" '$1 == f {print $4}')

    if [[ -z "$repo_hash" ]]; then
      echo "File not found in repository RPM - Package: $package, File: $file_path" | tee -a "$FAILURES"
      echo "$repo_hash $repo_url $file_path" | tee -a "$DEBUG"
      continue
    fi

    if [[ "$disk_hash" == "$repo_hash" ]]; then
      echo "Verified - Package: $package, File: $file_path" >> "$VERIFIED"
    else
      echo "Hash mismatch (Repository) - Package: $package, File: $file_path" | tee -a "$FAILURES"
      echo "$disk_hash $repo_hash $package $file_path" | tee -a "$DEBUG"
    fi
  done
done

echo "Verification complete. Results are stored in '$VERIFIED' and '$FAILURES'."



 

Daily Blog #782: Validating linux packages other than rpms

 

Hello Reader,

We've talked about validating RPMs in several posts now, but there are other package managers besides rpm. Let's talk about how we can do the same validation with other package managers.

 

1. Debian/Ubuntu (dpkg & debsums)

Install debsums if you haven't already:

sudo apt install debsums

Verify file hashes for a specific package:

sudo debsums -s <package-name>

Verify a specific file:

sudo debsums -s <package-name> | grep /path/to/file

Verify all installed packages:

sudo debsums -cs

2. Arch Linux (pacman)

Check integrity of a specific package:

pacman -Qkk <package-name>

Verify a single file:

pacman -Qkk <package-name> | grep /path/to/file

Verify all installed packages:

pacman -Qkk

3. openSUSE (rpm & zypper)

openSUSE uses RPM, so you can use standard RPM verification commands:

Check integrity of a file against the RPM database:

rpm -Vf /path/to/file

Verify all installed packages:

rpm -Va

4. Alpine Linux (apk)

Newer Alpine Linux versions (3.15+) include the apk audit command:

Verify integrity of a package:

apk audit <package-name>

Verify all installed packages:

apk audit
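To tie these together, here's a minimal dispatcher sketch of the "conditional script" idea: pick whichever tool is present and run its verify-all command from the sections above. This is just my assumption of how such a wrapper might look, not a finished tool.

```shell
#!/bin/sh
# Pick the first available verifier from a candidate list and report it.
# Candidates are passed as arguments so the selection logic is easy to test.
pick_verifier() {
  for c in "$@"; do
    if command -v "$c" >/dev/null 2>&1; then
      echo "$c"
      return 0
    fi
  done
  return 1
}

# Map the chosen tool to its verify-all invocation from the sections above.
run_verify_all() {
  case "$(pick_verifier debsums pacman rpm apk)" in
    debsums) debsums -cs ;;
    pacman)  pacman -Qkk ;;
    rpm)     rpm -Va ;;
    apk)     apk audit ;;
    *)       echo "no supported package manager found" >&2; return 1 ;;
  esac
}
```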


Daily Blog #781: Validating local linux hashes to their distros

 

Hello Reader,

In my previous blog post, I explained how to use the rpm tool to validate a file on disk against the local RPM metadata. But what if you suspect that an entire package—not just a single file—has been tampered with?

This is where we can leverage a powerful feature I mentioned earlier: extracting metadata directly from the Linux distribution’s repository. Since this remote repository should be unaffected by any local security incidents, it allows you to verify that packages like the core system utilities in this example remain unaltered by a potential threat actor.


Step 1: Identify the Repository URL

To fetch the official package version, you first need to determine the correct repository URL. You can do this using the dnf repoquery command (coreutils shown here; you can specify any package name):

dnf repoquery --location coreutils

This will return a URL similar to:

https://ftp.redhat.com/pub/redhat/linux/enterprise/9/en/os/x86_64/Packages/coreutils-8.32-31.el9.x86_64.rpm

Step 2: Extract the Official Package Hash

Now that you have the package URL, you can use rpm to retrieve its metadata, including file hashes, without downloading the full package:

rpm -q --dump -p https://ftp.redhat.com/pub/redhat/linux/enterprise/9/en/os/x86_64/Packages/coreutils-8.32-31.el9.x86_64.rpm

Step 3: Compare Local vs. Repository Hashes

To ensure your package is untouched, compare:

  1. The hash of the local file (e.g., /bin/ls)
  2. The hash stored in the local RPM database
  3. The hash from the official repository package

If all three hashes match, you can be highly confident that your package has not been altered.

Of course, this assumes there isn’t a worst-case scenario where the original distribution’s repository has been compromised—but let’s hope it never comes to that!
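The three-way comparison itself reduces to a tiny check. Below is a minimal sketch; the commented lines show where the real inputs would come from on an RPM-based system (coreutils and /bin/ls as in the example above), and the final calls use illustrative placeholder values, not real digests.

```shell
#!/bin/sh
# verify3 DISK_HASH DB_HASH REPO_HASH -> "intact" only when all three agree.
verify3() {
  if [ "$1" = "$2" ] && [ "$1" = "$3" ]; then
    echo "intact"
  else
    echo "MISMATCH"
  fi
}

# On a real RPM-based system the three inputs would come from (per steps 1-3):
#   disk=$(sha256sum /bin/ls | awk '{print $1}')
#   db=$(rpm -q --dump coreutils | awk -v f=/bin/ls '$1 == f {print $4}')
#   url=$(dnf repoquery --location coreutils | head -n 1)
#   repo=$(rpm -qp --dump "$url" | awk -v f=/bin/ls '$1 == f {print $4}')

# Illustrative placeholder values only:
verify3 aaa aaa aaa   # -> intact
verify3 aaa aaa bbb   # -> MISMATCH
```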

By following these steps, you can verify system integrity efficiently using native Linux tools.

