Sophos XG and Export TCPDUMP pcap files

Recently, I needed to run TCPDUMP on a Sophos XG firewall appliance and export the pcap for further analysis in Wireshark.

On the SG, this was very straightforward: you could just use scp. On the XG appliance, however, when connecting via SSH you have to navigate through a menu before starting a shell, and the Windows secure-copy utility I was using could not handle the menu.

After some consideration, I realized I could drop the pcap in the root of the filesystem used by the user portal, which is /usr/share/userportal, and then download the file via a web browser.

To download the file, use the user portal URL and port, and append the name of the pcap. For example: https://<ip address>/filename.pcap

After the download, remove the file.

To ensure success, keep the following in mind:

  • Keep the size of the pcap small. The partition containing the location where you are going to place the pcap is relatively small.  Filling up the root partition could result in all kinds of unexpected behaviors.
  • Always use filters in tcpdump. You want to keep the system from becoming slow or unresponsive because it is drinking from the firehose. At the very least, filter out your own SSH traffic.
  • CYA: Assume that anything you do on the CLI can and will void your warranty. Sophos support doesn’t want to start taking calls from customers who caused a train wreck by letting the root partition fill up. On the other hand, if you’re using tcpdump, there is a greater chance your skillset is seasoned enough not to let that happen.
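Putting those cautions together, the whole capture-and-export workflow can be sketched as below. The interface name, client IP, and filenames are placeholders for illustration, not values taken from any actual appliance; the appliance-side commands are shown as comments.

```shell
# Our SSH session's client IP and a capture filter that excludes it.
# Both values are placeholders -- substitute your own.
SSH_CLIENT_IP="192.168.1.50"
FILTER="not (host ${SSH_CLIENT_IP} and port 22)"

# On the XG advanced shell, capture at most 1000 packets so the file
# (and the root partition) stays small:
#   tcpdump -i PortA -c 1000 -w /tmp/capture.pcap "$FILTER"
# Stage it in the user portal web root, download via a browser at
# https://<ip address>/capture.pcap, then clean up both copies:
#   cp /tmp/capture.pcap /usr/share/userportal/capture.pcap
#   rm /tmp/capture.pcap /usr/share/userportal/capture.pcap
echo "$FILTER"
```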

How do you transfer pcaps from an XG firewall appliance? Let me know in the comments.

Using Centos 7 as a Time Capsule Server

What follows below is a modified version of darcyliu’s install script. I’ve changed it to account for changes in the newer versions of netatalk.

Starting Point

# For this project, start with a Centos 7 Minimal install. After installation, update the packages to current:

yum -y upgrade

# then reboot the server:

reboot

# When the server is finished rebooting, it is time to get to work. First, let's enable EPEL and install the first group of packages:

yum install -y epel-release
yum install -y rpm-build gcc make wget
# install netatalk
yum install -y avahi-devel cracklib-devel dbus-devel dbus-glib-devel libacl-devel libattr-devel libdb-devel libevent-devel libgcrypt-devel krb5-devel mysql-devel openldap-devel openssl-devel pam-devel quota-devel systemtap-sdt-devel tcp_wrappers-devel libtdb-devel tracker-devel bison
yum install -y docbook-style-xsl flex dconf perl-interpreter
# Now we need to build netatalk. At the time of this writing, 3.1.11 is the current version.
# Install the source RPM for Netatalk:
rpm -ivh netatalk-3.1.*
# Build the RPM from sources
rpmbuild -bb ~/rpmbuild/SPECS/netatalk.spec
# Next install the netatalk binary
yum -y install ~/rpmbuild/RPMS/x86_64/netatalk-3.1.*
# Let's add the configuration files
cat >> /etc/avahi/services/afpd.service << EOF
<?xml version="1.0" standalone='no'?>
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
  <name replace-wildcards="yes">%h</name>
  <service>
    <type>_afpovertcp._tcp</type>
    <port>548</port>
  </service>
</service-group>
EOF
cat >> /etc/netatalk/AppleVolumes.default << EOF
/opt/timemachine TimeMachine allow:tmbackup options:usedots,upriv,tm dperm:0775 fperm:0660 cnidscheme:dbd volsizelimit:200000
EOF
cat >> /etc/nsswitch.conf << EOF
hosts: files mdns4_minimal dns mdns mdns4
EOF
cat >> /etc/netatalk/afp.conf << EOF
[Time Machine]
path = /opt/timemachine
valid users = tmbackup
time machine = yes
EOF
cat >> /etc/netatalk/afpd.conf << EOF
- -transall -uamlist uams_dhx.so,uams_dhx2.so -nosavepassword -advertise_ssh
EOF
# Add a user. This user id and password are what you'll use when you mount the Time Machine share. Also create the directory tree and change its ownership.
useradd tmbackup
mkdir -p /opt/timemachine
chown tmbackup:tmbackup /opt/timemachine
# Set firewall rules
firewall-cmd --zone=public --permanent --add-port=548/tcp
firewall-cmd --zone=public --permanent --add-port=548/udp
firewall-cmd --zone=public --permanent --add-port=5353/tcp
firewall-cmd --zone=public --permanent --add-port=5353/udp
firewall-cmd --zone=public --permanent --add-port=49152/tcp
firewall-cmd --zone=public --permanent --add-port=49152/udp
firewall-cmd --zone=public --permanent --add-port=52883/tcp
firewall-cmd --zone=public --permanent --add-port=52883/udp
firewall-cmd --reload
# Enable and start the services
systemctl enable avahi-daemon
systemctl enable netatalk
systemctl start avahi-daemon.service
systemctl start netatalk
systemctl restart avahi-daemon.service
systemctl restart netatalk
# set password for tmbackup
passwd tmbackup
A word about strategies. If you want to back up more than one Mac, you can simply have the users share the login and password; as long as the Macs have different names, there will be no collisions in the files created. Just use a good password to encrypt each backup.
I’m not a huge fan of sharing credentials; in fact, I think it’s a bad idea. To use more than one login, create all the users and set a good password for each. Next, edit /etc/netatalk/afp.conf and add a duplicate of the entry above for each user, changing the share name (the string between the brackets) and valid users to match the user id. Do one entry per user id.
[Time Machine1]
path = /opt/timemachine/user1
valid users = user1
time machine = yes

[Time Machine2]
path = /opt/timemachine/user2
valid users = user2
time machine = yes
[Time Machine3]
path = /opt/timemachine/user3
valid users = user3
time machine = yes
Next, create the user ids and the folders in /opt/timemachine, and change the ownership of each folder to its user id.
# EG:
adduser user1
adduser user2
adduser user3
mkdir -p /opt/timemachine/user1
mkdir -p /opt/timemachine/user2
mkdir -p /opt/timemachine/user3
chown user1:user1 /opt/timemachine/user1
chown user2:user2 /opt/timemachine/user2
chown user3:user3 /opt/timemachine/user3
# Now set a password on each:
passwd user1
passwd user2
passwd user3
Lastly, reboot the server just to make sure all the services start.  Next, attach to the server.  If you are on the same network, then you should see the server in your browse list.  If the server is on a different subnet, then you’ll have to point to the server manually.  Here’s how:
With Finder as the current app in the foreground, click Go -> Connect to Server.
For server address, type the IP of the server and press enter:
Fill in the login and password from those that you just created.
Next “Open Time Machine Preferences…”
Select your new disk.

How to make a bootable USB stick with a Sophos Bootable Anti-Virus ISO (The Easy Way with Rufus)

The other day, I needed to make a fresh bootable Sophos Bootable Anti-Virus thumb drive.  After downloading the extract tool from Sophos’s free tools section, I began to follow Sophos KB article 111374, which begins:

“The Sophos Bootable Anti-Virus (SBAV) tool allows you to scan and cleanup a computer infected with malware without the need to load the infected operating system. This is useful if the state of the computer’s normal operating system – when booted – prevents cleanup by other means, or the Master Boot Record (MBR) of the computer’s hard drive is infected.

Other SBAV articles assume you want to run the tool from a CD or DVD drive, but you can also use the SBAV tool from a USB pen drive.  For instructions on creating a CD see article 52011.”

Then the KB article instructs the reader to follow a 15+ step method of creating a bootable USB drive. What follows is the method I use, which is faster and just as reliable, using a free utility called Rufus.

Extracting Sophos SBAV ISO (From The KB Article):

  1. Locate the downloaded file (sbav_sfx.exe) and run it.
  2. Select ‘Yes’ if prompted by User Account Control.
  3. Read and ‘Accept’ the End-User License Agreement.
  4. Choose an extraction path and click ‘Extract’. Note: For the rest of this article it is assumed the extraction path is left as the default ‘C:\sbav’.
  5. Open a command prompt (Start | Run | Type: cmd.exe | Press return).
  6. Change directory to the extraction folder (e.g., C:\sbav) with the following command:
    cd c:\sbav
  7. To create the ISO image containing the SBAV tool run the following command:
    sbavc.exe sbav.iso

Create Bootable Thumb Drive Using Rufus

  1. Run Rufus – Select ‘Yes’ if prompted by User Account Control.
  2. To the right of “Create a bootable disk using”, click the CD icon and open the sbav.iso file (it should be located in C:\sbav)
  3. In the drop-down menu next to “Create a bootable disk using”, change the selection to read: ISO Image
  4. Make sure Rufus selected the proper device (your usb thumb drive)
  5. Leave only the following options checked: Quick Format, Create a bootable disk, Create extended label and icon files. These are the defaults, so you shouldn’t actually have to check or uncheck anything.

That’s it – just click Start.

When finished, just eject the thumb drive and you’re ready to go.

Sophos UTM Command-line Useful Shell Commands and Processes: Tuning Web Protection

One of the benefits of working with different customers is that common troubleshooting processes get used often enough that they undergo significant refinement over time. One such process is tuning web protection exceptions via the http.log.

Task: Permitting URLs

Using the linux commands tail, awk, and grep, it is easy to spot all web traffic being blocked for end users. After logging into the linux shell on UTM 9.4 and becoming root, type:

# tail -f http.log | awk -F"\"" '{print $16" "$12" "$38" "$10}'|grep -v " pass "

The output looked like:

x.y.z.226 warn web request warned, forbidden category detected
x.y.z.88 warn web request warned, forbidden category detected

This shows, in real time, all the URLs being blocked and why. In this case, the Microsoft CRL URL was being blocked because the default exception was enforcing the category, and the database does not know that the CRL URL should not be "un-categorized".

The true benefit of this method is that you’ll be able to fine-tune exceptions before anyone opens a support request for a failing process like Office updates.
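The awk invocation splits each log line on double-quote characters, so the even-numbered fields are the quoted values. A synthetic line (not actual http.log output; the field names here are made up) illustrates the technique:

```shell
# Splitting a key="value" style log line on double quotes with awk.
# Even-numbered fields ($2, $4, $6, ...) hold the quoted values.
LINE='id="0299" severity="warn" name="web request blocked, forbidden category detected"'
echo "$LINE" | awk -F'"' '{print $4" "$6}'
```

The same field-splitting idea extends to any quoted field in the line; count the quotes to find the field number you need.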


Password Reuse: Dropbox Breach Lesson

The leak of 68 million Dropbox accounts was the result of password reuse. Dropbox leaked 68 million email addresses (used as logins), password hashes, and other user account details, all because a Dropbox employee reused the same login/password combination from another site. This employee apparently had access to user details. The bad guys did not have to attack all of Dropbox’s security barriers to gain access to its internals; they just needed to find one login/password pair. And they succeeded.

The lesson from the Dropbox breach, which isn’t a new lesson at all: Do not reuse passwords!

Imagine the following scenario. You create an account on a new website using the same email/password as your email account. Six months later, the new website isn’t doing so well, the site is not paying close attention, and its database is leaked.

The best case is that they used strong hashing techniques and your password cannot be recovered. Or worse, your password was hashed but the proper steps were not taken to prevent cracking, and they figure out your password. Either way, the bad guys know your email address, and if they also think they know your password, your email account is next on the list. From there, they look for hints to where you bank, your Amazon account details, etc.
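To see why a fast, unsalted hash offers little protection, hash a common password: the digest is instantly recognizable from public lookup tables. (MD5 is used here purely as an illustration of a weak, fast hash; no real service should store passwords this way.)

```shell
# Hash a well-known weak password with a fast, unsalted hash. This exact
# digest (482c811da5d5b4bc6d497ffa98491e38) appears in public rainbow
# tables, so "recovering" the password is a simple lookup, not cracking.
printf '%s' 'password123' | md5sum | awk '{print $1}'
```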

If you’re telling yourself “this can’t actually happen“, you are not respecting how determined the bad guys are. Why else would they try so hard to breach online services?

Lets review the breach wall of shame:
359,420,698 – MySpace
164,611,595 – LinkedIn
152,445,165 – Adobe
68,648,009 – Dropbox

You might think: how do you limit the impact of a breach, since there appears to be no end to them? Here are a few recommendations.

Never Reuse A Password

Never use the same password with more than one service; use a different, good password for each site. You may choose to use a password manager to keep track of all those passwords. Two suggestions: LastPass and 1Password.

Always Use A Good Password

A good password doesn’t necessarily have to be full of numbers and special characters to be good. A password of 12 characters or more containing upper and lower case draws from a 52-character pool that someone would have to search to randomly guess your password. Add numbers and the pool grows to 62 characters; add special characters and it grows to 80+. If you’re using LastPass or another password manager, let the password manager pick your password and make it as long as the service will allow. Password managers also do you one additional favor: they can fill in login forms with your credentials.
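The character-pool arithmetic translates directly into bits of entropy (length × log2(pool size)); awk works fine as a calculator for a quick sketch:

```shell
# Entropy of a random password in bits: length * log2(pool size).
# 12 random characters from a 52-char pool (upper + lower case):
awk 'BEGIN { printf "%.1f\n", 12 * log(52) / log(2) }'
# Same length from a 62-char pool (add digits):
awk 'BEGIN { printf "%.1f\n", 12 * log(62) / log(2) }'
```

Each added character multiplies the search space by the full pool size, which is why length helps more than clever substitutions.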

Avoid Dictionary Words for A Password

I have been running a honey pot for several years, where I log every password tried. The passwords are variations of dictionary words and commonly used passwords (we know the common passwords because we study the data that is exposed from breaches). Some IT people think it’s also a good practice to use “leet speak” alternative spellings, replacing ‘e’ with ‘3’ for example. The bad guys know this trick; don’t rely on it. The lesson: do not think that adding a few numbers to the end of a dictionary word, or substituting characters in a systematic way, is good enough. It’s not. If your password is based on a dictionary word, change it.

Use 2 Factors for Authentication

Proving identity requires one of 3 factors: something you know, something you are, or something you have. A password is the most common authentication factor (something you know). Most services like Dropbox can optionally use a second factor (something you have) in combination with your password, making it much harder for the bad guys to leverage a leaked password to gain access to your account. Each service may use a different “something you have” factor, but most of the time it is a rotating 6-digit code: you set up an app on your smartphone that generates the same 6 digits every 30 seconds or so. Then when you log in, the website prompts you for the six digits, or you append them to the end of your password. Google has the Google Authenticator app; just check the “app store” for your smart device.

Lastly, how do you know if your information has been compromised? Check a breach-notification site: just enter your user id or email address and see if your details match.

Sophos UTM v9.3 – AD SSO and Web Protection Profiles

Keeping with the spirit of sharing my checklists, here is the Active Directory integration checklist I use when configuring AD SSO for Web Protection Profiles. This is NOT a Web Profile checklist, just the AD portion.

Initial Configuration – DNS and Hostnames:

  • The UTM Hostname – When the UTM is set up, the initial hostname should be a publicly working hostname. That hostname is used in a whole host of configuration locations downstream. If the hostname is not valid on the internet, hostname overrides will have to be used.
  • The UTM must have a valid internal hostname. The hostname used when configuring the utm must be resolvable in the local AD dns.
  • DNS Configuration. Use a DNS availability group on the UTM, all of which point externally. Create a DNS request route to point all internal DNS lookups to your AD DNS server. Lastly, configure your AD servers to forward all external DNS requests to the UTM.

Authentication Services

  • In AD, create a user for the UTM AD service with READ ONLY privileges
  • Set create users automatically
  • Create an AD authentication server, using the read-only AD user created above
  • After creating the AD auth server, be sure to test that the lookup works as intended
  • Join the UTM to the AD domain.

How To Test

  • Test authentication to the user portal with an AD login/password
  • Watch the live logs

Failed Log Entry:

2015:06:08-11:11:33 XXXXX aua[17765]: id="3005" severity="warn" sys="System" sub="auth" name="Authentication failed" srcip="XXXXXXXXXX" host="" user="testuser" caller="portal" reason="DENIED"

Successful Entry:

2015:06:08-11:14:10 XXXXX aua[19120]: id="3004" severity="info" sys="System" sub="auth" name="Authentication successful" srcip="XXXXXXXXXX" host="" user="testuser" caller="portal" engine="adirectory"




Sophos UTM Home / Software Licensed IP Count Explained

For some home users of the Sophos UTM (up through v9.3x) on the very generous “home license”, the number of IPs counted against the allotted 50 is a constant concern. Some people collect baseball cards, some fly model airplanes, and some build an IT lab at home on Vmware’s ESX hosting the Sophos UTM with the home license. Add a family where each member has 2 to 4 devices, plus IOT, and 50 licensed IP addresses do not last long. For this subset of UTM users, the questions are “how long does an IP stay in the Active IP Addresses list?” and “what has to happen for an IP address to get noticed (read: added to the Active IP list)?” I recently set up a few scenarios in my lab to answer these questions.

How Does An IP Get Noticed?

From my testing, any time an IP address is processed on a UTM interface, the source/destination pair is logged in the accounting table; the UTM uses a Postgres database for its packet accounting. An IP is counted if it falls within the range of the address objects defined under Interfaces & Routing -> Interfaces, and only if it has appeared as a source or destination within the last 7 days (from the time of the query). Sophos access point and UTM interface IP addresses are subtracted from the list.

I ran some tests where I set up a network and let the UTM handle DHCP services. I defined the scope without gateway or DNS server settings, so each device received only an IP and subnet mask. These devices were still included in the list of Active IP Addresses.

How Long Does An IP Stay Active?

As mentioned earlier, the packet level accounting is kept in a database. Based on the SQL query seen under the hood of the UTM, the query specifies a 7 day look back.

We Have A Guest Wifi Network – Any Tips On Lowering The IP Count?

If you have a guest wifi network with a large number of transient, short-lived users, the count can grow quickly. Imagine a house of worship or other venue where the number of users has a quick but short-lived peak. Apart from NATing that traffic behind another firewall, there is nothing you can do to limit the number of IPs added to the active list, but you can lessen the impact by making the DHCP lease time as short as the expected duration of each group’s visit. Let’s look at an example.

Suppose a house of worship holds two Sunday morning services and a Sunday evening service, and the lease is set to 24 hours (the default). Should they have 50 users at the first service, 75 at the second, and 35 at the evening service, 110 IP addresses would end up on the Active list. Now suppose the DHCP scope lease is set to 60 minutes; we would expect the Active list to top out at 75 users.

The DHCP lease time should not be made too short, though: if users have to renegotiate their IPs too frequently, a lot of needless overhead is created.

IOT, Printers, Cameras

How does one prevent the IP addresses of IOT devices, IP-based cameras, and printers from burning one of the Active IP address slots? It is not hard, but let’s restate the question for clarity: how do you prevent ANY device that does not need to be on the internet from taking one of the Active IP address slots? Try one of the following:

  • Manually configure the IP settings on the device and do not include a “gateway of last resort”.
  • Use a DHCP server other than the UTM for your network, configuring the scope for these devices to not set a router.
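For the second option, a hypothetical ISC dhcpd scope (addresses made up for illustration) would simply omit the `option routers` line, so clients receive an IP and mask but never learn a gateway and never send a packet toward the UTM:

```shell
# Write a hypothetical ISC dhcpd scope for isolated devices.  There is
# deliberately no "option routers" (and no DNS option) line, so clients
# get an address and mask but no default gateway.
cat > /tmp/dhcpd-iot.conf << 'EOF'
subnet 192.168.50.0 netmask 255.255.255.0 {
  range 192.168.50.100 192.168.50.200;
  default-lease-time 3600;
  max-lease-time 3600;
}
EOF
cat /tmp/dhcpd-iot.conf
```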


Keeping the DHCP lease time short enough to prevent the buildup of IP addresses in the Active list is one way to reduce the license count for a home or software-appliance installation, yet too short a lease wastes resources and could inconvenience users. Devices that do not need internet access can be configured so that they have absolutely no interaction with the UTM. The bottom line: it only takes one packet from a device to get it listed in the accounting database, and it is that database the UTM checks when building the Active IP list.


Installing Coldfusion 11 on Centos 6.6 with SELinux Enforcing

In a previous post, I shared my method of installing Coldfusion on a Centos server. The method was written for older versions of Coldfusion and Centos, yet it still works today with CF 11 and Centos 6.6. I was never happy about one aspect of the install: in order to get it to work, SELinux had to be disabled. After spending some time on the topic, I’m happy to provide this procedure to keep SELinux ‘enforcing’ after the CF install.

Verify The Problem

If you leave SELinux in enforcing mode, when you restart Apache you’ll likely see this error:

Starting httpd: httpd: Syntax error on line 1010 of /etc/httpd/conf/httpd.conf: Syntax error on line 2 of /etc/httpd/conf/mod_jk.conf: Cannot load /opt/coldfusion11/config/wsconfig/1/ into server: /opt/coldfusion11/config/wsconfig/1/ failed to map segment from shared object: Permission denied

This is where you should be from the install procedure, Coldfusion installed but Apache will not start.

Install The Tools

  • We need some of the SELinux audit tools:
    yum -y install policycoreutils-python
  • Next we need to look at the error:
    grep httpd /var/log/audit/audit.log | audit2why

The output of audit2why may contain other lines, but it should include:

type=AVC msg=audit(1422463871.557:760010): avc: denied { execute } for pid=2658 comm="httpd" path="/opt/coldfusion11/config/wsconfig/1/" dev=dm-0 ino=524516 scontext=unconfined_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:usr_t:s0 tclass=file

Was caused by:
Missing type enforcement (TE) allow rule.

You can use audit2allow to generate a loadable module to allow this access.

Since this is a missing type enforcement (TE) allow rule, we can fix it with the following steps:

Step 1: Read the Audit Log

# audit2allow -a

Step 2: Generate A Module Package

# audit2allow  -a -M httpd_t

This step creates two files, httpd_t.pp and httpd_t.te

Step 3: Apply The Policy

# semodule -i httpd_t.pp

And for good measure, I add the following commands:

chcon -R -t httpd_log_t /opt/coldfusion11/config/wsconfig/1/*.log
chcon -R -t httpd_exec_t /opt/coldfusion11/config/wsconfig/1/

Step 4: Test

Restart Apache, and it should start, without errors.

# service httpd restart

If all went well, you should have Apache and Coldfusion running.

HOWTO: Add MySQL 5 Driver Support To Coldfusion 11

If you do any work with Adobe Coldfusion, you may have noticed that when Coldfusion 11 was released, one of the missing items was database driver support for the MySQL 5 community server; trying to add a datasource ended in an error message simply instructing the user to download and install the driver.

My first thought was simple: turn to Uncle Google and see if there is a HOWTO written on this... but alas, after reviewing the results, no HOWTO was found. So I promised myself I’d write one if I ever figured it out, and here I am.

All that is needed is a .JAR file placed in the CFroot/lib directory. After some searching, I discovered that MySQL’s Connector/J is the official JDBC driver for MySQL, and it is exactly what is needed here.

Here are the steps:

  • Download the file from
    # wget ""
  • On Centos 6, extract the JAR file and drop it on the server in /opt/coldfusion11/cfusion/lib:
    # cp mysql-connector-java-5.1.34-bin.jar /opt/coldfusion11/cfusion/lib
  • Change ownership and permissions on the file so the web server can access it:
    # chown apache.bin /opt/coldfusion11/cfusion/lib/mysql-connector-java-5.1.34-bin.jar
    # chmod 0700 /opt/coldfusion11/cfusion/lib/mysql-connector-java-5.1.34-bin.jar
  • Restart Coldfusion:
    # service coldfusion_11 restart

After Coldfusion 11 restarts, add your MySQL 5 datasource and you are finished!

Sendmail Series – Limiting SPAM – Example Code

Recently a client asked if I could help them improve their anti-spam efforts. It is not just generic blue-pill-pushing emails they are trying to get rid of; they see spam as a vector for all things that are “no bueno”. They are using sendmail and dovecot on a Centos 6.x linux server, with several RBLs and SpamAssassin as their primary spam mitigation.

The purpose of this post is to lay the foundation for the future posts on how to carry out the plan.  Here are the key points to the project:

  • Review the output of sendmail and dovecot, determine patterns that show abuse, spam, and other security risks
  • Create a series of policies that address the identified risks
  • Implement the policies

Attacks To Mitigate

The client was concerned about unauthorized use of the server. They explained that occasionally a weak password gets exploited and the server is then used to pump out spam, or worse. All the passwords are hashed in the shadow file and no copies are kept in plain text. Going forward, the client has a strong password policy, but older accounts may have less-than-strong passwords. Further, older legacy accounts in use since the early 90’s, when security was not the constraint on behavior that it is today, may also use their login as the left-hand portion of the email address, effectively publishing the user name to the bad guys. In fact, all of the known weak-password compromises have been on older accounts where the user name is also the left-hand side of the email address. Sigh!

The mail services offered are standard for a server of this variety (SMTP, authenticated relay SMTP, POP3, IMAP). The client would like to make it considerably harder to:

  • Probe the system for weak passwords.
  • Clamp down on any easily identifiable host that is sending inbound spam to the client’s customers.

Reviewing Log Files

Sendmail is configured to use RBLs, primarily from Spamhaus, and SpamAssassin via a milter; the log output is standard. The log files are rotated each day. On a Centos server, Dovecot and Sendmail log their activity in /var/log/maillog. The code examples below are piped the way they are to show the thought process, and many could be rewritten more efficiently. Thoughtful suggestions are welcome in the comments.

SMTP Auth Counts:  How many times does each username authenticate:

grep "bits=0" /var/log/maillog|awk -F"," '{print $3}'|awk -F"=" '{print $2}'|sort|uniq -c|sort -rn

By sampling many days’ login activity, we can get a general sense of what a “high water mark” should be. Over 10 days of logs, the most active user on this system authenticates via SMTP Auth 89 times; accounts that are being abused tend to have counts in the thousands.

SMTP Auth Failures: Log entries that correspond to failed SMTP Auth attempts:

grep "auth failure" /var/log/messages|awk '{print $10}'|sed -e 's/\[user=//g' -e 's/]//g'|sort|uniq -c|sort -rn

This example extracts and counts the failed SMTP auth attempts, in this case for the current day. On my client’s server at least, the bad guys like to try email addresses, and then the portion to the left of the @ symbol, as the login. This is actual output of the top failed login attempts via SMTP auth, with the domain name redacted and preceded by the attempt count, over just 12 hours:
454 ajbadget
382 ajbadget@<redacted domain>
Status Sent: Count of the number of email messages sent:

cat /var/log/maillog|grep -i "stat=sent"|awk -F"<" '{print $2}'|awk -F">" '{print $1}'|sort|uniq -c|sort -rn|more

The output shows how many emails have been received by each address. Not a direct indication of abuse, but it is nice to know what is inbound, and anything that stands out needs to be looked at further.
Broken Connections: IP addresses that for some reason do not complete the full SMTP transaction:

cat /var/log/maillog|grep "did not issue MAIL"|awk -F"[" '{print $3}'|awk -F"]" '{print $1}'|sort|uniq -c|sort -rnk1|head

This is interesting. Sendmail logs a line containing “did not issue MAIL” when the connection dies before the client issues MAIL. There are several reasons this happens, all of which in my mind are fodder for a temporary block. If an IP has hundreds of entries like this per day, it needs to be dealt with.
Possible SMTP RCPT Flood: Dumping SMTP commands:

grep "Possible SMTP RCPT flood" /var/log/maillog|awk -F"[" '{print $3}'|awk -F"]" '{print $1}'|sort|uniq -c|sort -rnk1|head

A non-standard tactic of some spam-spewing servers is to dump all the SMTP commands at once, not waiting for replies and just hanging up, not caring whether the message was accepted. The code above shows the top IPs, with counts, that trigger the “Possible SMTP RCPT flood” log message, which is more fodder for blocking.
Number of Emails Sent By User:

cat /var/log/maillog|grep "ctladdr"|awk -F"=" '{print $3}'|awk '{print $1}'|sed -e 's/<//g' -e 's/>//g'|grep -v root|sort|uniq -c

The number of outbound emails by user is useful for spotting a compromised account; anything that stands out needs a closer look.
Count IPs Blocked By RBLs:

cat /var/log/maillog|grep misconfiguration|awk -F"[" '{print $3}'|awk -F"]" '{print $1}'|sort|uniq -c|sort -rnk1|head -n10

The client’s server uses DNS-based RBLs to help stem the amount of spam received. None of the IPs in this command’s output successfully delivered spam to the server’s recipients, but can the process be made more efficient? It would be nice to block IPs that really stand out, lightening the load on sendmail. For example, over the course of 10 days, this server received just under 10,000 blocked attempts from 8 servers.
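All of the one-liners above share the same skeleton: extract one field per line, then rank it with sort | uniq -c | sort -rn. A tiny synthetic sample (standing in for /var/log/maillog; the IPs are documentation addresses, not real offenders) demonstrates the pattern:

```shell
# The extract-then-count skeleton used throughout: pull one field per
# line, then count and rank occurrences. Input here is synthetic.
printf '%s\n' 'relay=[203.0.113.9]' 'relay=[203.0.113.9]' 'relay=[198.51.100.4]' |
  awk -F'[' '{print $2}' | awk -F']' '{print $1}' |
  sort | uniq -c | sort -rn
```

The top of the ranked output is where the abusers live; everything else in this post is about deciding which field to extract and what count is abnormal.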

Create Policies

From the data collected with the command line code, we recommended, and the client agreed, that the following policy should be created (read: written down!) and tools built to automate its enforcement. The thresholds here are based on what seemed to be a good starting point for this server.

  • Excessive Broken Connection – After receiving 5 over a 1 to 2 hour window, the IP should be blocked for 3 days.
  • SMTP RCPT Flood – After receiving more than 8 flood messages in a 1 to 2 hour window, the IP should be blocked for 1 day.
  • Excessive RBL Blocks – When a server sends more than 10 emails in a 1 to 2 hour window, block the IP of the server for 1 day.
  • Excessive Success SMTP Auth – When more than 25 successful SMTP Auth are served in a 1 to 2 hour window, send an email alert to the sysadmin for review.
  • Excessive SA Drops – When an IP sends more than 10 emails in a 1 to 2 hour window, and SpamAssassin scores those messages high enough to drop them via the milter, block the IP address of the remote server for 1 day.
  • Excessive Outbound Email – When a user sends more than 25 messages in a 1 to 2 hour window,  send a notice to the sysadmin for review.

I hope your takeaways from this post are some practical tips on how to process your sendmail log files, to separate the wheat from the chaff so to speak. With these command line tools, the admin can now get a sense of when an account is compromised or which IPs specific attacks are coming from. The next several blog posts on this topic will show how the command line code can automatically put an offending IP or user into the penalty box when a threshold is reached.
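As a preview of that automation, here is a minimal sketch of the penalty-box idea for the Excessive Broken Connection policy. The log sample, threshold, and block action are placeholders; a real version would read /var/log/maillog over a time window and actually run (and later expire) the iptables rule rather than just printing it.

```shell
# Minimal penalty-box sketch: count "did not issue MAIL" entries per IP
# and emit a block command for any IP over the threshold. The log sample
# below is synthetic, standing in for /var/log/maillog.
THRESHOLD=5
for i in 1 2 3 4 5 6; do
  echo 'daemon=MTA, relay=[203.0.113.9], did not issue MAIL'
done > /tmp/maillog.sample
echo 'daemon=MTA, relay=[198.51.100.4], did not issue MAIL' >> /tmp/maillog.sample

# Extract the IPs, count them, and print a block command for offenders.
grep 'did not issue MAIL' /tmp/maillog.sample |
  awk -F'[' '{print $2}' | awk -F']' '{print $1}' |
  sort | uniq -c |
  awk -v t="$THRESHOLD" '$1 > t { print "iptables -I INPUT -s "$2" -j DROP" }'
```

Swapping in the grep pattern and threshold from each policy above yields one small enforcement script per policy.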