Sophos XG: Exporting tcpdump pcap Files

Recently, I needed to run tcpdump on a Sophos XG firewall appliance and export the pcap for further analysis in Wireshark.

On the SG appliance this was very straightforward because you could just use scp, but on the XG appliance, connecting via SSH drops you into a menu that you have to navigate before starting a shell, and the Windows secure-copy utility I was using could not drive that menu.

After some consideration, I realized I could drop the pcap in the root of the filesystem used by the user portal, which is at /usr/share/userportal, and then download the file via a web browser.

To download the file, use the user portal URL and port, and append the name of the pcap. For example, https://<ip address>/filename.pcap

After the download, remove the file.
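
Putting it all together, here is a minimal sketch of the round trip from the XG advanced shell (the interface name, host IP, and filename below are examples; adjust them for your environment):

tcpdump -i Port1 -w /usr/share/userportal/capture.pcap 'host 10.0.1.50 and not port 22'
# stop the capture with Ctrl-C, download https://<ip address>/capture.pcap in a browser, then clean up:
rm /usr/share/userportal/capture.pcap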

To ensure success, keep the following in mind:

  • Keep the pcap small. The partition containing the location where you place the pcap is relatively small, and filling up the root partition could result in all kinds of unexpected behavior.
  • Always use filters with tcpdump. You want to keep the system from becoming slow or unresponsive because it is drinking from the firehose. At the very least, filter out your own SSH traffic.
  • CYA: Assume that anything you do on the CLI can and will void your warranty. Sophos support doesn’t want to take calls from customers who caused a train wreck by letting the root partition fill up. On the other hand, if you’re using tcpdump, there is a greater chance your skill set is seasoned enough not to let that happen.

How do you transfer pcaps from an XG firewall appliance? Let me know in the comments.

Using CentOS 7 as a Time Capsule Server

What follows is a modified version of Darcyliu’s install script. I’ve changed it to account for changes in newer versions of netatalk.

Starting Point

# For this project, I start with a CentOS 7 Minimal install. After the install, update the packages to current:

yum -y upgrade

# then reboot the server:

reboot

# When the server is finished rebooting, it is time to get to work. First, let's enable EPEL and install the first group of packages:

yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum install -y rpm-build gcc make wget
# install the netatalk build dependencies
yum install -y avahi-devel cracklib-devel dbus-devel dbus-glib-devel libacl-devel libattr-devel libdb-devel libevent-devel libgcrypt-devel krb5-devel mysql-devel openldap-devel openssl-devel pam-devel quota-devel systemtap-sdt-devel tcp_wrappers-devel libtdb-devel tracker-devel bison
yum install -y docbook-style-xsl flex dconf perl-interpreter
# Now we need to build netatalk. At the time of writing, 3.1.11 is the current version.
wget http://www003.upp.so-net.ne.jp/hat/files/netatalk-3.1.11-1.3.fc29.src.rpm
# Install the source RPM for Netatalk:
rpm -ivh netatalk-3.1.*
# Build the RPM from sources
rpmbuild -bb ~/rpmbuild/SPECS/netatalk.spec
# Next install the netatalk binary
yum -y install ~/rpmbuild/RPMS/x86_64/netatalk-3.1.*
# Let's add the config files
cat >> /etc/avahi/services/afpd.service << EOF
<?xml version="1.0" standalone='no'?>
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
<name replace-wildcards="yes">%h</name>
<service>
<type>_afpovertcp._tcp</type>
<port>548</port>
</service>
<service>
<type>_device-info._tcp</type>
<port>0</port>
<txt-record>model=Xserve</txt-record>
</service>
</service-group>
EOF
cat >> /etc/netatalk/AppleVolumes.default << EOF
/opt/timemachine TimeMachine allow:tmbackup options:usedots,upriv,tm dperm:0775 fperm:0660 cnidscheme:dbd volsizelimit:200000
EOF
cat >> /etc/nsswitch.conf << EOF
hosts: files mdns4_minimal dns mdns mdns4
EOF
cat >> /etc/netatalk/afp.conf << EOF
[Time Machine]
path = /opt/timemachine
valid users = tmbackup
time machine = yes
EOF
cat >> /etc/netatalk/afpd.conf << EOF
- -transall -uamlist uams_randnum.so,uams_dhx.so,uams_dhx2.so -nosavepassword -advertise_ssh
EOF
# Add a user. This user ID and password are what you'll use when you mount the Time Machine folder. Also create the directory tree and change its ownership.
useradd tmbackup
mkdir -p /opt/timemachine
chown tmbackup:tmbackup /opt/timemachine
# Open the firewall ports
firewall-cmd --zone=public --permanent --add-port=548/tcp
firewall-cmd --zone=public --permanent --add-port=548/udp
firewall-cmd --zone=public --permanent --add-port=5353/tcp
firewall-cmd --zone=public --permanent --add-port=5353/udp
firewall-cmd --zone=public --permanent --add-port=49152/tcp
firewall-cmd --zone=public --permanent --add-port=49152/udp
firewall-cmd --zone=public --permanent --add-port=52883/tcp
firewall-cmd --zone=public --permanent --add-port=52883/udp
firewall-cmd --reload
# Enable and start the services
systemctl enable avahi-daemon
systemctl enable netatalk
systemctl start avahi-daemon.service
systemctl start netatalk
systemctl restart avahi-daemon.service
systemctl restart netatalk
# set password for tmbackup
passwd tmbackup
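# Optional sanity check (my addition, not part of the original script): confirm afpd is listening on the AFP port (548)
ss -tlnp | grep 548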
A word about strategies. If you want to back up more than one Mac, you can simply have the users share the login and password; as long as the Macs have different names, there will be no collisions in the files created. Just use a good password to encrypt each backup.
I’m not a huge fan of sharing credentials. In fact, I think it’s a bad idea. To use more than one login, create all the users and set a good password for each. Next, edit /etc/netatalk/afp.conf and add a duplicate of the entry above, changing the share name (the string between the brackets) and the valid user to match the user ID. Make one entry for each user ID:
[Time Machine1]
path = /opt/timemachine/user1
valid users = user1
time machine = yes

[Time Machine2]
path = /opt/timemachine/user2
valid users = user2
time machine = yes

[Time Machine3]
path = /opt/timemachine/user3
valid users = user3
time machine = yes
Next, create the user IDs and the folders in /opt/timemachine, and change the ownership of each:
# EG:
adduser user1
adduser user2
adduser user3
mkdir -p /opt/timemachine/user1
mkdir -p /opt/timemachine/user2
mkdir -p /opt/timemachine/user3
chown user1:user1 /opt/timemachine/user1
chown user2:user2 /opt/timemachine/user2
chown user3:user3 /opt/timemachine/user3
# Now set a password on each:
passwd user1
passwd user2
passwd user3
Lastly, reboot the server just to make sure all the services start. Then attach to the server. If you are on the same network, you should see the server in your browse list. If the server is on a different subnet, you’ll have to point to the server manually. Here’s how:
With Finder as the current app in the foreground, click Go -> Connect to Server.
For server address, type the IP of the server and press enter:
afp://x.y.z.c
Fill in the login and password from those that you just created.
Next, click “Open Time Machine Preferences…”
Select your new disk.

Meltdown, Spectre, InSpectre

Meltdown and Spectre

Meltdown is an attack that breaks the isolation between user applications and the operating system, while Spectre breaks the isolation between applications. The performance of modern computers relies on a CPU feature called branch prediction, which lets the processor handle instructions as a non-linear stream. While this is an oversimplification, the goal of branch prediction is to make an educated guess about which instruction will be executed next so that, when there are no dependencies, instructions can be executed out of order. The ability to execute instructions out of order dramatically increases the speed of overall program execution. Meltdown (CVE-2017-5754) and Spectre (CVE-2017-5753 & CVE-2017-5715) exploit critical vulnerabilities in modern CPUs. Meltdown exploits side effects of out-of-order execution. Spectre, on the other hand, induces a victim to speculatively perform operations that would not normally occur, which can reliably leak confidential information through a side channel to the attacker. Both attacks involve stealing data from other processes currently running on the system.

How Well Do Meltdown and Spectre Work?

The short answer is that Meltdown and Spectre work extremely well. The implications of what a motivated individual could do cannot be overstated. Researchers have demonstrated attack code successfully intercepting targeted data, so it is safe to assume that weaponizing Meltdown and Spectre is underway (or even complete). What kind of information can be stolen? Passwords, personal data, photos, documents; just about anything, actually.

InSpectre: A Simple Testing Tool

It is not easy to test a Windows-based system to determine whether the hardware or operating system is vulnerable to Meltdown or Spectre attacks, since there are two Spectre variants and the available mitigations are still evolving. Steve Gibson, of Gibson Research Corporation, released a zero-install utility called InSpectre, an easy-to-understand tool that tests Windows-based computers for both Meltdown and Spectre. Best of all, InSpectre does not require installation; just run the utility. Until manufacturers release updates, knowing whether a system is vulnerable is important so a mitigation strategy can be chosen. For Linux- and Mac-based PCs, InSpectre is Wine friendly.

Mitigations

Spectre: Harden User Applications

Microsoft, GCC, and other compiler vendors have been busy updating their software to include new switches that protect against Spectre (CVE-2017-5753). A developer need only recompile the application with the necessary options enabled. Web browsers also need to be hardened to prevent JavaScript exploit code.
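
For example, recompiling with the new switches looks roughly like this (illustrative invocations; check your compiler's documentation for the exact versions that support these flags):

# MSVC (Visual Studio 2017 15.5.5 and later): enable Spectre variant 1 mitigations
cl /Qspectre /O2 myapp.cpp
# GCC 8 (backported to some distro GCC 7.x builds): retpoline-style indirect branch protection
gcc -O2 -mindirect-branch=thunk -mfunction-return=thunk -o myapp myapp.c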

Spectre: Microcode Updates

Mitigating Spectre (CVE-2017-5715) involves microcode updates that add new CPU instructions to eliminate branch speculation in some risky situations. Microcode changes are delivered as part of BIOS updates for most platforms, although Linux can load updated microcode in most circumstances. But it comes down to your vendor and whether they’ll provide updated microcode. If not, then Spectre will have earned its name, in that this vulnerability will be around to haunt us for a long time.
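
On a Linux system, a quick way to see whether updated microcode was applied at boot is to check the kernel log (the exact message wording varies by distribution and kernel version):

dmesg | grep microcode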

Meltdown: Patches and PCID

One mitigation involves patching the operating system to flush the translation lookaside buffer (TLB) when switching between user and kernel space. However, this takes a huge bite out of the performance of the computer: repopulating the TLB is quite painful, but the real pain comes from the actual flushing of the buffer.

Another mitigation uses process-context identifiers (PCID), which are supported in newer processors. PCID tags eliminate the need to flush the TLB at context switches: the context identifiers are stored in the TLB, and a lookup succeeds only when the PCID matches that of the thread running on the processor.
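
On Linux, you can check whether the CPU advertises PCID support by counting the flag in /proc/cpuinfo (a non-zero count means the flag is present):

grep -c -w pcid /proc/cpuinfo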

Conclusion

Meltdown and Spectre pose a significant risk of reliably leaking confidential information by abusing branch prediction, which is baked into every modern processor. Because processors are not easy to change, and the mitigations for each of the CVEs involve wholesale changes to applications and platforms, managing the vulnerability status of each system over time is essential. GRC’s InSpectre provides a very easy-to-use utility to check the vulnerability status of a computer running Windows or Wine. It will be interesting to see how many system vendors will provide updates to their platforms and just how far back they go on previous models. This issue is not going to go away, and we can be sure of one thing: Meltdown and Spectre are here to stay.

Resources

Meltdown and Spectre: More Information

KAISER: Hiding the Kernel from User Space

Spectre Mitigations in MSVC

Understanding the Performance Impact of Spectre and Meltdown Mitigations on Windows Systems

Meltdown – Cyberus Technology Blog

How to make a bootable USB stick with a Sophos Bootable Anti-Virus ISO (The Easy Way with Rufus)

The other day, I needed to make a fresh bootable Sophos Bootable Anti-Virus thumb drive.  After downloading the extract tool from Sophos’s free tools section, I began to follow Sophos KB article 111374, which begins:

“The Sophos Bootable Anti-Virus (SBAV) tool allows you to scan and cleanup a computer infected with malware without the need to load the infected operating system. This is useful if the state of the computer’s normal operating system – when booted – prevents cleanup by other means, or the Master Boot Record (MBR) of the computer’s hard drive is infected.

Other SBAV articles assume you want to run the tool from a CD or DVD drive, but you can also use the SBAV tool from a USB pen drive.  For instructions on creating a CD see article 52011.”

Then the KB article instructs the reader to follow a 15+ step method of creating a bootable USB drive. What follows is the method I use, which is faster and just as reliable, using a free utility called Rufus.

Requirements:

  • The SBAV extract tool (sbav_sfx.exe), downloaded from Sophos’s free tools section
  • The Rufus utility
  • A USB thumb drive
  • A Windows computer

Extracting Sophos SBAV ISO (From The KB Article):

  1. Locate the downloaded file (sbav_sfx.exe) and run it.
  2. Select ‘Yes’ if prompted by User Account Control.
  3. Read and ‘Accept’ the End-User License Agreement.
  4. Choose an extraction path and click ‘Extract’. Note: For the rest of this article it is assumed the extraction path is left as the default ‘C:\sbav’.
  5. Open a command prompt (Start | Run | Type: cmd.exe | Press return).
  6. Change directory to the extraction folder (e.g., C:\sbav) with the following command:
    cd c:\sbav
  7. To create the ISO image containing the SBAV tool run the following command:
    sbavc.exe sbav.iso

Create Bootable Thumb Drive Using Rufus

  1. Run Rufus and select ‘Yes’ if prompted by User Account Control.
  2. Next to “Create a bootable disk using…”, click the CD icon and open the sbav.iso file (it should be located in C:\sbav).
  3. In the drop-down menu next to “Create a bootable disk using…”, select: ISO Image.
  4. Make sure Rufus selected the proper device (your USB thumb drive).
  5. Leave only the following options checked: Quick format, Create a bootable disk, Create extended label and icon files. These are the defaults, so you shouldn’t actually have to check or uncheck anything else.

That’s it – just click Start.

When finished, just eject the thumb drive and you’re ready to go.

Using Time-Lapse Video To Show Outflow Boundary Structure

A line of storms, quite far off in the distance, had an outflow boundary heading towards us, 90 degrees offset from the storm’s direction of travel. The time-lapse video starts with the outflow’s arrival: the outflow appears to be heading towards the camera (southeasterly), while the storm’s motion appears from left to right (northeasterly).

More Information about Outflow Boundaries

The following observations were made from the video to highlight some aspects that may not be obvious to a casual observer:

  • Contrary to what the radar image suggests, there is more activity behind the outflow boundary: air continues to flow out from the storm well after the leading edge of the OB passes. This seems obvious from a fluid-dynamics perspective, but not to the casual observer, as the radar image shows a distinct boundary.
  • Why does the radar image of an outflow boundary show a sharp boundary interface? Weather radar detects more than meteorological targets (rain, snow, ice, etc.). Sometimes the air in front of an outflow is dry, and the line seen on radar is dust. To a lesser extent, the line may also contain some rain when conditions support it. Correlation Coefficient can help categorize meteorological vs. non-meteorological targets. See: Correlation Coefficient

Bonus Observation:

The video shows two air masses moving at roughly 90 degrees to each other. If you watch carefully, you can see swirling where the two air masses interface. This is also known as shear, and it is clearly visible in the video. Imagine this on a larger scale, for example in a supercell thunderstorm, where the downdraft (and updraft) are stronger, larger, and longer in duration; this is one of the many ingredients that make tornado formation possible. This video shows shear on a very small scale. The takeaway: shear always happens when two fluids, like air, meet while moving in different directions.

Conclusion

The weather outside your window right now is the result of an interaction of fluids (the mix of gases we call ‘air’) and energy in 3D space. To the casual observer, or even a beginning weather junkie, the tools used to observe current conditions only show weather within their limitations. Tools like time-lapse video can help us appreciate just how complex the interaction of two air masses can be, by compressing time and revealing what would otherwise go unnoticed.

Sophos UTM Command-line Useful Shell Commands and Processes: Tuning Web Protection

One of the benefits of working with different customers is that troubleshooting processes for common tasks get used often enough that they undergo significant refinement over time. One such process is tuning Web Protection exceptions via the http.log.

Task: Permitting URLs

Using the Linux commands tail, awk, and grep, it is easy to spot all web traffic being blocked from end users. After logging into the Linux shell on UTM 9.4 and becoming root, type:

# tail -f http.log | awk -F"\"" '{print $16" "$12" "$38" "$10}' | grep -v " pass "

The output looked like:

x.y.z.226 warn http://crl.microsoft.com/pki/crl/products/WinPCA.crl web request warned, forbidden category detected
x.y.z.88 warn http://officecdn.microsoft.com/pr/39168D7E-077B-48E7-872C-B232C3E72675/Office/Data/v32.cab web request warned, forbidden category detected

This shows, in real time, all the URLs being blocked and why. The Microsoft CRL URL was being blocked because the default exception was enforcing the category check, and the categorization database doesn’t know that the CRL URL should not be “un-categorized”.

The true benefit of this method is that you’ll be able to fine-tune exceptions before anyone opens a support request for a failing process, such as Office updates.
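
A variation I often use once a pattern emerges is to narrow the stream to a single client (substitute the address you care about for the placeholder below):

# tail -f http.log | awk -F"\"" '{print $16" "$12" "$38" "$10}' | grep -v " pass " | grep "x.y.z.226"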

Password Reuse: Dropbox Breach Lesson


The 68 million account leak at Dropbox resulted from password reuse. Dropbox leaked 68 million email addresses (used as logins), password hashes, and other user account details, all as the result of password reuse by a Dropbox employee, who took the same login/password combination used on another site and reused the credentials at Dropbox. This employee apparently had access to user details. The bad guys did not have to attack all the security barriers to gain access to Dropbox’s internals; they just needed to find a working login/password. And they succeeded.

The lesson from the Dropbox breach, which isn’t a new lesson at all: Do not reuse passwords!

Imagine the following scenario. You create an account on a new website and use the same email/password combination as your email account. Six months later, the new website isn’t doing so well, the site is not paying close attention, and their database is leaked.

The best case is that they used strong encryption techniques and your password cannot be recovered. Or worse, your password was encrypted but the proper steps were not taken to prevent decryption, and they figure out your password. Either way, the bad guys know your email address, and if they also think they know your password, your email account is next on the list. From there, they look for hints about where you bank, your Amazon account details, etc.

If you’re telling yourself “this can’t actually happen“, you are not respecting how determined the bad guys are. Why else do the bad guys try so hard to breach online services?

Let’s review the breach wall of shame:
359,420,698 – MySpace
164,611,595 – LinkedIn
152,445,165 – Adobe
68,648,009 – Dropbox

You might ask: how do you limit the impact of a breach, since there appears to be no end to them? Here are a few recommendations.

Never Reuse A Password

Never use the same password with more than one service. Use a different, good password for each site. You may choose to use a password manager to keep track of all those passwords. Two suggestions: LastPass and 1Password.

Always Use A Good Password

A good password doesn’t necessarily have to be full of numbers and special characters to be good. A password of 12 characters or more that contains upper and lower case draws from a 52-character pool that someone would have to search through to randomly guess your password. Add numbers and the pool grows to 62 characters; add special characters and it grows to 80+. If you’re using LastPass or another password manager, let the password manager pick your password and make it as long as the service will allow. Password managers also do you one additional favor: they can fill in login forms with your credentials.
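
To put rough numbers on those pool sizes, here is a quick sketch of the search-space math using the bc calculator:

echo "52^12" | bc   # 12 characters from a 52-character pool: about 3.9 x 10^20 possibilities
echo "62^12" | bc   # adding digits grows the search space roughly 8x, to about 3.2 x 10^21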

Avoid Dictionary Words for A Password

I have been running a honeypot for several years, logging every password tried. The passwords are variations of dictionary words and commonly used passwords (we know the common passwords because we study the data exposed in breaches). Some IT people think it’s a good practice to use “leet speak” alternative spellings, replacing ‘e’ with ‘3’ for example. The bad guys know this trick; don’t rely on it. The lesson: do not think that adding a few numbers to the end of a dictionary word, or substituting characters in a systematic way, is good enough. It’s not. If your password is in a dictionary, change it.

Use Two Factors for Authentication

Proving identity requires one of three factors: something you know, something you are, or something you have. A password is the most common authentication factor (something you know). Most services like Dropbox can optionally use a second factor (something you have) in combination with your password, making it much harder for the bad guys to leverage a leaked password to gain access to your account. Each service may use a different “something you have” factor, but most of the time it is a rotating 6-digit code: you set up an app on your smartphone that generates the same 6 digits the service expects every 30 seconds or so. When you log in, the website prompts you for the six digits, or you append them to the end of your password. Google has the Google Authenticator app; just check the app store for your smart device.

Lastly, how do you know if your information has been compromised? Check out HaveIBeenPwned.com: just enter a user ID or your email address and see if your details match.

Sophos UTM v9.3 – AD SSO and Web Protection Profiles

Keeping with the spirit of sharing my checklists, here is the Active Directory integration checklist I use when configuring AD SSO for Web Protection Profiles. This is NOT a Web Profile checklist, just the AD portion.

Initial Configuration – DNS and Hostnames:

  • The UTM hostname. When the UTM is set up, the initial hostname should be a publicly resolvable hostname; that hostname is used in a whole host of configuration locations downstream. If the hostname is not valid on the internet, hostname overrides will have to be used.
  • The UTM must have a valid internal hostname. The hostname used when configuring the UTM must be resolvable in the local AD DNS.
  • DNS configuration. Use a DNS availability group on the UTM, all of which points externally. Create a DNS request route to point all internal DNS lookups to your AD DNS server. Lastly, configure your AD servers to forward all external DNS requests to the UTM. (A quick verification sketch follows this list.)
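
To verify the DNS plumbing from a domain-joined workstation, lookups along these lines should succeed (the internal hostname below is a placeholder):

nslookup utm.example.local   # the UTM's internal hostname should resolve via AD DNS
nslookup www.sophos.com      # external names should resolve via the AD-to-UTM forwarders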

Authentication Services

  • In AD, create a user for the UTM AD service with READ ONLY privileges
  • Set create users automatically
  • Create an AD authentication server, using the read-only AD user created above
  • After creating the AD auth server, be sure to test that lookups work as intended
  • Join the UTM to the AD domain.

How To Test

  • Test authentication to the user portal with an AD login/password
  • Watch the live logs

Failed Log Entry:

2015:06:08-11:11:33 XXXXX aua[17765]: id="3005" severity="warn" sys="System" sub="auth" name="Authentication failed" srcip="XXXXXXXXXX" host="" user="testuser" caller="portal" reason="DENIED"

Successful Entry:

2015:06:08-11:14:10 XXXXX aua[19120]: id="3004" severity="info" sys="System" sub="auth" name="Authentication successful" srcip="XXXXXXXXXX" host="" user="testuser" caller="portal" engine="adirectory"


Sophos UTM Home / Software Licensed IP Count Explained

For some users running the Sophos UTM (up to v9.3x) at home under the very generous home license, the number of IPs counted against the allotted 50 is a constant concern. Some people collect baseball cards, some fly model airplanes, and some build an IT lab at home using VMware ESX and host a Sophos UTM with the home license. Add a family where each member has 2 to 4 devices, plus IoT, and 50 licensed IP addresses do not last long. For this subset of UTM users, the questions are “how long does an IP stay in the Active IP Addresses list?” and “what has to happen for an IP address to get noticed (read: added to the Active IP list)?”. I recently set up a few scenarios in my lab to answer these questions.

How Does An IP Get Noticed?

From my testing, any time an IP address is processed on a UTM interface, the source/destination pair is logged in the accounting table; the UTM uses a Postgres database for its packet accounting. The count includes all IPs that fall within the ranges of the networks defined under Interfaces & Routing -> Interfaces, minus the IPs of the UTM interfaces themselves. Only IPs that appear as a source or destination of traffic within the last 7 days (from the time of the query) are included, and Sophos access points and UTM interface IP addresses are subtracted from the list.

I ran some tests where I set up a network and let the UTM handle DHCP services. I defined the scope without gateway or DNS server settings, so each device received only an IP and subnet mask. These devices were still included in the list of Active IP Addresses.

How Long Does An IP Stay Active?

As mentioned earlier, packet-level accounting is kept in a database. Based on the SQL query seen under the hood of the UTM, the query specifies a 7-day look-back.
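
As an illustration only, the query is of this general shape; the table and column names here are hypothetical, not the UTM's actual schema:

psql -c "SELECT DISTINCT ip FROM accounting WHERE last_seen > now() - interval '7 days';"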

We Have A Guest Wifi Network – Any Tips On Lowering The IP Count?

If you have a guest wifi network with a large number of transient, short-lived users, the IP count can grow quickly. Imagine a house of worship or another venue where the number of users has a quick but short-lived peak. Apart from NATing that traffic behind another firewall, there is nothing you can do to limit the number of IPs added to the active list, but you can lessen the impact by making the DHCP lease time as short as the expected duration of that group’s visit. Let’s look at an example.

Suppose a house of worship holds two Sunday morning services and a Sunday night service, and the lease is set for 24 hours (the default). Should they have 50 users at the first service, 75 at the second, and 35 at the evening service, up to 160 IP addresses (50 + 75 + 35, assuming distinct devices) would end up on the Active list. Now suppose the DHCP scope lease is set for 60 minutes: leases would expire between services and the IPs would be reused, so we would expect the Active list to top out at 75 users.

The DHCP lease time should not be made too short, though: if all the users have to renegotiate their IPs too frequently, a lot of needless overhead is created.

IoT, Printers, Cameras

How does one prevent the IP addresses used by IoT devices, IP-based cameras, and printers from burning one of the active IP address slots? It is not hard, but let’s restate the question for clarity: how do you prevent ANY device that does not need to be on the internet from taking one of the Active IP address slots? Try one of the following:

  • Manually configure the IP settings on the device and do not include a “gateway of last resort” (see the sketch after this list).
  • Use a DHCP server other than the UTM for your network, configuring the scope for these devices not to set a router.
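
As a minimal sketch of the first option on a Linux-based device (the interface name and addresses are examples; most cameras and printers expose the same idea through their setup UI):

# assign a static address and mask, but deliberately configure no default route
ip addr add 192.168.1.50/24 dev eth0
ip link set eth0 up
# with no gateway of last resort, the device never routes a packet through the UTM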

Conclusion

Keeping the DHCP lease time short enough to prevent the build-up of IP addresses in the Active list is one method of reducing the license count for home or software appliance installations, but too short a lease wastes resources and could inconvenience users. Devices that do not need internet access can be configured so that they have no interaction with the UTM. The bottom line is that it only takes one packet from a device to get listed in the accounting database, and it is against that database that the UTM checks when building the Active IP list.

Sophos RED – Some Thoughts

I have supported clients who use the Sophos RED (Remote Ethernet Device) to securely connect a remote office back to their HQ. The product does everything the description says, and I’ve had few issues with the service, most of which were resolved through the evolution of the feature set. In a nutshell, the RED service sets up an SSL VPN between the remote site and HQ. What I like most about the RED service is that the RED device simply does not have an admin interface for the IT team to interact with at the remote site; it is configured via the UTM at HQ. This cuts down on support costs and on errors that might require a visit from the IT team after a misconfiguration, and should a RED device fail, a replacement can be shipped to the remote office and installed by just about anyone, with no configuration required at the remote site.

A Problem: Compliance Audit Scans

I’ve been hearing from my clients that the Sophos RED service on the HQ UTM appliance is being flagged more frequently during PCI compliance audit scans for exposing a port (3400/tcp) to the internet using a self-signed certificate. I know that some of these audits are nothing more than an automated script that flags “issues” without taking into account the use case or whether the issue is even real. But the consequence of failing the audit is a real concern for the client, as it can impact their ability to do business.

Self Signed vs CA Signed

As I understand certificates and how they work, the fact that the RED service uses a self-signed certificate is not a problem. Why? Think back to why we use a public certificate authority (CA) to sign a certificate. When you’re browsing a website over SSL/TLS, you (the viewer of the content) need to know that the certificate you’re presented with is from the site you are visiting. This is done via the public CAs, and since there needs to be more than one CA in the world, it is a “trust many” CA model. The role of the public CA is to tie, in a trustworthy way, the entity’s identity to the certificate. Your indication of that trust is the “green lock” in your web browser.

When visiting a website with a self-signed certificate, on the other hand, the web browser does not show the “little green lock” because the certificate was not signed by a public certificate authority your browser trusts. While this matters a great deal to the guy connecting to his bank or other websites, it doesn’t matter at all to the RED service, because of what a signed certificate does not do: a publicly signed certificate does NOT make the certificate any more secure (read: it does not make the crypto better); it simply vouches for the identity of the entity offering the certificate.

RED and Compliance Audit Scans

A publicly signed certificate would actually weaken the RED service, because the RED service uses a “trust one CA” model. If the RED service used public CAs, then many CAs could mint certificates the RED would trust, and trusting more CAs weakens the overall security of the RED service. A reasonable PCI compliance auditor would listen to this explanation and agree to note the exception. If you know of one who would entertain this explanation and note an exception in the audit, hang on to that auditor, because what you really found was a unicorn, and we all know those don’t exist!

Since a one-CA model is better than a “many CA” model for the RED service, let’s move on to the entire point of this post: it is time for the RED service to be updated to include the standard access controls one would want anywhere a service is exposed to the world. Sophos should allow the RED service to be configured with more granularity, just as Sophos allows for the User Portal or WebAdmin. Because those controls are not present, my clients who use these devices have spent time, and hence money, defending to compliance auditors that the “strange port”, 3400/tcp, found on the UTM isn’t some back door to the company crown jewels.

Suggestion for Sophos

The root of the compliance audit issue is my only complaint with the RED service: all the “security eggs” are in the “the code is correct” basket, and this is what I hope can be changed. It would take only one exploitable vulnerability in the RED code or a dependent library (remember POODLE, Heartbleed, GHOST?), reachable from any IP, and there are no other controls to prevent or lessen a compromise, other than the RED service OFF switch. With that in mind, here is my RED wish list:

  • Enable the RED service on the UTM to be optionally configured to listen for incoming connection requests only from specific IPs, just as one can for the User Portal or WebAdmin. I’d settle for a simple ACL in the UTM as a first step (see the sketch after this list), but Sophos could automate it via the provisioning services.
  • Allow the port the RED listener service uses on the UTM to be arbitrarily changed.
  • Allow the UTM to be configured to listen for incoming RED connections on arbitrary IPs hosted on the UTM, instead of on all WAN interface IPs.
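
To make the first item concrete, here is a sketch of the kind of ACL I have in mind, expressed as plain iptables rules (illustrative only; the addresses are documentation examples and this is not a supported UTM configuration):

# accept RED tunnel connections (3400/tcp) only from known branch-office IPs
iptables -A INPUT -p tcp --dport 3400 -s 203.0.113.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 3400 -s 203.0.113.20 -j ACCEPT
# drop RED connection attempts from everywhere else
iptables -A INPUT -p tcp --dport 3400 -j DROP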

Again, let’s be clear: this is not criticism of the RED appliance and service; in fact, I’m quite a fan of the entire idea of the RED service. This post is to point out that more can be done to strengthen the foundation of the service by adding the access controls I would want on every RED deployment I come into contact with.