Part 8 - Backup and recovery

See here for the entire series of posts, if you are just stumbling onto these posts.


As I said in part one, these posts are meant to give you meaningful, useful advice to prevent ransomware.


This post is a bit different from the others, in that the previous 7 parts covered tools and techniques to help prevent the attacks from ever happening (aka the best case scenario). Even if you follow all 7 posts down to the letter, there is still a possibility ransomware will get through your (now) multi-layered defenses. After all, you have to be correct every time, for everything. Mr and Mrs Hacker only have to get it correct once. So plan for the worst and hope for the best. Not the other way round. This post will cover how to put your organization in a place to recover as best as possible were the unthinkable to happen.

While you could pay the ransom, the Sophos State of Ransomware 2021 report indicates only 8% of paying victims claimed to recover everything, and 4% got nothing at all for their payment. On average only 65% of data is restored after paying the ransom, so a third of the data is gone, like the snap in Avengers: Infinity War, but for data. The average ransom payment was $170,404 USD. But the entire bill for rectifying the attack comes in at a whopping $1,850,000 USD.

The average cost of rectifying a ransomware attack, considering downtime, people time, device cost, network cost, lost opportunity cost, ransom paid, etc. was US$1.85 million.


What I'm about to cover cannot be done with a $100 Microcenter USB external drive and Windows Backup (well, maybe it can, but it shouldn't). Yes, real backup and recovery build-outs can be relatively expensive, but they are far, far less expensive than the average $1,850,000 it currently costs to pay up and fix all the other things you now have to fix.  And once you get hit, YOU WILL BE DOING THIS ANYWAY, so make the argument to do it now. It's not if you will get hit, it's when. And just because you have been hit doesn't mean you won't get hit again. I really wish they'd spend more time on probability in math(s) class.


Alas, sometimes you need a really bad experience to understand the obviously (now with the benefit of hindsight)  stupid things you previously did. Exhibit number 1:


Image:Ransomware Prevention Part 8 - Backup and Recovery

So let me start the meat of this post with the most important thing you will ever read in terms of recovering from a ransomware attack.....


Never have any of your backup infrastructure domain joined!

Never have any of your backup infrastructure domain joined!

Never have any of your backup infrastructure domain joined!


No, I'm serious....this includes passwords and decryption keys as well. So once again, to the chorus.....


Never have any of your backup infrastructure domain joined!


Never. Ever. The stories I have heard....."we had backups but they got encrypted as well"...."we had off-site backups and we even encrypted them for reason x, y, z, however the private key/password (usually just a text file stored in a "secure" IT Windows file share) was encrypted by the ransomware so our backups are useless". It goes on and on and on. It's extremely common for an organization that gets ransomwared to have backups that are about as useful as an ashtray on a motorbike. Far more common than you would ever imagine. So plan. And have a plan for when the plan won't work. Print actual copies of any keys you use and put them in a very safe place. Make sure you are not the only one who knows them.

Don't be the guy above that puts temporary hose ramps on a train track. Let's try to save you from that, eh?


For the most part this article will cover Veeam, mainly because of all the systems I've used, it's the easiest and does what it says. Your solution du jour may or may not be able to do the following. If it can't, consider changing.


Also, this is backup and recovery. Not high availability. Those are two very different things that are != (or <>) to each other. While a given product may be able to do both, I'm not covering both here. HA is a paying gig, so track down Lisa if you're interested in that.


Now for the second most important thing to understand about backups.....automate. When humans are involved with backups they fail. All the time. When humans are not involved with backups they fail far less often.


Recognize that not everything needs to be backed up and recoverable


There is some stuff that is critical to your organization. Without it you simply cannot function. Back those up. Everything else is optional and is a function of cost vs. the pain of rebuilding it. For example, SQL servers and AD, sure. But if I had a pretty sizable Tenable install with one or more Nessus Linux scanners feeding it, do I really need to back up *all* the Nessus scanner devices? I would argue no. The value is in the Tenable reports that are harvested from the Nessus scanners. I can rebuild the Nessus scanners at a later date, or just back up one or two of them. Needless to say, the more you back up the more time it takes. Additionally, you are taking precious backup resources from other, more critical systems.


Frequency and Tagging


Give serious thought to the frequency with which you need to back up a given device. Break out your backups into these frequencies. Some stuff you want daily, others weekly or even monthly or quarterly. I may back up a given domain controller daily, but others may be able to be backed up weekly. Also tag the stuff you don't want backed up. Then there is no confusion as to who is to blame when all hell breaks loose and that VM is not in the backup.


Tagging VMs is a way to combat the age-old issue of forgetting to add something to the backup. Tagged objects can then be added automatically to backups. Both VMware and Hyper-V can do this (requiring vCenter and SCVMM respectively). In vCenter, create folders for each backup frequency, add a tag to each folder and move VMs into the required folder. Then have Veeam back up that tag. SCVMM is much less user-friendly as you have to tag each VM independently.
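
If you want a quick sanity check that nothing is sitting in vCenter outside your backup folders, a short script can do it. This is a minimal sketch assuming the pyVmomi package and hypothetical folder names (Backup-Daily, Backup-Weekly, Backup-Monthly, Backup-None); adjust to whatever folder scheme you settle on:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Folder names are assumptions - use whatever frequency folders you created.
BACKUP_FOLDERS = {"Backup-Daily", "Backup-Weekly", "Backup-Monthly", "Backup-None"}

def find_unfiled_vms(host, user, password):
    # Lab-only certificate handling; use verified certs in production.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host=host, user=user, pwd=password, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        unfiled = []
        for vm in view.view:
            folder = vm.parent.name if vm.parent else ""
            if folder not in BACKUP_FOLDERS:
                unfiled.append((vm.name, folder))
        view.Destroy()
        return unfiled
    finally:
        Disconnect(si)

if __name__ == "__main__":
    for name, folder in find_unfiled_vms("vcenter.example.com", "readonly@vsphere.local", "secret"):
        print(f"{name} sits in '{folder}' and will NOT be picked up by a backup job")

Run something like that from a scheduled task and mail yourself the output; that's one more "we forgot to add the VM" conversation you never have to have.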


Here's a vCenter folder tagged (meaning everything in that folder is also dynamically tagged when Veeam comes looking):


Image:Ransomware Prevention Part 8 - Backup and Recovery

And here is the corresponding Veeam job that adds VMs that match the tag at every execution. Truly dynamic, and now you don't need to edit your backup job every time someone adds a VM. Simply move the VM to the required folder in vCenter and the next time that job runs, the new VM is added to the backup.


Image:Ransomware Prevention Part 8 - Backup and Recovery

SCVMM is a per-VM setting, but Veeam behaves the same, dynamically adding VMs with the associated tag at backup execution time. You cannot set this in Hyper-V settings, only in SCVMM settings:


Image:Ransomware Prevention Part 8 - Backup and Recovery

Don't forget to back up assets that you will need *during* the recovery. Your PC, for example. Also back up the Veeam configuration and store it off-disk. You really don't want to have to install a new Veeam server and have it index all the backups across all your different storage tiers. That can add a long time to the recovery.

Yes, you do need three tiers of backups


Everyone knows this already, yet few do it. It's a bit like exercise, we all *know* we should do it and it's not a secret, but doing *it* is a whole different matter. Multi-tier backups are like that. We *know* to do it. The majority just don't. And by multi-tier I don't just mean cloud. Cloud for restoring has significant issues which I'll get to later. Just don't go thinking you've avoided all the backup pitfalls by using cloud. Because you haven't.


So a Darren-approved system would go something like this....


Backup Location 1: Local disk.
Dedicated *only* to the backup system. Not on a shared SAN with everything else. That's simply moronic and you're asking for trouble with that approach. Lots and lots of storage. For Veeam you're going to want to format the storage as ReFS. Local disk has lots of advantages:
  1. Fast backups. The fastest of backups actually.
  2. Fast restores. You won't get this with cloud.
  3. Keep the most recent backups on local disk. This will save time and money when doing normal day-to-day restores of things that users delete. For me recents are 45 to 61 days, depending on your need.
  4. Disk is cheap to add to. Relatively speaking. Need more? Add disk shelves. Or Veeam servers. Or both.

It does have one pretty big disadvantage:
  1. It's online, so susceptible to attack. It can be ransomwared. Especially if you are a moron and leave it domain joined. Don't be a moron.
Backup Location 2: Tape. Yes, yes, yes. I know tape is dead. Except it isn't. The only thing dead is your career if you don't have the correct backups and media in place, so stop with the sales person crap already and get with the program. And when I say tape I mean a multi-tape autoloader and/or a robot. Not an admin assistant who inserts the Monday tape on Monday. And there is nothing stopping you from having more than one autoloader. Tape is limited not by the media, but by the imagination of the person holding the media. So tape:
  1. Relatively OK speed and storage per tape (LTO8 is 12TB uncompressed per tape at 360MB/s....LTO10 and beyond will double the storage of each previous generation). You can have multiple autoloaders off one Veeam server.
  2. Offline. So extremely low risk of compromise. It's as close to air gapping as you can get and still have a usable backup system.
  3. Keep the most recent and then some. 90-180 days
  4. Can be shipped off-site.  Try doing that with a disk shelf attached to a Veeam server.
Backup Location 3: Cloud. Cloud has issues, but first let's cover the advantages:
  1. Great for long term storage.
  2. Can be made immutable. AWS for example can have Veeam backups made immutable for a period of time, so you can guarantee the backups have not been tampered with (see the sketch after this list).
  3. Geographically diverse. Not really a ransomware advantage, but still....
OK, now for the cold dose of reality....the very significant disadvantage from a recovery standpoint:
  1. Cloud looks fast when you are backing up to it or moving your backups to it. This is generally because when you back up you are most often backing up incremental changes. These backup files tend to be a tiny fraction of the size that actual full backups would be. Yet when you get hit by ransomware and have to restore, you are *actually* restoring full backups and not the much smaller incremental backup files. I cannot stress enough how difficult it is to restore a full environment from cloud backups in a timely manner. Basically you can't, and it will take a whole lot longer than you ever imagined. It'll take many days to a few weeks. Remember, one of the hidden costs of ransomware is the loss of employee productivity. A day is a long time. A week or weeks could put you under.
  2. It's also expensive to restore from cloud. But it is still way cheaper than paying the ransom.
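
For the curious, here is roughly what "immutable" means on the AWS side. A sketch using boto3 with a hypothetical bucket name; when you enable immutability on a Veeam S3 repository it drives this same Object Lock mechanism for you:

import boto3

s3 = boto3.client("s3")
BUCKET = "example-veeam-immutable"   # hypothetical bucket name

# Object Lock can only be switched on when the bucket is created
# (outside us-east-1 you also need a CreateBucketConfiguration with your region).
s3.create_bucket(Bucket=BUCKET, ObjectLockEnabledForBucket=True)

# Default retention: every object written is WORM-locked for 30 days.
s3.put_object_lock_configuration(
    Bucket=BUCKET,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)
# Until the retention window expires, deletes and overwrites of object
# versions fail - even for the account root user in COMPLIANCE mode.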

Build for restore speed


Look, once you're hit and you are confident you have good, restorable backups, it's now a time sink, a waiting game if you will. Create restore job, wait, wait, wait. Create restore job, wait, wait, wait. The shorter your restore time, the faster you'll be back up and running. So from a restore perspective build the fastest backbone you can. At a minimum I'm talking 10Gb. On paper, 10Gb is literally 10x faster than 1Gb. In real life 10Gb is 5x to 7x faster than 1Gb. That is still a huge factor. See:


10TB restore at 1Gb = ~22 hours


10TB restore at 10Gb = ~4-5 hours


And trust me, when you get hit, 10TB is a tiny amount to restore. If you have 4 VMs hitting 10TB each, on a 1Gb network you'll be up in approximately one work week. On a 10Gb network, that is now restored inside of a day.
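
If you want to run the numbers for your own environment, the back-of-the-envelope math behind those figures is simple enough. The 50-70% efficiency factor here is my rough assumption for real-world end-to-end throughput; tune it for your own network:

# TB converted at decimal units; efficiency is the fraction of line rate you
# actually see end to end.
def restore_hours(terabytes, link_gbps, efficiency=1.0):
    bits = terabytes * 8 * 1000**4
    return bits / (link_gbps * 1e9 * efficiency) / 3600

print(f"10 TB @  1 Gb, line rate     : {restore_hours(10, 1):.1f} h")   # ~22 h
print(f"10 TB @ 10 Gb, line rate     : {restore_hours(10, 10):.1f} h")  # ~2.2 h
print(f"10 TB @ 10 Gb, 50-70% of line: "
      f"{restore_hours(10, 10, 0.7):.1f}-{restore_hours(10, 10, 0.5):.1f} h")  # ~3-4.5 h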


So this brings me back to the woeful cloud speeds during a restore. Even if your cloud provider were to give you a 10Gb feed back (which I very, very much doubt), can your internet connection feed that kind of speed through to your virtual hosts? This is why you want recents close at hand and on a very fast backbone.

Restore speed is why the idiot CEO of Colonial Pipeline paid the ransom, thinking that somehow paying for and getting a decryption key would be speedier than restoring the backups they were already restoring. It's CEOs like this one that make ransomware such a lucrative crime.


Did you back up the pre-detonated ransomware? Are you now going to inadvertently restore it?


One of the tricks the tricksy ransomware hobbitses have in their quiver is to let the encryption engine sit dormant for a period of time before detonating, in hopes of contaminating your backups, so when you restore, boom, another no good very bad day for you. While this is a risk for you, it's also a risk for them, as the longer they delay their attack the more likely you are to discover it pre-encryption. That's not to say it's not a real threat, because it is. And the backup vendors are now integrating scanning directly into the restore process to ensure you don't inadvertently reinfect yourself.

In Veeam's case this feature is called Veeam Secure Restore. There could be some setup involved depending on your requirements so make sure you know what they are before you need it. It will add time to the restore as the virtual disk is mounted and scanned prior to full VM restore, but if you need this level of assurance, it is now available.

Configs, keys and the like


This is where I now extol the virtues of the cloud. You want to back up any and all configuration settings that you may need during a restore. I strongly suggest they be kept in a secure cloud location. For example, you can have Veeam back up its own config DB, ship it via SFTP to a SAN, etc., then ship that off to an AWS bucket. There are a multitude of ways of doing this, but again, automate it. Humans are generally useless when it comes to backup tasks.
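
As a sketch of what "automate it" can look like: assume the configuration backup job drops its files into a local folder, and a small scheduled script ships the newest one to an S3 bucket. The folder, file extension and bucket name below are assumptions; point it at whatever your config backup job actually writes:

import pathlib
import boto3

BACKUP_DIR = pathlib.Path(r"D:\VeeamConfigBackup")   # assumed local target of the config backup job
BUCKET = "example-backup-configs"                    # hypothetical bucket

def ship_latest_config():
    candidates = sorted(BACKUP_DIR.glob("*.bco"), key=lambda p: p.stat().st_mtime)
    if not candidates:
        raise SystemExit("No configuration backup found - investigate before it matters")
    latest = candidates[-1]
    s3 = boto3.client("s3")
    key = f"veeam-config/{latest.name}"
    s3.upload_file(str(latest), BUCKET, key,
                   ExtraArgs={"ServerSideEncryption": "AES256"})
    # Confirm the object actually landed before declaring victory.
    s3.head_object(Bucket=BUCKET, Key=key)
    print(f"Shipped {latest.name} to s3://{BUCKET}/{key}")

if __name__ == "__main__":
    ship_latest_config()

Schedule it (Task Scheduler or cron) so no human has to remember to do it.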


Monitor


Yes, Veeam will send emails to you when a job succeeds, fails, burps, has a baby or a bar mitzvah, etc., but you, as a general rule, won't read them. So use something else to monitor your entire backup infrastructure, for instance Veeam One, or whatever takes your fancy. Here is OP5 (a Nagios derivative) that checks all kinds of jobs:


Image:Ransomware Prevention Part 8 - Backup and Recovery

Protect your backup servers as if your naked pictures were on them


It should go without saying that even non-domain joined servers are still vulnerable. So protect them like nothing else in your data center. They should only allow the bare minimum of inbound connections, and should have firewall rules to prevent anything except management tools getting in. They should not be pingable, discoverable or any other such thing from anything other than a tiny handful of other devices. A completely separate subnet would be advisable too.
Maybe even a hardware firewall between it and everything else. No amount of security around this is too much. Go big or go home.

Additionally, mandate MFA on the OS login (Duo, Okta, etc.) to prevent compromised account access. In short, harden this server as you have no other.

Use dedicated logon accounts per backup technician (it's not AD joined, remember?) with unique passwords not used anywhere else.


Conclusion


While I sincerely hope that you, dear reader, don't ever have to recover from a ransomware incident, the odds are not in your favor. This post (and the 7 posts before it) can hopefully help make that no good very bad day just a day or two of downtime and a story to tell at conferences.
Darren Duke   |   July 15 2021 06:15:00 AM   |    ransomware  security    |    [0]

Part 7 - Email security

See here for the entire series of posts, if you are just stumbling onto these posts.


As I said in part one, these posts are meant to give you meaningful, useful advice to prevent ransomware.


Most malware enters via email. A March 2020 report from CSO Online reports that email is the vector for 94% of malware attacks. That same report notes that phishing is involved in 60% of attacks. To say email is the front door for most attacks is a pretty apt metaphor.

Email is the ingress point for 94% of malware attacks.


Stopping the multitude of malicious emails before they are ever delivered to your users can prevent a whole lot of attacks. Since the dawn of enterprise SMTP email, this has been the great struggle between good and evil. And still it rages on. I'd be shocked if most organizations of any size were not using some type of email spam filter. If you are not, look no further than SpamHero. It's relatively inexpensive and while lacking the sophisticated tooling of some of the products below, it is orders of magnitude better than nothing at all.


So what are your options? A lot of this is available from most tier 1 vendors (Barracuda, Proofpoint, Cisco, etc.) but YMMV and there may be extra licensing costs to add a specific feature.


GeoIP/Regional Blocking


This used to be simple, but the advent of Office 365 and the various acts of government (i.e. the Patriot Act) make it more complicated and a game of whack-a-mole. For example, a US-based subsidiary of a Japanese corporation may use Office 365 that exits from Japan. Some Microsoft Office 365 status emails now originate from Singapore. See, whack-a-mole.


Of course, use GeoIP or regional blocking to filter out the obvious contenders (Russia, Iran, etc.), but you really want to limit the regions you accept mail from as much as possible.


Advanced Threat Protection (ATP)


If there is one add-on that most do not have, but all should, it is advanced threat protection (ATP). This (usually optional) add-on will take attachments embedded in an email and execute them in a cloud sandbox. ATP is a bit like a Number 7 bus: none come along for a long time, then all of a sudden several (hundred) turn up at once.


Here's an example from Barracuda Cloud ESS ATP. They also provide a report, although to date I have yet to see any false positives:

Image:Ransomware Prevention Part 7 - Email Security

Active Content Disarming


Not a common feature (sadly), but this essentially neuters all links within the attachment. So if an entire PDF page is a hidden link that tests whether you are using a vulnerable version of Acrobat (hint, you are.....every version of Acrobat is a vulnerability) then this link is removed as it's active content. Thus a user can no longer accidentally click on the link. To date the only product I have seen that can do this is LibraESVA.


URL Protection/SafeLinks


Rewrites URLs in emails so they can be scanned for malicious intent when clicked by the user. Somewhat ironically this makes spotting a bad URL with the mark-1 human eyeball an impossible task (and negates some of the cybersecurity awareness training your users are doing). I actually really, really dislike Barracuda's implementation and really, really like LibraESVA's, as it shows you the actual scan happening. Barracuda, not so much.


Can be used in conjunction with KnowBe4 Second Chance (if you have it) which will unwind the real URL and show it to the user for confirmation.


Reverse DNS


Come on, people. Just block anything that doesn't have a reverse DNS pointer. You should have been doing this since 1999.
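
If you want to see what your filter is checking, the lookup is trivial (Python standard library, no extra packages):

import socket

def ptr_record(ip_address):
    try:
        hostname, _aliases, _addrs = socket.gethostbyaddr(ip_address)
        return hostname
    except socket.herror:
        return None

for ip in ("8.8.8.8", "192.0.2.1"):   # the second is documentation space, no PTR
    name = ptr_record(ip)
    print(f"{ip}: {name if name else 'no PTR record - block or quarantine'}")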


Sender Policy Framework (SPF)


Now we come to the trifecta of semi-related options. We'll start with SPF. It tells the receiving server whether the sending server is authorized to send on behalf of the sender's domain. It does this via DNS. I'll make this easy on you: block anything with a hard SPF fail and quarantine anything with a soft SPF fail. Also, you should have SPF set up in your DNS for your outbound email to let others know. As with all things email security, pass it forward.


If you use them, don't forget to add Salesforce, MailChimp, Constant Contact, et al. to your outbound SPF record as allowed senders, per their applicable documentation.
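
Checking what the world sees when it looks up your SPF record is easy to script. A small sketch assuming the dnspython package:

import dns.resolver   # pip install dnspython

def get_spf(domain):
    try:
        answers = dns.resolver.resolve(domain, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        txt = b"".join(rdata.strings).decode()
        if txt.lower().startswith("v=spf1"):
            return txt
    return None

spf = get_spf("example.com")
print(spf or "No SPF record published - fix your outbound DNS")
# The record should end with an enforcement qualifier:
#   ~all = soft fail (quarantine),  -all = hard fail (reject)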

DomainKeys Identified Mail (DKIM)


Now it's getting tricky. Where SPF tells you if a server is allowed to send, DKIM takes it a step further and ensures (via PKI and DNS) that the received email has not been tampered with during transmission and that the sending server is authorized to send on behalf of that domain. In a nutshell it adds cryptographic authentication to email (a bit like SSL certificate chains in a web browser: I am who I claim to be).

When done correctly, DKIM can certify that an email is either legitimate or illegitimate. In a perfect world you'd simply discard any illegitimate email. Alas poor reader, a perfect world this is not.....


There is a lot of DKIM out there. A lot of it is configured incorrectly. Which is sad, as this could really clean up the world of email. It could literally prevent phishing attacks overnight if everyone enabled it (correctly). You could block or quarantine any messages that fail, but a LOT will fail, mainly because of misconfiguration on the sender's side. It's worth noting that DKIM won't stop malicious email from legitimately signed DKIM servers (SendGrid, anyone?).

Again, add DKIM to your outbound flow to pass it forward. The same warnings about 3rd-party senders for SPF also apply here, so follow their documentation.
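
If you want to see what a DKIM "pass" actually involves, here is a sketch that verifies a saved raw message, assuming the dkimpy package and a hypothetical captured .eml file. Your mail filter does this on every inbound message; a pass speaks to integrity and domain alignment, not to the sender's intentions:

import dkim   # pip install dkimpy

with open("inbound_message.eml", "rb") as fh:   # hypothetical captured message
    raw = fh.read()

if dkim.verify(raw):
    print("DKIM pass - signed headers and body are intact, signing key found in sender DNS")
else:
    print("DKIM fail - tampered, mis-signed, or (very commonly) a misconfigured sender")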

DMARC


DMARC is the odd one out of the three in that it really is an extension of SPF and/or DKIM. Like the other two it is also a DNS record. It tells the recipient how to check SPF, DKIM and the from address in an email. More importantly, it tells the receiving server what to do with failures. DMARC also adds reporting to the mix. You can get reports that *can* indicate someone is spoofing your domain. DMARC reporting is pretty complex and you'd usually have a 3rd party do this and collate the results.
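
Checking a DMARC policy is another simple DNS lookup, this time at _dmarc.<domain>. Again a sketch assuming dnspython:

import dns.resolver   # pip install dnspython

def get_dmarc(domain):
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        txt = b"".join(rdata.strings).decode()
        if txt.lower().startswith("v=dmarc1"):
            # Turn "v=DMARC1; p=reject; rua=mailto:dmarc@example.com" into a dict.
            return dict(tag.strip().split("=", 1) for tag in txt.split(";") if "=" in tag)
    return None

policy = get_dmarc("example.com")
if policy:
    print(f"policy: {policy.get('p')}, aggregate reports to: {policy.get('rua', 'nowhere')}")
else:
    print("No DMARC record - publish one, even if it starts life as p=none")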


Using SPF, DKIM and DMARC correctly really does have the potential to stop most malicious and unwanted email, but alas the world is full of people who don't know what they are doing, or worse, do an end run around IT and have a 3rd party send email on your behalf which then never gets delivered.


Conclusion


Email is still how the majority of attackers get into your networks. This is your Maginot Line from a security perspective and you need to have as many bells and whistles enabled as possible. Add this to cybersecurity awareness training of your users and you can stop 99.8% of attacks at the gates.
Darren Duke   |   July 9 2021 10:07:00 AM   |    ransomware  security    |   Comments [0]

Part 6 - GPO tricks and tips

See here for the entire series of posts, if you are just stumbling onto these posts.


As I said in part one, these posts are meant to give you meaningful, useful advice to prevent ransomware.


If you only read one post in this series, this should be it. Seriously. And read it all a few times before you start editing the default domain policy!


Most of this series is dedicated to stopping any potential ransomware from getting to the install or execution point. But what happens if all your many Darren-approved, onion-skin layers of security fail and the nasty does get through and it does execute? In this worst-case scenario, GPOs or Local Security Policies (if you are not AD joined) are your friends. I have implemented the techniques in this post to prevent whack-a-mole recurrences of a Ryuk ransomware attack. These techniques are that powerful.


The basics - how ransomware works


A rather large caveat. Your users should not be local Windows admins on their machines. If they are you have a somewhat larger issue to fix. The fix being "stop doing that".


If your users are not local Windows admins then how does ransomware execute and install? Simple, it installs and/or runs in the user's local profile context. That handy C:\Users\<username> folder. The one that the likes of WebEx, Zoom, Teams, et al. all install and run from. Yeah, there.


Ransomware (usually) runs in the user profile folder.


So the same useful Windows feature that lets you work from home, do video calls and not wear anything below the waist is also the mechanism ransomware uses to install and execute. Ransomware is usually a series of different malware applications, each with a specific use case. There is some type of "dropper" that is what the unwitting user clicks on, downloads, allows an MS Office macro to run, or otherwise executes. Once the dropper is in place it will attempt to install one or several different programs from the internet to gain a foothold in your network. These "several different things" (which can happen over a series of days, weeks or months so that you don't see them for what they are) include:
  1. Reconnoiter - find what is on the network, what it can get to, find lateral move points and search for systems to compromise (meaning unpatched, known, exploitable vulnerabilities).
  2. Exfiltrate - take your data off-site so if you don't pay the ransom to unlock your files, they can still have leverage over you and threaten to release sensitive information.
  3. Encryption engine - the program that will download an encryption key from a command and control server (the file encryption is almost always AES, with the AES key protected by the attackers' public key, so to all intents and purposes uncrackable). It then begins to encrypt the items located in step 1. Encryption usually begins at the start of a weekend to give the ransomware enough time to do real damage, based on the hope that no one is looking at the servers on a weekend. Mondays can be very bad.
  4. Profit.

This is about as simple as it gets. Find your stuff. Steal your stuff. Encrypt your stuff. Profit.


A few years ago step 2 was relatively uncommon. Not anymore, as it appears to be pretty good leverage for getting you to pay. Not necessarily to decrypt your data, but to get the hackers to promise they will delete this exfiltrated, sensitive data and not publicly release it. A promise. What could possibly go wrong.

In steps 1 and 2 the hackers are almost always looking for server-based file shares or access into server operating systems (think SQL Server, Exchange, etc.) these days. The idea being that the more users they can affect with one attack, the more likely you are to be willing to pay. If they encrypt just your files you are unlikely to pay. If they encrypt critical, run-your-business files that 10, 100, or 1000 users require to work, then the pain increases by many orders of magnitude.


Pro-tip: don't pay. Follow this series (this post and the upcoming backup one especially) and you won't have to. I really need to do a "what if you pay" post at some point so you realize paying for decryption isn't all they promise it will be.


OK, so now we know where and how this stuff works. How do you stop it if none of the other posts in this series saved you? You prevent it from running.


Prevent it from running in the first place


You prevent it from running by whitelisting. Now, just the term whitelisting sends IT professionals off into the woods to remove their clothing and revert to their prehistoric selves, never to be seen again. But hear me out before you quit, strip off and go full-on paleo in the wilderness.... So long as your users are not local admins and have no rights to install software into Program Files, etc., then all you need to do is whitelist the applications that you wish to specifically allow to run inside the aforementioned appdata context. This is a much, much smaller nut to crack. Why? Because next to nothing *should* be running from the user profile or appdata folder (I say *should* because there are usually way more than you would expect).


Inside your Active Directory Group Policy Objects (GPOs) and the local security policy is a handy little thing called Software Restriction Policies (SRP). SRPs can be set to not allow anything to run in a specific folder on a Windows device. Additionally, an SRP can be expanded to allow only what you want to run:


Image:Ransomware Prevention Part 6 - GPO tricks and tips

SRPs - block everything, except what I specifically allow.


With an SRP you can easily block exe, PowerShell, zip, 7z, rar, etc. files from running in a user's appdata context (this is also where the user's temp folder is located, which is another execution hotbed).

Below is an actual SRP. Notice the security level column? Disallowed means you're not running. Using a disallow with a path rule and Windows environment variables, you can simply and effectively block all exes for all users' appdata contexts. Conversely, a security level of unrestricted will allow anything that matches to execute. In this example anything signed with the uploaded Adobe Inc signing certificate will be allowed to run, as is AMD, Barracuda, etc.:


Image:Ransomware Prevention Part 6 - GPO tricks and tips

SRP rules can be defined in four different ways:
  1. Path - specify an allowed file or folder path (i.e. %appdata%\Temp\Teams\*). This is the most insecure type as *anything* in that folder will be allowed to execute, and hackers know many common folders (a lot of malware adds folders called Google Chrome or Chrome to these paths). It is also the easiest exception to add. Try your hardest not to use this type of exception for allows. It is, however, very good for disallow rules; blocking execution in those paths is, after all, what you are trying to do.
  2. Hash - the file hash of a selected exe. This is also pretty easy to allow, but *ANY* change to the file (so an upgrade to a new Zoom version that replaces zoom.exe) will prevent it from running as those file hashes no longer match. Use this for vendors who refuse to use signing certificates (also find a new vendor).
  3. Network zone - I'm going to skip this as it's of little use when trying to protect a local machine, and using this could seriously increase your risk to lateral movement of malware in the network.
  4. Certificate based - the most difficult to do as you need to extract and upload the digital signing certificate from an exe to the SRP (and sometimes more than one). It is also not enabled by default. It is, however, the most secure (only exes signed with said digital certificate can run) and it bypasses the issues with hashes, as upgraded versions of programs (like zoom.exe) are likely to be signed with the same signing certificate. Certificates do expire or get revoked, so this is not quite fire and forget. Indeed, just in the last few weeks Bitdefender changed signing certs so these had to be updated.
Right about now you should be thinking that none of the above would have stopped the recent SolarWinds hack, and you'd be correct. If you had an SRP and you had added the SolarWinds digital signing certificate you would still have been compromised. This goes to show you can't fix everything. Sometimes breaches are due to a vendor's woeful security practices, where a hacker can insert code into the code stream prior to building and signing the application.

The problems with SRPs


Well, quite simply, they stop stuff working by design. When you enable them, programs that previously worked could just stop. This means you need to build out your exception list as fully as possible before enabling the policy. Scour your users' appdata folders for exes and you will find (and be able to extract and upload signing certificates from) the likes of Adobe, Teams, WebEx, Zoom, GoToMeeting, BlueJeans and all kinds of other web conferencing tools you never heard of. All of these most likely need to be added. Note, most of the web conferencing tools also have a "machine-wide" installer that forgoes the need for each and every user to download and install these tools. As these machine-wide installers use the Program Files folders they don't fall foul of SRPs (when you create an SRP the GPO auto-adds exceptions for this file path). Start with a small set of users and work out from there.
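
The scouring itself is easy to script. Here is a small sketch that inventories every exe under users' AppData folders along with a SHA-256 hash, so you know what you'll break before you flip the policy to enforcing (paths are the Windows defaults; run it elevated so you can read other profiles):

import hashlib
import pathlib

USERS_ROOT = pathlib.Path(r"C:\Users")

def sha256(path, chunk=1024 * 1024):
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        while block := fh.read(chunk):
            h.update(block)
    return h.hexdigest()

for profile in USERS_ROOT.iterdir():
    appdata = profile / "AppData"
    if not appdata.is_dir():
        continue
    for exe in appdata.rglob("*.exe"):
        try:
            print(f"{exe}\t{sha256(exe)}")
        except (PermissionError, OSError):
            print(f"{exe}\tUNREADABLE")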

The second issue is finding out what was blocked and why. When a block occurs the user is shown this not very useful error:


Image:Ransomware Prevention Part 6 - GPO tricks and tips

It doesn't tell you what was blocked or why. For that you have to look at the local machine's event log. If a cunning user or hacker copies an exe to their user profile folder and executes it, not only will they see the message above, but something along the lines of this will be written to the event log:


Image:Ransomware Prevention Part 6 - GPO tricks and tips

Obviously managing this for even a small number of PCs can be time-consuming when you first enable these policies, so if you have some type of central logging system you can better report on the things that are happening and/or need to be added as exceptions. Here is a SIEM (EventLog Analyzer) that shows a blocked 7z execution:


Image:Ransomware Prevention Part 6 - GPO tricks and tips

With a SIEM (or any other reporting solution that extracts local event logs) it becomes much easier to proactively manage SRPs. For instance, you can send a report to your security team listing yesterday's blocks. They can then investigate.
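
If you don't have a SIEM yet, even a rough script can pull the block events off a machine. A sketch assuming the pywin32 package; the source name and event IDs (865-868) are what I've seen SRP blocks logged under in the Application log, so verify against your own event log before relying on them:

import win32evtlog   # pip install pywin32

SRP_EVENT_IDS = {865, 866, 867, 868}   # observed SRP block events; confirm on your systems

def srp_blocks(server="localhost", max_records=500):
    handle = win32evtlog.OpenEventLog(server, "Application")
    flags = win32evtlog.EVENTLOG_BACKWARDS_READ | win32evtlog.EVENTLOG_SEQUENTIAL_READ
    seen = 0
    try:
        while seen < max_records:
            records = win32evtlog.ReadEventLog(handle, flags, 0)
            if not records:
                break
            for rec in records:
                seen += 1
                if (rec.EventID & 0xFFFF) in SRP_EVENT_IDS and \
                        "SoftwareRestrictionPolicies" in str(rec.SourceName):
                    blocked = rec.StringInserts[0] if rec.StringInserts else ""
                    yield rec.TimeGenerated, blocked
    finally:
        win32evtlog.CloseEventLog(handle)

for when, what in srp_blocks():
    print(f"{when}  blocked: {what}")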


Scheduled Tasks


Another common ransomware attack avenue is to install innocuous-looking scheduled tasks that will attempt to reinfect or re-detonate the malware tools on reboot or on a scheduled basis. There is little use in regular, non-admin users being able to create a Windows OS level scheduled task, so simply preventing these users from creating them is a simple and effective way to head off this line of attack. This is available in the computer and user policies under Administrative Templates, Windows Components, Task Scheduler. Simply prohibit new task creation:


Image:Ransomware Prevention Part 6 - GPO tricks and tips

Conclusion


While one can never guarantee an attack will be prevented (SolarWinds anyone?), whitelisting is about as close to a guarantee as you can get. Add it to the onion-skin of protection you build around your devices and (touch wood) you will never have to contemplate paying a ransom or restoring from backups. It is also worth noting that Microsoft has several alternatives to SRP, AppLocker being the most obvious other choice. Either is fine, I just do a lot more SRP than anything else.
Darren Duke   |   June 16 2021 05:50:00 AM   |    ransomware  security    |   Comments [1]

Part 5 - Cybersecurity awareness

See here for the entire series of posts, if you are just stumbling onto these posts.


As I said in part one, these posts are meant to give you meaningful, useful advice to prevent ransomware.


You can add all the security in the world, but at the end of the day it is your end-users who either click and download malware or give their credentials to a phishing site. It is our job to help them by providing education and/or changing their behavior. This makes cybersecurity a team sport. It's a shared responsibility. It was never just a function of IT, although many tried (and still do) to make it that way. The more players on your side the better the outcome you will have. You can quite easily increase the size of your team by utilizing cybersecurity awareness.


Cybersecurity awareness is quite possibly the only interaction actual users get from which they could glean a snippet of knowledge that could mean the difference between a ransomware attack and just another deleted email. Awareness training is becoming more commonplace now (thanks to audits and insurance questionnaires), whereas as little as three years ago it was next to nonexistent. As I have mentioned elsewhere in this series, no one solution can or will stop every nasty that tries to get through. Your users could be your last line of defence, and their decision to click on a link or not could be the inflection point that makes the difference. That being said, a recent report from Tessian (The Psychology of Human Error) is indicative of the risks posed by employees and the hackers' ability to bypass even the most stringent of email security measures.

9 out of 10 breaches are caused by end user mistakes.


I'll rephrase the above for you.....90% of breaches are caused by user error. 90%. 10% shy of 100%.


Indeed the report makes for dire reading, with 25% of respondents admitting to clicking on a phishing email, and with the younger (under 40), and especially males, being much more susceptible than any other group. Probably the most eye-popping statistic in the report is this:


11% never think about cybersecurity at work and a further 22% rarely.


Given that a combined one third of your workforce never or rarely gives cybersecurity a second thought, something needs to change. What needs to change is how your user population understands the risks that, for whatever reason, make it past the vast layers of security organizations have. Indeed, employees are often called the weakest link, yet they are often the last line of defence in this on-going battle to prevent the cyber criminals from gaining a foothold in your network. It would appear enterprise IT is doing a woeful job at communication and training. That cybersecurity is a shared responsibility needs to be shouted from the hills, and shouted often.

To make matters worse, a report from KnowBe4 (Security Culture Report 2021) states that:

An astounding 57% of employees believe they would recognize if their device got hacked.


The above statement is an absurd notion (it's at least an order of magnitude too high, if not two), but to make matters worse only 20% of respondents reported needing more training. Essentially, if the aforementioned results hold true, is it any surprise that organization after organization falls foul of the cyber criminals?


So how do we overcome this apparent gap between what employees believe they know and what they actually know? Cybersecurity awareness training. Spoiler alert, you simply can't do this alone. You need assistance from one of the vendors in this space to help close the gap. Don't get me wrong, cybersecurity awareness training is no panacea, it is however a good starting point, and just moving the knowledge needle 5% is still moving it. So while organizations are embracing it, I see massive room for improvement.


Episode IV - A new hope


You may already have a program in place, but even if you do, how effective is it if your employees only see it once per year? Not very. So the first step to overcoming these hurdles is to define what you are doing. A once-annual 5 minute video is not going to cut it. I know KnowBe4 pretty well so that is what I will cover here, but most providers, such as Barracuda PhishLine, also provide some of these features. So here's a series of suggestions to use when adding to, replacing, or creating a cybersecurity awareness program:
  • Make sure everyone understands cybersecurity is a team sport. Users can't do it without help from you, and you can't do it without help from the users.
  • Start with education in mind, never blame. If a user thinks that they may have done something to compromise security you want them to notify you as soon as possible. Using blame is a sure-fire way to ensure you will never be notified, and this could be the difference between a successful defence and a successful attack.
  • Don't start with a phishing attack simulation. That just leads to huge amounts of animosity. Again, start with education in mind.
  • For new hires, you have to baseline them. You have no earthly idea what they do or do not know. Start every new hire with at least a 45 minute online class. If possible have this tied into your AD new user creation process and on-boarding process. KnowBe4 can do this, simply add a user to a specific AD group and they get added to the correct new employee training on KnowBe4. If you are just starting a program, I strongly suggest *every* employee do a baseline 45 minute class.
  • For every existing employee, implement a 15 minute refresher every 6 months. If each time we run the 15 minute class we gain an additional 15% of employee knowledge, that's at least a starting point. Build, build, build. Repeat, repeat, repeat. A year between training is simply too long a gap. Cybersecurity is a shared responsibility, and this is the employee's share.
  • Once you've done a 6 month cycle or two, you can do a simulated phishing attack. Again, no blame, no publicizing the results (yes, I've seen this, yes it's really, really bad).
  • Remember, it's no longer just phishing. Your education program needs to include vishing, smishing and all the other cool names for being attacked.
  • Ensure your employee policies and handbooks cover what to do in the event they suspect they have been compromised, and that these are easy to locate. Time is of the essence when a possible compromise is happening. And make sure these policies align with what you are trying to achieve.

With your first simulated phishing campaign (hint, never offer free money in your campaigns, it could make you famous for all the wrong reasons) you should now have a series of hard facts that you can work on:
  • How many users opened the email?
  • How many users clicked the link?
  • How many users reported that they think this is bad/a test/you're all trying to trick them?
  • How many users entered credentials?
  • What your score is relative to others in your industry.

With this in hand you can now target remediation (do some users need to retake the 45 minute course? Do I need to add extra content?) or add in other tools to assist the users. Tools? Yes tools.


A lot of organizations have filters in between the users and their email, happily rewriting links in email so as to be as confusing to a human as possible, but hopefully preventing the user from navigating to a malicious web site. Indeed one of the most common ways to spot a phishing email is to look at the target URL. Our additional layers of security have just negated some of the video training your users will do. Fantastic!!!  The good news is that there are tools starting to percolate out that help decipher these seemingly incomprehensible URLs. KnowBe4 has an add-in named Second Chance for certain desktop email clients that will show the user the actual link they are clicking on. It turns this gibberish behind an email button:

Image:Ransomware Prevention Part 5 - Cybersecurity awareness
Into this warning that decodes the link:


Image:Ransomware Prevention Part 5 - Cybersecurity awareness


Now if someone could make this a universal plugin that also works with web based email, we'd have a winner. Still, it's a start and if you have KnowBe4 there's a good chance you don't know about Second Chance.


Another tool to empower users is VirusTotal. There are plugins for most browsers that will allow users to self-check worrisome URLs and/or files. IT may not always be available or accessible; the internet, however, is. Finally, telling users about HaveIBeenPwned and then seeing them use it is quite the sight to behold.

Password reuse


Beyond end-user training is end-user education. What they don't know because you didn't tell them can, and often will, hurt you. As I mentioned earlier, the online video how-to's are no panacea. Some don't even touch on password hygiene or reuse. For some truly shocking (not shocking) statistics on passwords, look no further than the Comparitech Password Statistics page. Some highlights (or more correctly low lights):

Google found that:
  • 52% of users reuse a password some of the time.
  • 13% use the *same* password for *all accounts*
  • Only 35% use a different password for all accounts.

Also present on this page is maybe the most disheartening statistic (again, surprised, not surprised):


IT professionals reuse passwords more than average users (50% vs 39%).


The IT professional's unerring belief that they are superhuman and immune from the perils that only mere mortals fall for strikes again. How the use of enterprise password managers such as ManageEngine's Password Manager Pro or Keeper Enterprise is not mandated in every IT department on the planet is beyond me. I'm often stunned by an organization's desire to keep passwords less than or equal to 8 characters (the Windows GPO default). Simply making them longer and requiring a special character can do wonders for password security. An oldie but goodie is this LifeHacker article on passwords. I'll sum it up with this table, which outlines the estimated time to brute force a password when you add an upper-case and a special character vs lower-case only:

Image:Ransomware Prevention Part 5 - Cybersecurity awareness
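
If you want to reproduce the gist of that table, the math is just keyspace divided by guess rate. The 10 billion guesses per second figure below is an illustrative assumption for an offline GPU attack; the point is how the exponent scales, not the exact rate:

GUESSES_PER_SECOND = 10_000_000_000   # assumed offline GPU attack rate

def years_to_exhaust(charset_size, length):
    return charset_size ** length / GUESSES_PER_SECOND / (3600 * 24 * 365)

for length in (8, 10, 12):
    lower = years_to_exhaust(26, length)               # lower-case only
    full = years_to_exhaust(26 + 26 + 10 + 32, length) # upper, lower, digits, specials
    print(f"{length} chars: lower-case only {lower:,.6f} years, full set {full:,.2f} years")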

Yeah, as an IT professional you'll want at least 12 characters for your own passwords, and at least 10 for your end-users. So how does one overcome the perils of password reuse, woeful complexity and overall crappy password hygiene? Multi-factor authentication or MFA. Or 2FA.


MFA is incredibly effective at preventing credential theft. A 2019 Microsoft study has it as high as 99.9% effective. Given that success rate you would expect almost every organization to have implemented it, right? Wrong. While I admit it can be complex and relatively expensive (much less so than being ransomwared FWIW), just over half of organizations (57%) had implemented MFA as of 2019. In fact a 2021 report from the FIDO Alliance indicates that 91% of MFA projects are to prevent credential theft.

MFA is reported to be as high as 99.9% effective in reducing credential theft.


So *where* do you do MFA? Well, everywhere, or not. The possible exception is when you are in a trusted location (read on-network, on-LAN). There is little use having MFA enabled in your corporate LAN when accessing Office 365 if you already have 12-character strong passwords and SSO is enabled. All you do is piss your users off with little effect on your overall security posture. However, when accessing *anything* from outside the LAN you'd want MFA. MFA to VPN. MFA for Office 365. MFA for Azure App Proxy. If you're coming from the outside to the inside (and even if inside is an externally hosted cloud service) you need to require MFA.


Now there are some select users who should be forced to use MFA even when inside the corporate LAN. You. The IT admin. The Domain Admin. The people with the keys to the kingdom. At every logon. At every screen lock. Every time. And your critical servers too. DMZ servers. Proxy servers. Domain controllers. Every. Single. Time. How you'd do this is a little complex now that Microsoft has foisted Windows Hello on the world (don't use Windows Hello), but it would probably involve Cisco Duo, Okta or the like. Why?


IT professionals reuse passwords more than average users (50% vs 39%).


Because you are part of the problem. Now you can be part of the solution.

I often hear MFA is expensive and difficult (I'll give you the latter point), but every Office 365 license has the ability to do MFA. Every license. Now, you'll need something like Azure AD P1 or P2 (or Duo, or Okta, or any of the other providers of enterprise SSO) to get some of the more useful features such as trusted locations (not requiring MFA for Office 365 on the LAN), but it does have it and you can implement it. And you should, because a 2019 article from TechRepublic citing a report from Cyren and Osterman Research states that a staggering 40% of enterprises experienced Office 365 credential theft. And if those stolen credentials happen to be the ones you use for AD (because SSO and DirSync) then a user's AD credentials have just been compromised. And if said user is a domain admin level user....yeah, now you can see how these attacks you read about happen. MFA FTW!

40% of enterprises have experienced Office365 credential theft.


Conclusion


Your end-user population can be the difference between a ransomware meltdown and a non-event. Engage them, train them, educate them. After all, cybersecurity is a team sport. Build a program, create an internal blog. Because even an incremental increase in knowledge is an increase. And you need all the help you can get.


Finally, roll out MFA. Yes, it's difficult. Yes, it can be somewhat costly. But the results in decreasing credential theft are simply astounding. Oh, and change your password policies to at least 10 characters with a requirement for a special character.
Darren Duke   |   June 3 2021 05:22:13 AM   |    ransomware  security    |   Comments [0]

Part 4 - Endpoint Protection

See here for the entire series of posts, if you are just stumbling onto these posts.


As I said in part one, these posts are meant to give you meaningful, useful advice to prevent ransomware.


Your last line of technical defence is often your most ignored.


Antivirus, or more correctly endpoint protection these days, is the one item that organizations rarely change out. And when they do it's often because they just got hit and their current endpoint protection solution did not, in fact, offer any protection to any of their endpoints. Or even worse, the solution didn't report it until the point of no recovery. These organizations could have saved themselves a whole lot of hurt if only they had a policy in place to evaluate these solutions every two to three years. You then have to be willing and able to switch to whatever solution best fits your needs. If you've been running the same solution for over 4 years and have not looked at the competition you are doing it wrong.


Let's get this out of the way right now: no one solution is guaranteed to stop everything. However, the earlier the detection the better off you will be. The solution you choose can be the difference between a report of a stopped threat and a 2-6 month hell of restoring backups (backups are covered in a later post), or being caught in the hellscape that awaits those who pay up. So how does one choose a solution that is your last line of technical defence, the last skin on the onion? Especially now that there are EDR, XDR and many other acronyms flying about? As with most other choices in life, you use data.


There are independent test sites out there that will take a vendor's solution out for a range test. The vendor sometimes suggests settings these tests should use, or not. It just depends. Now, some vendors don't want to play in the test range scenarios, so if you are looking at one of those you will have to look elsewhere for your data. My personal go-to sites for these tests are AV-Comparatives and AV-Test (you want the business/enterprise tests, not the consumer.....different game). For those who want really, really detailed reporting (although fewer overall solutions are reviewed) look no further than MRG Effitas and their 40 page reports.

Long before you hit the independent sites you will already want your list of criteria (and stopping everything with zero overhead and no false positives is not a criterion, that's called a dream). Something along the lines of this:
  • Easy to use, not a lot of professional services required.
  • Low system overhead, can't cause significant slowdowns of systems.
  • High long-term score on independent test sites.
  • Reasonably priced.
  • Prefer cloud to on-prem for management.

Now, the above may not be your list (price may be of no concern, for example), but write down and rank your objectives for your replacement solution. Now for a little secret: this is pretty much my list, so I'm going to go over each point one at a time:


Easy to Use

If it's not easy to use, it's not easy to secure. By that I mean, for the most part, you want the configuration and management simple enough to do in-house (unless price is no issue). I have the same rule for firewalls: if it's too complex for on-premises folks to understand, can you really guarantee your security?


Low system overhead

The one that no one thinks about until you've already deployed it, and the thing that causes you to disable features. Now, no one wants a slow protection solution, but many get one.


High long-term score on independent test sites

Or whatever site you trust the best. Each vendor can have a good month or quarter. Even a stopped watch is correct twice a day. What you are after is a long-term trend of excellent scores. When I say independent test site I do not mean a magic quadrant or some other somewhat meaningless mechanism that offers no real-world efficacy results.

Reasonably priced

Usually in relation to the vendor you are replacing. Sometimes not. Most vendors have competitive SKUs that offer significant discounts when moving to their products. If money is absolutely no object I'll save you a lot of reading, go look at CrowdStrike.


Prefer cloud to on-prem management

When you get ransomwared you could also lose domain controllers and the very security management servers that manage your endpoint protection solution. If your solution relies solely on an on-prem management server and it gets nuked, now what do you do? You may even have that management server using AD SSO. So now you need a DC restored just to be able to manage your endpoints. As you can imagine, in the heat of a recovery operation (meaning: can you recover or are you likely to have to pay?) the less you have to have online or restore in the heat of the moment the better off you are. If you had cloud management, this type of hellish scenario is moot. Another reason to embrace cloud for this is AI and the sheer number of samples submitted. This significantly reduces the time to updated definitions.

With the above in mind, off I trot to my trusted independent site. I'll use AV-Comparatives for this as their charts are easier to read.

Types of tests

These sites don't only measure efficacy; some also measure performance (remember, you do not want your new shiny all singing, all dancing solution to be a boat anchor). This saves quite the step when attempting to do a benchmarking bake-off. AV-Test has some even better breakdowns of the performance:


Image:Ransomware Prevention Part 4 - Endpoint Protection aka Antivirus

The performance chart is quite eye-popping for no other reason than Fortinet. The other one I see a lot, and hence hear a lot of grumbling about from a performance standpoint, is Sophos. Anything >7 is doing a whole lot of stuff:


Image:Ransomware Prevention Part 4 - Endpoint Protection aka Antivirus

From a real-world protection view things appear somewhat close. This is mainly due to the fact that any vendor not hitting >95% has little reason to submit to this kind of test. This does not mean that just because a vendor is not on here (Webroot and SentinelOne are two phenomenal solutions that are not here) you should immediately start a project to change them out. Not at all, but at least go find out how effective your current solution is in relation to other solutions. Then act appropriately.

So to get a better view of the contenders you will need to do some tweaking to make the charts easier to read. Specifically, change these settings:


Image:Ransomware Prevention Part 4 - Endpoint Protection aka Antivirus

With those adjusted it now shows a much clearer indication of the efficacy:


Image:Ransomware Prevention Part 4 - Endpoint Protection aka Antivirus

OK, now we're getting somewhere. Some points from the above chart, albeit a single point in time:
  • Microsoft usually does well. But could I sleep well at night having all my eggs in *that* basket?
  • Kaspersky is usually a lot higher. Proving that even one of the best efficacy solutions can have a bad month/quarter.
  • Be aware that a vendor may appear in the efficacy but not the performance charts (Malwarebytes). Test performance of that solution accordingly.
  •  Webroot and Sentinel One are absent.
  • VMware is Carbon Black.
  • Kaspersky cannot be used in US government agencies. If this is you, disregard this vendor. Kaspersky is like Microsoft, if you can sleep well at night using it, have at it.
  • Neither McAfee or Symantec are anywhere to be seen. I'll leave you to jump to your own conclusion about these absences.

The above is just a snapshot; by changing the month/year the results can swing wildly (CrowdStrike and FireEye anyone?):


Image:Ransomware Prevention Part 4 - Endpoint Protection aka Antivirus

Each and every reporting period will be slightly different (remember, one solution won't stop everything immediately) but patterns do emerge. While some seesaw wildly, some are always in the top 50% and others are always in the bottom 50%. Maybe now I have my top three or four contenders, so it may be time to see what the other sites say. You're on your own here, you now know what to do.


Features


There are now a plethora of features. Some solutions offer patching (usually as an add-on, and patching is not vulnerability scanning, right?). Most will now do EDR, and some will do ransomware protection (YMMV), common misconfigurations, and process recording with their higher-end versions. Some can even take screenshots when the device (read user) does something that triggers it. You want behavior analysis (sometimes called heuristics). As an example, if a Word document suddenly decides to send hundreds of emails, is that normal (hint, it's not normal)? Also be sure to RTFM every few months, especially if you are on a cloud managed solution. They are adding features all the time and not all are enabled by default.


As an example, here are Bitdefender GravityZone Ultra's process execution tracking (shown when something suspicious happens) and misconfigured systems screens:


Image:Ransomware Prevention Part 4 - Endpoint Protection aka Antivirus
Image:Ransomware Prevention Part 4 - Endpoint Protection aka Antivirus
Here is an example of all the modules currently available for a desktop/laptop when using Bitdefender GravityZone Ultra, which goes to show the sheer number of features some of these products now have:


Image:Ransomware Prevention Part 4 - Endpoint Protection aka Antivirus
Image:Ransomware Prevention Part 4 - Endpoint Protection aka Antivirus


What these solutions won't do


While I said ease of use and low overhead were desirable attributes, these solutions do not configure themselves. You need to slowly tighten down the protection settings to ensure you are getting the best possible protection. All security is a knife edge and endpoint protection is no different. A fully secure endpoint is one that is not connected to a power outlet, but its level of productivity is adversely affected. While it's not a zero-sum game, it's still something we have to be cognizant of. And even a relatively fast solution can be made slow by not paying attention. The number of places I see that turned on a new protection feature only to disable it again after user complaints is astronomical.

Turning a feature off is not the answer. Tuning the feature is.


An example of configuration..... Most solutions can do signing certificate exemptions (I'll also talk at length about signing certs in the GPO post). However, when whitelisting, most admins will simply enter a path. So when Microsoft Teams doesn't work with their new endpoint solution (enter your own joke here about Teams being a virus), entering a path is often the simplest way to make it work (with something like %userprofile%\appdata\Teams\*), but it also opens up a whole host of issues. It's not like the hackers don't know world + dog use Teams. It's not like the hackers don't know the path that Teams installs in (one that somewhat flouts Microsoft's own programming guidelines). But if you just whitelisted the entire Teams folder and a hacker drops Emotet or some other dropper in there, what happens now? Right, a no good, very bad day is awaiting you sometime in the future. So whitelist your endpoint security exceptions with signing certificates, not paths.
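
If you're wondering what a certificate-based exception keys off, PowerShell's Get-AuthenticodeSignature will show you the publisher certificate of any signed binary. A minimal sketch (the Teams path below is an assumption for a per-user install, adjust to wherever it actually lives in your environment):

# Inspect the signing certificate of the Teams binary so the exception can be
# built on the publisher certificate rather than a writable folder path.
# NOTE: the path is an assumption for a per-user Teams install - adjust as needed.
$exe = "$env:LOCALAPPDATA\Microsoft\Teams\current\Teams.exe"
$sig = Get-AuthenticodeSignature -FilePath $exe

if ($sig.Status -eq 'Valid') {
    # These are the values most endpoint consoles ask for in a certificate exception
    $sig.SignerCertificate | Select-Object Subject, Thumbprint, NotAfter
}
else {
    Write-Warning "Signature status is '$($sig.Status)' - do not whitelist this binary."
}

The subject/thumbprint is what goes into the console, and unlike a path, a dropper sitting in the same folder won't inherit the exception.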


Conclusion


While I can't assist you in the decision you are now mulling over after reading this (this is not a paying engagement, but feel free to contact Lisa if that floats your boat), you now at least know how to cut through the salesperson talk and find out what really are the best of the best in terms of endpoint protection.
Darren Duke   |   June 1 2021 03:15:00 AM   |    ransomware  security    |   Comments [0]

Part 3 - Patching

See here for the entire series of posts, if you are just stumbling onto these posts.


As I said in part one, these posts are supposed to be helpful in giving you meaningful, useful advice to prevent ransomware.


Recall from part two....


Vulnerability scanning and analysis is not the same as patch management.

There are a multitude of reasons for this so if you need a refresher go read part 2, vulnerability scanning.


You're probably doing it wrong, if you're doing it at all


OK, now we're all on the same page, so let's start off with a statement that is true in at least 80% of the organizations we get contracted to help prevent ransomware:


You are doing patch management badly. Almost everyone does patch management badly.

Now don't get all upset and storm off in a huff. That's how you get ransomwared, letting your emotions get the better of you. Let's start with a series of statements that I hear about patching and work from there:
  1. Patching is risky.
  2. There are lots of patches.
  3. Patching is time consuming.
  4. We do patch, pinky swear (meaning they use Windows Updates).
  5. Prioritizing what needs to be patched first is difficult.

The above is my list, but if you don't believe me, go get the excellent
ServiceNow "Cost and Consequences of Gaps in Vulnerability Response" report. It is free once you fill in the form. It's quite the read and does show that organizations can be overwhelmed by patching. On page 27 of the report (* - I'm using the average for the large and small organization numbers):

~60%* of data breaches occurred because a patch was available for a known vulnerability, but not applied.

On the very next page:


~46%* were unaware that the vulnerability existed before the breach.


Let's break that down:
  1. ~46%* did not know the vulnerability existed.
  2. ~40%* DID KNOW but had not patched the vulnerability.

Those are some pretty eye-popping numbers. Number 1 is handled by vulnerability scanning (aka part 2 of this series); you can't patch what you don't know about. Number 2 is a failure of patch management.


The solution - automate, patch, report


So let's tackle each of the above "reasons" one at a time.
  1. Yes, patching is risky. Not patching is far riskier. 60% of data breaches reported in the ServiceNow report were tied to a known patchable, unpatched vulnerability. We all know that every 18 months or so (not including Feature Updates here, organizations really need to look at Windows 10 LTSC) you are going to have a bad few weeks where Microsoft breaks printing, etc. While not optimal, it's at least manageable. Far more so than a data breach or a ransomware incident.
  2. Yes, there are lots of patches. Too many for a mere human to RTFM and address. That's why you need automation. To not automate is to fail. Failure leads to fear. Fear leads to the dark side (quite literally in the case of Colonial Pipeline).
  3. Patching is not time consuming if you are automating and reporting. You should not need a human to interactively patch 95%+ of your systems. Manage the exceptions.
  4. While WSUS is technically a patch management system, it's pretty awful. I'm on the fence as to whether SCCM (System Center Configuration Manager) is a significant upgrade. The world has moved on; it's no longer just Microsoft Windows, SQL Server and Office. Your patch management system also needs to be automated, heterogeneous and have an ever expanding ability to patch 3rd party applications and various Linux and Mac versions too. Have you ever installed VLC player on a system? If you have, does your patch management system patch it? If it doesn't, it should. Go read a few VLC security bulletins and you will soon find the dreaded words "arbitrary code execution". Yes, you could build a package in SCCM, but using a human to create a software package (MSI, EXE, etc.) and figure out all the switches in order to patch a system is not automation. No, it's not. Get over yourself.
  5. Prioritization is only difficult if you are doing it manually. Again, a technician scouring the web for security bulletins and RTFMing them is not the way to do this. The vendors already have a priority assigned when the patch is released. Or you could use the CVE/CVSS score. Or a vulnerability scanner value (like Tenable's Vulnerability Priority Rating, or VPR). Either way, the heavy lifting is done for you. Stop trying to do it better. You won't succeed. (As a taste of what vendor priorities look like, see the sketch just below this list.)
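
To see what vendor-assigned severity looks like without buying anything, the Windows Update Agent COM API will happily list missing patches with Microsoft's own rating attached. A rough, single-machine sketch for illustration only (a real patch management product does this across the whole estate):

# List missing Windows updates on this machine, highest Microsoft severity first.
# Illustration only - this is exactly the prioritization your patch tool already does.
$session  = New-Object -ComObject Microsoft.Update.Session
$searcher = $session.CreateUpdateSearcher()
$missing  = $searcher.Search("IsInstalled=0 and Type='Software'").Updates

# Map Microsoft's severity strings to a sort order (anything unrated sorts last)
$rank = @{ 'Critical' = 0; 'Important' = 1; 'Moderate' = 2; 'Low' = 3 }

$missing |
    Sort-Object { if ($rank.ContainsKey([string]$_.MsrcSeverity)) { $rank[[string]$_.MsrcSeverity] } else { 4 } } |
    Select-Object MsrcSeverity, Title |
    Format-Table -AutoSize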

All five of the reasons listed above for why organizations don't patch can mostly be addressed by applying an automate, patch and report strategy. You want to manage the exceptions, the failures, the one-offs.


Automate


Now, I'm fully cognizant that not every single system can be fully automated for patch management. You have systems of such import that taking them down every month may be unacceptable. If that's the case then manage the risks. Or find ways around it. Clustering is an excellent solution, where loads can be transferred to other systems while patching takes place.


You also don't need to patch everything at the same time. Do each Domain Controller on a different day. Do DMZ systems sooner than other less risky networks. Patch desktops overnight or on a weekend. Hopefully your solution can do off-network laptops. COVID has pretty much made that a requirement.


Here's an example of an automated patch schedule from ManageEngine's Desktop Central (MEDC, free for 25 devices or less) which can patch Windows, Mac and Linux OSes and a vast multitude of 3rd party applications:


Image:Ransomware Prevention Part 3 - Patch Management
While the above doesn't show the days (grrr!), many of these happen over a weekend or overnight and are staggered. You can also see the current status of the entire patch group. In the above there are two Linux servers that are currently missing patches. I can either do them manually (SSH into them one at a time), create a patch job in MEDC and immediately patch them, or just wait until the next patch window hits.


Additionally, you can automate the approval of patches. You're going to do them anyway, so why sit there and approve them? Some organizations require patches be tested on a subset of machines before being approved, and most good systems can do this. Orgs that do it this way and then get hit with ransomware pretty much immediately change to fully automated approval for all but a handful of systems.


Patch


Generally you want to patch often, patch quickly. Maybe for servers or other critical systems you wait a few days to see what Microsoft may have broken. It's essentially whatever allows you to sleep. Most desktops and laptops I would do immediately. The benefits far outweigh the risks.

As to what you patch, again, it's more gut than anything else. I have generally shied away from updating BIOS with patch management (yes, some can do BIOS and driver updates) as it can cause problems (changing the resolution on users' screens, triggering a BitLocker recovery, etc.). Anything that the patch system says is critical, high or moderate is going in the next window. These would be automatically approved with zero human interaction (automate!). Anything not categorized I'd maybe leave off, unless my vulnerability scanner says otherwise (see what I did there?) or whatever allows me to sleep. Here's an example of an automatic non-critical server deployment from MEDC (these would be patched and rebooted on a weekend). Note the 3 day delay:


Image:Ransomware Prevention Part 3 - Patch Management

Report


If you already have an automated patch management system but you are not reporting from it, or not looking at the reports, then you are no better off than if you had no patch management system at all. Because:


If you are not looking, it's not working! Automation is not absolution!

I see this all the time with organizations that use GPOs to set Windows Updates to install automatically, or that use WSUS. The lack of any discernible errors or warnings does not mean there are no errors or warnings. No, no, no, no. Patching (like vulnerability scanning) is only as good as your monitoring of the system. Yes, 95% of systems are going to hum along, patch, reboot, rinse, repeat. But you will have several systems that have issues.


Issues could be a multitude of things, such as lack of disk space (so patches can't be copied, extracted, etc.), the patch is corrupt, or the patch needs to be downloaded to your management system because it is behind a pay-wall (anything Oracle). But unless you are monitoring and reporting on it, YOU WOULD NEVER KNOW UNTIL IT IS TOO LATE. You'll be one of the 60% who got hit despite a patch being available. Remember (and not just for patch management): automation is not absolution; you still have to ensure the automation is functioning correctly.
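
If you want a cheap, independent sanity check alongside whatever your product reports, a few lines of PowerShell will tell you when each server last installed anything. A sketch with placeholder server names (and note Get-HotFix only sees Windows hotfixes, not 3rd party patches, which is exactly why you still want real reporting):

# Quick sanity check: when did each server last install a hotfix?
# Server names are placeholders - feed in your own list or pull from AD.
$servers = 'DC01', 'DC02', 'FILE01', 'SQL01'

foreach ($s in $servers) {
    try {
        $last = Get-HotFix -ComputerName $s -ErrorAction Stop |
                Sort-Object InstalledOn -Descending |
                Select-Object -First 1
        [pscustomobject]@{ Server = $s; LastPatch = $last.InstalledOn; KB = $last.HotFixID }
    }
    catch {
        # Unreachable or access denied - exactly the kind of exception you manage
        [pscustomobject]@{ Server = $s; LastPatch = 'UNREACHABLE'; KB = '' }
    }
}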

Here is a reporting screen from MEDC. There is a lot of instantly available information here. Also a security feed on the right so we can see what is coming:


Image:Ransomware Prevention Part 3 - Patch Management

Note this from the above:


Image:Ransomware Prevention Part 3 - Patch Management
The yellow is "health not available". These need to be looked at to find out why (especially if those systems are powered on).


Additionally, this:


Image:Ransomware Prevention Part 3 - Patch Management
Now I have two areas to spend my time on, finding out why these two numbers are > 0. Out of the 345 systems managed here by MEDC I now have to manage 29 exceptions (8+21). This is a little over 8% of the systems (and most of the deployment failures are because the system was powered off when the scan was attempted, so in reality it's probably < 4%).


Conclusion


With a bit of up-front effort, some good planning and a good patch management system, you can go a long way toward avoiding becoming one of the 60% of organizations breached via a patchable vulnerability.
Darren Duke   |   May 24 2021 06:02:00 AM   |    ransomware  security    |   Comments [0]

In this 2nd installment of the ransomware prevention series we cover vulnerability scanning and analysis. Part 1 - DNS filtering is here or here for the entire series of posts.

So without further ado, repeat after me:

Vulnerability scanning and analysis is not the same as patch management.

Vulnerability scanning and analysis is not the same as patch management.

Vulnerability scanning and analysis is not the same as patch management.


(Patch management is a later post)

OK, now we have that out of the way, let me explain why they are not the same. Think of it this way: not everything that is vulnerable can be mitigated (notice I didn't say patched....). Say what now, Darren? Let's take a simple example: Windows 2003 Server. Yes, they still exist. In your patch management software a Windows 2003 Server will most likely be shown as fully patched and hence give you warm fuzzy feelings that it is "safe", because it is "safe" insofar as you have every patch Microsoft ever issued installed on said server. But this does not mean a fully patched Windows 2003 Server is protected from all known vulnerabilities, because it's not. In fact, just being end of life (it went EOL in July of 2015) makes it a vulnerability, simply because it no longer receives patches. It's not just Windows. Ubuntu Linux 14.04 LTS went EOL in April 2019. Again, any Ubuntu 14s you may have are probably fully patched. But only for Ubuntu 14 and only up to April 2019 (FWIW, Ubuntu 16 LTS went EOL in April 2021). It's not just OSes either. What about old, old versions of Java (well, any Java really)? Flash? Office 2010? All may show as fully patched. See the difference here:

Vulnerability scanning and analysis is not the same as patch management.


The 2nd reason that vulnerability scanning and analysis is not the same as patch management is that the latter almost always *only* looks to see if the patch is installed. The former (usually) checks whether it is actually active. There are many, many Windows updates that you dutifully install that also require administrators to add a GPO or change a registry key in order to make the patch active. A good example of this is MS
KB3000483.
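
To make that concrete: installing the KB3000483 binaries does nothing by itself; the UNC Hardened Access policy has to be configured as well. A sketch of the kind of check a vulnerability scanner performs (registry path from memory, so verify it against the KB article before relying on it):

# KB3000483 (MS15-011) is only effective once Hardened UNC Paths are configured.
# Patch management sees the KB installed; a vulnerability scanner checks this key.
$key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\NetworkProvider\HardenedPaths'

if (Test-Path $key) {
    # Should contain entries such as \\*\SYSVOL and \\*\NETLOGON
    Get-ItemProperty -Path $key
}
else {
    Write-Warning "Hardened UNC Paths not configured - installing KB3000483 alone is not enough."
}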

The 3rd reason that vulnerability scanning and analysis is not the same as patch management is that the former can scan more than just endpoints running Windows, Linux and macOS. Copiers, switches, routers, et al are probably nowhere to be found in your patch management solution. Chances are they are in your vulnerability scanning system, along with their issues (such as
Ripple20, which affects a whole host of IoT and MFC devices).

It is worth noting that the world + dog gets very excited by esoteric zero day vulnerabilities that require root or admin access, local logon and the wind to be from the ESE. Sure, you should be concerned about those (and know the impact to your organization and patch or otherwise mitigate), but if you don't have a vulnerability management solution in place you have a lot more to worry about than what the press tells you to worry about. The annual top 10 lists of actively exploited vulnerabilities (so not new, and *definitely* not zero day any longer) are filled with 2 to 3 year old flaws (some date back to 2014 and beyond!!!) that can be mitigated, but for some unknown reason (negligence and/or inexperience being my best guess) have been left unmitigated by the attacked organizations. Indeed, in 2020 only two (yes, two) of the top 10 exploited vulnerabilities had CVEs dated 2020, meaning they were uncovered and reported in 2020!!! Two. See
the CISA Top 10 Routinely Exploited Vulnerabilities and Security Intelligence's Top 10 Cybersecurity Vulnerabilities of 2020 for more details on this. SMBv1 is another common vulnerability, so big in fact that Microsoft has completely removed it from Server 2019 onwards. You should do the same for everything < Server 2019 (see the sketch below). SMBv1 being active would not show in patch management because:

Vulnerability scanning and analysis is not the same as patch management.
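
As a worked example of "fully patched but still vulnerable", checking for and removing SMBv1 is only a few lines on a modern Windows Server (test first, as some ancient copiers and NAS boxes still insist on SMBv1 for scan-to-share):

# Is SMBv1 still enabled on this server?
Get-SmbServerConfiguration | Select-Object EnableSMB1Protocol

# Turn the protocol off immediately...
Set-SmbServerConfiguration -EnableSMB1Protocol $false -Force

# ...and remove the feature entirely on Windows Server (reboot required)
Uninstall-WindowsFeature -Name FS-SMB1

# On Windows 10 clients the equivalent is:
# Disable-WindowsOptionalFeature -Online -FeatureName SMB1Protocol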


From a scanning perspective, scan your most public attack area more often. Then split your network segments into scannable chunks. Some solutions can have multiple scanners on different subnets to increase scan speed and reduce network traffic. Also warn your security folks, and always have permission to scan said networks.

Once you have a vulnerability list from your scan(s) (yes, it will be a large list) you can now start to mitigate the risks, or choose to live with them based on some sort of criteria you set (severity, exploitability, etc.). But at least you know, so if you choose to leave a Windows 2000 Server up and running you may take extra precautions around it (because not everything has a patch or mitigation, and really, some shit just needs to be retired and thrown out).

Here's an example from the older OpenVAS of an actual scan back in April 2020, with lots of actionable intelligence and some false positives (those top 3 would be very, very important were those servers open to the world via SSH; they are not). In this example I would probably choose to prioritize mitigation of items >5.0 in severity. The location of the scanned networks may also play a role in mitigation priority; for example, I'd almost always prioritize mitigating a DMZ subnet over a LAN subnet (hopefully for obvious reasons):


Image:Ransomware Prevention Part 2 - Vulnerability Scanning
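
If your scanner can export results to CSV, that kind of triage is a one-liner. A sketch assuming hypothetical column names of Host, Subnet, Severity and Finding (every product names its export columns differently):

# Triage a CSV export of scan results: severity above 5.0, DMZ subnets first.
# Column names are assumptions - match them to your scanner's actual export.
$findings = Import-Csv .\scan-results.csv

$findings |
    Where-Object { [double]$_.Severity -gt 5.0 } |
    Sort-Object { $_.Subnet -eq 'DMZ' }, { [double]$_.Severity } -Descending |
    Format-Table Host, Subnet, Severity, Finding -AutoSize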

So where do you get started with vulnerability scanning? If I had written this a year ago I would have said the free OpenVAS. They used to have a free virtual appliance you could download and scan away. Alas, they have moved on from that to the Greenbone Community Edition/Greenbone Vulnerability Manager and there is no longer an appliance. You have to install it from scratch; I have tried several times (on both CentOS and Ubuntu) with no success. Still, it does come as a Kali Linux add-on so that's my next course of action. If you know of an appliance version of GCE/GVM leave a comment. So for now it's probably going to be Kali or Rapid7's Nexpose Community Edition if you want to get started with $0 down.

Rapid7's
Nexpose Community Edition is good for 1 year but can be re-upped each year. This is now my free, go to solution.

There is also
Nessus Essentials (free for 16 IP addresses per scanner) that also allows you to see the impressive results that Nessus/Tenable can deliver.

From a paid perspective you have a plethora of choices; all of the above have paid options which usually add a host of features such as trend analysis and reporting. The one I'm most familiar with is Tenable.sc, which is Nessus fronted by a reporting engine. Here's the Tenable executive summary:


Image:Ransomware Prevention Part 2 - Vulnerability Scanning

Every vulnerability has a rating and lists whether it is known to be exploitable. Most vendors also have a proprietary score beyond CVE/CVSS that allows you to expend your effort on actual known in-the-wild exploits:

Image:Ransomware Prevention Part 2 - Vulnerability Scanning

It also includes information on how to fix most issues:


Image:Ransomware Prevention Part 2 - Vulnerability Scanning

Again, it is very unlikely the above vulnerability (insecure Windows Service permissions) would ever be caught by a patch management solution. Because, you guessed it:

Vulnerability scanning and analysis is not the same as patch management.


So there you have it: vulnerability scanning will ferret out all the (potentially) bad things hanging around on your network. As to whether you fix them, well, only you can answer that with some testing. But being forewarned is being forearmed.
Darren Duke   |   May 19 2021 02:41:00 AM   |    ransomware  security    |   Comments [0]

Go here for the entire series of posts.

Let's face it, ransomware is not going away. It's simply too damn profitable for the criminals and too damn easy for them to perpetrate. When a highly publicized incident happens (last week it was Colonial Pipeline) you'll see a whole host of articles in the press (IT and otherwise) listing a series of steps that organizations can take to prevent it. Platitudes such as "zero trust", "AI", and other meaningless suggestions make their way out. Rarely do these articles have anything in the way of useful and actionable tools and techniques you can utilize to prevent this type of attack.


For the past 6 months I've been giving presentations on ransomware prevention (trust me, you want to prevent.....recovery is a lot harder and will eventually be covered in this series). I have decided as a public service to break out this private presentation into a series of blog posts to give enterprise IT professionals the tools and techniques to help prevent their organization becoming the next Colonial Pipeline. You don't need to be a CISSP to protect your network. Nor do you need to pay a big 5 consultancy firm a lot of money to protect your network. You can do it. Just no one has shown you how. Until now.


I don't yet know how many articles will make up this series (it could be 6, it could be 9) but this is the first. The plan is to cover vulnerability analysis, patching, GPO tricks, email security and backup and recovery. This being the first, it is going to be the easiest thing organizations can do to protect themselves: add protection at the DNS layer.


At its heart, DNS filtering is having your DNS forwarders/resolvers use a service that will prevent known malicious DNS entries from resolving, thus preventing users and services from locating the malicious site hosting whatever is about to ruin your day.


The most basic implementation of this is to simply point your Active Directory and edge firewall DNS settings (or even your home router) at one of the free services that provide this type of protection. At the other end of the spectrum are paid services that will allow filtering by category, reporting, and filtering of off-LAN devices. Off-LAN devices are the Achilles heel of the free services.
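
On a Windows/AD DNS server the change is a single cmdlet from the DnsServer module. A sketch pointing the forwarders at Quad9 (swap in whichever service you choose, and record the existing forwarders first so you can roll back):

# Record the current forwarders in case you need to roll back
Get-DnsServerForwarder

# Point this DNS server's forwarders at Quad9 (this replaces the existing forwarder list)
Set-DnsServerForwarder -IPAddress 9.9.9.9, 149.112.112.112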


This is not an exhaustive list of services, so if I've missed a good one add a comment.


The free DNS filtering services


Again, there is no mobile filtering for these services, and you need to be behind a router or AD DNS for these to work. For malicious-only filtering, I'd start with Quad9. If you need adult or family friendly filtering, CleanBrowsing will be your jam.


CleanBrowsing has free services that will also block "adult content" and force safe search. It also does malicious filtering. This is very good for public access wifi where you need to block adult sites.

Quad9, malicious filtering with a good dose of privacy. Recommended by MS-ISAC.

OpenDNS, bought by Cisco and now part of Cisco Umbrella but the free servers have remained online. This service will filter out malicious sites.

The paid DNS filtering services


Paid services add a whole lot of features and usually the ability to also filter off-LAN devices such as laptops (essential in these COVID WFH times). These are fully fledged filters that allow for reporting and customization, and some also offer on-prem proxies. In some circumstances these can even replace your on-prem web filters, but I'm not sure I would recommend that wholeheartedly, as most "appliance" web filters can also do ATP on attachment downloads, etc., and DNS filtering only works when the malware has a URL for the command and control infrastructure it's communicating with. If it's communicating directly with an IP address, well, you are out of luck.


Webtitan, by far the best value I've come across. Not the best reporting web interface, but the price will make up for that.

DNSFilter, very nice interface.

CleanBrowsing, the paid version of their free offering. No mobile client which is a shame.

Cisco Umbrella. It's Cisco, so expect it to be more expensive than the competition. Usually part of a larger system you will implement. Getting a price is not fun either. Essentially the paid version of OpenDNS.

Conclusion


Adding even the free filters as your upstream DNS resolvers will give you a layer of protection you may never have had or even considered. This is important, as enterprise IT security is like the skin of an onion: layered and deep.


If you need to DNS filter mobile devices such as laptops then you will need to look at the paid options, as setting a laptop's forwarder to a free service will play havoc when it returns to the office and cannot resolve local LAN DNS addresses.
Darren Duke   |   May 16 2021 11:50:14 AM   |    security  ransomware    |   Comments [5]

Veeam v11 changed from a powershell snap-in to a powershell module. As such it broke everything.

In v10 and earlier Veeam PS you probably loaded it something like this:



# v10 and earlier: check whether the Veeam snap-in is already loaded
$snaps = Get-PSSnapin
foreach($snap in $snaps){if($snap.Name -eq "VeeamPSSnapin"){$exflag = 1}}
# If not, load it and bail out with a monitoring-friendly exit code on failure
if($exflag -ne 1){
      Add-PSSnapin -Name VeeamPSSnapin -ErrorAction SilentlyContinue
      if($error -ne $null){write-host "CRITICAL - Could not load Veeam snapin";exit 2}
}


....rest of your existing code


Well, Veeam decided "due to popular demand" to change this and broke everything. After struggling for a few days to figure out the secret sauce (I had a lot of trouble invoking the new module non-interactively) I hit pay dirt.


For Veeam v11 simply change the above code to this:



$VeeamPath = "C:\Program Files\Veeam\Backup and Replication\Console"
$env:PSModulePath = $env:PSModulePath + "$([System.IO.Path]::PathSeparator)$VeeamPath"   # add the Veeam console folder to the module search path (PSModulePath)
Import-Module -DisableNameChecking Veeam.Backup.PowerShell

Connect-VBRServer


...rest of your existing code


Now, I don't have any error checking code in there yet (a rough sketch of what that might look like is below), but this may help some people when they upgrade.
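
If you want the module version to fail as loudly as the old snap-in code did, something along these lines should work (a sketch I haven't battle-tested, keeping the same CRITICAL/exit 2 convention the monitoring system expects):

try {
    Import-Module -DisableNameChecking Veeam.Backup.PowerShell -ErrorAction Stop
}
catch {
    Write-Host "CRITICAL - Could not load Veeam module"; exit 2
}

try {
    Connect-VBRServer -ErrorAction Stop
}
catch {
    Write-Host "CRITICAL - Could not connect to Veeam B&R server"; exit 2
}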

FYI, the secret sauce for non-interactive use was adding the explicit Veeam path (the top two lines of the new code). If your module install path is different, adjust accordingly. You can probably achieve the same "fix" by manually adding the Veeam Console path to the "PSModulePath" environment variable on the Veeam server; I haven't tried that yet, and the code way of adding the path is more flexible when I'm copying code around to different systems:


Image:Upgraded to Veeam v11 and now all your Veeam related powershell scripts are broke?
Darren Duke   |   April 23 2021 03:57:25 AM   |    veeam  security    |   Comments [0]

Back by popular demand......

On Friday 29th of January I will be hosting an hour long webinar to provide some real-world proven tips and tricks on preventing and surviving a ransomware attack.

It's at 1pm Eastern time. To reserve a spot simply email info@simplified-tech.com.


It could save you $100,000's.
Darren Duke   |   January 22 2021 08:39:26 AM   |    security    |   Comments [0]