To paraphrase a well-known cliché in cybersecurity, which like all clichés contains more than a kernel of truth, either you have been breached or you will be breached.
How many of you have an action plan ready to implement if you’re the next one to be breached? If you don’t, no need to be embarrassed—just add developing a security action plan to your to-do list.
Next question: If you already have an action plan, are you ready to act?
Even if you’re ready, I have bad news: According to the Ponemon Institute, in 2015 it took the average enterprise 170 days to discover a data breach, and many organizations found out from a third party. I’ve heard the current discovery gap now exceeds 230 days. How effective can your action plan be if you can’t implement it until six months or more after the breach occurred?
The question I’m about to ask is relevant for large organizations as well as private individuals. I’ll focus on the enterprise/agency/organization in my discussion, but individuals can learn a thing or two that could help them also.
How do you know if you’ve been breached?
The digital world is very different from the physical world. If your home is breached and your jewelry is stolen, there are physical indications: damaged doors and locks, a forced safe, and, most noticeably, your jewelry is GONE.
On the other hand, data theft is not really theft, in that it doesn’t remove data from your possession. Data thieves copy and exfiltrate your data, but they leave your original data intact.
Did you notice anything abnormal before a breach? How can you recognize abnormal if you are not familiar with what normal looks like? There are a number of items you should monitor over time to determine what is normal:
- What is the egress bandwidth on your internet circuits?
- What is the size of your DNS traffic?
- What is the size of your FTP/HTTP/HTTPS traffic?
- What other non-standard ports are in use?
- How many login errors occur?
- Is peer-to-peer traffic normal in your desktop population?
- Is outbound data traffic normal for each server?
- What is the rate of storage utilization?
- Are only authorized applications permitted and in use?
- What is the volume (number and size) of outbound email?
- What is normal CPU utilization for your servers?
- What is your usual risk detection rate from your endpoint security stack?
- Do excessive exclusions for your endpoint security prevent detections, hiding activity from view?
  - EVERY exception is a compromise in security!
  - Software vendors that recommend anti-virus exceptions don’t care about your security; they care about their application(s).
- What is your compliance rate for patching?
- How many deprecated operating systems are in your environment?
  - WannaCry, in May 2017, propagated through a vulnerability for which a patch had been available for months, and through deprecated systems that were no longer being patched.
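Every item on the checklist above comes down to the same pattern: sample a metric regularly, learn its normal range, and flag readings that fall outside it. Here is a minimal sketch of that idea in Python; the sample values and the three-sigma threshold are illustrative assumptions, not recommendations for your environment:

```python
from statistics import mean, stdev

def is_abnormal(history, latest, sigmas=3.0):
    """Flag `latest` if it falls more than `sigmas` standard
    deviations away from the historical baseline."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline yet
    mu = mean(history)
    sd = stdev(history)
    if sd == 0:
        return latest != mu  # flat baseline: any change is abnormal
    return abs(latest - mu) > sigmas * sd

# 30 days of hypothetical egress traffic samples, in GB per day
baseline = [41, 39, 44, 40, 42, 38, 43] * 4 + [40, 41]
print(is_abnormal(baseline, 42))   # within the normal range: False
print(is_abnormal(baseline, 120))  # far outside the baseline: True
```

The same function works for any of the metrics above, whether it's DNS traffic volume, outbound email counts, or login errors; only the history you feed it changes.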
Sure, you have a SIEM and custom alert rules based on certain conditions. However, SIEMs come with cost, some based on the number of logs ingested, so many deployments filter out non-critical log data. As a result, in many environments your SIEM is not very helpful for understanding normal.
Here are some typical scenarios – there are more questions, but by now I’m sure you get the idea:
- If you know your SQL servers normally cruise along at around 40% utilization and you suddenly see sustained utilization of 75% or higher, you have recognized abnormal: It may be a software glitch, an increase in business, or maybe the result of malicious data access.
- If you know your egress router normally runs at 60% of circuit capacity and you notice sustained circuit saturation, you have recognized abnormal: It may be that success and massive hiring caused the increase, or it may be a rapid data exfiltration in progress.
- If you normally have no business case for drive mapping from desktop to desktop (peer to peer) and you suddenly see a high volume of SMB traffic between desktops, you have recognized abnormal: Could there be a legitimate explanation, or are you under a ransomware encryption attack?
- If your users’ normal login failure rate is sporadic, representing random “fat fingering” typographic errors, and you suddenly see a consistently higher rate of login failures, you have recognized abnormal: It may be that a wave of new users is settling in, or it may be malicious attempts at resource access or privilege escalation.
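In each of these scenarios, the key distinction is sustained deviation versus a momentary spike. That rule is easy to encode; this is a sketch only, and the function name, sample interval, and thresholds are all hypothetical:

```python
def sustained_above(samples, limit, run_length=3):
    """True if `samples` stay at or above `limit` for `run_length`
    consecutive readings -- a lone spike is not 'sustained'."""
    streak = 0
    for s in samples:
        streak = streak + 1 if s >= limit else 0
        if streak >= run_length:
            return True
    return False

# Hypothetical 5-minute CPU samples (%) for a SQL server that
# normally cruises around 40% utilization
cpu = [41, 39, 83, 42, 78, 80, 79, 81]
print(sustained_above(cpu, 75))            # True: four consecutive readings >= 75
print(sustained_above([40, 42, 90, 41], 75))  # False: a lone spike
```

The run-length requirement is what separates "abnormal worth investigating" from the routine noise any busy server produces.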
All Is Not Lost
By now you see that it takes a bit of effort to recognize normal (also known as baselining). Automation helps, but there’s no replacement for sentient eyes reviewing your data. Proper baselining can help you improve response time to abnormal activity.
Consider that you might not be able to prevent infiltration by malicious intent, but you can recognize abnormal versus normal and react quickly enough to implement your action plan: You prevent your valuable assets from being exfiltrated! You might have to apologize to 300 customers, but that’s demonstrably better than 300,000 or 30 million! You learn what methods were used and bolster your security to prevent the same kind of attack from working again.
Quite possibly, you will protect your most valuable asset.
If I have not piqued your interest, if I have not convinced you of the value of Proactive Baselining, then I have failed.
I challenge you to establish a Proactive Baselining process within your own environment. Crawl. Walk. Run.
You can add to your process gradually, and you can even start with your most junior admin staff reviewing your domain event viewer for invalid logins and recording the number each day. Excel can make a nice chart from those numbers, making it easy to recognize the normal range, and your junior admins will deepen their understanding of your environment. Implement a low-cost or free SNMP MIB charting tool (à la MRTG) to track CPU utilization on routers and servers; the tool will create the histogram charts for you.
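Even the invalid-login exercise can be automated in a few lines. A sketch, assuming the events have been exported as simple `date,user` text lines; the export format and field layout are assumptions, and a real export of, say, Windows failed-logon events would need its own parsing:

```python
from collections import Counter

# Hypothetical export of failed-logon events, one "date,user" line
# per failure (the field layout is an assumption for illustration)
events = """\
2023-05-01,jsmith
2023-05-01,adoe
2023-05-02,jsmith
2023-05-02,jsmith
2023-05-02,adoe
2023-05-02,mlee
2023-05-02,adoe
"""

# Tally failures per day; paste these counts into your Excel chart
daily = Counter(line.split(",")[0] for line in events.strip().splitlines())
for day, count in sorted(daily.items()):
    print(day, count)
```

Once a few weeks of daily counts exist, the normal range becomes obvious at a glance, which is exactly the point of the crawl stage.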
Start now, and empower your team to reduce your response time to abnormal behavior.
ABOUT ENTERPRISE STUDIO
Enterprise Studio by HCL Technologies is the leading provider and preferred services and education partner for Broadcom Enterprise Software solutions, the preferred partner for Broadcom’s Symantec Enterprise Security solutions, and a leader in agile transformation and DevOps consulting.
Whether you’re an established Global 500 company or a new disruptive force in your industry, we can help you navigate the complexities that come with competing in an interconnected digital era.
We can help you achieve your desired business outcomes, quickly and confidently, by leveraging our team of seasoned technologists, coaches, and educators. We are a global solution provider and Tier 1 global value-added reseller of Broadcom CA Technologies enterprise and mainframe software.
Many of our experts at Enterprise Studio come from the professional services units of Symantec and the former CA Technologies. For decades, our teams have supported and helped lead organizations to innovation using powerful enterprise software solutions and cutting-edge methodologies – from API management to security, business management to AIOps, and continuous testing to automation.