Silicon Shecky

Infosec Practitioner


Noise, Noise oh the Noise

June 21, 2018 By Michael Kavka

So many security controls. So much noise. It is a wonder we find anything relevant at all. The amount of time spent going through log after log is staggering. Even with SIEMs, dashboards, machine learning, and “AI,” there are still tons of logs to go through. Yes, we can whittle them down once we know what we are looking at or looking for. The problem is that we have no good way of determining whether we are missing something, even with machines helping us.

Example: you have product X, which uses the behavior of files it knows versus files it does not know, along with the Tactics, Techniques, and Procedures it is aware of, to make a determination about what to do with a given file. This, combined with known hashes of known malware, gives the resulting action a score. The problem: even using known files this way produces a lot of false positives, though their scores might be lower. Example: Outlook uses a known third-party plugin for encryption/decryption of e-mail. These invocations constantly show up as monitored low-level events because of the memory calls made when invoking the plugin. Problem: the same technique can be used by malicious software or scripts that use Outlook as the vector in, through a vulnerability triggered by a malicious e-mail. Now that attack is lost in a sea of false positives. Depending on the product, you may be able to tune out only the false positives, but not always. The tuning becomes too broad: you lose alerting on the attack because you tuned out the “Outlook invoking another program” alert as something expected, or you stop getting those alerts at all.
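To make the trade-off concrete, here is a minimal sketch in Python of how a too-broad tuning rule swallows the real attack along with the noise. The event fields, process names, and scores are all made up for illustration; this is not any vendor's actual scoring model.

```python
# Hypothetical sketch of the tuning trade-off described above. Fields,
# names, and scores are illustrative, not any real product's model.

from dataclasses import dataclass

@dataclass
class Event:
    parent: str        # process that triggered the call
    child: str         # program/module being invoked
    known_hash: bool   # is the child a known-good file?
    score: int         # severity score from behavior + TTP matching

def broad_tune(event: Event) -> bool:
    """Suppress anything where Outlook invokes another program.

    This is the 'too broad' rule: it silences the plugin noise, but it
    also silences a malicious payload launched through Outlook.
    """
    return event.parent == "outlook.exe"

events = [
    # The benign encryption plugin firing on every e-mail: constant low noise.
    Event("outlook.exe", "crypto_plugin.dll", known_hash=True, score=20),
    # The attack hiding in that noise: unknown binary, higher score.
    Event("outlook.exe", "payload.exe", known_hash=False, score=70),
]

for e in events:
    if broad_tune(e):
        print(f"suppressed: {e.child} (score {e.score})")  # both vanish
    else:
        print(f"ALERT: {e.child} (score {e.score})")
```

A narrower rule keyed on both the parent and a known-good child hash would quiet the plugin without hiding the payload, but that assumes the product exposes that level of granularity, which is exactly what is often missing.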

So what is my point? Products, especially security products, either wind up generating too much data without enough control, or generate too little data, with a huge amount of complexity involved in opening it up. You either get flooded out or starve for information. We then spend more time tuning these products to our environment than we probably should have to. Part of the problem is the fast pace of updates and upgrades, causing hash changes and sometimes behavior changes. Part of it seems to be the rush to get the product to market. Speed kills, in so many different ways, to the point of burning us out. Computer intelligence is getting better, but we still need plenty of eyes on the issues. We need more “entry level” analysts and SOC workers to go through the data and tune the flow. Instead, companies are focused on hiring higher-level engineers and architects. Everyone wants to jump up fast, and that leaves the higher-level positions doing jobs they don't want to do or should not have to do. This leads to job hopping, burnout, and then more openings. It is a vicious circle that needs to be broken. Maybe we need to start with less noise, and not increase the number of products until we have the resources, or until the products we already use are tuned properly.

What are your thoughts?

Filed Under: Rants, Security Tagged With: Burnout, Logs, Machine Learning, SIEM

Cyclical

June 15, 2018 By Michael Kavka

There was a new Intel flaw announced this week dealing with floating point calculations and math processing. You would think it would take Intel longer than 24 years to have another floating point issue.

Now, I am not saying it is the same type of flaw, but the general public won't understand that, at least those who remember the 1994 flaw. Back then it was the actual mathematics that failed. This time it is the registers that can have data swiped from them. It got me thinking about security and security flaws.

Last year EternalBlue was announced. It reminded me of the days of Code Red in the way it wormed around using old flaws. We keep seeing cross-site scripting, cross-site request forgery, SQL injection, and others go in and out of style. How cyclical is security, and why does it seem to be so?

The answer may be as simple as this: we can only focus on so much at once. We see attacks of a certain style happening, we learn about them, learn how to defend against them, and forget about other types of attacks that are not really being used. We figure our governance on how to prevent them is still being heeded, so we lose sight, until that style of attack becomes hot again. The same is true of flaws. In our fast-paced, need-it-now, first-to-market society, something has to give. A human can only do so much. Yes, we have tools that should find what we call the low-hanging fruit, the items we are not focusing our direct energy on, but they are only tools, and if we ignore the reports they generate, we miss out.

The other way we miss out on the reports is when there is too much noise in them. This is one of the biggest flaws I see. You get a tool, and it generates so much noise that it becomes useless. We only have so many eyes and so much time, so we naturally start dismissing everything. It does not matter whether the tools are meant to find bugs in code, find security holes, or monitor for security events: we get false-positive fatigue. On top of it all, we have to make exceptions to allow our companies to do their work and make the money that pays us and keeps us employed. Tools, by which I mean both software and hardware, are supposed to help with all of that. The issue is that nothing is one-size-fits-all.
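To put rough numbers on that fatigue, here is a back-of-the-envelope sketch; every figure in it is an assumption chosen for illustration, not data from any particular tool.

```python
# Back-of-the-envelope math on false-positive fatigue. Every figure below
# is an assumed, illustrative number, not data from any real deployment.

alerts_per_day = 5000        # what a noisy, untuned tool can emit
true_positive_rate = 0.001   # assume 1 in 1,000 alerts is real
seconds_per_triage = 90      # a quick look at each alert

real_alerts = alerts_per_day * true_positive_rate
hours_needed = alerts_per_day * seconds_per_triage / 3600

print(f"{real_alerts:.0f} real incidents buried in {alerts_per_day} alerts")
print(f"{hours_needed:.0f} analyst-hours per day just to glance at each one")
```

Under those assumptions it is 125 analyst-hours a day to glance at everything, to find 5 real incidents. No team staffs that for one tool, so the queue gets dismissed wholesale, real incidents included.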

The tools leave us with a couple of other problems. They become too unwieldy to manage or tune, or are just badly designed and do not allow the granularity to tune them as needed. It takes time, time away from other important things, such as going through the latest scans, checking hardening standards, and learning about the equipment we have. Instead, what was thought of as a 6-month deployment turns into a 2-3 year one. By the time the deployment is done, there are new tools in use, perhaps even in your environment, that cause the same tuning problem.

We burn out from it all. You might think this could be handled by hiring more of us, by getting more people into the field. That is only part of the answer, though. The other part is understanding what level we really need people at. Where we need more people is in the “entry level” analyst roles. Companies are crying about the shortage of mid- to high-level people out there, but if they focused on bringing in more entry-level people, they could groom those people to move up and have more eyes on those noisy logs and reports. Give those analysts a chance to learn in their positions so they can take the next step. Instead we have fewer eyes on the problem.

This is not even taking into account the culture that drives so many away from our field: the sexist, racist, elitist situations that drive some very smart and capable people away. Yes, we have a pretty strong community, but our field is not just the community that talks to each other, and even there we have issues. Those issues keep cycling back around, from how we treat people with less knowledge, to how we treat people who look at problems differently, and even how we treat people we disagree with. The sexism is not cyclical; it is a constant. It used to be cyclical, at least in how and when it was brought up. I fear its constancy makes it background noise to many, when it should not be, but that is for a post and discussion of its own.

So the question is: how do we break this vicious set of cycles? Some thoughts have been proposed already, such as creating more entry-level positions and then mentoring people up through the tuning projects you have. Beyond that, better-made tools that let us do what we want with them: ones that have the granularity and can be easily tuned for noise. We need to learn from the past and realize that just because something is not widely used now does not mean it will not be again. There are only so many ways to do things, and while that can be a large number, we are lazy and will go back to the simplest way we can find.

Finally, we need to stop thinking we are better than each other, and better than everyone else. We say that people are the biggest security hole. Just remember that we are part of that hole just as much as the secretary, the neophyte in the field, or the CEO. We are fighting from inside the hole, and holes are normally circles.


Filed Under: General

Lack of Control?

June 7, 2018 By Michael Kavka

You get a tool to use. The tool looks good; in fact, other tools from the same company look good. This tool, though, seems to be the red-headed stepchild.

Recently, I have been trying to clean up and tune an endpoint solution. It is made by a company I have some respect for and whose products I had seen in the past. The tool itself seems to work well. The problem, as with so many tools, is getting the best data out of it. On the monitoring side, the noise drowns out the signal. The threat detection and prevention work well enough, but sifting through the data that is brought in just for you to monitor is difficult at best. The issue is the fine line between being able to warn about a potential threat and knowing what normal behavior is. The example I will use here is opening known PDFs. Said product sees these as applications, each with its own hash, for scanning purposes. All fine and dandy, but when the PDF is opened, certain executables that get called trigger a monitoring alert. These executables are known and trusted applications (part of the PDF reader), but due to the unknown nature of the PDF, they come up as alerts for running the executable. This creates a ton of noise in the alerts area.

The problem is that, as the product is set up now, there is no real way to tune out these alerts without risking missing alerts one might actually need to check on. It makes the monitoring alerts useless, unless you feed the data to a SIEM where you can do the filtering. The kicker: customers have been complaining in the user forums about this for over a year now, and still nothing has been officially announced as being done. The rumor is that they stealth-fixed it in an update, but there is nothing in the release notes. So in the meantime, I have to go through and keep the alert area clean of the noise, which eats into my time for other things. At a company my size it is nowhere near as bad as what could happen at a huge company with tens of thousands of endpoints.
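For what it is worth, the filtering I end up doing downstream looks roughly like the sketch below: a narrower rule than the product itself offers. The process names and the trusted-pair list are hypothetical, made up for illustration.

```python
# A sketch of the narrower suppression rule the product lacks, done in the
# SIEM instead. Process names and the pair list are hypothetical.

TRUSTED_READER_CHILDREN = {
    # Helper executables that ship with the PDF reader itself.
    ("acrord32.exe", "acrocef.exe"),
    ("acrord32.exe", "armsvc.exe"),
}

def suppress(parent: str, child: str, child_is_signed: bool) -> bool:
    """Quiet known reader helpers without blinding the whole alert queue.

    Suppress only when the exact parent/child pair is allowlisted AND the
    child is still vendor-signed; everything else stays visible.
    """
    return (parent, child) in TRUSTED_READER_CHILDREN and child_is_signed

print(suppress("acrord32.exe", "acrocef.exe", True))   # True: noise filtered
print(suppress("acrord32.exe", "dropper.exe", False))  # False: still alerts
```

Keying on the exact parent/child pair plus the signature keeps the reader's own helpers quiet while anything unexpected the PDF spawns still alerts, which is the granularity the product's own tuning does not give you.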

Situations like this can leave a bad taste in the customer's mouth. Companies get overprotective of certain things and hate admitting mistakes. Personally, I would rather a company be up front and over-communicate what is going on with an issue like this. I am more likely to recommend a company that is upfront, honest, and communicative about a not-quite-mature product than one with a completely mature product that hides from its customers, especially the smaller ones. They need to remember that while the larger companies bring in the bigger money, attackers tend to move from the smaller fish to the bigger ones, so paying attention to the smaller companies earns you a good reputation with the larger ones.

Filed Under: Rants, Security, Software
