Silicon Shecky

Infosec Practitioner


EcoSystems, or Why the Security Tools Industry is making us less secure

June 13, 2019 By Michael Kavka

Warning: I will be naming companies in this article, based on products I use or have used. They are meant only as examples drawn from personal experience.

We live in a world where we do not have enough eyes on things; we suffer from burnout, work long hours, and generally bang our heads against the wall. We also live in a world where almost every product we deal with markets itself as the magic bullet for securing our company. The lack of interoperability, though, is as much a security hole as any bug or technique used against us.

There is an old and true saying: the more complex something is, the more easily it can be defeated by something simple. It is a statement that we, the people working in the security field, understand. We deploy “SIEMs” (really a data/log collection function), anti-virus, EDR, firewalls, IDS/IPS, web filtering, deep packet inspection, and much more. More and more frequently these are becoming walled gardens, and complex ones at that. They do not talk to each other easily, and worse, they sometimes make it harder for us to find the problems.

Log/data collectors (SIEMs) are supposed to be the one-stop shop. You send your data there, usually logs and potentially NetFlow streams, so you can cross-reference and analyze the aggregated data. Simple enough, and with proper AI/machine learning it should make our lives easier. Now think about how many companies put their own spin on log formatting. Recently I had to write a log parser for Cisco’s syslog output because it does not use the standard style, and therefore Graylog would not parse it on its own. A simple thing that has a standard, an open source project using that standard, and a large, highly respected company saying “We will do things our way, deal with it.” It reminds me of the arguments over Microsoft not using standards properly. Along the same lines, look at another Cisco product, Umbrella. They put things in their cloud, you use their dashboard, and there is no simple way to forward that data to a SIEM; you have to jump through multiple hoops. That does not even address the lack of proper reporting in the console, the clunky search system, and the poor information on how the whole thing works (or doesn’t) in identifying people and computers. So now you have your SIEM dashboards open and the Cisco Umbrella console open, just to keep an eye on things, since it is now a manual cross-referencing situation.
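To give a flavor of the parsing problem: Cisco IOS devices often prepend a sequence number and use a %FACILITY-SEVERITY-MNEMONIC header that plain RFC 3164 parsers do not expect. The log line below is a made-up example, and the PowerShell regex is only a sketch of the kind of pattern you end up writing; in Graylog itself this logic would live in an extractor or pipeline rule.

  # Hypothetical Cisco IOS syslog line; the sequence number ("147:") and the
  # %FACILITY-SEVERITY-MNEMONIC header are what standard RFC 3164 parsers choke on.
  $line = '<189>147: *Jun 13 10:22:33.456: %SYS-5-CONFIG_I: Configured from console by admin on vty0'

  if ($line -match '^<(?<pri>\d+)>(?<seq>\d+): \*?(?<time>[^%]+): %(?<facility>[A-Z0-9_]+)-(?<severity>\d)-(?<mnemonic>[A-Z0-9_]+): (?<msg>.*)$') {
      # $Matches now holds the named fields you would map onto SIEM fields.
      $Matches['facility'], $Matches['severity'], $Matches['mnemonic'], $Matches['msg']
  }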

Now add an EDR solution that again forces you into either a specific SIEM (like Splunk) or its own console. Again, it doesn’t give you full information in the console, requiring you to make guesses based on what it does provide. It also does not play nicely with built-in security services, so you have to put in allow or bypass rules for files and directories. It doesn’t even update known software certificates in any reasonable time, although a different piece of software from the same company does. Talk about software in a silo: the same ecosystem not even working consistently.

So now we have console number three open. Say you are a small business with four people on your whole security team, if you are lucky. On top of that, at least one of these tools (the EDR) needs extra monitoring because its false positive potential is extremely high, and a bad call there can have a major negative impact on the company’s bottom line. Again: more complexity, more eyes needed, more work to do.

Buying into an ecosystem is fine and all, but most companies tend to look for the software that will work best in their environment (or, more often, suck the least). These solutions, though, demand more and more complexity in our setups. So you add to the cost by outsourcing your SOC. The question is: how long until they understand your environment well enough to be useful? How much turnover do they have, and how many other companies are they monitoring at the same time with their own limited resources? And now you have another layer of complexity added to the whole ordeal.

The truth as I see it is this: there is no money in security, only in the products. The more the products sit in their own silos and do not communicate, the more people are needed. It is a system predicated on making money and helping other companies make money. If companies used standards and reduced the complexity, so that a single product, a SIEM, could give you all the cross-referencing and dashboards in one place (like it is supposed to), we could start focusing on the real issues. We could also stop burning out our security teams through product overload. Products need to be properly designed for human consumption. Interoperability is needed.

There was, and is, a conspiracy theory that anti-virus vendors write viruses and release them into the wild so that their products are needed. In a similar vein, we are seeing the same kind of dynamic across the security industry. It is helping cause burnout and a huge employment gap. Not enough eyes, and then we wonder why breaches take so long to notice. AI, machine learning, and automation are fantastic tools, but we still need the human factor to confirm and monitor them. It is time we started simplifying things to make ourselves more secure and cut down the burnout.

Filed Under: Rants, Security Tagged With: Communication, Security, SIEM

DCSync, where the heck did that come from?

October 25, 2018 By Michael Kavka

Have you ever had a pentest or red team report that talks about DCSync? How much hair pulling did it cause? What is DCSync, and what is its significance?

When securing Active Directory there are a ton of moving parts, and even more rights available, especially once you add in extended rights. There is one set, though, that can get assigned, which is used for synchronizing all of Active Directory. Two of these rights work together and allow the copying of secret information, such as password hashes. They matter when doing certain types of sign-on with AD credentials, such as SharePoint, or when synchronizing with Azure, and they are also what allow Domain Controllers to synchronize all the domain information between themselves. The rights in question, “Replicating Directory Changes” and “Replicating Directory Changes All”, carry that much power, and to get a full sync including password hashes you need both. The idea is to keep these rights, especially the All right, assigned to a bare minimum of user and service accounts. This is important to prevent Mimikatz’s DCSync attack, which essentially makes a copy of all the AD information so an attacker can crack passwords offline.

One would think this should not be a big deal, but it can get out of control very quickly. For starters, the only place you can directly see these rights is at the root of the domain in ADUC (Active Directory Users and Computers). Even when they have propagated down, you cannot simply find them in the advanced security section of an object’s properties in order to remove them. Second, you can acquire the right by being granted full control, or by being in a group that is granted full control, of a group that either has the right granted directly from the root or has had it propagated down. You can map this sort of delegation with a tool such as BloodHoundAD. Another way to get the right is to be assigned it through delegation at the root of the domain. The end result is that there is little to no reason for normal, or even admin, users to have the Replicating Directory Changes All right. Certain service accounts that are not allowed interactive logon may need it, such as a service account that replicates to Azure, as mentioned before. Bottom line: this right should not be given out unless absolutely necessary.
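For a quick first look before (or alongside) mapping things out in BloodHound, a minimal PowerShell sketch like the one below lists everyone granted either replication right directly on the domain root. It assumes the RSAT ActiveDirectory module (which provides Get-ADDomain and the AD: drive) is installed, and it uses the well-known GUIDs for the two extended rights; note it will not surface rights acquired through nested group control, which is exactly what BloodHound is better at.

  # List trustees holding either replication right directly on the domain root.
  Import-Module ActiveDirectory
  $domainDN = (Get-ADDomain).DistinguishedName

  # Well-known extended-right GUIDs for the two replication permissions.
  $rights = @{
      '1131f6aa-9c07-11d1-f79f-00c04fc2dcd2' = 'Replicating Directory Changes'
      '1131f6ad-9c07-11d1-f79f-00c04fc2dcd2' = 'Replicating Directory Changes All'
  }

  (Get-Acl "AD:\$domainDN").Access |
      Where-Object { $rights.ContainsKey($_.ObjectType.ToString()) } |
      Select-Object IdentityReference, AccessControlType,
          @{ Name = 'Right'; Expression = { $rights[$_.ObjectType.ToString()] } }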

So in BloodHoundAD we have a tool that can show us which accounts and groups hold this right, and whether they get it through a security assignment such as Full Control or through membership in a group that has it. The fear in removing the right is not knowing whether anything will break. How do we know which accounts are actually using the right? Windows event logs come in handy here, as long as you are collecting them from your Domain Controllers.

Finding the search string happened for me while digging into a search from the Blue Team Field Manual, which gives some nice PowerShell searches for event logs. The one that started everything for me was their Domain Service Access – Audit Directory Service Access entry. More specifically, Event ID 4662 is the one to search for. From there I started looking at what the Access Masks meant, finding that Access Mask 0x100 is the Control Access right. The actual rights being exercised are given in GUID format. Some searching on the GUIDs turned up the relevant ones: the ObjectType GUID {19195a5b-6da0-11d0-afd3-00c04fd930c9} identifies the domain object itself, while the extended right “Replicating Directory Changes” is {1131f6aa-9c07-11d1-f79f-00c04fc2dcd2} and “Replicating Directory Changes All” is {1131f6ad-9c07-11d1-f79f-00c04fc2dcd2}. The big thing is that these events show computer accounts, such as Domain Controllers, using the rights to synchronize between themselves, alongside any user accounts actually exercising them, such as the Azure service account mentioned earlier. Filtering out the computer accounts shows which accounts are really doing synchronization, which lets you ask the account owners why they might be synchronizing the domain. From there you can get a good idea of what will happen if the Replicating Directory Changes All right is removed, and reassign the accounts that actually need it so they hold the right directly at the root of the domain, allowing you to remove it from groups as well. In Graylog, if you are using NXLog into a GELF input to parse the event log information properly, the search would look like this:

EventID:4662 AND AccessMask:0x100 AND ObjectType:"%\{19195a5b\-6da0\-11d0\-afd3\-00c04fd930c9\}" AND "{1131f6aa\-9c07\-11d1\-f79f\-00c04fc2dcd2}" AND NOT SubjectUserName:*$

Mind you, using *$ requires that Graylog be configured to allow leading wildcards in search strings. If that is not enabled, you should be able to put the first character of your naming scheme, then the wildcard, then $. In Splunk the search would be similar; just make sure you use the proper index and Splunk’s field name for Event IDs.
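If you want to sanity-check the results without a SIEM in the path, a rough PowerShell equivalent, in the spirit of the Blue Team Field Manual searches mentioned above, can be run directly on a Domain Controller against the Security log. This is only a sketch: it assumes the rights GUIDs show up in the rendered message text and it filters machine accounts by the trailing $ on the Account Name line, so adjust both to your environment.

  # Pull 4662 events referencing either replication right, dropping machine accounts.
  $changes    = '1131f6aa-9c07-11d1-f79f-00c04fc2dcd2'   # Replicating Directory Changes
  $changesAll = '1131f6ad-9c07-11d1-f79f-00c04fc2dcd2'   # Replicating Directory Changes All

  Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4662 } |
      Where-Object { $_.Message -match "$changes|$changesAll" } |
      Where-Object { $_.Message -notmatch 'Account Name:\s+\S+\$' } |
      Select-Object TimeCreated, Message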

Either way, this gives a basic way to search for an attacker attempting to make a copy of the domain. Note that the search matches on the base Replicating Directory Changes GUID, so it will also catch anyone pulling just the non-secret information from the domain, another way for attackers to comb through the domain offline while making less noise. Adding the Replicating Directory Changes All GUID, {1131f6ad-9c07-11d1-f79f-00c04fc2dcd2}, narrows it to full syncs that include the password hashes.

As always, if I have made any errors here, feel free to let me know, and feel free to discuss all of this.

Filed Under: General Tagged With: Blue Team, DCSync, Graylog, Replicating Directory Changes, Replicating Directory Changes All, SIEM

Noise, Noise oh the Noise

June 21, 2018 By Michael Kavka

So many security controls. So much noise. It is a wonder we find anything relevant at all. The amount of time spent going through log after log is staggering. Even with SIEMs, dashboards, machine learning, and “AI”, there are still a ton of logs to go through. Yes, we can whittle them down once we know what we are looking at or looking for. The problem is that we have no good way of determining whether we are missing something, even with machines helping us.

Example: you have product X, which weighs the behavior of files it knows against files it does not know, using the Tactics, Techniques, and Procedures it is aware of, to decide what to do about a given file. This, along with known hashes of known malware, gives each action a score. The problem: even known files handled this way produce a lot of false positives, though their scores might be lower. For example, Outlook uses a known third-party plugin for encrypting and decrypting e-mail. These plugin invocations constantly show up as monitored low-level events because of the memory calls involved. The same technique, though, can be used by malicious software or scripts that use Outlook as the vector in, through a vulnerability triggered by a malicious e-mail. Now that attack is lost in a sea of false positives. Depending on the product, you may be able to tune out only the false positives, but not always. The tuning becomes too broad, and you lose alerting on the attack because “Outlook invoking another program” has been tuned out as expected behavior, or you stop getting these alerts at all.

So what is my point? Products, especially security products, tend either to generate too much data without enough control, or to generate too little, with a huge amount of complexity required to get at the data. You either get flooded or starved of information. We then spend more time tuning these products to our environment than we should have to. Part of the problem is the fast pace of updates and upgrades causing hash changes and sometimes behavior changes. Part of it seems to be the rush to get products to market. Speed kills, in so many different ways, to the point of burning us out. Computer intelligence is getting better, but we still need plenty of eyes on the issues. We need more “entry level” analysts and SOC workers to go through the data and tune the flow. Instead, companies focus on hiring higher-level engineers and architects. Everyone wants to jump up fast, and that leaves the higher-level positions doing jobs they don’t want to, and should not have to, do. This leads to job hopping, then burnout, then more openings. It is a vicious circle that needs to be broken. Maybe we need to start with less noise, and stop adding products until we have the resources, and the products we already use tuned properly.

What are your thoughts?

Filed Under: Rants, Security Tagged With: Burnout, Logs, Machine Learning, SIEM
