
Defensive Security Podcast Episode 276


Check out the latest Defensive Security Podcast Ep. 276! From cow milking robots held ransom to why IT folks dread patching, Jerry Bell and Andrew Kalat cover it all. Tune in and stay informed on the latest in cybersecurity!

Summary:

In episode 276 of the Defensive Security Podcast, hosts Jerry Bell and Andrew Kalat delve into a variety of security topics including a ransomware attack on a Swedish farm’s milking machine leading to the tragic death of a cow, issues with patch management in IT industries, and an alarming new wormable IPv6 vulnerability patch from Microsoft. The episode also covers a fascinating study on the exposure and exploitation of AWS credentials left in public places, highlighting the urgency of automating patching and establishing robust credential management systems. The hosts engage listeners with a mix of humor and in-depth technical discussions aimed at shedding light on critical cybersecurity challenges.

00:00 Introduction and Casual Banter
01:14 Milking Robot Ransomware Incident
04:47 Patch Management Challenges
05:41 CrowdStrike Outage and Patching Strategies
08:24 The Importance of Regular Maintenance and Automation
15:01 Technical Debt and Ownership Issues
18:57 Vulnerability Management and Exploitation
25:55 Prioritizing Vulnerability Patching
26:14 AWS Credentials Left in Public: A Case Study
29:06 The Speed of Credential Exploitation
31:05 Container Image Vulnerabilities
37:07 Teaching Secure Development Practices
40:02 Microsoft’s IPv6 Security Bug
43:29 Podcast Wrap-Up and Social Media Plugs

Links:

  • https://securityaffairs.com/166839/cyber-crime/cow-milking-robot-hacked.html
  • https://www.theregister.com/2024/07/25/patch_management_study/
  • https://www.cybersecuritydive.com/news/misguided-lessons-crowdstrike-outage/723991/
  • https://cybenari.com/2024/08/whats-the-worst-place-to-leave-your-secrets/
  • https://www.theregister.com/2024/08/14/august_patch_tuesday_ipv6/

Transcript:

Jerry: Today is Thursday, August 15th, 2024. And this is episode 276 of the Defensive Security Podcast. My name is Jerry Bell, and joining me tonight as always is Mr. Andrew Kalat.

Andrew: Good evening, Jerry. Once again, from your southern compound, I see.

Jerry: Once again, and for the final time for two whole weeks, and then I’ll be back.

Andrew: Alright hopefully next time you come back, you’ll have yet another hurricane to dodge.

Jerry: God, I hope not.

Andrew: How are you, sir?

Jerry: I’m doing great. It’s a, it’s been a great couple of weeks and I’m looking forward to going home for a little bit and then then coming back. How are you?

Andrew: I’m good, man. It’s getting towards the end of summer. Looking forward to a fall trip coming up pretty soon, and just cruising along. Livin’ the dream.

Jerry: We will make up for last week’s banter about storms and just get into some stories. But first, a reminder that the thoughts and opinions we express are our own and not our employers’.

Andrew: Indeed. Which is important because they would probably fire me. You’ve tried.

Jerry: I would, yeah. So the first story we have tonight is very moving.

Andrew: I got some beef with these people.

Jerry: Great. Very moving. This one comes from Security Affairs, and the title is “Crooks took control of a cow milking robot, causing the death of a cow.” Now, I will tell you that the headline is much more salacious than the actual story. When I saw the headline, I thought, oh my God, somebody hacked a robot and it somehow killed the cow, but no, that’s not actually what happened.

Andrew: Now, also, let’s just say up front, the death of a cow is terrible, and we are not making light of that. But we are gonna milk this story for a little while.

Jerry: that’s very true.

Andrew: I’m almost out of cow puns.

Jerry: Thank God for that. So, what happened here is this farm in Sweden had their milking machine, I guess it’s a milking robot, ransomwared, and the farmer noticed that he was no longer able to manage the system and contacted the support for that system. And they said, no, you’ve been ransomwared.

Actually, the milking machine itself apparently was pretty trivial to get back up and running, but apparently what was lost in the attack was important health information about the cows, including when some of the cows were inseminated. And because of that, they didn’t know that one of the pregnant cows was supposed to have given birth, but actually hadn’t.

And so what turned out to be the case is that the cow’s fetus unfortunately passed away inside the cow, and the farmer didn’t know it until they found the cow lying lethargic in its stall, and they called a vet. And unfortunately, at that point it was too late to save the cow.

This is an unfortunate situation where a ransomware attack did cause a fatality.

Andrew: Yeah, and I think in the interest of accuracy, I think it was in Switzerland,

Jerry: Is it Switzerland? Okay. I knew it started with an S-W.

Andrew: That’s fair. You’re close. It’s Europe.

Jerry: It’s all up there.

Andrew: But yeah, I guess the theory is that if they had better tracking data on when the cow had been inseminated, they would have known that the cow was in distress with labor and could have done something more proactively to save the cow and potentially the calf. And unfortunately, because they didn’t have that data, because it was in this ransomwared milking robot machine, we ended up with a dead cow and a dead calf.

Jerry: So, not to grill the farmer too much, but I was thinking that,

Andrew: Wow!

Jerry: I’m sorry. I was thinking that, they clearly had an ability to recover. And what they thought was the important aspect of that machine’s operation, which was milking, they were able to get that back up and running pretty quickly.

But it seemed to me like they were unaware that this other information was kind of tied to that same system. I don’t fully understand; it seems like it’s a little more complicated than I’ve got it envisioned in my mind. But very clearly they hadn’t thought through all the potential harm.

A good lesson, I think for us all.

Andrew: I feel like we’ve butchered this story.

Jerry: The next story we have for today comes from theregister.com, and the title is “Patch management still seemingly abysmal because no one wants the job.” I can’t stop laughing. All right.

Andrew: A cow died! That’s tragic!

Jerry: I’m laughing at your terrible attempts at humor.

Andrew: I couldn’t work leather in there. I tried. I kept trying to come up with a leather pun.

Jerry: We appreciate your efforts.

So anyhow, this next story talks about the challenge that we as an IT industry have with patching, and basically that it is a very boring task that not a lot of people who are in IT actually want to do. And so it highlights the importance, again, of automation.

This, and the complementary story, which is titled “Misguided lessons from CrowdStrike outage could be disastrous,” from Cybersecurity Dive. I put these two together for a reason, because one of the, I think, takeaways from the recent CrowdStrike disaster is that we need to go slower with patching and updates and perhaps not rely on automatic updates.

And these two articles really point out the folly in that. Number one, this article from The Register is pointing out that relying on manual patching is a losing proposition, because really nobody wants to do it and it doesn’t scale. IT operations is already a crap job in many instances, and then expecting people to do things manually is a problem.

The second article points out the security issues that come along with adopting that strategy, which is that you’re exposing your environment unduly, unnecessarily. And in fact, the improvements in your security posture and the reduction in likelihood of some kind of an attack far outweigh the remote possibility of what happened.

Like we saw with CrowdStrike. Now, there is kind of an asterisk at the bottom. They point out the importance of doing staged deployments of patches, which I think is one of the central lessons, at least from my perspective, of the CrowdStrike disaster: go fast, but stage it.
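
For readers who want to see what “go fast, but stage it” can look like in practice, here is a minimal, hypothetical sketch of ring-based patch rollout logic. The ring names, percentages, soak times, and the health-check and deployment hooks are all assumptions for illustration, not anything prescribed by the articles discussed.

```python
# Hypothetical sketch of ring-based (staged) patch rollout: patch fast, but in
# waves, promoting to the next ring only if the previous one stays healthy.
import time

RINGS = [
    {"name": "canary", "hosts_pct": 1, "soak_minutes": 60},
    {"name": "early", "hosts_pct": 10, "soak_minutes": 240},
    {"name": "broad", "hosts_pct": 89, "soak_minutes": 0},
]

def ring_is_healthy(ring_name: str) -> bool:
    """Placeholder health check; in practice this would query monitoring
    (crash rates, boot loops, service errors) for hosts in the ring."""
    return True

def roll_out_patch(patch_id: str) -> None:
    for ring in RINGS:
        print(f"Deploying {patch_id} to {ring['name']} ({ring['hosts_pct']}% of fleet)")
        # deploy_to_ring(patch_id, ring["name"])  # hypothetical deployment call
        time.sleep(ring["soak_minutes"] * 60)     # let the ring soak before promoting
        if not ring_is_healthy(ring["name"]):
            print(f"Halting rollout of {patch_id}: {ring['name']} ring unhealthy")
            return
    print(f"{patch_id} fully deployed")

if __name__ == "__main__":
    roll_out_patch("KB0000000")  # hypothetical patch identifier
```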

Andrew: Yeah, it’s an interesting problem that we’re struggling with here, which is: how many times have we saved our own butts without knowing it by automated or rapid patching? It’s very difficult to prove that negative. And so it’s very difficult to weigh the pros and cons with empirical data showing where automatic patching or rapid patching solved a problem or avoided a problem, versus when patching broke something.

Cause all we know about is when it breaks, like when a Microsoft patch rolls out and breaks and that sort of thing. And it’s one of those things where the feeling from a lot of folks is that it has to be perfect every time, and every time we have a problem, we break some of that trust. It hurts the credibility of auto patching or rapid patching. The other thing that comes to mind is I would love to get more IT folks and technical operations folks and SREs and DevOps folks on board with the concept of patching as just part of regular maintenance, something that is just built into their process. A lot of times it feels like a patch is interrupt-driven or toil-type work that they have to stop what they’re doing to go work on.

Where, in my mind, at least the way I look at it from a risk manager perspective, unless something’s on fire or is a known RCE or known exploited, certain criteria like that, I’m good. Hey, patch on a monthly cadence and just catch everything up on that monthly cadence, whatever it is. I can work within that cadence.

If I’ve got something that I think is a higher priority, we can try to interrupt that or drive a different cadence to get that patched or mitigated in some way. But the problem often is that every one of these patches seems to be a one-off action if you’re not doing automatic patching in some way. That is very cognitively dissonant with what a lot of these teams are doing, and I don’t know how to get across very well that you will always have to patch. This will never stop. So you have to plan for it.

You have to build time for that. You have to build automation and cycles for that and around it, and it’ll be a lot less painful. It feels like pushing the rock up the hill on that one.

Jerry: One of my observations was that an impediment to fast patching is the reluctance to take downtime, or the potential impacts from downtime. And I think that dovetails with what you just said; in part, that concern stems from the way we design our IT systems and our IT environments. If we design them in a way that they’re not patchable without interrupting operations, then my view is we’ve not been successful in designing the environment to meet the business.

And that’s something that I tried hard to drive, and I think in some aspects I was successful and in others I was not. But I think that is one of the real key things that we as IT leaders or security leaders really need to be imparting to the teams: when we’re designing things, they need to be maintainable, not as an interrupt, like you described it, but in the normal course of business, without heroic efforts.

You have to be able to patch, you have to be able to take the system down. You can’t say, gosh, this system is so important, if we take it down we’re going to lose millions of dollars, we can’t ever take it down. It’s not a good look. You didn’t design it right.

Andrew: That system is gonna go down. Is it gonna be on your schedule or not? The other thing I think about with patching is not just vulnerability management. Let’s say you don’t patch, and suddenly you’ve got a very urgent vulnerability that does need to be patched, and you’re four major versions and three sub versions behind. Now you have this massive uplift that’s probably going to be far more disruptive to get that security patch applied, as opposed to if you’re staying relatively current, n minus one or n minus two, where it’s much less disruptive to get that up to date.

Not to mention all of the end-of-life and end-of-support issues that come with running really old software, and you don’t even know what vulnerabilities might be running out there. Just keeping things current as a matter of course, I believe, makes dealing with emergency patches much, much easier. But all these things take time and resources away from what is perceived to be higher value activities. So it’s constantly a resource battle.

Jerry: And there was a quote related to what you just said at the end of this article. It said, quote, I think it mostly comes down to technical debt. He explained, it’s a very unsexy thing to work on. Nobody wants to do it and everyone feels like it should be automated, but nobody wants to take responsibility for doing it.

He added, the net effect is that nothing gets done and people stay in the state of technical debt, where they’re not able to prioritize it.

Andrew: That’s not a great place to be.

Jerry: No. There was another interesting quote that I often see thrown around, and it has to do with the percent of patches. I’ll just give the quote, from toward the beginning of the article: “Patching is still notoriously difficult,” Forrester principal analyst Andrew Hewitt told The Register.

Hewitt, who specializes in IT ops, said that while organizations strive for a 97 to 99 percent patch rate, they typically only manage to successfully fix between 75 and 85 percent of issues in their software. I’m left wondering, what does that mean?

Andrew: Yeah, like in what time frame? In what? I don’t know. I feel like what he’s talking about maybe is they only have the ability to automatically patch up to 85 percent of the deployed software in their environment.

Jerry: That could be, it’s a little ambiguous.

Andrew: It is. And from my perspective, there’s actually a couple of different things we’re talking about here, and we’re not being very specific. We’re talking about IT operations, we’re talking about corporate IT solutions and systems and servers for an IT house. I work in a software shop, so we’ve got the whole software side of this equation too, for the code we’re writing and keeping all that stuff up to date, which is a whole other complicated problem, some of which I think would be inappropriate for me to talk about. So it’s doubly difficult, I think, if you’re a software dev shop, to keep all of your components and dependencies and containers and all that stuff up to date.

Jerry: Absolutely. Absolutely. I will also say, a couple of other random thoughts on my part: this, in my view, gets harder or more complicated in larger organizations, because you end up having these kind of siloed functions where responsibility for patching isn’t necessarily clear, whereas in a smaller shop,

you may have an IT function that’s responsible end to end for everything. But in large organizations, oftentimes you’ll have a platform engineering team who’s responsible for, let’s say, operating systems, and that team is a service provider for other parts of the business.

And those other parts of the business may not have a full appreciation for what they’re responsible for from an application perspective. And especially in larger companies, where they want to reduce head count and cut costs, those application-type people, in my experience, as well as the platform team, are ripe targets for reductions.

And when that happens, you end up in this kind of weird spot of having systems and no clear owner on who’s actually responsible. You may even know that you have to patch it, but you may not know whose job it is.

Andrew: Yeah, absolutely. In my perfect world, every application has a technical owner and every underlying operating system or underlying container has a technical owner. Might be the same, might be different. And they have their own set of expectations. Often they’re different and often they’re not talking to each other. So there could be issues in dependencies between the two that they’re not coordinating well. And then you get gridlock and nobody does anything.

Jerry: So these are pragmatic problems that, in my experience, present themselves as sand in the gears, right? They make it very difficult to move swiftly. And that’s what, in my experience, drives that heroic effort, especially when something important comes down the line, because now you have to pay extra attention, because there isn’t a well-functioning process.

And I think that’s something, as an industry, we need to focus on. Oh, go ahead.

Andrew: I was just gonna say, in my mind, some of the ways you solve this, and these are, as usually said, difficult to do: proper, and I should define that, maintained asset management, IT asset management, is key. And in my mind, you’ve got to push your business to make sure that somebody has accountability for every layer of that application, and push your business to say, hey, if we’re not willing to invest in maintaining this, and nobody’s going to take ownership of it, it shouldn’t be in our environment. It must be well owned. It’s like when you adopt a dog: somebody’s got to take care of it, and you can’t just neglect it in the backyard. We run into stuff all the time where it’s just, oh, nobody knows what that is. Then get rid of it. It’s attack surface. Any single thing out there is something that could be attacked, and if it’s not being maintained, that becomes far riskier from an attack surface perspective. So I think that. And I also think about, hey, tell people before you go buy a piece of software: do you have the cycles to maintain it? Do you have the expertise to maintain it?

Jerry: The business commitment to fund its ongoing operations, right?

Andrew: Exactly. I don’t know. It gets stickier. And now we have this concept of SaaS, where a lot of people are buying software and not even thinking about the backend of it, because it’s all just automagic to them. So they get surprised when it’s, oh, it’s in house, we’ve got to actually patch it ourselves. Yeah,

Jerry: The other article, in Cybersecurity Dive, had another interesting quote that I thought lacked some context, and the quote was: there were 26,447 vulnerabilities disclosed last year, and bad actors exploited 75 percent of vulnerabilities within 19 days.

Andrew: no, that’s not right.

Jerry: Yeah, here’s, here is the missing context.

Oh, and it also says one in four high-risk vulnerabilities were exploited the same day they were disclosed. Now, the missing context is this: the quote is referring to a report by Qualys that came out earlier, at the beginning of the year, and what it was saying is that about 1 percent of vulnerabilities are what they call high risk, and those are the vulnerabilities that end up having exploits created, which is an interesting data point in and of itself, that only 1 percent of vulnerabilities are what people go after.

Patching-wise, our goal is to patch all of them. What they’re saying is that 75 percent of that 1 percent, the ones which had exploits created, had those exploits created within 19 days.
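
To make the scale of that missing context concrete, here is the back-of-the-envelope arithmetic implied by the two figures quoted above (the exact high-risk count is not given in the episode, so treat these as rough estimates):

```python
# Rough arithmetic combining the two figures quoted above.
disclosed_last_year = 26_447      # vulnerabilities disclosed last year (per the article)
high_risk_share = 0.01            # ~1% are "high risk" per the Qualys report Jerry cites
exploited_within_19_days = 0.75   # 75% of that high-risk subset saw exploits within 19 days

high_risk = disclosed_last_year * high_risk_share
fast_exploited = high_risk * exploited_within_19_days

print(f"High-risk vulnerabilities (exploits created): ~{high_risk:.0f}")       # ~264
print(f"Of those, exploited within 19 days:           ~{fast_exploited:.0f}")  # ~198
```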

Andrew: That makes, that’s a lot more in line with my understanding.

Jerry: And 25 percent were exploited within the same day. So that’s the important context. It’s a very salacious statement without that extra context. And I will say that, as a security leader, one of the challenges we have is, again, that there were almost 27,000 vulnerabilities. I think we’re going to blow the doors off that number this year.

Not all of them are equally important. Obviously they’re rated at different levels of severity, but the reality, for those of us who pay attention, is that it’s not just the critical vulnerabilities that are leading to things being exploited and hacked and data breaches and whatnot.

There’s plenty of instances where you have lower severity vulnerabilities, from a CVSS perspective, being exploited, either on their own or chained together. But the problem is knowing which ones are important. And so there’s a whole cottage industry growing up around trying to help you prioritize better which vulnerabilities to go after.

But that is the problem, right? I feel like we have kind of a crying wolf problem, because 99 percent of the time or more, the thing that we’re saying the business has to go off and spend lots of time on, disrupting their availability and pulling people in on the weekends and whatnot, is not exploited, it’s not targeted by the bad guys. You only know which ones are in that camp after the fact.

So if you had that visibility before the fact, it’d be better, but that’s a that’s a very naive thing at this point.

Andrew: Yeah. If 1%.

Jerry: If I could only predict the winning lottery numbers.

Andrew: The other thing, and this is a debate I’ve had many times in my career, is that ops folks, whomever, they’re not the bad guys, they’re just asking questions, trying to prioritize: prove to me how this is exploitable. That’s a really unfair question. I can’t, because I’m not a hacker who could predict every single way this could be used against the business.

I have to play the odds. I have to play statistically what I know to be true, which is that some of them will be exploited. One of the things I could do is prioritize: hey, what’s open to the internet? What’s my attack surface? What services do I know are open to anonymous browsing, or not browsing but reachability from the internet? Maybe those are my top priority, and I watch those carefully for open RCEs or likely exploitable things, and I prioritize on those. But at the end of the day, not patching something because I can’t prove it’s exploitable assumes that I can predict what every bad guy is ever going to do in the future, or chain attacks in the future that I’m not aware of.

And I think that’s a really difficult thing to prove.

Jerry: Yeah, a hundred percent. There are some things that can help you, some things beyond just CVSS scores that can help you a bit. Certainly if you look at something and it is wormable, right, remote code execution of any sort is something that in my estimation you really need to prioritize. Then there’s the CISA agency, the Cybersecurity and Infrastructure Security Agency, whose name still pisses me off

all these years later, because it has the word security in it too many times. But they didn’t ask me. They have this list they call the KEV, the Known Exploited Vulnerabilities list, which in previous years was a joke because they didn’t update it very often, but now it’s actually updated very aggressively.

And so it contains the list of vulnerabilities that the US government and some other foreign partners see actively being exploited in the industry. So that’s also a data point. And I would say, my perspective is that it shouldn’t be the only thing you use to decide what to patch. In my view, your approach should be: we’re going to patch it all, but those are the ones that we’re not going to relent on.

There’s always going to be a need, there’s going to be some sort of end-of-quarter situation or what have you, but these are the ones that you should be looking at and saying, no, these can’t wait, we have to patch those.

Andrew: Yep. 100%. And a lot of your vulnerability management tools are now integrating that list. So it can help you right in the tool know what the prioritization is. But bear in mind, there’s a lot of assumptions in that, that those authorities have noted activity, have noted and shared it, understood it, and zero days happen.

Jerry: Somebody had to get, the reality is somebody had to get hacked. Tautologically, somebody had to get hacked for it to be on the list.

Andrew: right, so don’t rely only on that, but it is absolutely a good prioritization tool and a good focusing item of look, we have this, know we have this is known exploitable. We’re seeing exploits in the wild. We need to get this patched.
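
Since the hosts mention that many vulnerability management tools now integrate the KEV catalog, here is a minimal sketch of the same lookup done by hand. It assumes the public JSON feed URL and field names (a vulnerabilities array with cveID entries) that CISA publishes; verify the current location and schema on cisa.gov before depending on it, and the CVE used below is just an example.

```python
# Hedged sketch: check whether specific CVEs appear in the CISA KEV catalog.
# Assumes the public JSON feed URL and its "vulnerabilities"/"cveID" fields;
# confirm the current location and schema on cisa.gov before relying on this.
import json
import urllib.request

KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

def kev_cve_ids() -> set:
    with urllib.request.urlopen(KEV_URL, timeout=30) as resp:
        catalog = json.load(resp)
    return {entry["cveID"] for entry in catalog.get("vulnerabilities", [])}

if __name__ == "__main__":
    kev = kev_cve_ids()
    # Example CVE only (Log4Shell); in practice, feed in your scanner's findings.
    for cve in ["CVE-2021-44228"]:
        status = "on the KEV list" if cve in kev else "not on the KEV list (yet)"
        print(f"{cve}: {status}")
```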

Jerry: Yeah, absolutely. So moving on to the next story, this one is from a cybersecurity consulting company called Cybenari. I guess that’s how you would say it.

Andrew: I’d go with that. That seems reasonable.

Jerry: I’m sure somebody will correct me if I got that wrong. The title here is “What’s the worst place to leave your secrets? Research into what happens to AWS credentials that are left in public places.” I thought this was a fascinating read, especially given where I had come from. I’ve been saying for some time now on the show that API keys and whatnot are the next big horizon for attacks.

And in fact, we had been seeing that; I think we were actually on the upswing. In my former role, we saw a lot of that manifesting itself as attackers using those to mine crypto, like they would hijack servers or platforms or containers or whatever to mine cryptocurrency.

But I think over time, we’re going to see that morph into more data theft and perhaps less overt actions. I’m sure it is already happening; I don’t mean to say that it isn’t, but I think it’s in the periphery right now, where a lot of the activity, at least a lot of the voluminous activity, tends to be what I’ll call more benign, like, again, crypto mining.

But anyway, the approach that this organization took here was pretty interesting. There’s a company called Thinkst that has this concept of canary tokens and canary credentials. And they are exactly what they sound like: a set of secrets that you can create through this company and watch how they’re used.

You can see, you can get an alert when somebody tries to use them. And that’s exactly what they did here. They created, I think it was 121... 121, that’s the number of total attempts; I don’t know exactly how many credentials they created. They created a number of credentials and they spread them around, and they used a number of different services.

Let’s see, they had GitHub, GitLab, Bitbucket, Docker Hub. They created their own FTP server, web server, and blog. They put them on Pastebin and JSFiddle. They put them into the npmjs repository and the PyPI repository, which we just talked about. They put them in various cloud storage buckets. And then they just waited to see how and when they were accessed.
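
Thinkst’s service handles the alerting side of canary credentials for you; as a rough do-it-yourself illustration of the same idea, a sketch like the one below could poll CloudTrail for any use of a decoy IAM user’s access key. The decoy key ID, the polling window, and the do-nothing IAM user are assumptions, and CloudTrail’s delivery delay has to be acceptable for your alerting needs.

```python
# DIY flavor of the canary-credential idea (Thinkst's canarytokens do this for you):
# poll CloudTrail for any API activity attributed to a decoy access key that should
# never legitimately be used. Assumes boto3, cloudtrail:LookupEvents permission, and
# that DECOY_ACCESS_KEY_ID belongs to a do-nothing IAM user in your own account.
from datetime import datetime, timedelta, timezone

import boto3

DECOY_ACCESS_KEY_ID = "AKIAEXAMPLEDECOY0000"  # hypothetical placeholder

def decoy_key_events(hours: int = 1) -> list:
    cloudtrail = boto3.client("cloudtrail")
    start = datetime.now(timezone.utc) - timedelta(hours=hours)
    resp = cloudtrail.lookup_events(
        LookupAttributes=[
            {"AttributeKey": "AccessKeyId", "AttributeValue": DECOY_ACCESS_KEY_ID}
        ],
        StartTime=start,
    )
    return resp.get("Events", [])

if __name__ == "__main__":
    events = decoy_key_events()
    if events:
        # Note: CloudTrail events can take several minutes to appear.
        print(f"ALERT: decoy credential used {len(events)} time(s) in the last hour")
    else:
        print("No decoy credential activity observed")
```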

And unsurprisingly to me, this was pretty successful. It was a very successful test. I am a little surprised at some of the results. So one of the charts they give is how long it took for credentials planted at different places to be used.

Andrew: It’s insane.

Jerry: So npmjs was less than 60 seconds.

Andrew: So that means, from the time of posting, somebody somehow picked it up with some kind of scanner and could turn around and test using it in less than a minute.

Jerry: Yes.

Andrew: That’s insane.

Jerry: PyPI was 120 seconds, so just shy of two minutes. GitHub was 127 seconds, just over two minutes. Pastebin was 50 minutes, their own web server 47 hours, and Docker Hub 6.9 days.

Andrew: Man, what’s going on with Docker Hub? Just nobody cares? Never got around to it?

Jerry: Nobody cares. I think it’s a lot more involved. It’s not as readily scannable, I would say.

Andrew: I can tell you from my own experience in previous roles, we used to get reports all the time of, hey, you’ve got a secret out here, hey, you’ve got a secret out here, from people looking for bounties. I still want to know what tools they are using to find this stuff so rapidly, because it’s fast.

Jerry: Yes.

Andrew: And

Jerry: Like GitHub. GitHub will give you an API, so you can actually subscribe to an API. Again, it’s not perfect, because obviously they’re typically relying on randomness, or something being prefixed with password equals, or what have you. It’s not a perfect match, but there’s lots of tools out there that people are using.

The one that I found most interesting and it’s more aligned with the Docker Hub one, but not. And I think it’s something that is a much larger problem that hasn’t manifested itself as a big problem yet. And that is, with container images you can continue to iterate on them.

You can and by default when you spin up a container, it is the end state of a whole bunch of what I’ll just call layers And so if you, let’s say included credentials at some point in a configuration file, and then you later deleted that file, when you spin up that container in a running image, you won’t find that file.

But it actually is still in that container image file. And so if you were to upload that container image to, let’s say, Docker Hub, and somebody knew what they were doing, they could actually look through the history and find that file. And that has happened; I’ve seen it happen a fair number of times. You have to go through some extra steps to squash down the container image so that you basically purge all the history and only end up with the last intended state of the container, but not a lot of people know that. Like, how many people know that you have to do that?
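
As a rough illustration of why a file deleted in a later layer can still be recovered, here is a hedged sketch that exports an image with docker save and lists every path recorded in every layer. It assumes the Docker CLI is available and that docker save produces the classic layout with nested layer tarballs; the image name and the "suspicious" substrings are hypothetical.

```python
# Hedged sketch: list every file path recorded in every layer of a local image,
# including files that a later "RUN rm" appeared to delete. Assumes the Docker CLI
# and the classic `docker save` layout where each layer is a nested layer.tar.
import subprocess
import tarfile
import tempfile

def files_in_all_layers(image: str) -> set:
    with tempfile.NamedTemporaryFile(suffix=".tar") as tmp:
        subprocess.run(["docker", "save", image, "-o", tmp.name], check=True)
        names = set()
        with tarfile.open(tmp.name) as outer:
            for member in outer.getmembers():
                if not member.name.endswith(".tar"):   # skip manifest.json, config, dirs
                    continue
                layer_blob = outer.extractfile(member)
                with tarfile.open(fileobj=layer_blob) as layer:
                    names.update(layer.getnames())
        return names

if __name__ == "__main__":
    suspicious = ("credentials", ".env", "id_rsa", ".aws")
    for path in sorted(files_in_all_layers("myapp:latest")):   # hypothetical image name
        if any(marker in path for marker in suspicious):
            print("possible secret material recorded in a layer:", path)
```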

Andrew: well, including you, the six people listening to this show, maybe four others.

Jerry: So there’s a lot of nuance here. I thought the timing was just fascinating. Based on my experience, I knew it was going to be fast, but I did not expect it to be that fast. Now, in terms of where most of the credentials were used,

that was also very interesting, and in some respects not what I expected. So the place where the most credentials were used was Pastebin, which is interesting, because Pastebin also had a relatively long time before the credentials were used. And so I think it means that people are more aggressively crawling it.

And then the second most common is a website. And I think that one does not surprise me, because crawling websites has been a thing for a very long time, and there’s lots and lots of tools out there to help identify credentials. So obviously it’s a little dependent on how you present them.

If you have a password.txt file and that’s available in a directory index on your webpage, that’s probably going to get a lot more hits.

Andrew: I’m, you know what?

Jerry: Yeah, I know. You’re not even going to go there. Yep. You’re I’ll tell you the trouble with your mom. There you go. Feel better.

Andrew: I feel like she’s going to tan your hide.

Jerry: See, there you go. You got the leather joke after all. Just like your mom.

Andrew: Oh, out of nowhere,

Jerry: All right. Then GitHub was a distant third.

Andrew: which surprises me. I,

Jerry: That did surprise me too.

Andrew: Yeah. And I also know GitHub is a place where tons and tons of secrets get leaked, and GitLab and similar, because it’s very easy for developers to accidentally leak secrets in their code up to these public repos. And then you can never get rid of them.

You’ve got to rotate them.

Jerry: I think so. My view is it’s more a reflection of the complexity of finding them, because in a repository you’ve got to search through a lot of crap. And I don’t think that the tools to search for them are as sophisticated as, let’s say, a web crawler hitting the Pastebin website.

Andrew: Which is fascinating, that the incentive is on third parties finding the mistake. Yeah, they’ve got better tooling, then. Now, to be fair, GitHub for instance has plenty of tools you can buy, both homegrown at GitHub and from third parties, that in theory will help you detect a secret before you commit it, but they’re not perfect and not everybody has them.
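
To make the "catch it before you commit" idea concrete, here is a deliberately crude, hypothetical sketch of a pre-commit scan over staged files. Real tools such as gitleaks, trufflehog, or GitHub's own secret scanning are far more capable; the two regex patterns below are illustrative only (AWS access key IDs do start with AKIA, but many other key shapes exist).

```python
# Crude pre-commit style scan of staged files for obvious credential patterns.
# Illustrative only; dedicated secret scanners cover far more formats and entropy checks.
import re
import subprocess
import sys

PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                                             # AWS access key ID shape
    re.compile(r"(?i)(password|secret|api[_-]?key)\s*[:=]\s*['\"][^'\"]{8,}"),   # naive assignment
]

def staged_files() -> list:
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def main() -> int:
    findings = 0
    for path in staged_files():
        try:
            with open(path, encoding="utf-8", errors="ignore") as handle:
                text = handle.read()
        except OSError:
            continue
        for pattern in PATTERNS:
            for match in pattern.finditer(text):
                print(f"{path}: possible secret: {match.group(0)[:12]}...")
                findings += 1
    return 1 if findings else 0   # nonzero exit blocks the commit when wired up as a hook

if __name__ == "__main__":
    sys.exit(main())
```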

Jerry: Correct. Correct. And I also think, in my experience, it’s much more of a common problem, from a likelihood-of-exposure perspective, for the average IT shop: you’re much more likely to see your keys leak through GitHub than from people posting them on a website or on Pastebin.

But knowing that if they do end up on Pastebin, somebody’s going to find them, is, I think, important to know. In my experience, it’s Docker Hub and the code repositories, like PyPI and npm and GitLab and GitHub, that’s where it happens, right? That’s where we leak them.

It’s interesting that in this test they tried out all the different channels to see which ones were more or less likely to get hit on. But GitHub, in my experience, GitHub and Docker Hub and whatnot, are the places that you have to really focus and worry about, because that’s where they’re leaking.

Andrew: Yeah. It makes sense. It’s a fascinating study.

Jerry: Yeah. And it

sorry, go ahead.

Andrew: I would love for other people to replicate it and see if they get similar findings.

Jerry: Yes. Yes. And this is one of those things where, again, the tooling is not there; there’s not a deterministic way to tell whether or not your code has a password in it. There are tools, like you said, that will help identify them. To me, it’s important to create what I would call a three-legs-of-the-stool approach.

One is making sure that you have those tools available. Another would be making sure that you have the tools available to store credentials securely, like having HashiCorp Vault or something like that available. And then the third leg of the stool is making sure that the developers know how to use those.

They have to know that those tools exist and that that’s how they’re expected to actually use them. Again, it’s not perfect. It’s not a firewall. You’re still reliant on people, who make mistakes.
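
As a small illustration of the second leg, here is a hedged sketch of an application reading a credential from HashiCorp Vault at runtime instead of hardcoding it. It assumes the hvac client library, a KV v2 secrets engine mounted at the default secret path, and environment-supplied VAULT_ADDR and VAULT_TOKEN; the secret path and key name are hypothetical.

```python
# Minimal sketch: fetch a credential from HashiCorp Vault at runtime rather than
# embedding it in code or config. Assumes the hvac library and a KV v2 engine
# mounted at "secret"; the path and key below are hypothetical.
import os

import hvac

def get_db_password() -> str:
    client = hvac.Client(url=os.environ["VAULT_ADDR"], token=os.environ["VAULT_TOKEN"])
    if not client.is_authenticated():
        raise RuntimeError("Vault authentication failed")
    secret = client.secrets.kv.v2.read_secret_version(path="myapp/database")
    return secret["data"]["data"]["password"]

if __name__ == "__main__":
    # Because the app asks Vault each time (or caches briefly), rotating the secret
    # in Vault does not require a code change or a redeploy.
    print("fetched a password of length", len(get_db_password()))
```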

Andrew: Two questions. First of all, that three legged stool, would that be a milking stool?

Jerry: Yes.

Andrew: Second, plus a question, more comment. I would also try to teach your teams, hey, try to develop this software with the idea that we may have to rotate this secret at some point.

Jerry: Oh, great point. Yes.

Andrew: and try not to back yourself into a corner that makes it very difficult to rotate.

Jerry: Yeah. I’ll go one step further and say that not only should you do that, but you should at the same time implement a strategy where those credentials are automatically rotated on some periodic basis. Whether it’s a month, a quarter, every six months, a year, it doesn’t really matter. Having that change automated gives you the ability, in the worst case scenario where somebody calls you up and says, hey, we just found our key on GitHub, to go exercise that automation without having to go create it or incur downtime or whatnot.

Otherwise the worst case is you’re stuck with this hellacious situation of, I’ve got to rotate the keys, but the only guy who knows how this application works is on a cruise right now, and if we rotate it, we know it’s going to go down. So you end up in this really bad spot.

And I’ve seen that happen so many times.
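
For a concrete flavor of the rotation automation Jerry is describing, here is a hedged sketch that mints a new IAM access key, publishes it to the secret store the application reads from, and deactivates the old key. The IAM user name, secret ID, and the assumption that consumers re-read the secret before the old key is disabled are all hypothetical; a production version would also handle the two-keys-per-user limit and a deletion grace period.

```python
# Hedged sketch of on-demand access key rotation: create a new IAM access key,
# publish it to Secrets Manager for the application to pick up, then deactivate
# the old key(s). Names are hypothetical; delete deactivated keys after a grace period.
import json

import boto3

IAM_USER = "svc-myapp"                  # hypothetical service account
SECRET_ID = "myapp/aws-credentials"     # hypothetical Secrets Manager entry

def rotate_access_key() -> None:
    iam = boto3.client("iam")
    secrets = boto3.client("secretsmanager")

    old_key_ids = [
        key["AccessKeyId"]
        for key in iam.list_access_keys(UserName=IAM_USER)["AccessKeyMetadata"]
    ]

    new_key = iam.create_access_key(UserName=IAM_USER)["AccessKey"]
    secrets.put_secret_value(
        SecretId=SECRET_ID,
        SecretString=json.dumps({
            "aws_access_key_id": new_key["AccessKeyId"],
            "aws_secret_access_key": new_key["SecretAccessKey"],
        }),
    )

    for key_id in old_key_ids:          # deactivate first so a rollback is still possible
        iam.update_access_key(UserName=IAM_USER, AccessKeyId=key_id, Status="Inactive")

if __name__ == "__main__":
    rotate_access_key()
```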

Andrew: And then the CISO ends up foaming at the mouth like a mad cow.

Jerry: Yes, that’s right. Cannot wait for this to be over. All right, the last story, mercifully, is also from theregister.com, and the title is “Microsoft patches scary wormable hijack-my-box-via-IPv6 security bug and others.” It’s been a while since we’ve had one that feels like this. So the issue here is that Microsoft just released a patch, as part of its Patch Tuesday, for a light-touch, pre-authentication remote code exploit over the network, but only for IPv6,

Andrew: which to me is holy crap, big deal. That’s really scary.

Jerry: incredible.

Andrew: and either, I don’t know, I feel like this hasn’t gotten a ton of attention yet. Maybe because there wasn’t like a website and a mascot and a theme song and a catchy name.

Jerry: Yes. And

Andrew: But, so if you’ve got IPv6 running on pretty much any modern version of Windows, zero click RCE exploit, over, have a nice day. That’s scary. That’s a big deal.

Jerry: The better part is that it is IPv6. Now, I guess on the downside, it’s IPv6, and IPv6 typically isn’t affected by things like NAT-based firewalls. And so quite often you have a line of sight from the internet straight to your device, which is a problem. Obviously not always the case.

On the other hand, it’s not widely adopted.

Andrew: But a lot of modern Windows systems are automatically turning it on by default. In fact, I would wager a lot of people have IPv6 turned on and don’t even know it.

Jerry: Very true.

Andrew: Now, you’ve got to have all the interim networking equipment also supporting IPv6 for that to be a problem, but it could be.

Jerry: So the researcher who identified this has not released any exploit code or, in fact, any details other than that it exists. But I would say, now that a patch exists, I think it’s fair to say every freaking security researcher out there right now is trying to reverse those patches to figure out exactly what changed, in hopes of finding out what the problem was, because they want to create blogware and create a name for it and whatnot, I’m sure. This is a huge deal. I think it is a four-alarm fire; you’ve got to get this one patched, like, yesterday.

Andrew: Yeah. It’s been a while since we’ve seen something like this. Like you said at the top of the story, it’s vulnerable, zero-click RCE; just being on the network with IPv6 is all it takes. And I think everything past Windows Server 2008 is vulnerable. Obviously patches are out, but it’s gnarly. It’s a big deal.

Jerry: As you would say, get ye to the patchery.

Andrew: Get ye to the patchery. I’ve not used that lately much. I need to get back to that. Fresh patches available to you at the patchery.

Jerry: All right. I think I think we’ll cut it off there and then ride the rest home.

Andrew: Go do some grazing in the meadow. As you can probably imagine, this is not our first rodeo.

Jerry: Jesus Christ. Where did I go wrong? Anyway, I sincerely apologize, but I also find it... I also find it weird.

Andrew: I don’t apologize in the least.

Jerry: Well, I’m sure there’ll be more.

Andrew: Look man, this is a tough job. You gotta add a little lightness to it. It can drain your soul if you’re not careful.

Jerry: Absolutely. Now I Was

Andrew: But once again, I feel bad for the cow and the calf. That’s terrible. That’s, I don’t wish that on anyone.

Jerry: Alright. Just a reminder that you can find all of our podcast episodes on our website at www.defensivesecurity.org, including jokes like that and the infamous llama jokes from way, way back. You can find Mr. Kalat on X at lerg.

Andrew: That is correct.

Jerry: And on the wonderful, beautiful social media site infosec.exchange, at lerg there as well. And I am at jerry on infosec.exchange. And by the way, if you like this show, give us a good rating on your favorite podcast platform. If you don’t like this show, keep it to yourself.

Andrew: Or still give us a good rating. That’s fine.

Jerry: Or just, yeah, that works.

Andrew: allowed,

Jerry: That works too.

We don’t discriminate.

Andrew: Hopefully you find it useful. That’s all we can ask; that’s our hope,

Jerry: That’s right.

Andrew: It’s us riffing about craziness for an hour, and hopefully you pick up a thing or two, and you can take it, use it, and be happy.

Jerry: All right. Have a lovely week ahead and weekend. And we’ll talk to you again very soon.

Andrew: See you later guys. Bye bye.

  continue reading

268 jaksoa

Artwork
iconJaa
 
Manage episode 434619208 series 1344233
Sisällön tarjoaa Jerry Bell and Andrew Kalat, Jerry Bell, and Andrew Kalat. Jerry Bell and Andrew Kalat, Jerry Bell, and Andrew Kalat tai sen podcast-alustan kumppani lataa ja toimittaa kaiken podcast-sisällön, mukaan lukien jaksot, grafiikat ja podcast-kuvaukset. Jos uskot jonkun käyttävän tekijänoikeudella suojattua teostasi ilman lupaasi, voit seurata tässä https://fi.player.fm/legal kuvattua prosessia.

Check out the latest Defensive Security Podcast Ep. 276! From cow milking robots held ransom to why IT folks dread patching, Jerry Bell and Andrew Kalat cover it all. Tune in and stay informed on the latest in cybersecurity!

Summary:

In episode 276 of the Defensive Security Podcast, hosts Jerry Bell and Andrew Kalat delve into a variety of security topics including a ransomware attack on a Swedish farm’s milking machine leading to the tragic death of a cow, issues with patch management in IT industries, and an alarming new wormable IPv6 vulnerability patch from Microsoft. The episode also covers a fascinating study on the exposure and exploitation of AWS credentials left in public places, highlighting the urgency of automating patching and establishing robust credential management systems. The hosts engage listeners with a mix of humor and in-depth technical discussions aimed at shedding light on critical cybersecurity challenges.

00:00 Introduction and Casual Banter
01:14 Milking Robot Ransomware Incident
04:47 Patch Management Challenges
05:41 CrowdStrike Outage and Patching Strategies
08:24 The Importance of Regular Maintenance and Automation
15:01 Technical Debt and Ownership Issues
18:57 Vulnerability Management and Exploitation
25:55 Prioritizing Vulnerability Patching
26:14 AWS Credentials Left in Public: A Case Study
29:06 The Speed of Credential Exploitation
31:05 Container Image Vulnerabilities
37:07 Teaching Secure Development Practices
40:02 Microsoft’s IPv6 Security Bug
43:29 Podcast Wrap-Up and Social Media Plugs-tokens-in-popular-projects/

Links:

  • https://securityaffairs.com/166839/cyber-crime/cow-milking-robot-hacked.html
  • https://www.theregister.com/2024/07/25/patch_management_study/
  • https://www.cybersecuritydive.com/news/misguided-lessons-crowdstrike-outage/723991/
  • https://cybenari.com/2024/08/whats-the-worst-place-to-leave-your-secrets/
  • https://www.theregister.com/2024/08/14/august_patch_tuesday_ipv6/

Transcript:

Jerry: Today is Thursday, August 15th, 2024. And this is episode 276 of the defensive security podcast. My name is Jerry Bell and joining me tonight as always is Mr. Andrew Kalat.

Andrew: Good evening, Jerry. Once again, from your southern compound, I see.

Jerry: Once again, in the final time for a two whole weeks, and then I’ll be back.

Andrew: Alright hopefully next time you come back, you’ll have yet another hurricane to dodge.

Jerry: God, I hope not.

Andrew: How are you, sir?

Jerry: I’m doing great. It’s a, it’s been a great couple of weeks and I’m looking forward to going home for a little bit and then then coming back. How are you?

Andrew: I’m good, man. It’s getting towards the end of summer. forward to a fall trip coming up pretty soon, and just cruising along. Livin the dream.

Jerry: We will make up for last week’s banter about storms and just get into some stories. But first a reminder that the thoughts and opinions we express are those of us and not our employers.

Andrew: Indeed. Which is important because they would probably fire me. You’ve tried.

Jerry: I would yeah. So the the first story we have tonight is very Moving.

Andrew: I got some beef with these people.

Jerry: Great. Very moving. This one comes from security affairs and the title is crooks took control of a cow milking robot, causing the death of a cow. Now, I will tell you that the headline is much more salacious than the actual story that the. When I saw the headline, I thought, oh my God, somebody hacked a robot and it somehow kill the cow, but no, that’s not actually what happened,

Andrew: Now, also, let’s just say up front, the death of a cow is terrible, and we are not making light of that. But we are gonna milk this story for a little while.

Jerry: that’s very true.

Andrew: I’m almost out of cow puns.

Jerry: Thank God for that. So, what happened here is this farm in Sweden had their milking machine, I guess is a milking machine ransomware and the farmer noticed that he was no longer able to manage the system, contacted the support for that system. And they said, no, you’ve been ransomware.

Actually, the milking machine itself apparently was pretty trivial to get back up and running, but apparently what was lost in the attack was important health information about the cows, including when some of the cows were inseminated. And because of that, they didn’t know that one of the pregnant cows was supposed to have given birth, but actually hadn’t.

And so it. What had turned out to be the case is that the cow’s fetus, unfortunately passed away inside the cow and the farmer didn’t know it until they found the cow laying lethargic in it stall, and they called a vet. And unfortunately, at that point it was too late to save the cow.

This is an unfortunate situation where a ransomware attack did cause a fatality.

Andrew: Yeah, and I think in the interest of accuracy, I think it was in Switzerland,

Jerry: Is it switzerland? Okay. I knew it started with a S W.

Andrew: That’s fair. You’re close. It’s Europe.

Jerry: It’s all up there.

Andrew: But yeah, I guess in this theory that if they had a better tracking date when the cow had been inseminated, they would have known that the cow was in distress with labor and could have done something more proactively to save cow and potentially the calf. And unfortunately, because I didn’t have that data, because it was in this ransomwared milking robot machine we ended up with a dead cow and a dead calf.

Jerry: So not without grilling the farmer too much. I was I was thinking, that,

Andrew: Wow!

Jerry: I’m sorry. I was thinking that, they clearly had an ability to recover. And what they thought was the important aspect of that machine’s operation, which was milking, they were able to get that back up and running pretty quickly.

But it seemed to me like they were unaware that this other information was in kind tied to that same system. I don’t fully understand. Seems like it’s a little more complicated than I’m, than I’ve got it envisioned in my mind. But very clearly they hadn’t thought through all the the potential harm.

A good lesson, I think for us all.

Andrew: I feel like we’ve butchered this story.

Jerry: The the next story we have for today comes from register. com and the title is patch management still seemingly abysmal because no one wants the job can’t stop laughing. All right.

Andrew: A cow died! That’s tragic!

Jerry: I’m laughing at your terrible attempts at humor.

Andrew: I couldn’t work leather in there. I tried. I kept trying to come up with a leather pun.

Jerry: We appreciate your efforts.

So anyhow. This next story talks about the challenge that we as an IT industry have with patching. And basically that it is a very boring task that not a lot of people who are in IT actually want to do. And so it, it highlights the importance again of automation and.

This in the complimentary story which is titled misguided lessons from CrowdStrike outage could be disastrous from cybersecurity dive. I put these two together for a reason because one of the, one of the. I think takeaways from the recent CrowdStrike disaster is we need to go slower with patching and updates and perhaps not rely on automatic updates.

And these 2 articles really point out the folly in that. Number 1, this. Article from the register is pointing out that relying on manual patching is a losing proposition because really nobody wants to do it and it doesn’t scale. It’s, it’s already, it’s IT operations is already a crap job in many instances, and then trying to expect people to to do things manually is a problem.

The second article points out the security issues that come along with Adopting that strategy, which is, you’re exposing your environment unduly unnecessarily. And in fact the improvements in. Your security posture and the let the reduction in likelihood of some kind of an attack far outweigh the remote possibility of what happened.

Like we saw with CrowdStrike. Now there is a kind of an asterisk at the bottom. They point out the importance of doing staged deployments of patches, which I think is one of the central lessons of the, at least for my Perspective, one of the central lessons of the CrowdStrike disaster is that go fast, but stage it.

Andrew: yeah it’s an interesting problem that we’re struggling with here, which is how many times have we saved our own butts without knowing it by automate or rapidly patching? It’s very difficult to prove that negative. And so it’s very difficult to. Weigh the pros and cons empirical data showing where automatic patching or rapid patching solved a problem or avoided a problem versus when patching broke something.

Cause all we know about is when it breaks, like when a Microsoft patch rolls out and breaks and that sort of thing. And it’s one of those things where it has to be perfect every time is the feeling from a lot of folks. And if it, if every time we have a problem, we break some of that trust. It hurts the credibility of auto patching or, rapidly patching. The other thing that comes to mind is I would love to get more IT folks and technical operations folks and SREs and DevOps folks, with the concept of patching as just part of regular maintenance. That is, just built into their process. A lot of times it feels like a patch is an interrupt driven or toil type work that they have to stop what they’re doing to go work on this.

Where, in my mind, at least the way I look at it from a risk manager perspective, unless something’s on fire or is a known RCE or known exploited, certain criteria. I’m good. Hey, take patch on a monthly cadence and just catch everything up on that monthly cadence, whatever it is. I can work within that cadence.

If I’ve got something that I think is a higher priority, we can try to interrupt that or drive a different cadence to get that patched or mitigated in some way. But the problem often is that, Okay. Every one of these patches seems to be like a one off action if you’re not doing automatic patching in some way, that is very Cognitively dissonant with what a lot of these teams are doing and I don’t know how to get Across very well that you will always have to patch it was all this will never stop So you have to plan for it.

You have to build time for that. You have to build Automation and cycles for that and around it and it’ll be a lot less painful It’s it feels like pushing the rock up the hill on that one.

Jerry: One of my observations was

an impediment to fast patching is the reluctance for downtime and, or the potential impacts from downtime. And I think that dovetails with what you just said, in part, that concern stems from the way we design our IT systems and our IT environments. If we design them in a way that they’re not patchable without interrupting operations, then my view is we’ve not been successful in designing the environment to meet the business.

And that’s something that, that I tried hard to drive and just thinks in some aspects I was successful and others I was not. But I think that is one of the real key things that, that we as a it leader or security leaders really need to be imparting in the teams is that when we’re designing things, it needs to be, Maintainable as a, not as a, like you described it as an interrupt, but as an, in the normal course of business without heroic efforts, it has to be maintainable.

You have to be able to patch, you have to be able to take the system down. You can’t say that gosh, this system is so important. Like we can’t, we take it down. We’re going to lose millions of dollars ever. Like we can’t take it down. Not a good, it’s not a good look. You didn’t design it right.

Andrew: That system is gonna go down. Is it gonna be on your schedule or not? The other thing I think about with patching is not just vulnerability management But you know Let’s say you don’t patch and suddenly you’ve got a very urgent Vulnerability that does need to be patched and your four major versions and three sub versions behind now you have this massive uplift That’s probably going to be far more disruptive to get that security patch applied, as opposed to if you’re staying relatively current, n minus one or n minus two, it’s much less disruptive get that up to date.

Not to mention all of the end of life and end of support issues that come with running really old software. And don’t even know what vulnerabilities might be running out there, but just keeping things current as a matter of course, I believe. It makes dealing with emergency patches much, much easier. all these things take time and resources away from what is perceived to be higher value activities. So it’s constantly a resource battle.

Jerry: And there was like, there was a quote related to what you just said in, at the end of this article, it said I think it mostly comes down to quote, I think it mostly comes down to technical debt. You explained it’s very, it’s a very unsexy thing to work on. Nobody wants to do it and everyone feels like it should be automated, but nobody wants to take responsibility for doing it.

You added the net effect is that nothing gets done and people stay in the state of technical debt. Where they’re not able to prioritize it.

Andrew: That’s not a great place to be.

Jerry: No, there wasn’t another interesting quote that I often see thrown around and it has to do with the percent of patches. And so the, I’ll just give the quote towards the beginning of the article. Patching is still notoriously difficult for us to principal analyst. Andrew Hewitt told the register.

Hewitt, who specializes in it ops said that while organizations strive for a 97 99 percent patch rate, they typically only managed to successfully fix between 75 and 85 percent of issues in their software. I’m left wondering, what does that mean?

Andrew: Yeah, like in what time frame? In what? I don’t know. I feel like what he’s talking about maybe is They only have the ability to automatically patch up to 85 percent of the deployed software in their environment.

Jerry: That could be, it’s a little ambiguous.

Andrew: It is. And from my perspective, there’s actually a couple different things where we’re talking about here, and we’re not being very specific. We’re talking about I. T. Operations are talking about corporate I. T. Solutions and systems and servers. For an IT house, I work in a software shop, so we’ve got the whole software side of this equation, too, for the code we’re writing and keeping all that stuff up to date, which is a whole other complicated problem that, some of which I think would be inappropriate for me to talk about, but, so there’s, it’s doubly difficult, I think, if you’re a software dev shop to keep all of your components and dependencies and containers and all that stuff up to date.

Jerry: Absolutely. Absolutely. I will also say that A couple of other random thoughts on my part, this, in my view, gets harder or gets more complicated, the larger in larger organizations, because you end up having these kind of siloed functions where responsibility for patching isn’t necessarily clear, whereas in a smaller shop.

you may have an IT function that's responsible end to end for everything. But in large organizations, oftentimes you'll have a platform engineering team who's responsible for, let's say, operating systems, and that team is a service provider for other parts of the business.

And those other parts of the business may not have a full appreciation for what they're responsible for from an application perspective. Especially in larger companies that want to reduce headcount and cut costs, those application people, in my experience, as well as the platform team, are ripe targets for reductions.

And when that happens, you end up in this kind of weird spot of having systems with no clear owner who's actually responsible. You may even know that you have to patch it, but you may not know whose job it is.

Andrew: Yeah, absolutely. In my perfect world, every application has a technical owner and every underlying operating system or underlying container has a technical owner. Might be the same, might be different. And they have their own set of expectations. Often they’re different and often they’re not talking to each other. So there could be issues in dependencies between the two that they’re not coordinating well. And then you get gridlock and nobody does anything.

Jerry: So these are pragmatic problems that, in my experience, present themselves as sand in the gears, right? They make it very difficult to move swiftly. And that's what, in my experience, drives that heroic effort, especially when something important comes down the line, because now you have to pay extra attention since there isn't a well-functioning process.

And I think that’s. Something as an industry, we need to focus on. Oh, go ahead.

Andrew: I was just gonna say, in my mind, some of the ways you solve this, and these are easier said than done: proper, well-maintained IT asset management is key. You've got to push your business to make sure that somebody has accountability for every layer of that application, and push your business to say, hey, if we're not willing to invest in maintaining this and nobody's going to take ownership of it, it shouldn't be in our environment. It must be well owned. It's like when you adopt a dog: somebody's got to take care of it, and you can't just neglect it in the backyard. We run into stuff all the time where it's just, oh, nobody knows what that is. Then get rid of it. Every single thing out there is something that could be attacked, and if it's not being maintained, it becomes far riskier from an attack surface perspective. I also think about telling people before they go buy a piece of software: do you have the cycles to maintain it? Do you have the expertise to maintain it?

Jerry: The business commitment to fund its ongoing operations, right?

Andrew: Exactly. I don’t know. It gets stickier. And now we have this concept of SaaS, where a lot of people are buying software and not even thinking about the backend of it because it’s all just auto magic to them. So they get surprised when it’s, Oh, it’s in house. We’ve got to actually patch it ourselves. Yeah,

Jerry: The other article, in Cybersecurity Dive, had another interesting quote that I thought lacked some context. The quote was: there were 26,447 vulnerabilities disclosed last year, and bad actors exploited 75 percent of vulnerabilities within 19 days.

Andrew: no, that’s not right.

Jerry: Yeah, here’s, here is the missing context.

Oh, and it also says one in four high-risk vulnerabilities were exploited the same day they were disclosed. Now, the missing context is that this quote is referring to a report by Qualys that came out at the beginning of the year, and what it was saying is that about 1 percent of vulnerabilities are what they call high risk, and those are the vulnerabilities that end up having exploits created. Which is an interesting data point in and of itself, that only 1 percent of vulnerabilities are what people go after.

Our goal with patching is to patch all of them. What they're saying is that 75 percent of the 1 percent which had exploits created had those exploits created within 19 days.
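To put rough numbers on that missing context, here is a back-of-the-envelope sketch using the figures quoted above (the "roughly 1 percent" share comes from the Qualys report the hosts reference):

```python
# Back-of-the-envelope math for the Qualys statistic discussed above.
disclosed_2023 = 26_447          # vulnerabilities disclosed last year (per the article)
high_risk_share = 0.01           # ~1% ever get a working exploit (per the Qualys report)
exploited_fast_share = 0.75      # 75% of that 1% were weaponized within 19 days

high_risk = disclosed_2023 * high_risk_share               # ~264 vulnerabilities
weaponized_in_19_days = high_risk * exploited_fast_share   # ~198 vulnerabilities

print(f"~{high_risk:.0f} high-risk CVEs, ~{weaponized_in_19_days:.0f} weaponized within 19 days")
```

So the scary-sounding claim is really about a couple hundred CVEs out of nearly 27,000, which is exactly why the context matters.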

Andrew: That makes, that’s a lot more in line with my understanding.

Jerry: And 25 percent were exploited within the same day. That's the important context; it's a very salacious statement without it. And I will say that as a security leader, one of the challenges we have is, again, that there were almost 27,000

vulnerabilities. I think we're going to blow the doors off that this year,

not all that they’re not all equally important. Obviously they’re rated at different levels of severity, but the real, the reality for those of us who pay attention, that it’s not just the critical vulnerabilities that are leading to. being exploited and hacked and data breaches and whatnot.

There’s plenty of instances where you have lower severity from a CVSS perspective, vulnerabilities being exploited either on their own or as put together but the problem is which ones are important. And so there’s a whole cottage industry growing up around trying to help you prioritize better with which which vulnerabilities to go after.

But that is the problem, right? I feel like we have kind of a crying-wolf problem, because 99 percent of the time or more, the thing that we're saying the business has to go off and spend lots of time on, disrupting their availability and pulling people in on the weekends and whatnot, is not exploited, it's not targeted by the bad guys, and you only know which ones are in that camp after the fact.

So if you had that visibility before the fact, it'd be better, but that's a very naive thing to hope for at this point.

Andrew: Yeah. If 1%.

Jerry: If I could only predict the winning lottery numbers.

Andrew: The other thing, and the debate this opens up, which I've had many times in my career, is ops folks or whomever, and they're not the bad guys, they're just asking questions trying to prioritize: prove to me how this is exploitable. That's a really unfair question. I can't, because I'm not a hacker who can predict every single way this could be used against a business.

I have to play the odds. I have to go with what I know statistically to be true, which is that some of them will be exploited. One of the things I can do is prioritize: hey, what's open to the internet? What's my attack surface? What services do I know are reachable from the internet? Maybe those are my top priority, and I watch those carefully for open RCEs or likely exploitable things and prioritize on those. But at the end of the day, not patching something because I can't prove it's exploitable assumes that I can predict what every bad guy is ever going to do in the future, or chained attacks in the future that I'm not aware of.

And I think that’s a really difficult thing to prove.

Jerry: Yeah, a hundred percent. There are some things beyond just CVSS scores that can help you a bit. Certainly, if you look at something and it is wormable, right, remote code execution of any sort, that's something in my estimation you really need to prioritize. CISA, the Cybersecurity and Infrastructure Security Agency, whose name still pisses me off

all these years later because it has the word security in it too many times, but they didn't ask me. They have this list they call KEV, the Known Exploited Vulnerabilities list, which in previous years was a joke because they didn't update it very often, but now it's actually updated very aggressively.

It contains the list of vulnerabilities that the US government and some foreign partners see actively being exploited in the industry, so that's also a data point. And I would say, my perspective is that it shouldn't be the thing that defines what you patch. In my view, your approach should be: we're going to patch it all, but those are the ones that we're not going to relent on.

There’s always going to be a need. There’s going to be some sort of There’s going to be an end of quarter situation or what have you, but these are the ones that, that you should be looking at and saying, no, like these can’t wait they have to, we have to patch those.

Andrew: Yep, 100%. And a lot of your vulnerability management tools are now integrating that list, so it can help you know what the prioritization is right in the tool. But bear in mind, there are a lot of assumptions in that: that those authorities have noted activity, shared it, and understood it. And zero days happen.

Jerry: Somebody had to get, the reality is somebody had to get hacked.

Tautologically, somebody had to get hacked for it to be on the list.

Andrew: right, so don’t rely only on that, but it is absolutely a good prioritization tool and a good focusing item of look, we have this, know we have this is known exploitable. We’re seeing exploits in the wild. We need to get this patched.

Jerry: Yeah, absolutely. So moving on to the next story: this one is from a cybersecurity consulting company called Cybenari, if that's how you would say it.

Andrew: I’d go with that. That seems reasonable.

Jerry: The title is, I’m sure somebody will correct me if I got it wrong. Title here is what’s the worst place to leave your secrets. Research into what happens to AWS credentials that are left in public places. I thought this was a fascinating read, especially given where I had come from. I’ve been saying for some time now on this, on the show, API keys and whatnot are the next big horizon for attacks.

And in fact, we had been seeing that, and I think it's on the upswing. In my former role, we saw a lot of it manifesting itself as attackers using those keys to mine crypto; they would hijack servers or platforms or containers or whatever to mine cryptocurrency.

But I think over time, we’re going to see that morph into, more data theft and perhaps less overt actions. I’m sure it’s, it is already happening. I’m not, I don’t mean to say that it isn’t happening, but I think it’s in the periphery right now where a lot of the activity, at least A lot of the voluminous activity tends to be what I’ll call more benign, like again, crypto mining.

But anyway, the approach that this organization took here was pretty interesting. There's a company called Thinkst that has this concept of canary tokens and canary credentials, and they are exactly what they sound like: a set of secrets that you can create through this company and watch how they're used.

You can get an alert when somebody tries to use them. And that's exactly what they did here. They created, I think it was 121... actually 121 is the number of total attempts; I don't know exactly how many credentials they created. They created a number of credentials, spread them around, and used a number of different services.

Let’s see, they had GitHub, GitLab, Bitbucket, Docker hub. They created their own FTP web server. And blog, they put them on pastebin jfiddle. They put them into the NPM JS repository in the PyPI repository, which we just talked about. They put them in the various cloud storage buckets. And then they just waited to see how and when they were accessed.

And unsurprisingly to me, this was a very successful test, though I am a little surprised at some of the results. One of the charts they give is how long it took for credentials planted at different places to be used.

Andrew: It’s insane.

Jerry: So the NPM JS was less than 60 seconds.

Andrew: So that means, from the time of posting, somebody somehow picked it up with some kind of scanner and could turn around and test using it in less than a minute.

Jerry: Yes.

Andrew: That’s insane.

Jerry: PyPI was 120 seconds, so just shy of two minutes. GitHub was 127 seconds, just over two minutes. Pastebin was 50 minutes, their own web server 47 hours, and Docker Hub 6.9 days.

Andrew: Man, what’s going on with Docker Hub? Just nobody cares?

Around to it.

Jerry: Nobody cares. I think it's a lot more involved. It's not as readily scannable, I would say.

Andrew: I can tell you from my own experience in previous roles, we used to get reports all the time: hey, you've got a secret out here, from people looking for bounties. I still want to know what tools they are using to find this stuff so rapidly, because it's fast.

Jerry: Yes.

Andrew: And

Jerry: Like GitHub. GitHub will give you an API you can actually subscribe to. Again, it's not perfect, because they're typically relying on randomness, or something being prefixed with password-equals, or what have you. It's not a perfect match, but there are lots of tools out there that people are using.
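For a sense of the kind of pattern matching those scanners do, here is a minimal sketch built around the well-known AWS access key ID prefix. The regexes are illustrative only; real tools carry far larger rule sets plus entropy checks, and the generic password pattern here is an assumption, not any particular product's rule.

```python
import re
import sys
from pathlib import Path

# Illustrative patterns only; real scanners use much larger rule sets plus entropy checks.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\b(AKIA|ASIA)[0-9A-Z]{16}\b"),
    "generic_password_assignment": re.compile(r"""password\s*=\s*["'][^"']{8,}["']""", re.IGNORECASE),
}

def scan_file(path: Path) -> list[tuple[int, str]]:
    """Return (line_number, rule_name) hits for one text file."""
    hits: list[tuple[int, str]] = []
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return hits
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, name))
    return hits

if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for path in root.rglob("*"):
        if path.is_file():
            for lineno, rule in scan_file(path):
                print(f"{path}:{lineno}: possible {rule}")
```

Running something like this before every commit is much cheaper than rotating a key after a crawler finds it for you.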

The one that I found most interesting is more aligned with the Docker Hub one. I think it's something that is a much larger problem that hasn't manifested itself as a big problem yet. And that is, with container images, you can continue to iterate on them.

By default, when you spin up a container, it is the end state of a whole bunch of what I'll just call layers. And so if you, let's say, included credentials at some point in a configuration file and then later deleted that file, when you spin up that container as a running image, you won't find that file.

But it actually is still in that container image. And so if you were to upload that container image to, let's say, Docker Hub, and somebody knew what they were doing, they could actually look through the history and find that file. That has happened; I've seen it happen a fair number of times. You have to go through some extra steps to squash down the container image so that you basically purge all the history and end up with only the last intended state of the image, but not a lot of people know that. Like, how many people know that you have to do that?

Andrew: Well, including you, the six people listening to this show, maybe four others.
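To make the layer-history point concrete, here is a minimal sketch that walks the layers inside an image exported with `docker save` and flags entries that look secret-like, plus whiteout markers showing that a later layer deleted a file whose contents still live in an earlier layer. The layer paths and `.wh.` whiteout convention follow the common Docker image tar layout, which is an assumption worth verifying against your image format.

```python
import sys
import tarfile

SUSPICIOUS = (".pem", ".key", "credentials", ".env", "id_rsa")

def scan_saved_image(image_tar_path: str) -> None:
    """Walk every layer tar inside a `docker save` archive and report interesting entries."""
    with tarfile.open(image_tar_path) as image:
        for member in image.getmembers():
            # Older Docker archives store layers as <digest>/layer.tar; OCI-style
            # archives keep them under blobs/sha256/. Treat both as candidate layers.
            if not (member.name.endswith("layer.tar") or member.name.startswith("blobs/sha256/")):
                continue
            fileobj = image.extractfile(member)
            if fileobj is None:
                continue
            try:
                with tarfile.open(fileobj=fileobj) as layer:
                    for entry in layer.getmembers():
                        base = entry.name.rsplit("/", 1)[-1]
                        if base.startswith(".wh."):
                            # Whiteout marker: this path was deleted in this layer,
                            # but its content still exists in an earlier layer.
                            print(f"[deleted later] {member.name}: {entry.name}")
                        elif any(token in base.lower() for token in SUSPICIOUS):
                            print(f"[secret-like]   {member.name}: {entry.name}")
            except tarfile.ReadError:
                continue  # not a layer tarball after all (e.g. a JSON config blob)

if __name__ == "__main__":
    scan_saved_image(sys.argv[1])
```

If this turns anything up, treat the credential as burned and rotate it; squashing or rebuilding the image only helps before it has been pushed anywhere public.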

Jerry: So there’s a lot of there’s a lot of nuance here. So I thought, The time the timing was just. Fascinating. That, that it was going to be fast just based on my experience. I knew it was going to be fast, but I did not expect it to be that fast. Now in terms of where most of the credentials were used.

that was also very interesting and, in some respects, not what I expected. The place where the most credentials were used was Pastebin, which is interesting because Pastebin also had a relatively long time to detection. And so I think it means that people are aggressively crawling it.

And then the second most common was the website. That one does not surprise me, because crawling websites has been a thing for a very long time, and there are lots and lots of tools out there to help identify credentials. So obviously it's a little dependent on how you present them.

If you have a password.txt file and that's available in a directory index on your web page, that's probably going to get hit a lot more.

Andrew: I’m, you know what?

Jerry: Yeah, I know. You’re not even going to go there. Yep. You’re I’ll tell you the trouble with your mom. There you go. Feel better.

Andrew: I feel like she’s going to tan your hide.

Jerry: See, there you go. You got the leather joke after all. Just like your mom.

Andrew: Oh, out of nowhere.

Jerry: All right. Then GitHub was a distant third.

Andrew: Which surprises me.

Jerry: That did surprise me too.

Andrew: Yeah. And I also know GitHub is a place where tons and tons of secrets get leaked, and GitLab and similar, because it's very easy for developers to accidentally leak secrets in their code up to these public repos. And then you can never get rid of them.

You’ve got to rotate them.

Jerry: My view is it's more a reflection of the complexity of finding them, because in a repository you've got to search through a lot of crap. And I don't think the tools to search for them are as sophisticated as, let's say, a web crawler hitting Pastebin or a website.

Andrew: It's fascinating that the incentive is on third parties finding the mistake. They've got better tooling, then. Now, to be fair, GitHub, for instance, has plenty of tools you can buy, both homegrown at GitHub or from third parties, that in theory will help you detect a secret before you commit it, but they're not perfect and not everybody has them.

Jerry: Correct. And I also think, in my experience, it's much more of a common problem from a likelihood-of-exposure standpoint: for the average IT shop, you're much more likely to see your keys leak through GitHub than from people posting them on a website or on Pastebin.

But knowing that if they do end up on Pastebin, somebody's going to find them is, I think, important. In my experience it's Docker Hub and the code repositories, like PyPI and npm and GitLab and GitHub. That's where it happens, right? That's where we leak them.

It’s interesting in this, in this test, they tried out all the different channels to see which ones were more, more or less likely to get hit on. I think you get hub in my experience, GitHub and Docker hub and whatnot. are the places that you have to really focus and worry about because that’s where they’re, that’s where they’re leaking.

Andrew: Yeah. It makes sense. It’s a fascinating study.

Jerry: Yeah. And it

sorry, go ahead.

Andrew: I would love for other people to replicate it and see if they get similar findings.
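For anyone tempted to replicate the experiment with their own planted AWS canary key, here is a minimal sketch of one way to watch for its use: polling CloudTrail for events attributed to that access key ID via boto3. This is an illustration, not how Thinkst's hosted canary tokens actually work, and the key ID shown is a placeholder; note also that CloudTrail event delivery can lag by several minutes.

```python
from datetime import datetime, timedelta, timezone

import boto3  # third-party AWS SDK: pip install boto3

# Placeholder: the access key ID of the canary credential you planted.
CANARY_ACCESS_KEY_ID = "AKIAEXAMPLEEXAMPLE00"

def recent_canary_activity(hours: int = 24) -> list[dict]:
    """Return CloudTrail events attributed to the canary key in the last `hours` hours."""
    client = boto3.client("cloudtrail")
    start = datetime.now(timezone.utc) - timedelta(hours=hours)
    events: list[dict] = []
    paginator = client.get_paginator("lookup_events")
    for page in paginator.paginate(
        LookupAttributes=[{"AttributeKey": "AccessKeyId", "AttributeValue": CANARY_ACCESS_KEY_ID}],
        StartTime=start,
    ):
        events.extend(page.get("Events", []))
    return events

if __name__ == "__main__":
    hits = recent_canary_activity()
    if hits:
        # lookup_events returns newest first, so the last item is the earliest observed use.
        print(f"ALERT: canary key used {len(hits)} time(s); earliest event: {hits[-1]['EventName']}")
    else:
        print("No use of the canary key observed.")
```

Run it on a schedule (or wire the same lookup into an alerting pipeline) and you have a crude version of the "someone touched my planted credential" signal the study relied on.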

Jerry: Yes. And this is one of those things where, again, there's not a deterministic way to tell whether or not your code has a password in it. There are tools, like you said, that will help identify them. To me, it's important to create what I would call a three-legs-of-the-stool approach.

One is making sure that you have those scanning tools available. Another is making sure that you have the tooling to store credentials securely, like having HashiCorp Vault or something like that available. And the third leg of the stool is making sure that the developers know how to use those tools,

Know that they exist and that’s how you’re, how they’re expected to actually use them. Again, it’s not perfect. It’s not a firewall. It’s, you’re still reliant on people who make mistakes.

Andrew: Two questions. First of all, that three legged stool, would that be a milking stool?

Jerry: Yes.

Andrew: Second, less a question, more a comment: I would also try to teach your teams, hey, develop this software with the idea that we may have to rotate this secret at some point,

Jerry: Oh, great point. Yes.

Andrew: and try not to back yourself into a corner that makes it very difficult to rotate.

Jerry: Yeah. I will also, I’ll go one step further and say that not only should you do that, but you should at the same time implement a strategy where those credentials are automatically rotated on some periodic basis. Whether it’s a month, a quarter, every six months, a year, it doesn’t really matter having the ability to automate it or to have them automated, have that change automated, gives you the ability in the worst case scenario that somebody calls you up and says, Hey, like we just found our key on on GitHub, you have an ability to go exercise that automation, but Without having to go create it or incur downtime or whatnot.

And that the worst case is you’re stuck with this hellacious situation of I’ve got to rotate the keys, but if I rotate the keys, the only guy that knows how to, this application works is on a cruise right now. And if we rotate it, we know it’s going to go down and we, so you’re end up in this That’s really bad spot.

And I’ve seen that happen so many times.

Andrew: And then the CISO ends up foaming at the mouth like a mad cow.

Jerry: Yes, that’s right. Cannot wait for this to be over. All right. The the last story mercifully is Mike is also from the register. com. And the title is Microsoft patches is scary. Wormable hijacked my box via IPv6 security bug and others. It’s been a while since we’ve had one that feels like this. So the the issue here is that Microsoft just released a patch as part of its patch Tuesday for a a light touch pre authentication, remote code exploit yeah, remote code exploit over the network, but only for IPv6,

Andrew: which to me is holy crap, big deal. That’s really scary.

Jerry: incredible.

Andrew: and either, I don’t know, I feel like this hasn’t gotten a ton of attention yet. Maybe because there wasn’t like a website and a mascot and a theme song and a catchy name.

Jerry: Yes. And

Andrew: But, so if you’ve got IPv6 running on pretty much any modern version of Windows, zero click RCE exploit, over, have a nice day. That’s scary. That’s a big deal.

Jerry: The better part is that it is IPv6. I guess on the downside, it's IPv6, and IPv6 typically isn't affected by things like NAT-based firewalls, so quite often you have a line of sight from the internet straight to your device, which is a problem. Obviously not always the case.

On the other

Hand, it’s not widely adapted.

Andrew: But a lot of modern Windows systems are automatically turning it on by default. In fact, I would wager a lot of people have IPv6 turned on and don't even know it.

Jerry: Very true.

Andrew: Now you’ve got to have all the interim networking equipment, also supporting IPv6 and for that to be a problem, but it could be.

Jerry: So there, there’s the the researcher who identified this has not released any exploit code or in fact, any details other than it exists. But I would say now that Apache exists I think it’s fair to say every freaking security researcher out there right now is trying to reverse those patches to figure out exactly what changed in hopes of finding out what the.

problem was, because they want to create blogware and make a name for it and whatnot, I'm sure. This is a huge deal. I think it is a four-alarm fire; you've got to get this one patched like yesterday.

Andrew: Yeah. It’s been a while since we’ve seen something like this. Like you said, at the top of the story, it’s, Vulnerable, zero clickable, RCE, just being on the network with IPv6 is all it takes. And I think it’s everything past Windows 2008. Server, is vulnerable. Obviously patches are out, but it’s gnarly. It’s a big deal.

Jerry: As you would say, get ye to the patchery.

Andrew: Get ye to the patchery. I’ve not used that lately much. I need to get back to that. Fresh patches available to you at the patchery.

Jerry: All right. I think I think we’ll cut it off there and then ride the rest home.

Andrew: Go do some grazing in the meadow. As you can probably imagine, this is not our first rodeo.

Jerry: Jesus Christ. Where did I go wrong? Anyway, I sincerely apologize, but I also find it weird.

Andrew: I don’t apologize in the least.

Jerry: We’ll, I’m sure there’ll be more.

Andrew: Look man, this is a tough job. You gotta add a little lightness to it. It can drain your soul if you’re not careful.

Jerry: Absolutely. Now I Was

Andrew: But once again, I feel bad for the cow and the calf. That’s terrible. That’s, I don’t wish that on anyone.

Jerry: Alright. Just a reminder that you can find all of our podcast episodes on our website at www.defensivesecurity.org, including jokes like that and the infamous llama jokes way, way back. You can find Mr. Kalat on X at Lerg.

Andrew: That is correct.

Jerry: And on the wonderful, beautiful social media site infosec.exchange, at lerg there as well. And I am at jerry on infosec.exchange. And by the way, if you like this show, give us a good rating on your favorite podcast platform. If you don't like this show, keep it to yourself.

Andrew: Or still give us a good rating. That's fine.

Jerry: Or just, yeah, that works.

Andrew: That's allowed.

Jerry: That works too.

We don’t discriminate.

Andrew: Hopefully you find it useful. That's our hope,

Jerry: That’s right.

Andrew: us riffing about craziness for an hour, and hopefully you pick up a thing or two that you can take, use, and be happy.

Jerry: All right. Have a lovely week ahead and weekend. And we’ll talk to you again very soon.

Andrew: See you later guys. Bye bye.
