This week in Tech News of the Week we dive into a series of significant tech and cybersecurity developments: Home Depot's troubling supply chain data breach, Supermicro's controversial decision not to fix hackable BMCs, and much more!
Links:
Welcome to Tech News of the Week with your host, a human masquerading as an alien. Wait.
Welcome to trebuchet nautical of the wizard.
These are literally getting terrible.
Or better. This is our weekly short form podcast where we talk about 4 news articles that caught our attention. Chris, you're up first with something about Home Depot. It's good. Right?
It's a good thing?
Oh, I forgot to put the link in. That's embarrassing.
Yes. Because everyone can see our screens.
Home Depot hit by SaaS-based data breach. Or, I can't believe I have to say this out loud, as Dark Reading put it: Home Depot hammered by supply chain breach. Get it? Because Home Depot, they sell hammers, and... I'll allow it. Getting hammered by a breach is... you get it.
Don't you think Wrenched would have been better and slightly more clever?
No. Anyway. The breach is interesting in the sense that it did not affect customers or the general public per se. Rather, the loss of data was related only to approximately 10,000 of Home Depot's 500,000 or so employees. Wow.
Yeah. In other news, Home Depot's huge. I completely forgot about that. I thought it was, like, 6 people worldwide, and they just traveled real well.
They're all named Frank. They're retired firefighters. You know, the usual.
And they're disappointed in you. Universally. Now this particular breach was not of the devastating kind. All that was breached was corporate IDs, names, and email addresses. But I mean, still, not great.
Oh, according to the release, quote, a third-party SaaS vendor mistakenly exposed sample employee data, unquote. So it's not really the years-long deep attack and slow takeover approach of, say, a SolarWinds breach. Now this breach, this Home Depot breach, is notable specifically because of its banality. The vast majority of breaches are going to happen because of human error. And this is important for SaaS, especially considering the growth of SaaS usage inside organizations.
Mhmm. The estimates of how many SaaS tools are in use at the average company are pretty staggering, with Statista reporting the number as an unbelievable 130. Now, to be fair, other research by a company called CloudZero has that number at a much more reasonable sounding 101, which, either way, wow.
For a bit of context, as an independent business owner, I probably leverage somewhere around 20 to 30 SaaS services or solutions. So, yeah, I actually think 130 might be lowballing. Hackable BMCs from Supermicro won't be fixed. Fine. Because you don't deserve it.
Security research firm Binarly, we'll go with that, has identified a vulnerability in baseboard management controllers from Intel and Lenovo that could allow for arbitrary memory access of running systems. This is what the kids call bad. Bad? Bad. Bad.
The baseboard management controller, or BMC, is a chipset on the system board that allows for out-of-band access to the system. When connected to the BMC, you can monitor the status of hardware components, get a virtual KVM console, or power the system on and off. Most BMCs have a web server component that allows for a graphical user interface through a browser, and many BMC manufacturers chose to use the open source web server lighttpd. It's light-H-T-T-P-D, but spelled in an annoying way.
That's a tough one.
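For listeners who want to see what that out-of-band access actually looks like, here's a minimal sketch using the DMTF Redfish API that most modern BMCs expose alongside the web UI. The address, credentials, and TLS handling are placeholders, and exact resource paths vary by vendor, so treat it as an illustration rather than a recipe.

```python
# Minimal sketch of out-of-band access via the DMTF Redfish API.
# Address and credentials are hypothetical; real BMCs usually ship with
# self-signed certificates, hence verify=False for this illustration only.
import requests

BMC = "https://192.0.2.10"      # placeholder management address
AUTH = ("admin", "changeme")    # hypothetical credentials

# List the systems this BMC manages, then read the first one's power state.
systems = requests.get(f"{BMC}/redfish/v1/Systems",
                       auth=AUTH, verify=False, timeout=5).json()
first = systems["Members"][0]["@odata.id"]
system = requests.get(f"{BMC}{first}", auth=AUTH, verify=False, timeout=5).json()
print(system.get("PowerState"), system.get("Status", {}).get("Health"))
```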
In 2018, the maintainers of lighttpd released a new version that fixed a possible vulnerability that could allow an attacker to remotely access memory through the software. Unfortunately, the release notes did not contain useful keywords like vulnerability or exploit, or even an associated CVE. BMC makers like AMI and ATEN missed the update and continued to use an older version of the software for years, leaving a gaping vulnerability on all affected motherboards from vendors like Intel and Lenovo. Even more unfortunately, the affected systems are all now end of support life, or EOSL, which leaves the owners just SOL. If you think you might be impacted, check with your hardware vendor and, for the love of all that's holy, secure the hell out of your BMC network for real.
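If you want a quick-and-dirty way to see whether a BMC's web interface is even admitting to an old lighttpd build, something like the following sketch can help. The address is a placeholder, many firmware builds hide or alter the Server header, and the 1.4.51 cutoff is based on Binarly's reporting, so your vendor's advisory remains the real source of truth.

```python
# Rough heuristic: ask the BMC's web UI what server software it reports.
# The address is a placeholder; this is a hint, not a verdict.
import requests

BMC_URL = "https://192.0.2.10"  # hypothetical BMC web interface

resp = requests.get(BMC_URL, verify=False, timeout=5)
server = resp.headers.get("Server", "unknown")
print("Server header:", server)

# Binarly's write-up points to the fix landing around lighttpd 1.4.51 (2018),
# so anything older reported here is worth a call to your hardware vendor.
if "lighttpd" in server.lower():
    print("Reported lighttpd build:", server.split("/")[-1], "-- compare against 1.4.51")
```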
Like for real for real. For reals. Intel releases new AI accelerator chip, looks to keep on accelerator-ing. Yay. Y'all remember Intel.
Right? They were the super significant company that built world changing CPUs for, like, 40 years ish until they decided to rest on their laurels and stop innovating because they had an unchecked monopoly that was bringing in enough revenue that they didn't think they had to, like, work for a decade.
Or 2.
Ish. Well, they're back, baby. Woo. And they're riding that old familiar flavor of the week, our buddy Al. I mean, AI.
AI. Stupid font. Bring back kerning. I mean, no. Serifs.
Yes.
And Kerning. Kerning is important too. Damn it, Ned. Focus. This week at the Intel Vision event, Intel released the Gaudi 3 chip.
Get it? Gaudi: G-a-u-d, dash, A-I. Man, the jokes are just killer this week. The chip marks a big jump in performance from its predecessor, appropriately named Gaudi 2, with Intel claiming version 3 delivers 4x the performance in AI compute benchmarks, 1.5x the memory bandwidth, and 2x the network bandwidth. They also announced a new PCIe card capable of 128 gigabytes of memory and bandwidth of 3.7 terabytes per second.
Meh.
All of these numbers are grand and, of course, inconceivable to the human mind, but the takeaway is that this is a clear shot across NVIDIA's bow. For reference, the hallowed NVIDIA H100's total bandwidth is 3.4 terabytes per second. Also, these Gaudi chips have significant network accelerators, which basically means there are 24 200-gigabit Ethernet ports integrated into each chip. The significance of this escapes me, but it sounds good. This is also a step away from previous tech that, as we have discussed in the past, relied heavily on InfiniBand.
Mhmm. Also also, these Intel chips utilize the, quote, Open Accelerator Module interface, which is a standardized connector that should help with interoperability and, critically, adoption by people and companies that don't want to mess with NVIDIA's proprietary interface. So according to Intel, all of this tech will be, quote, foundational for Falcon Shores, Intel's next generation graphics processing unit for AI and for high performance computing, unquote. Fun. Fun.
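For the curious, a little back-of-the-envelope math on those vendor-claimed numbers, assuming the 24 ports and 3.7 terabytes per second figures from the announcement hold up:

```python
# Back-of-the-envelope math on the vendor-claimed Gaudi 3 numbers above.
ethernet_ports = 24
port_speed_gbps = 200                        # 200 GbE, gigabits per second
aggregate_gbps = ethernet_ports * port_speed_gbps
print(f"Aggregate Ethernet: {aggregate_gbps} Gb/s "
      f"(~{aggregate_gbps / 8 / 1000:.1f} TB/s) per chip, no InfiniBand required")

gaudi3_mem_tb_s = 3.7                        # TB/s, per Intel's announcement
h100_mem_tb_s = 3.4                          # TB/s, as quoted earlier
print(f"Memory bandwidth edge over the H100 figure: "
      f"{(gaudi3_mem_tb_s / h100_mem_tb_s - 1) * 100:.0f}%")
```

So roughly 0.6 terabytes per second of Ethernet per chip and about a 9 percent memory bandwidth edge over the H100 figure, at least on paper.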
One thing they did not talk about was the price, but I'm going to be guessing that it will be, as the kids say, high.
No cap. It's not a callback. It's like a call forward.
Ground yourself.
Google tries to get a leg up on Intel with a new arm. Listen. It's 9 AM on a Friday, and that's the best I got. At the Google Cloud Next conference this week past, good old Goog announced a bunch of stuff, mostly around how serious they are when it comes to AI. Which, at this point, I mostly believe. Certainly more than AWS's protestations about being all in on AI while slashing capex spending.
Interesting choice. But I digress. As I predicted during some past episode, look it up, don't look it up, Google has now joined AWS and Microsoft in manufacturing their own custom ARM server processor. AWS was the first to market with their Graviton processor, which is now in generation 4. Microsoft followed suit last year with their announcement of the 128-core Cobalt ARM processor.
Google's entry into the fray is the Axion ARM processor, based on the Neoverse V2 chips and Cypress design from ARM. Google was reticent to share actual speeds and feeds on the chip, but did claim 50% better performance over x86 VMs. 50% better than what, exactly, was not specified. Google is already using these processors in production for their Bigtable, Spanner, and YouTube Ads platforms, among others. And you'll be able to use them directly on Google Cloud later this year.
Arm must be feeling pretty smug after achieving the cloud trifecta, but don't get too comfortable, Arm. RISC-V is right around the corner, and the hyperscalers are already experimenting with it. You should probably put that on your risk register for 2025. I'm so sorry, everyone. I'll see myself out.
Alright. That's it. We're done. Go away now. Bye.