Welcome back to another jam-packed episode of Tech News of the Week! Chris and I are diving into four big stories that caught our attention this week. From sketchy ISP routers to OpenAI’s latest security fail, let’s break it all down.
🔹 **Stop Using Your ISP Router—Seriously**
If you're still using the router your internet provider gave you, it's time for an upgrade. Not only are ISP-provided routers outdated and full of security holes, but they might also be spying on you—and, in some cases, even harboring actual bugs (the creepy-crawly kind). A website, RouterSecurity.org, lays out just how bad these devices can be. Investing in a good third-party router is a small price to pay for better security and performance. Also, if you haven't looked into mesh routing yet, you're missing out! LINK: https://routersecurity.org/ISProuters.php
🔹 **AWS Goes on an AI Spending Spree**
Amazon reported solid Q4 earnings, but apparently, 19% growth in AWS wasn't enough to impress investors. So, what's Amazon's solution? Throw more money at AI! They're planning to invest a whopping $100 billion in capital expenditures this year, with much of that going toward AI hardware from NVIDIA. The hope is that supply chain issues will ease up, allowing AWS to scale its AI efforts even further. But will all this spending pay off in the long run? We'll see. LINK: https://ir.aboutamazon.com/news-release/news-release-details/2025/Amazon.com-Announces-Fourth-Quarter-Results/default.aspx
🔹 **Phishing Tests Are Getting… Meaner?**
We all know about phishing tests—those fake scam emails companies send to see if employees fall for them. But lately, these tests have been pushing the limits, with some using emotionally charged messages like fake Ebola outbreaks or rescinded bonuses. The Wall Street Journal reports that while these tactics may be effective, they’re also making employees furious. One particularly controversial example? A phishing email promising free Eagles tickets to people in Philadelphia. Ouch. LINK: https://www.wsj.com/tech/cybersecurity/phishing-tests-the-bane-of-work-life-are-getting-meaner-76f30173
🔹 **OpenAI’s New Model Helps… Write Malware?**
Well, that didn't take long. OpenAI's new o3-mini model shipped with a safety feature called Deliberative Alignment that was supposed to be better at filtering out harmful requests. But within days, a security researcher tricked it into generating code to exploit a critical Windows security process. OpenAI insists the exploit wasn't serious, but the fact remains that these models still aren't as locked down as they claim. Maybe a little more internal testing before release wouldn't hurt? LINK: https://www.darkreading.com/application-security/researcher-jailbreaks-openai-o3-mini
That’s it for this week! Drop a comment, let us know your thoughts, and we’ll catch you in the next one. 🚀
00:00 - Intro
00:19 - ISP routers are bad, actually
01:55 - AWS spends billions on AI
03:41 - Phishing tests are getting aggressive
05:40 - OpenAI's security fail
[00:00:00.11]
Announcer: Welcome to Tech News of the Week with your host, Konstance Fickelman.
[00:00:05.13]
Ned: Welcome to Tech News of the Week. This is our weekly Tech News podcast, where Chris and I talk about four interesting stories that we found. Chris, why don't you take us away?
[00:00:19.11]
Chris: Do not, we repeat, do not use the ISP-provided router in your home.
[00:00:27.05]
Ned: Nope.
[00:00:28.24]
Chris: This has been discussed on the show before, but it bears repeating. The router that your ISP provides to you is terrible. It is likely years out of date from a software and security perspective, making it vulnerable to hackers. It is also possibly, probably, phoning home to the ISP about all the devices on your network and your activity on those devices. It is also, and this one was fun, possibly infected with a colony of roaches. Wouldn't that be fun for an unboxing video on The Tick and the Tuck? No. All of this reminder comes from a nice collated website that I found called RouterSecurity.org, which has a good summary of the whys of avoiding ISP routers, but also some evidence of the shoddy-to-the-point-of-malfeasance behaviors of the companies in question. Let's be real here: a good router is not expensive, and they are not that hard to install and configure yourself. We here at Chaos Lever highly recommend that you at least consider it. Also look up mesh routing, because that's fun.
[00:01:55.25]
Ned: AWS going on an AI spending spree to cheer itself up. Amazon released their Q4 results on February sixth, and everything was going up and to the right. But is it going up and to the right fast enough? Apparently not for investors, as Amazon's stock took a bit of a dip. AWS growth was at 19% year over year. 19%, and yet that fell short of analysts' estimates. I should note that both Google and Microsoft also missed analyst targets on their earnings calls, and also made a big profit. What's a sad Andy Jassy to do? Go on a spending spree, of course. Off to the AI store we go. AWS spent $26 billion in CapEx in Q4 and plans to invest another $100 billion in CapEx in 2025. The vast majority of that will be on AI-related hardware. They're even changing the life cycle of their gear from six years to five to cycle out faster for better AI support. Jassy basically said they would have spent even more last year, but there have been supply chain issues from chip manufacturers, aka NVIDIA. He claims those shortages will end in the second half of the year, which means that NVIDIA has told them where they are in the line.
[00:03:17.16]
Ned: In the press release was a list of major features and enhancements for AWS last year, and of the 14 bullet points, 11 were AI-specific. What impact will more efficient models and approaches, like those pioneered by DeepSeek, have on Amazon's spending? I suspect very little. You'll just get more bang for your buck.
[00:03:41.18]
Chris: Phishing tests designed to educate users on scammy emails are getting meaner. You surely know about phishing tests, right? These are calculated fake fake emails that are sent by your company to see if you click on them. And if you do, you get your hand slapped. The whole idea here is to educate users about what's a bad email, what to look for, how to avoid getting hit by these types of things. Blah, blah, blah, blah, blah. Well, apparently, the emails that are getting sent by these companies are getting meaner. That's actually the word they used. Like I said, these tests have been a thing for a while now, built into the cybersecurity education programs that companies do. It makes sense: better for a user to click on a fake fake email that gives them a scolding rather than to click on a real fake email that empties their bank account. Recently, though, the Wall Street Journal reports that the messages being sent in these fake fake emails have been, well, people are calling them problematic. Like, for example, a message about an Ebola outbreak at a nearby college, or something announcing that your yearly bonus is being rescinded.
[00:05:05.06]
Chris: This, as the kids say, is pissing some people off. So what to do about this? On the one hand, you don't want an adversarial relationship with your users. On the other hand, these messages appear to be working. One reported campaign that was not well received, but received lots of clicks, offered Philadelphia-area users free Eagles tickets. You will probably not be surprised to know that plenty of people went ahead and clicked on that one.
[00:05:40.03]
Ned: Now this one is me. OpenAI's o3 model thinks it could help you write malware. Well, that didn't take long. Just days after OpenAI dropped its fancy new o3-mini model, boasting a shiny new security feature called Deliberative Alignment, a security researcher promptly broke it. Eran Shimony, an expert at poking holes in AI defenses, tricked o3-mini into helping him exploit a critical Windows security process. OpenAI had assured everyone that this time, for sure, their model would reason through tricky safety scenarios instead of blindly following bad prompts. Turns out all it took was some clever wording and a historian disguise to get past the guardrails. Shimony posed as a historian researching the history of malware and code injection for the lsass.exe process. ChatGPT eventually reasoned its way into giving him pseudocode that would perform said code injection. Shimony, who regularly writes tests to probe LLMs for vulnerabilities, notes that OpenAI's models are especially easy to manipulate with natural language trickery, unlike Meta's Llama, which has different weak spots. Despite OpenAI's PR spin that the leaked exploit wasn't that serious because you could Google similar stuff, Shimony sees a simple fix: better classifiers to catch blatantly harmful requests.
[00:07:11.24]
Ned: Maybe OpenAI should start doing more internal testing before shoving things out the door, but that would slow down their releases. I'm not sure if you heard about this whole DeepSeek thing; we might have just done a whole episode on that. You could check that out. That's it. We're done now. Go away. Bye.