SN 1001: Artificial General Intelligence (AGI) - Gmail Temp Addresses, Russia's Internet Off Switch
November 19, 2024
TLDR: Discussions on the US government's growing dependence on Microsoft for cybersecurity, Gmail adding disposable email aliases, Russia banning hosting companies and attacking Ukraine through a Windows flaw, TrueCrypt's successor, using Cloudflare's Tunnel service, making a local server appear to be on a remote public IP, password-sharing techniques, finding obscure Security Now references, the parameters of AGI, and Sam Altman's prediction for OpenAI.
In this episode of Security Now, hosts Steve Gibson and Leo Laporte delve into the evolving world of Artificial General Intelligence (AGI), cybersecurity topics, and listener questions. This comprehensive summary encapsulates key discussions, expert opinions, and insights that emerged from the episode.
Key Highlights
Here's what you can expect from this episode:
Microsoft and Government Cybersecurity
- Microsoft's Strategy: Microsoft has reportedly deepened the US government's dependence on its cybersecurity products, luring federal agencies with promises of enhanced security amid cyber threats from state-sponsored hackers in Russia, China, and Iran.
- Consequences: This dependence raises questions about competition, transparency, and long-term implications for federal cybersecurity.
Gmail’s New Feature
- Temporary Email Aliases: Google plans to roll out a feature allowing users to create throwaway email addresses similar to offerings from Apple and Mozilla, enhancing user privacy and reducing spam. This feature targets the significant number of Gmail users, especially those using it for public sign-ups.
Russia’s Internet Policies
- Increased Restriction: Russia announced a further clampdown on foreign hosting services that allegedly discredit its military. They also plan to test their "internet disconnect switch", potentially isolating their internet from the rest of the world.
- Exploitation of Vulnerabilities: Russian hackers exploited a Windows vulnerability to conduct attacks against Ukrainian targets, highlighting ongoing cyber warfare tactics in the region.
Listener Interactions and Technical Discussions
- Value of Past Episodes: The hosts revisit older episodes, reaffirming their worth and the foundational knowledge they provide to listeners.
- TrueCrypt's Successor: Discussions about TrueCrypt’s discontinuation led to recommendations for suitable encryption software, with a focus on VeraCrypt.
- Remote Network Access: Utilizing services like Cloudflare’s Tunnel for secure remote access without exposing local servers was highlighted. Techniques for securely sharing complex passwords were also discussed.
- AGI Landscape:
- Current Status and Predictions: AGI remains a focus for researchers, with divergent opinions on its imminent arrival. Notably, OpenAI's CEO Sam Altman predicts AGI could be realized by 2025, a stark contrast to more conservative estimates by others who suggest timelines extending to at least 2060.
- Key Components for AGI: Current AI models lack several capabilities necessary for true AGI, such as common sense reasoning, context awareness, and continual learning. Efforts to simulate human-like understanding continue to raise ethical questions.
Expert Opinions on AGI
- Diverse Perspectives: The episode features insights from AI experts who outline the hurdles in achieving AGI, emphasizing the mystery still surrounding current AI models' decision-making processes. Experts argue that while substantial progress has been made, fundamental advancements are needed before AGI can be realized.
- Practical Considerations: Many experts agree that instead of rushing towards AGI, refining existing technologies to improve their reliability in specific tasks could yield practical benefits in industries such as healthcare and cybersecurity.
Takeaways for Listeners
- Understand AGI: The concept of AGI is still largely theoretical, with practical implementations still years away. Experts suggest cultivating a foundational understanding of AI’s capabilities, limitations, and existing technologies while staying informed about future developments.
- Cybersecurity Awareness: Companies should critically assess reliance on proprietary solutions, such as those offered by Microsoft, to avoid vendor lock-in.
- Email Privacy Solutions: Utilize features like Gmail’s temporary email aliases to enhance privacy and manage spam effectively.
Conclusion
This episode of Security Now provides a rich tapestry of insights from the realms of artificial intelligence, cybersecurity, and how these areas intersect and evolve. As anticipation for AGI continues to grow, this discussion serves as a reminder of the complexities and ethical considerations that accompany breakthrough technologies.
It's time for Security Now. Steve Gibson is here. He says there's not a lot of news, so we're gonna do a lot of questions from the audience, feedback and so forth, and then Steve will explain, in his understanding, what is going on with AI, the search for artificial general intelligence, and how close we are coming. I think you're gonna like this episode. Security Now is next. Podcasts you love,
from people you trust. This is TWiT.
It's time for security now. The show we cover your security, privacy, safety, how computers work, what's so intelligent about artificial intelligence, all that jazz with the most intelligent guy. I know this cat right here, Mr. Steve Gibson. I am not that Leo. You're not that? No, I'm a, what do we call it? A domain expert. I have some expertise in a couple of places, but when it comes to Sudoku, you're just like the rest of us.
And when it comes to artificial intelligence, I'm claiming no expertise. Well, I want to talk about, as I said last week, artificial general intelligence, AGI.
Everyone's throwing the term around. We're hearing people talking about it. What caught my attention was when Sam Altman, the infamous and famous CEO of OpenAI, he claimed, oh yeah, we'll have that next week, next year. Any day. He said 2025. Yeah. And it's like, what? Maybe he's kind of a salesman.
Well, yeah, so maybe this was just a nice little stock-price-boosting ploy. But I wanted to take some time. I found a couple of interesting articles, with a lot of other people in the industry interviewed and some academics interviewed, and I thought, let's, you know... So today, no one's gonna find out some great revelation about AGI, because I don't have it.
But, you know, it's clearly a thing, and I just thought we should kind of put a marker down and say, okay, here's where it is. So you've done it before; you did it with blockchain. It's very frequent that you're able to, because that's how you work, to digest all this stuff. You're kind of our retrieval-augmented generation. You digest all this stuff and give it back to us so we can understand it. So I'm very much looking forward to this episode.
Well, in the fullness of time, if I spend some time digging in, then that would be interesting. But we got a bunch of stuff to talk about. We're going to look at, oh, this is a great story, how Microsoft lured the US government into a far deeper and expensive dependency upon its own proprietary cybersecurity solutions.
than the Biden administration expected. Also, Gmail will be offering native throwaway email aliases, much like Apple and Mozilla. We'll touch on that. Oh my god. And Russia.
Well, they're banning additional hosting companies. They're going to give their big internet cutoff switch, another trial next month, and some other things that we'll talk about. Oh, and they used a diabolical Windows flaw to attack Ukrainians. It was found by a security group. And boy, when our old timers find out
that something we assumed was safe might not be safe to do... that's gonna raise some hair. Also,
We're going to look at, oh, I have a note from our listener about the value of old security now episodes. We're going to touch on TrueCrypt's successor, also using CloudFlare's tunnel service for remote network access. Another of our listeners said, hey, this is what I'm doing. So we're going to share that. Also answer the question about how to make a local server appear to be on a remote public IP, which
In this case, it's coming in handy for pretending to be a remote command-and-control server when testing malware. Also, how to share an impossible-to-type password with someone else.
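One common way to make a local server stand in for a remote machine, which may or may not be what this listener settled on, is to override name resolution so the remote hostname points at a local address. As a minimal Python sketch (the C2 hostname and addresses here are made up for illustration), you can monkey-patch `socket.getaddrinfo` within the test process:

```python
import socket

# Hypothetical C2 hostname mapped to a local stand-in address.
REDIRECTS = {"c2.example-malware.net": "127.0.0.1"}

_real_getaddrinfo = socket.getaddrinfo

def redirecting_getaddrinfo(host, *args, **kwargs):
    # Swap in the local address when a redirected hostname is resolved;
    # everything else resolves normally.
    return _real_getaddrinfo(REDIRECTS.get(host, host), *args, **kwargs)

socket.getaddrinfo = redirecting_getaddrinfo

# Any connection made in this process to c2.example-malware.net
# now lands on 127.0.0.1 instead of the real public IP.
```

The system-wide equivalent is an entry in the hosts file pointing the domain at the local machine, which fools the malware sample's own resolver rather than just one process.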
Oh, and another listener asked, and I answered, and then he confirmed, about finding obscure previous references in the Security Now podcast. So that, and then we're going to dig into this whole question of what is artificial general intelligence, how is what we have today failing that, and what are the recognized and widely agreed-upon characteristics that AGI has to have?
And when might we get some? So I think a great podcast. There was not, as you could tell, there was not a huge amount of news. I looked everywhere for good stuff. But boy, I added it up. I think I have 4,300-some
inbound pieces of email from our listeners. Holy cow. Like, since this began. So I'm not starving at all for listener feedback, and, you know, I think it's fun. But actually, we've got... changing this from Twitter to email
completely change the feel of the feedback, since it no longer needs to fit into 280 characters. And so it's a lot more interesting. So a great podcast. Oh, and Leo, we're starting in on our second thousand.
This is podcast number one thousand and one. I hadn't really thought of it quite that way. A second thousand. That's right. You put that into perspective. I guess everybody wants. They want another thousand. It's like, okay. Oh, God. There we go. Okay. Well, you and I are going to work on it. We're going to do our best.
That's all we can promise, just the best. I look different than I did 20 years ago, but you look about the same. You got your hair still. It's nice silver. I think it was the badger. I still have the badger on top. Our show today brought to you by, I'm very happy to say, Big ID. This is a really, really interesting company. They're the leading data security posture management solution. Sometimes they call it DSPM.
Big ID is the first and only DSPM solution to uncover dark data, to identify and manage risk, to remediate the way you want, scale your data security strategy through unmatched data source coverage. Big ID seamlessly integrates with your existing tech stack and allows you to coordinate security and remediation workflows.
You can take action on data risks, annotate, delete, quarantine, and more based on the data, all while maintaining an audit trail, important for compliance, right? Partners include ServiceNow, Palo Alto Networks, Microsoft, of course, Google, AWS, and more and more.
And with Big ID's advanced AI models, you can reduce risk, accelerate time to insight. This is a new metric for me. I love it. Time to insight, TTI, and gain visibility and control over all your data. Now, let me give you an idea of the kinds of people who use Big ID.
Who do you think would have an awful lot of data in a lot of places, in a variety of formats, some legacy formats, who would need to know where all their data is in such a situation? How about, oh, I don't know, the US Army, right? They use Big ID to illuminate all that dark data, to accelerate cloud migration, minimize redundancy, and to automate data retention.
I have this quote, from the US Army Training and Doctrine Command, that is mind-boggling.
This is the quote, quote, the first wow moment with Big ID came with just being able to have that single interface that inventories a variety of data holdings, including structured and unstructured data across emails, zip files, SharePoint, databases, and more. I mean, parenthetically, I'm just gonna say you can imagine the different kinds of formats that your Army has collected over the last couple of decades.
He goes on to say, to see that mass and be able to correlate across those is completely novel. I've never seen a capability that brings this together like Big ID does. That's a pretty good endorsement. CNBC recognized Big ID as one of the top 25 startups for the enterprise named to the Inc. 5000, the Deloitte 500, two years in a row. They're the leading modern data security vendor in the market today. You need to know this name, Big ID.
The publisher of Cyber Defense Magazine said, quote, Big ID embodies the three major features we judges look for to become winners: understanding tomorrow's threats today, providing a cost-effective solution, of course, and innovating in unexpected ways that can help mitigate cyber risk and get one step ahead of the next breach. It all starts with knowing where your data is. By the way, also really important if you're looking at AI,
Because if you think about it, you want to train, but you want to train on the stuff you know. The Army probably has a lot of stuff. They don't want to train AI on because it's sensitive or secret. So it's really important to understand what your data is, where it is in all sorts of places. That's what Big ID can do. Start protecting your sensitive data wherever your data lives at bigid.com slash security.
Now, get a free demo, see how Big ID can help your organization reduce data risk, accelerate the adoption of generative AI. We're going to be talking about that later today. Big ID, B-I-G-I-D. Don't ask me to spell ID. Bigid.com slash security now. They do have, we're talking about AI, they have so many great reports on their website, bigid.com slash security now, but they do have
a free report that's brand new, that gives you some really useful insights on key trends on AI adoption, challenges, including those challenges of what to train on and what not to train on, and the overall impact of generative AI across organizations. They know and they have got a great paper on this.
So read it at bigid.com slash security now. You need big ID. Thank you so much for supporting the work Steve does here and you support us of course when you go to that address then they know that we saw it on security now. Bigid.com slash security now.
Steve, I'm ready with the picture of the week. It's a good one this week. It is a good one. And I've had some feedback from our listeners already who really liked it. I was again on the ball, and just a reminder to our listeners: we're just shy of 13,000 people now subscribed to the Security Now mailing list, 12,979. Almost exactly the same number of Club TWiT members we have. So I think there's maybe a correlation there.
I think there may be. And there was a, that was the count when the mailing went out around 3 p.m. yesterday. So just saying that 24 hours ahead of time, anybody who subscribed to the list got this stuff. So, okay, anyway, so the point was that many people wrote back and said, wow, that's terrific. So what we have is a residential
a staircase going up along one wall, as they do, with a handrail, and then a banister on the outside so that the stairs are not open. Now, this family has a couple of toddlers.
And it looks like maybe the sister is a little older than the brother. He's in diapers still and looks like maybe he's two. She might be maybe two and a half or three, I don't know. But across the bottom of the stairs is a screen, a child gate. Mom and Dad have said, kids are not going upstairs. They stay downstairs.
And I think it's a brand new one. It looks like it because it's still got the sales tag on it. You're right. And I noticed also that behind it are a couple of stacks of stuff that they don't want the kids to get into. They don't want the kids to get into it exactly. Well, now I gave this picture the caption, the bottom of the staircase may have been blocked, but these future hackers are not deterred.
Because the stairs protrude out from the banister supports, and both of the kids have walked up the outside of the stairs.
sort of seeing whether there's a way they can get in there, because they're going to find a way. And it looks like maybe, if I'm right, the oldest sibling is sort of trying to squeeze herself in, because she sort of ran out of runway.
We hope the analogy is not that they're behind bars because the banister does look a little bit like that too. These guys, they're determined to find a way past mom and dad's blockade of the stairs. Future hackers.
Okay, so some recent reporting by ProPublica raised some interesting questions. And I got a kick out of this. I'm sure that our listeners will too. So ProPublica and I'll be interrupting a few times here with some of my own comments. They said in the summer of 2021, and we covered this at the time, President Joe Biden summoned the CEOs of the nation's biggest tech companies to the White House.
A series of cyber attacks, linked to Russia, China, and Iran, had left the government reeling. And of course, some of that was Microsoft's fault, right? And the administration had asked the heads of Microsoft, Amazon, Apple, Google, and others to offer concrete commitments to help the US bolster its defenses.
Biden told the executives gathered in the east room quote you have the power the capacity and the responsibility i believe he said to raise the bar on cyber security unquote
They said Microsoft had more to prove than most. Its own security lapses had contributed to some of the incursions that had prompted the summit in the first place, such as the SolarWinds attack, in which Russian state-sponsored hackers stole sensitive data from federal agencies, including the National Nuclear Security Administration.
Following the discovery of that breach, some members of Congress said the company should provide better cybersecurity for its customers. Others went even further. Senator Ron Wyden, who chairs the Senate's finance committee, called on the government to, quote, reevaluate its dependence on Microsoft before awarding it any more contracts. Now, as we're going to see shortly,
What happened is not exactly what Ron was looking for. This was not the kind of reevaluation that Ron had in mind. ProPublica said, in response to the president's call for help, Microsoft's CEO, Satya Nadella, pledged to give
the government $150 million in technical services to help upgrade its digital security. Well, isn't that nice? On the surface, they wrote, it seemed a political win for the Biden administration and an instance of routine damage control from the world's largest software company.
But the result of ProPublica's subsequent investigation suggests that Microsoft's seemingly straightforward commitment to provide a bunch of free technical services belied a more complex, profit-driven agenda. As time has since revealed, Microsoft's apparent generosity was a calculated business maneuver designed to bring in billions of dollars in ongoing revenue,
lock competitors out of lucrative government contracts, and even further tighten the company's grip on federal business. And as I'm reading this, I thought, you know, if I didn't know better, I would think Gates was still around, since this turned out to be a recognizably classic Bill move.
So they wrote, the White House offer, as it was known inside Microsoft, would dispatch Microsoft consultants across the federal government to install Microsoft's cybersecurity products, which, as part of the offer, were provided free of charge for a limited time.
That's right, what a bargain. What's wrong with this picture? Okay, so they said, well, how about once the consultants installed the upgrades, federal customers would be effectively locked in? Because shifting to a competitor after the free trial would be cumbersome and costly. According to former Microsoft employees involved in the effort,
most of whom spoke on the condition of anonymity because they feared professional repercussions. At that point, the customer would have little choice but to pay for the higher subscription fees. In fact, two former sales leaders involved in the effort likened it to a drug dealer hooking a user with free samples.
"If we give you the crack and you take the crack, you'll enjoy the crack," one said. "And when it comes time for us to take the crack away, your end users will say, don't take it away from me, and you'll be forced to pay."
Former salespeople said that Microsoft wanted more than those subscription fees. The White House offer would lead customers to buy other Microsoft products that ran on Azure, the company's cloud platform, of course. This carried additional charges based on how much storage space and computing power the customer used.
These former salespeople said that the expectation was that the upgrades would ultimately, quoting them, quote, spin the meter for Azure, helping Microsoft take market share from its main cloud rival, Amazon Web Services.
In the years after Nadella made his commitment to Biden, Microsoft's goals became reality. The Department of Defense, which had resisted the upgrades for years due to their steep cost, began paying for them once the free trial ended, laying the groundwork for future Azure consumption.
So did many other civilian agencies. Former Microsoft salesperson Karan Sandhi, who acknowledged the deal, said that, quote, the White House offer got the government hooked on Azure, and it was successful beyond what any of us could have imagined, unquote.
While Microsoft's gambit paid off handsomely for the company, legal experts told ProPublica the White House offer should never have come to pass, as it sidestepped or even possibly violated federal laws that regulate government procurement. Such laws generally bar gifts from contractors and require open competition for federal business.
Eve Lyon, an attorney who worked for four decades as a procurement specialist in the federal government, said that accepting free product upgrades and consulting services collectively worth hundreds of millions of dollars is not like a free sample at Costco, quote, where I can take a sample, say thanks for the snack, and go on my merry way. Here you have changed the IT culture,
and it would cost a lot of money to switch to another system." Microsoft, for its part,
defended its conduct, of course. Steve Faehl, and that's an F-A-E-H-L name. That's good. That's it. Yeah, I thought I should spell it: F-A-E-H-L. Steve Faehl, the security leader for Microsoft's federal business, said in a statement, quote, the company's sole goal
During this period was to support an urgent request by the administration to enhance the security posture of federal agencies who are continuously being targeted by sophisticated nation-state threat actors.
There was no guarantee that agencies would purchase these licenses, and they were free to engage with other vendors to support their future security needs."
Pricing for Microsoft Security Suite was transparent, he said, and the company worked, quote, closely with the administration to ensure any service and support agreements were pursued ethically and in full compliance with federal laws and regulations, unquote.
Faehl said in the statement that Microsoft asked the White House to, quote, review the deal for antitrust concerns and ensure everything was proper, and they did so.
I love the phrase "on Azure." I just think that... "on Azure." That's a nice ad campaign. There's only one little problem with this. Of course, as we know, it really is surprisingly difficult to switch vendors. And of course, it gets worse.
The White House summit ushered in a new form of concentrated reliance, as well as the kind of anti-competitive behavior the Biden administration has pledged to stamp out.
former Microsoft salespeople told ProPublica that during their White House offer push, they advised federal departments to save, get this Leo, to save money by dropping cybersecurity products they had purchased from competitors. Those products, they told them, were now redundant.
Salespeople also fended off new competitors by explaining to federal customers that most of the cybersecurity tools they needed were included in the free upgrade bundle.
Today, as a result of the deals, vast swaths of the federal government, including all of the military services in the Defense Department, are more reliant than ever on a single company to meet their IT needs.
ProPublica's investigation, supported by interviews with eight former Microsoft employees who were involved in the White House offer, reveals for the first time how this sweeping transformation came to be. A change
that critics say leaves Washington vulnerable, the very opposite of what Biden had set out to achieve with his summit because of the monoculture, right? It's like, oh, everybody's using Microsoft. Unfortunately, we've seen Microsoft making some significant mistakes. Well, wasn't this in kind of response to SolarWinds?
Yes, yes. This was three years ago when it was like, oh my God, what are we going to do? And so Microsoft said, hey, how would you like some free stuff? What? It was only free for the first year. I mean, it wasn't even free free. It was a trial offer, basically. It was.
I mean, okay, so the ProPublica article, I've got a link in the show notes. It goes into much greater detail. That was just, like, the introductory quarter of it. So I have a link to it, as I said, for anyone who wants more. But I'm sure that all of our listeners get the idea. At one point, Microsoft was asked to provide this enhanced security support to the federal government at no charge indefinitely,
which they flatly declined. Then of course, it became a negotiation, over well then, how long would the services be free? And of course, what adds even more salt to this wound is that for many years, the same federal and military agencies had been steadfastly refusing to go with Microsoft solutions due to their cost. But they could not say no to free.
So this allowed Microsoft to get their solutions in the door to remove any previous reasonably priced competitive solutions, and then, once the free offer expired, the choice was either pay up or go without.
Uh, you know, uh, it's at least mildly disgusting. And what's more, you know, this didn't just fall into Microsoft's lap, right? Former insiders made it clear that this was their intention all along from the beginning. Microsoft CEO, Satya Nadella knew exactly what he was doing. Basically it was a Trojan horse.
How hard is it, if you've upgraded your security to Microsoft's G5 level, to go back? Like if they go, we don't want to pay for it, so we're going to go backwards.
If Elon Musk is going to do anything, this is something he might want to weigh. This is the kind of thing. I mean, it takes holding your breath and pinching your nose. And I mean, it's an upheaval. And so anyone in IT understands that, but it's not their money they're spending. It's our money they're spending.
And so it's always less expensive to pay for the incremental cost of another, you know, another three months than it is to say, okay, we're on the wrong path.
we're gonna just dead-end this path, because it does then mean going out, getting competitive bids, and literally having downtime while all of this changes, because, you know, you have to remove all of this junk and put in new stuff. So if the whole motivation for doing this was, oh my god, we've got a big security problem,
you're not gonna tear out the security fix you just installed to fix that so that you can do something else. You're gonna be under a lot of pressure just to keep on keeping on. Well, Leo, you and I are among the old timers who are listening to the podcast. We all remember Gates.
I mean, Bill was much... he was revered as some technical genius. I mean, he's a genius, but he was much more of a businessman than he was a coder. And he says that now, too. I mean, so we watched all of the early shenanigans that Microsoft got up to, things like, oh, you can't remove our browser, we built it into Windows. No, it's part of the operating system.
No, it's not, until the EU said take it out. Then they said, well, okay,
if you don't give us any choice. But this just struck me as so Gatesian. It was just like, oh boy. Yeah, so, ouch. OK, so Apple has Hide My Email, Mozilla offers their Firefox Relay, and these are email services that create throwaway aliases for a user's primary account.
The recent news is that Google is reportedly working on adding something which they call shielded email to Gmail, for their 2 billion Gmail users. So as with the other services, users will be able to quickly generate random looking user names for filling out online forms and subscribing to things and so forth.
which hide their real email addresses, so those are just aliases. And then you'll have some means of managing the aliases, so that, for example, if you started to get spammed on one, first of all, it would be interesting to know which alias is receiving the spam, and then you're able to delete it and be rid of it.
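Conceptually, a service like this is just a revocable alias table sitting in front of one real mailbox. Google hasn't published implementation details, so this is only a toy sketch of the idea, with made-up addresses:

```python
import secrets

class AliasDirectory:
    """Toy model of a shielded-email service: random aliases that all
    forward to one real mailbox and can be revoked one at a time."""

    def __init__(self, real_address):
        self.real_address = real_address
        self.aliases = {}  # alias -> still active?

    def create_alias(self):
        # Random-looking local part, so the alias reveals nothing about you.
        alias = f"{secrets.token_hex(6)}@aliases.example.com"
        self.aliases[alias] = True
        return alias

    def deliver(self, alias):
        # Forward to the real mailbox only while the alias is active.
        return self.real_address if self.aliases.get(alias) else None

    def revoke(self, alias):
        # Shut off a spammed alias without touching the real address.
        self.aliases[alias] = False
```

The practical payoff is that last method: when one alias starts drawing spam, you know exactly which signup leaked it, and you can kill just that alias.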
I've noticed that a large percentage of the subscribers to GRC's mailing lists are Gmail domain users. So I imagine this will come as a welcome service. Unfortunately, I use Gmail as my trash can already.
because I've got, you know, GRC.com email addresses, so it's a little late for me, and I don't see much purpose in shielding what is already my throwaway account. But still, for people whose primary email is Gmail, I think this sounds like a good thing, and, you know, better late than never; it certainly took a while. On the other hand,
Leo, can you imagine the infrastructure that Google must have in order to give 2 billion users, like email that works as well as Gmail does? And they use their own server. They aren't using an open source server or anything like that.
If they were, it might be a simple plug-in, but that's a big deal. That's a lot to move. Plus it's old, let's not forget. Gmail is not a brand-new service by any means. Correct. It's one of the very first web services.
Correct. In fact, I remember... do you remember a guy named Steve Bass, who ran the Pasadena IBM PC Users Group? Oh yes, yes. Okay. A mug, yes, if you try to pronounce that. Anyway,
and I think he wrote for PC World also. His byline? I do, yes. Neat guy. And he had early access to Gmail, and so sent me an invite that allowed me to get a, you know, special email account at Gmail. Which you're not going to tell anybody.
Otherwise it would be completely useless. Believe me, it's next to that now. Anyway, it's just, you know, I have a report at Gmail, which was, because I was also early on. Very nice. Yep. And everybody's decided, apparently the spam world's decided, that I'm French, and I get a lot of French spam, almost exclusively French spam.
And I also, because people, you probably... this happens to you, I'm sure it happens to our listeners: they don't really understand that you can't put a space in a Gmail address. So a lot of people named Francois Laporte and Abigail Laporte, they type a space in there, and it all goes to Laporte at Gmail.
So I get all sorts of stuff, like, your tickets are ready. I mean, just endless. Your reservations for tonight in Paris. I mean, it's... I'm tempted, but no.
Well, and you're right, the problem with it being that big, like all those domains or all those names in a single domain, is that if it is not like, you know, BZQRT79 or something, if it is Leo or Fred. It's the end of the world. Yeah. It's like, you know, goodbye. There's a story about Jim.
at AOL.com. Poor Jim never really did get to use that email address. Do you want me to take a break or do you want to continue on? I think now is a good time. We're half an hour in and then we're going to talk about, it's definitely not love coming from Russia. So from Russia, we do get to talk about us.
Thank you, Steve. Our show today is brought to you by those great folks at DeleteMe. I have some direct experience with DeleteMe, because we have been using it for our CEO for some time now. If you've ever searched for your name online — I don't actually recommend that you do this — but if you've done it, you know how much of your personal information is right there in public.
It's all data brokers. They've been collecting this stuff for years — every app you use; it's not just TikTok, it's Facebook, it's Instagram, every site you visit. And they take all that information, they collate it, and they make basically a dossier about you and your family, about everybody you know.
Maintaining privacy is more than a personal concern; it's a family affair. That's why DeleteMe has introduced family plans, so you can have DeleteMe for everyone in the family. And I think they do have corporate plans as well; I think that's what we use. Because you really should have DeleteMe
for every manager in your company. I've told this story before — forgive me if you've heard it before — but we ran to DeleteMe because somehow bad guys figured out what Lisa's phone number was, what company she worked for, who her direct reports were, and what their phone numbers were. I wonder where they got that information, right?
And as a result, they were able to run a spear-phishing campaign, purporting to be texts from Lisa's phone — the CEO's phone — saying, quick, I need some Amazon gift cards, I'm in a meeting, get them and send them to this address. Fortunately, our employees are smarter than that. But it immediately told me, you know, we've got to do something to reduce the amount of information about our management
online, and that's when we went to DeleteMe. DeleteMe helps reduce risk from identity theft, from cybersecurity threats like that, from harassment — from all of the things privacy violations can do. It is not a nice thing. DeleteMe's experts know where the data is. They will find and remove your information from hundreds of data brokers. And by the way, if you get the family or the corporate plan, you can assign a data sheet for each member
that's tailored to them, so that you could say, well, you know, don't delete the Instagram information, but do delete the Facebook — that kind of thing. Easy-to-use controls, so as an account manager you can manage privacy settings for the whole family. But this is important: once they've removed that data, you don't just then walk away.
Because you could do that yourself. First of all, you need to know the hundreds of data brokers out there, but then you need to know as new ones come online, and they do every single day. It's a very profitable business. You need to know to go back, and that's what Delete Me does. They continue to scan and remove your information regularly, not only from the existing data brokers, from all the new ones that pop up all the time. And I'm talking addresses, photos, emails, relatives, phone numbers, social media, property value,
everything. It's all online; data brokers have it all. Until we get a comprehensive privacy law in this country protecting you, you've got to protect yourself and your family and your business. Reclaim your privacy by going to joindeleteme.com slash twit. The offer code TWIT gets you 20% off, which is a great deal. Join DeleteMe
dot com slash twit, and use the offer code TWIT for 20% off. And if you go to joindeleteme.com slash twit, look at all the offerings they have — a very granular set of offerings that can really do the things you need to do to protect yourself online. So I very much recommend looking at all that. It's really an amazing company: joindeleteme
dot com slash twit. Thank you, DeleteMe. By the way, after the National Public Data broker breach, Steve, we searched for my name. It's right there. Mine is — but Lisa's is not, Lisa's is not there now. And I thought that's a pretty telling thing, that DeleteMe really worked. Joindeleteme.com slash twit. Thank you, DeleteMe.
Steve. So, Russian officials have recently announced via Telegram — which I thought was interesting; oh yeah, let's use Telegram, I didn't really know what to make of that — that they plan to expand Russia's ban on foreign web hosting providers
who are hosting content that discredits the glorious Russian army, their words. So Akamai and CDN77 may soon find themselves added to the banned list for being naughty.
Overall, Russia appears to feel that the Internet is at best a mixed blessing. It's unclear to me how it's possible to even function within today's globalized economy without it. I think they're nuts. But Russia seems poised to at least explore
getting along without the internet. To which end, Russia's illustrious internet watchdog — none other than Roskomnadzor —
I'm sorry — has announced its plan to conduct another test next month of Russia's big internet disconnect switch, which, when pulled, does what it says: it severs all ties between Russia and the rest of the global internet. And they did it once before, didn't they? They tried. Yes, and they've been working on it for years. They have to do things like
figure out what to do with DNS queries that resolve to IP addresses that are no longer available. I mean, they just don't want everything to hang and crash and like sitting in like, you know, with the hourglass spinning. So, you know, it turns out that disconnecting from the internet is not an easy thing to do. And of course, as I was thinking about this, I thought, what about Starlink?
Because, you know, it's no longer the case that useful internet connectivity requires, you know, landlines and fiber optic trunks and all of that. Is Starlink banned in Russia? That was my guess — or it doesn't offer it there. Let me see. It's available in Ukraine, of course. And you're right, Russia is sanctioned right now. So, yeah.
So that just works in their favor. That's right — easier to disconnect, to pull the switch. So anyway, they're going to do another test in December. And again, you know, it's like, is there some long-term plan here, or is it just that they're worried they're going to get attacked? I don't know.
We would know if our country was doing the same thing, because it would have an effect. I mean, pulling the switch on global connectivity will have an effect.
Really interesting. We'll have to see what they've got planned. But while we're on the topic of Russian antics, get a load of this. One of the zero-days, CVE-2024-43451, that Microsoft patched this past week —
that is, in Patch Tuesday last week — was used in a Russian hack of Ukrainian organizations earlier this year, according to the security firm ClearSky. The zero-day was part of an exploit chain that exposed NT LAN Manager credential hashes — also known as NTLM
credential hashes — when victims interacted with .url files that were received in phishing emails. But here's the part that really caught my attention. ClearSky said that right-clicking,
deleting, or moving the file established a connection with the attacker's server, exposing authentication data. The report suggests that the campaign also used social engineering to convince victims to run executables. Okay, but hold on: right-clicking on a file to display its context menu and examine its properties,
deleting it, or dragging it to another directory was all that's needed to cause the victim's machine to establish a remote connection to a malicious server. What? So I went over to ClearSky to see what was up — and I've got a link in the show notes for anyone who wants to see, too. The ClearSky research team posted their write-up last Wednesday, writing:
"A new zero-day vulnerability, CVE" — oh, by the way, it was posted Wednesday because the patches were pushed on Tuesday, the day before, so, you know, closing this down. They said, "a new zero-day vulnerability, 43451" — likely ClearSky Security's site —
sent an invalid response. I don't know if it's blocked or it can't provide a secure connection. So this might be my browser. Sometimes this happens. Interesting. I think that's exactly what may be doing it.
Excuse me — do an explicit HTTPS. Yeah, no. Because I think the Ubiquiti blocks certain things. Ah, okay. Yeah. So I was just clicking the link you had. Yeah. Let me try clicking it here. Yeah, I'm sure it's fine; it's just me. I also had that from Safari. It just came right up for me. Yeah. So it's the Ubiquiti. I've noticed this — there's certain places I can't go, and I think it's the security. I do use security in Ubiquiti.
Okay, so they wrote, a new zero-day vulnerability, CVE-2024-43451, was discovered by ClearSky Cybersecurity in June of this year, 2024. This vulnerability affects Windows systems and is being actively exploited in attacks against Ukrainian entities. The vulnerability activates URL files containing malicious code through seemingly innocuous actions.
Then they have three bullet points. First, a single right-click on the file in all Windows systems will do this. Deleting the file in Windows 10 or 11 will do this, dragging the file to another folder in Windows 10 or 11, and some Windows 7, 8, and 8.1, they wrote.
The malicious URL files were — and I should note that a URL file is just text, so it's kind of pushing it to call it malicious, but okay. It's just a link. It's just, yeah, it looks like an .ini file.
So they wrote the malicious URL files were disguised as academic certificates and were initially observed being distributed from a compromised official Ukrainian government website. What actually happened was that the Russians compromised an email server in Ukraine
and then used the email server's credentials to send, you know, a DKIM-, SPF-, DMARC-approved email to others in Ukraine. So the email that was coming in looked like it was verifiably authentic, from the compromised server. But in fact, unfortunately, it was phishing email. So they said, the attack begins
with a phishing email sent from a compromised Ukrainian government server. The email prompts the recipient to renew their academic certificate. The email contains a malicious URL file. When the user interacts with the URL file by right-clicking, deleting, or moving it, the vulnerability is triggered. So I'll just say, this is like —
This is the first time I've seen that dragging a file and dropping it in the trash, or right-clicking to learn more about it, is all it takes under Windows 10 and 11 —
well, and right-clicking in all versions of Windows — in order for this thing to happen. Anyway, I've got more details. So they said, when the user interacts with the URL file by right-clicking, deleting, or moving it, the vulnerability is triggered. This action
establishes a connection with the attacker's server and downloads further malicious files, including SparkRAT malware. SparkRAT is an open-source remote access trojan that allows the attacker to gain control of the victim's system. The attackers also employed techniques to maintain persistence on the infected system, ensuring their access even after a reboot. Okay, so —
The culprit here is a .url file.
which is a Windows internet URL shortcut. It's a text file, and anyone who's ever looked at the original .ini config files back in the early days of Windows will recognize the format here: it's got sections that are surrounded by square brackets, and then just simple name-equals-value pairs, all in text.
The key is that the file contains a URL equals line, where the scheme of the URL is file colon forward slash forward slash, followed by the IP of the malicious remote server.
In Windows, the file colon slash slash scheme is handled by SMB — which is of course Server Message Blocks, which underlies Windows' original file and printer sharing — which, as we know, was never up to snuff security-wise.
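To make this concrete, here's a minimal sketch of what such a shortcut file might contain. The address and path are placeholders of my own (192.0.2.10 is from the RFC 5737 documentation range), not values from an actual sample, and real samples reportedly carry additional fields:

```ini
[InternetShortcut]
; The file:// scheme makes Windows resolve the target over SMB.
; 192.0.2.10 and the path are placeholders, not a real C2.
URL=file://192.0.2.10/share/certificate.pdf
```

When Explorer merely touches a file like this, Windows may reach out to that SMB server, offering up the NTLM authentication exchange in the process.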
So that's where NTLM credential hashes come in, because Windows has always been extremely generous handing those out — it IDs its users by sending their credential hashes around,
long before it was realized that it's not a good idea to be sending somebody's hashed credentials, because there's all kinds of mischief you can get up to with them, including just a replay of the credential hash in order to impersonate them — which is exactly what this thing does. So,
apparently, upon even the most innocuous contact with these files in Windows — and, you know, it's worse in the more recent Windows 10 and 11 — Windows Explorer will, without any prompting, reach out to the file server that's indicated in the shortcut, even without its recipient executing the shortcut. The researchers wrote:
When examining the URL file, ClearSky's team exposed a new vulnerability. Right-clicking the file establishes a connection to an external server. In addition, execution in a sandbox raised an alert about an attempt to pass the NTLM hash through the SMB protocol.
After receiving the NTLM hash, an attacker can carry out a pass the hash attack to identify as the user associated with the captured hash without needing the corresponding password. In other words,
The credential hash that NTLM's SMB protocol sends out to identify its Windows user can simply be captured and subsequently used to impersonate the user as if they were logged in.
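The password-equivalence of the hash is easy to demonstrate. The following is a simplified sketch, not real NTLM: it uses SHA-256 and HMAC as stand-ins (actual NTLM derives the hash with MD4 over the UTF-16LE password and uses its own response computation), but the shape of the problem is the same — whoever holds the hash can answer the server's challenge without ever knowing the password.

```python
import hashlib
import hmac
import os

def toy_nt_hash(password: str) -> bytes:
    # Real NTLM uses MD4 over the UTF-16LE password; SHA-256 is a
    # stand-in here, since MD4 is absent from many modern OpenSSL builds.
    return hashlib.sha256(password.encode("utf-16-le")).digest()

def challenge_response(secret_hash: bytes, challenge: bytes) -> bytes:
    # Roughly NTLMv2-shaped: the response is keyed by the HASH,
    # not by the password itself.
    return hmac.new(secret_hash, challenge, hashlib.sha256).digest()

stored_hash = toy_nt_hash("hunter2")   # what the server has on file
challenge = os.urandom(16)             # server's random challenge

# A legitimate client derives the hash from its password...
legit = challenge_response(toy_nt_hash("hunter2"), challenge)

# ...but an attacker who captured only the HASH computes the same
# response: pass-the-hash in miniature.
replayed = challenge_response(stored_hash, challenge)

assert hmac.compare_digest(legit, replayed)
```

Since the hash alone is enough to compute any response, capturing it off the wire is as good as capturing the password.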
The researchers wrote, further investigation yielded that in Windows 10 and 11 operating systems, the action of dragging the file from one folder to another or deleting the file caused the file to communicate with a target server and only then be deleted or moved.
under Windows 7, 8, and 8.1, the file did not initiate communication when dragged or deleted unless the target folder was open at the time of dragging. They said this did not happen on the first attempt but was observed only after two to three attempts. That is, they concluded the newly detected vulnerability is somewhat more exploitable on Windows 10 and 11 operating systems.
I'm sure that it must be a bit unnerving to those old pros among our listeners here to learn that the actions that any of us might take to dispose of something we may have inadvertently received could themselves lead directly to a compromise of our machine. That's new.
So Microsoft reportedly patched and closed this flaw in last Tuesday's patch updates, so that's good. But it should serve to remind us that those of us using Windows are using an extremely complex operating system that is still dragging a ton of legacy code forward.
That code — that NTLM SMB file and printer sharing code — was written, and its protocols were designed, long before the world had an appreciation for just how secure our future systems would need to be. What came to mind as I was thinking about this — the classic example of this — was the original design of the Windows Metafile format.
Windows draws on the screen through a series of drawing primitives in invoking a circle or a rectangle or a line function with parameters and so forth. A Windows Metafile WMF is just the capture of those drawing primitives. It's essentially a script.
Then later, when that Metafile is opened, those primitives are replayed onto a new blank canvas to recreate the original drawing. So, the Metafile contents are interpreted. But the designers of the original Metafile format thought, what if we wanna do something more, something more than just replaying something that was previously recorded?
Why can't the file contain some code that's executed? And remember, this was Windows 3.0.
So among all of the interpreted tokens, they specified a "meta escape" code — which is what it was called — that would cause the system to essentially escape from interpreting GDI, Graphics Device Interface, tokens,
and execute the code contained within the Windows Metafile, starting at the bytes immediately following the special escape code.
And so it sat there in the Metafile specification for years, until much later. Oh, and it was copied along, like, from 95 to 98 to — what was that, the last 16-bit version? It was ME, Windows ME. And then it made the jump to Windows NT and so on. So,
Later, years later, in the era of NT and network and internet connectivity, it was suddenly rediscovered and labeled as a horrible, exploitable flaw.
At the time, when I calmly stated that it was obviously there all along by design, many people misunderstood me. They thought I was saying that Microsoft had deliberately planted a back door in Windows Metafiles. It was, you know, originally deliberate —
But it was never malicious. It was convenient. Yes, it was a reasonable thing to do back when we could trust every image our machines might try to render.
But let's just say it didn't age well. And neither did Microsoft's original NT LAN Manager and their SMB protocol. They have not aged well either.
And they were also designed back before we really understood security. So this wasn't deliberate on Microsoft's part. And what was really interesting was that, just a week or two ago, we were talking about how Microsoft has decided not to keep patching NTLM problems, yet the 0patch guys are.
So there's another reason why 0patch is worth looking at. Oh, and I should mention, I got a bunch of feedback from our listeners who said, you know, Steve, you should mention that there's a free tier also.
So it's not necessary to subscribe to 0patch in order to get some of the benefits of it. So I just wanted to mention that, along with all the others. And thank you, everybody who wrote to say, you know, there's a freebie available. So there is a free tier for 0patch.
OK, so not a lot happened this week, and we've just covered it all. So I'm going to spend some time with some feedback from our amazing listeners. I believe he would pronounce his name Ayiko, A-Y-I-K-O — I'm sorry if that's wrong, but I'll say Ayiko. Ayiko Fred is in Uganda. And he said: Hey, Steve and Leo, this is Ayiko Fred from Uganda.
I've been listening to security now since 2021, starting around the 800s. As in, you know, episode number. He said, I occasionally miss a few episodes when things get busy, sometimes up to a month, but I'm thoroughly enjoying the show. Exclamation point. He said, I don't have, I do not have, a formal background in computer science.
But I developed an interest in programming in 2020 and learned some Erlang and Elixir. He said my first and only languages, which I'm now using at work. He said, it made me realize I had only a blurry understanding of many key concepts.
I'd never thought to go back to the earlier episodes from 2005. But a few episodes ago, a listener recommended going back to the earlier episodes. So I decided to give it a try. And wow, exclamation point. He said, the way you explain topics like how the internet works, cryptography, and VPNs really clicked for me.
He said, I was blown away by how much easier it was to understand these concepts through your explanations. Now I feel like I've been programming by superstition all along.
He said each episode has left me wanting more, and I've even re-listened to some episodes three to four times, especially those on cryptography and internet fundamentals. I'm now on episode 58, and I'd encourage anyone with a shaky grasp on these topics to check out the earlier episodes. They won't regret it.
Isn't that something. So I wanted to share that, just to remind our listeners about that. But he finishes, saying, one episode made me think, this is exactly what I need. He said that was Episode 41: TrueCrypt.
He said, unfortunately, I learned that TrueCrypt's development was discontinued in 2014. Do you have any recommendations for alternative tools with features similar to TrueCrypt's that are compatible with Linux? I'd love something with the same level of privacy and security. Thank you again for all your work. I really appreciate it. Looking forward to episode 1000. Best regards.
So, I mentioned last week that I wanted to share this bit of feedback this week, because I know that this podcast has been discovered by many people years after we recorded those early fundamental technology podcasts.
We've heard from others who, after discovering this podcast, had the idea of going back to start from scratch and catch up. And those people have invariably found that it was worth their time.
Frankly, part of me is tempted to just stop and recreate some of that work from the early days so that they're put back into everyone's feeds. But that doesn't make any sense, because they're already there. Every podcast we've ever recorded remains available to everyone, and reproducing content we've already created would displace our new content for which we often barely have enough time as it is.
So from time to time, I'll take a moment, as I have here, to remind our listeners that back in the early days, we laid down many of the fundamentals of the way everything we're talking about today works. And it was done in a way that many people have found to be extremely accessible.
Also, another thing we often hear is that while our listeners enjoy the content today, they feel that there's much they don't understand. They say, well, I understand maybe 20% of what you're talking about. We just mentioned that a week or two ago. It is true that I consciously build upon the foundation that we have laid down before.
using what's come before. That's the only way it's possible for us to move forward. So to those who feel that they've been tossed into the deep end of the pool by showing up here late, let me note that all of that knowledge that's missing and assumed was once covered in detail back in the earlier days of this podcast.
Really? I mean, all of the stuff I've talked about and sort of zip over when we're talking about something new, that's all been discussed in detail in the past, and it's all there waiting and free for the asking for anyone who wants it. At some point, I'd love to make a playlist of foundational
episodes that people should listen to. But just for Ayiko Fred: there is a replacement for TrueCrypt. Steve talks about it in Episode 582 — you'll get there. It's VeraCrypt that he talks about in that episode and many other episodes. And I have a link to VeraCrypt in the show notes: V-E-R-A-C-R-Y-P-T dot F-R.
Veracrypt.fr. I went over and took a look, and it was updated a month or two ago. So it is being kept current, and it is platform agnostic. It'll work beautifully for Linux and encrypt your drive just like TrueCrypt once would have.
Very nice. Yes. See, we've covered it all. We've covered it all over the years. We really have. Well, Leo, how many thousands of hours? That's right. Wow. Several at least. Okay. Scott Gottfried wrote to share his powerful solution for accessing his network from home. But, Leo, let's take a break and then we're going to find out what Scott is using in order to get roaming access. And it's not something we've ever talked about.
Oh, how fun — something new. Yeah, like Hamachi; we've talked about a lot of different ways of doing stuff like that. Yeah. And you know, Hamachi still exists. Really? But LogMeIn, LogMeIn bought them. Yeah. And so it's a commercial service, but it's still there.
And it was a great idea, using the five-dot network, right? Yeah. Exactly. Well, I can't wait to hear what else there is out there. But first, a word from our fine sponsor — a name you know, I know you know: 1Password. You may be thinking, oh yeah, I know, they do a really good password manager. Well, this is a new product from 1Password; it kind of takes the password manager to the next step. It's called Extended Access Management. Now let me ask you a question.
If you're an IT or run a business, do your employees, do your end users always work on company-owned devices using IT-approved apps? Of course, they're the best, right? No, they don't. They bring their phone in, their laptop, they're watching their Plex server from home.
So how do you keep your company's data safe when it's sitting on all those unmanaged apps, on all those unmanaged devices? 1Password's answer to that question: Extended Access Management. 1Password Extended Access Management helps you secure every sign-in for every app on every device, because it solves the problems traditional IAM, password management,
and MDM cannot touch. Imagine your company's security like the quad of a college campus, you know, the nice brick paths leading through the green sward between the ivy covered buildings. Those are the company owned apps, IT approved apps, company owned devices,
The managed employee identities. It's all nice, it's all peaceful. But then, as on any college campus, there are the paths people actually use: the shortcuts worn through that beautiful green grass that is actually the straightest line from building A to building B. You don't want to go round about to get to Physics 101.
You know about straight lines, right? Those are the unmanaged devices. The shadow IT apps, the non-employee identities, like contractors. If you've got employees, it's inevitable. They're going to do their own thing. Problem is, most security tools only work on those happy little brick paths. A lot of the security problems take place on the shortcuts. That's why you need one password, extended access management.
It's the first security solution that brings all these unmanaged devices, apps, and identities under your control. It ensures that every user credential is strong and protected. Every device is known and healthy. Every app is visible. It's security for the way we
really work today. And it's now generally available to companies that use Okta or Microsoft Entra. It's also in beta for Google Workspace customers. So, good news: you can check it out right now at onepassword.com slash security now. This is really an exciting new offering from 1Password.
One, P-A-S-S-W-O-R-D, right? Onepassword.com slash security now. We thank them so much for supporting Steve's important work here at Security Now, and we thank you for supporting it by going to that site, so they know you saw it here: onepassword.com slash security now. Okay, on we go with more. So, anyway, Scott
leaves to the end that everything he describes is all a free service provided by Cloudflare, which is really interesting. I've used their Pages; they have a lot of free services. Yeah. So I wanted to mention that — the freeness — upfront, so that while I'm sharing what Scott wrote, everyone who might have a similar need will be taking it seriously and thinking, oh, this is interesting.
Scott said, Hi Steve, congrats on 1000. I've listened for all 20 years every episode. Thank you and Leo.
He said, I've heard several questions from your listeners about how to access their home network while traveling. VPN, overlay network, I had the same question. My primary requirement for accessing my home network was that I did not want to open any ports on my router. Amen to that. He said, I researched solutions for several months until I happened upon a blog post at CloudFlare. The solution for me is the CloudFlare tunnel.
And that's at www.cloudflare.com slash products slash tunnel — T-U-N-N-E-L. And he said, I run an old Intel NUC from inside my network that creates an outgoing tunnel to Cloudflare.
The Cloudflare dashboard lets me add my own domains, has a firewall, provides authentication, and allows me to configure routing for my four internal home subnets. He said, it's awesome. I run two separate photo-sharing apps for the family. The apps run in Docker containers on the NUC, which has Linux and CasaOS,
but the tunnel could run on a NAS or a ZimaBoard. When traveling, I use the Cloudflare WARP app on my laptop and connect to my home network. I can then RDP to my Windows NUC, I can access my Ubiquiti cams, and I can access my TrueNAS. Nothing on the home network is exposed to the internet; it all happens through the tunnel.
The family accesses my shared photo apps, Jellyfin and Piwigo, using a web browser pointed to my custom domain. I add authorized family members' email addresses to the Cloudflare dashboard. When a family member tries to log on to one of the apps, they just enter their email address, and they are sent a PIN for access. All of that is handled by Cloudflare.
It's a little bit of propeller-beanie kind of stuff, but one could just start with a tunnel to access the home network, without sharing apps and dealing with authentication. Oh, he says, I forgot to mention: all of the stuff I use at Cloudflare is FREE — all caps, exclamation point. He said, I hope this might help anyone searching for this type of solution. Best, Scott.
So thank you, Scott, for sharing that. It was news to me. So I went over to take a look. CloudFlare's tunnel page says, protect your web servers from direct attack. From the moment an application is deployed, developers and IT spend time locking it down, configuring ACLs, access control lists, rotating IP addresses, and using clunky solutions like GRE tunnels.
There's a simpler and more secure way to protect your applications and web servers from direct attacks: Cloudflare Tunnel. Ensure your server is safe, no matter where it's running: public cloud, private cloud, Kubernetes cluster, or even a Mac mini under your TV.
So from Scott's description, it sounds like an extremely powerful and capable solution. For simple, safe, remote connections to an internal network, it may be more
than many of our listeners need, but I wanted to put it on everyone's radar, because it really does sound like a power user's tool — being able to set up authentication, have registered email addresses where someone is able to receive a PIN, provide that back, and then automatically get access through the tunnel back to the network.
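For anyone curious what Scott's setup looks like in practice, here's a hedged sketch of a cloudflared configuration. The UUID, paths, hostnames, and ports below are placeholders, not values from Scott's setup, and details may differ across cloudflared versions, but the general shape — a named tunnel plus an ingress list mapping public hostnames to local services — is how Cloudflare Tunnel is typically configured:

```yaml
# ~/.cloudflared/config.yml -- illustrative only; UUID, paths, and
# hostnames are placeholders.
#
# One-time setup (roughly):
#   cloudflared tunnel login
#   cloudflared tunnel create home
#   cloudflared tunnel route dns home photos.example.com
#   cloudflared tunnel run home

tunnel: <tunnel-uuid>
credentials-file: /home/user/.cloudflared/<tunnel-uuid>.json

ingress:
  - hostname: photos.example.com
    service: http://localhost:8080    # e.g. a Piwigo container
  - hostname: media.example.com
    service: http://localhost:8096    # e.g. Jellyfin's default port
  - service: http_status:404          # required catch-all rule
```

Because the box dials out to Cloudflare, no inbound router ports are opened; access policies and the email-PIN login Scott describes are then layered on in Cloudflare's Zero Trust dashboard.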
You know, there's a lot there. It does a lot. But anyway, it looks like a potentially very interesting solution. At the same time, I got a note from Jeff Price, who also happened to write, thanks for the emails. They're very helpful. He said, I have meaning the weekly security now, you know, preview of the podcast. He said, I have a medium sized network at home with Synology NAS, dozens of IoT devices, et cetera.
I've been using Tailscale for all remote connections. This means no open ports or port forwarding. I also set up a system inside my home as an exit node, which means even when I am traveling,
I can encrypt all of my traffic back to my home and then exit from there. In other words, anything he's doing while he's traveling believes he's still at home, which can be useful for access to streaming services and so forth that have specific geographic boundaries. He said, Tailscale has worked great, and it is much faster than OpenVPN.
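Jeff's exit-node arrangement takes only a couple of commands. These follow Tailscale's documented CLI, though flags can vary by version, so treat this as a sketch; the node name is a placeholder:

```shell
# On the always-on box at home: advertise it as an exit node.
# (On Linux, IP forwarding must be enabled first.)
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf
sudo tailscale up --advertise-exit-node

# On the traveling laptop: send ALL traffic home through that node.
sudo tailscale up --exit-node=<home-node-name-or-ip>
```

The exit node also has to be approved once in the Tailscale admin console; after that, the laptop's traffic emerges onto the internet from the home connection, which is what makes geo-fenced services believe Jeff never left.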
So, just another reminder that the overlay-network solution is almost drop-in easy to use. There are Tailscale and ZeroTier, and there's also Nebula and Netmaker. There are clients for all of the various OSes that we're using, and even for the various NASes.
You know, it's probably — well, it is far less flexible and capable; it's also sort of more of a homegrown solution than Cloudflare's Tunnel. So, you know, your mileage may vary. Pick the solution that seems best for you.
Adam B. has an intriguing problem. He said: Hi, Steve. I'm a long-time listener to the show — I'm not sure how long, but I definitely remember when you used to alternate episodes between topics and news. And he means news and feedback. He says, I'm a proud SpinRite owner. And, thanks to you and Leo getting me interested in HackerOne, I'm a few hundred dollars better off, having found a couple of local privilege escalation vulnerabilities during some poking around on my weekends.
That's very cool. So he's a little bit of a white hat hacker helping people. He says, I have a question that I have not been able to find an answer to online. And I thought might interest you and my fellow listeners. I'm a hobbyist malware analyst.
Clearly from interesting experience shared. He said, and as part of that, I often run the samples in a network that's isolated from the internet just to see what happens. Sometimes the samples will try to communicate with a command and control server.
Often, the hard-coded C2 server is a fully qualified domain name, but sometimes it's a public IP address. He says it can often be useful to pretend to be the command and control server, just to see what the sample sends.
When the C2 server is a fully qualified domain name, it's easy enough to use my own DNS server in the isolated network to answer the DNS request with an A record IP address of my choosing, meaning that the malware says, I need the IP address of badguys.ru.
And because he's created an isolated network, he's got his own DNS server. So the machine running the malware generates a DNS query to badguys.ru, and the DNS server responds with 192.168.0.20 or something, which is a machine on that network. So that's where the malware
attempts to connect, which is his own server, so he can see what's going on. He said: However, when the C2 server is a public IP address, this becomes more troublesome. I think I have two choices, he wrote. One, patch the sample to change the IP address to one on the LAN,
or two, somehow get my LAN to answer the ARP request with a MAC address of my choosing.
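As an aside, Adam's working trick for the domain-name case, a lab DNS server that answers every query with an address he controls, can be sketched in a few lines of standard-library Python. This is a minimal illustration under assumed details, not a hardened tool: SPOOF_IP and the lab addresses are hypothetical, and a real isolated network would more likely run something like dnsmasq.

```python
import socket
import struct

# Hypothetical machine on the isolated analysis network that will
# impersonate the C2 server.
SPOOF_IP = "192.168.0.20"

def build_response(query: bytes) -> bytes:
    """Turn a standard DNS query packet into a spoofed A-record answer."""
    tid = query[:2]                            # copy the transaction ID
    flags = struct.pack(">H", 0x8180)          # QR=1 (response), RD and RA set
    counts = struct.pack(">HHHH", 1, 1, 0, 0)  # 1 question, 1 answer
    # The question section starts at offset 12 and runs through the
    # zero-length root label plus 4 bytes of QTYPE/QCLASS.
    end = query.index(b"\x00", 12) + 5
    question = query[12:end]
    # Answer: compression pointer to the name at offset 12, TYPE=A (1),
    # CLASS=IN (1), TTL=60, RDLENGTH=4, then the spoofed IPv4 address.
    answer = (b"\xc0\x0c" + struct.pack(">HHIH", 1, 1, 60, 4)
              + socket.inet_aton(SPOOF_IP))
    return tid + flags + counts + question + answer

def serve(bind_ip: str = "0.0.0.0", port: int = 53) -> None:
    """Answer every UDP DNS query on the lab network with SPOOF_IP."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((bind_ip, port))
    while True:
        data, addr = sock.recvfrom(512)
        sock.sendto(build_response(data), addr)
```

Run on the isolated network's DNS machine (binding port 53 needs privileges), and every lookup, badguys.ru included, resolves to the spoof server.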
He said, the problem with choice number one is that this isn't practical at scale, meaning patching the malware in order to point it to something local. And I agree. He said: Sometimes I like to run 10, 20, or 50 versions of the same malware family. I don't want to have to manually patch 50 different samples. It also seems like the less satisfactory choice.
The problem with choice two is that I simply can't figure out how to do it. How can I configure my network so that if a sample makes a request to a public IP address, in other words, one that isn't in the /24 of my LAN, the request is handled by my C2 server?
The best answer I could find online was concerned with ARP poisoning, but this seemed very unreliable and likely to cause an unstable network. It feels like the answer will be something to do with the default gateway, but I can't figure it out. I hope that makes sense. I would really appreciate your thoughts on the subject. A big thank you to you, Leo, and the whole team. Kind regards, Adam.
Okay, what Adam wants to do can definitely be done in a highly robust fashion.
It would be possible to manually add static routes to the routing table of the machine that's hosting the malware. This would cause the traffic bound for that target IP to override the normal non-local default route, which would otherwise send the traffic out the network's gateway interface, and instead send it to another local network interface. But doing that is tricky and messy.
The more straightforward solution, and it's really slick, would be to obtain a router that has some extra hardware interfaces. That little Netgate SG-1100, which I'm using here, has an aux network connection. It's got WAN, and LAN, and aux, as in auxiliary.
And it's not a simple switch using the same network as the LAN. It's a separate network interface. And that can be given its own LAN.
Or, for example, one of those Protectli, P-R-O-T-E-C-T-L-I, Protectli Vault devices. I'm using one of those at my other location. Those are nice also, and Amazon has those for sale, or you can get them directly from Protectli.
The idea is to have an extra physical network interface. You would use the router software, such as pfSense or OPNsense, to define another small LAN network for that extra interface.
And instead of using one of the normal private networks like 192.168.something.something or 10.something.something, you would create a network that includes the target IP of the command and control server.
You then attach a machine, your command and control spoof server, to that interface, and manually assign it the IP of the command and control server that the malware is looking for.
Whenever the malware in the host machine addresses internet traffic to that remote public IP, your local router's routing table will see that the IP matches within that extra network and will send the traffic to it rather than out onto the public internet. So you wind up with a very straightforward, robust and easily adjusted and maintained solution.
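The longest-prefix-match behavior Steve is describing can be made concrete with a small Python sketch using the standard ipaddress module. The interface names and the 203.0.113.0/29 "spoof C2" network are hypothetical stand-ins (203.0.113.5 is a documentation address playing the role of the hard-coded C2 IP); on a real pfSense or OPNsense box, defining the extra interface's network builds the equivalent table for you.

```python
import ipaddress

# Hypothetical forwarding table for the lab router. The extra interface
# (called OPT1 here, as pfSense would name it) carries a tiny network
# that happens to contain the malware's hard-coded C2 address.
ROUTES = [
    (ipaddress.ip_network("192.168.1.0/24"), "LAN"),
    (ipaddress.ip_network("203.0.113.0/29"), "OPT1 (spoof C2 segment)"),
    (ipaddress.ip_network("0.0.0.0/0"), "WAN (default gateway)"),
]

def next_hop(dst: str) -> str:
    """Pick an interface the way a forwarding table does:
    of all routes containing the destination, the longest prefix wins."""
    ip = ipaddress.ip_address(dst)
    matches = [(net, iface) for net, iface in ROUTES if ip in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]
```

Traffic to the C2 address matches both the /29 and the default route, and the /29 wins, so it is steered to the local spoof segment; every other public address still falls through to the default gateway.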
Yes. Dale Myers has a problem. I've forgotten how many breaks we've taken. I thought there was something going on. We have one more, so you could put that anywhere you want. Only one last one.
And then we'll finish our feedback before we get into "what is AGI." Thank you. Dale Myers has a problem no one should ever face. He said: Hi, Steve. I never thought when I started listening at episode 0001
that there would ever be a thousand, and still counting, Security Now podcasts. He said, I started at the beginning, right after Fred Langa suggested that your podcast might be worthwhile. He was right.
At the time, I was a volunteer in the IT department of a parochial school. The things I learned from Security Now led to important improvements in our system over the years. In those days, there were not so many listeners, and you took time to answer two of my questions submitted in the feedback dialog box at the bottom of the Security Now page.
Now I have a new question that relates to using a password manager. He said, I've been doing a bit of traveling by air lately, and the last time I was in my travel agent's office, I decided to use some of the accumulated points. She said she could not access my account without my password.
There was a place for it on her screen, but I could not figure out how to get the password from there, or to there, from my password manager. Any thoughts? Signed, Dale Myers. Okay, so my first thought was, huh, that's a really good question. How would you do that securely? And then I thought, I wonder why this isn't a problem we've heard about before.
And then the question answered itself, since no one should ever have this problem. No one should ever be asked to give their password to someone else, like a travel agent, so that she could access their account. So, you know, it's not a bigger problem because it should never be required of anyone ever.
The whole thing, you know, seems like a fundamentally bad idea. But that doesn't help Dale, who apparently does have this problem, even if everyone agrees he should never have had this problem in the first place.
Given that Dale has been listening since episode one, we know that his travel account is currently protected by a ridiculously gnarly, long, random, and impossible to manually enter or even communicate password. So my advice would be not to even try.
Briefly change your password to something ridiculously simple to type, which meets the travel system's password policies, but is otherwise minimal in every way. It's only going to be that way for a few minutes, so its security doesn't really matter. Once the travel points have been transferred, the account's password can either be restored to what it was before or set to something new.
Now, a workable alternative would be to just send the account's initial gnarly password via email or a text to the travel agent. Let her log in, do whatever she needs, then change the account's password to something new and super secure once the points have been moved.
Now, having said that, I did get a piece of feedback from a listener about an incredibly cool looking device. I've got it on the way to me because I want to understand it and be able to talk about it. It is a little dongle which has a USB port and it is a Bluetooth keyboard dongle.
meaning that what Dale could do, if he had this, or if any of our listeners had this problem: Dale could have this with him, give it to the travel agent, and have her plug it into her computer, just any USB port. Now, very much like the original YubiKey, this thing looks like a USB keyboard.
So then there are Android and iOS and other apps for this thing. So Dale would be able to send his password through this app and it would type into the password field on the travel agent's computer.
which is kind of a cool hack. Anyway, I'll know more about it. I'll have all the details in next week's podcast for anybody who wants to jump ahead. It was not cheap, it was $37, and it's being shipped from Poland, as I recall. But still, I thought it was kind of a cool thing.
Chris C. asked a while back, you said something about a large company that was fined for not keeping teams or slack chats as required by federal law. Do you remember who this was and what the law was?
So I replied to Chris, I vaguely recall that in passing, but I have no specific recollection. And I said, GRC's onsite search in the upper right of every page can be used to search only the podcast transcripts which are fully indexed. So you might be able to track down the reference that way.
So that was my reply to Chris. I wanted to share this because I use GRC's search from time to time myself in the same way, when I'm looking for something from our own past. You've heard me casually mention that we talked about something, whatever it was, back during podcast number whatever.
I just don't want anyone to imagine for a second that I recalled that podcast. Like Chris here, I did recall that it was something that was mentioned, but not what or when. Since I get these sorts of questions often, like Chris asked, I just wanted to pass on to everyone that both the show notes and Elaine's precise transcripts are fully indexed, and that index can be easily searched using GRC's search box.
And I checked a little bit later; Chris had replied. He responded: "Thank you! I didn't know that was there." He said he found it in SN-959. He said, "Google did not help me, but the search engine on your site, powered by the same company, did." So again, we do have, you know, essentially podcast-specific search,
which will allow anyone to find something that they think they recall that we talked about before, but can't remember exactly where or when. You're free to keep asking me, but I'll do the same thing you could do, which is to use the little search box in the upper right of every page at GRC.
And Leo, we are ready to talk about artificial general intelligence, whatever that is. We'll at least maybe know what it is, even if we don't know when, about half an hour from now. But let's take our last break and then we'll plow into that.
I'm excited. I'm really excited. I'm ready to take notes. Our show today is brought to you by those great folks at DeleteMe. I have some direct experience with DeleteMe because we have been using it for our CEO for some time now. If you've ever searched for your name online, and I don't actually recommend that you do this, but if you've done it, you know how much of your personal information is right there in public.
It's all from data brokers. They've been collecting this stuff for years, from every app you use. It's not just TikTok, it's Facebook, it's Instagram, every site you visit. They take all that information, they collate it, and they make basically a dossier about you and your family, about everybody you know.
Maintaining privacy is more than a personal concern. It's a family affair. That's why DeleteMe has introduced family plans, so you can have DeleteMe for everyone in the family. And I think they do have corporate plans as well; I think that's what we use, because you really should have DeleteMe
for every manager in your company. I've told this story before, forgive me if you've heard it, but we came to DeleteMe because somehow bad guys figured out what Lisa's phone number was, what company she worked for, who her direct reports were, and what their phone numbers were. I wonder where they got that information, right?
And as a result, they were able to run a spearphishing campaign, purporting to be texts from Lisa's phone, the CEO's phone, saying: Quick, I need some Amazon gift cards. I'm in a meeting. Get them and send them to this address.
Fortunately, our employees are smarter than that, but they immediately told me, you know, we've got to do something to reduce the amount of information about our management online. And that's when we went to DeleteMe. DeleteMe helps reduce risk from identity theft, from cybersecurity threats like that, from harassment, from all of the things privacy violations can do. It is not a nice thing.
DeleteMe's experts know where the data is. They will find and remove your information from hundreds of data brokers. And by the way, if you get the family or the corporate plan, you can assign a data sheet for each member.
It's tailored to them, so that you could say, well, don't delete the Instagram information, but do delete the Facebook, that kind of thing. Easy-to-use controls. So as an account manager, you can manage privacy settings for the whole family. But this is important: once they've removed that data, they don't just then walk away.
Because you could do that yourself. First of all, you need to know the hundreds of data brokers out there, but then you need to know as new ones come online, and they do every single day. It's a very profitable business. You need to know to go back, and that's what Delete Me does. They continue to scan and remove your information regularly, not only from the existing data brokers, from all the new ones that pop up all the time. And I'm talking addresses, photos, emails, relatives, phone numbers, social media, property value,
everything. It's all online. Data brokers have it all. Until we get a comprehensive privacy law in this country protecting you, you've got to protect yourself and your family and your business. Reclaim your privacy by going to joindeleteme.com slash twit. The offer code TWIT gets you 20% off, which is a great deal. Join DeleteMe
dot com slash twit, and use the offer code TWIT for 20 percent off. And if you go to joindeleteme.com slash twit, look at all the offerings. They have a very granular set of offerings that can really do the things you need to do to protect yourself online, so I very much recommend looking at all that. It's really an amazing company. Joindeleteme
dot com slash twit. Thank you, DeleteMe. By the way, after the National Public Data broker breach, Steve, we searched for my name. Mine is right there, but Lisa's is not, since she's doing everything with DeleteMe. And I thought that's a pretty telling thing, that DeleteMe really worked. Joindeleteme.com slash twit. Thank you, DeleteMe.
All right, I've been dying to hear this. Steve Gibson on AGI. Well, okay, Steve Gibson surveying a bunch of people's feelings about AGI. Yeah, that's fair. I want to know what you think, too, though. I think you'll probably give us some ideas. Yeah, I do have some feelings. So, okay, I should note that I already have
everything I need, with thanks to today's ChatGPT-4o, and it has changed my life for the better. I've been using it increasingly as a time saver, sort of in the form of a programming-language
super search engine, and even a syntax checker. I've used it sort of as a crutch when I need to quickly write some throwaway code in a language like PHP, where I do not have expertise, but I want to get something done quickly; to solve a quick problem, you know, parse a text file in a certain way into a different format, that sort of thing.
In the past, I would take, you know, if it was a somewhat bigger project than that, an hour or two, putting queries into Google, following links to programmers corner, stack overflow, or other similar sites, and I would piece together the language construction that I needed from other similar bits of code that I would find online.
Or, if I was unable to find anything useful to solve the problem, I would then dig deeper into the language's actual reference texts to find the usage and the syntax that I needed, and then build up from that.
You know, after you've programmed in a bunch of languages, they're all sort of the same, largely. I mean, Lisp is a different animal entirely, as is APL. But with the procedural languages, it's just a matter of, okay, what do I use for inequality? How exactly are the looping constructs built? That kind of thing. That's no longer what I do,
because I now have access to what I consider a super programming-language search engine. Now I ask the experimental coding version of ChatGPT for whatever it is I need. I don't ask it to provide the complete program, since that's really not what I want. I love coding in any language, because I love puzzles, and puzzles are language-agnostic.
But I do not equally know the details of every other language. There's nothing ChatGPT can tell me about programming assembly language that I have not already known for decades.
But if I want to write a quick throwaway utility program, say in VisualBasic.NET, a language that I've spent very little time with because I like to write in assembly language, and I need to, for example, quickly implement an associative array, as I did last week,
rather than poking around the internet or scanning through the Visual Basic syntax to find what I'm looking for, I'll now just pose the question to ChatGPT. I'll ask it very specifically and carefully for what I want. And in about two seconds, I'll get what I may have previously spent 30 to 60 minutes sussing out online.
It has transformed my work path for that class of problem that I've traditionally had. It's useful whenever I need some details where I do not have expertise; that's the way I would put it, I think. And I've seen plenty of criticism levied by other programmers of the code produced by today's AI.
To me, their criticism seems misplaced, and maybe just a bit nervous. And maybe they're also asking the wrong question. I don't ask ChatGPT for a finished product, because I know exactly what I want, and I'm not even sure I could specify
the finished product in words, or that that's what it's really good for. So I ask it just for specific bits and pieces, and I have to report that the results have been fantastic. I mean, it is literally the way I will now code
languages I don't know; I think that's probably the best way to put it. It has, you know, ingested the internet. And, obviously, we have to use the term "knowing" very advisedly; it doesn't know them. But,
whatever it is, I am able to ask it a question and I actually get really good answers to tight problem domain questions. But what I want to explore today is what lies beyond what we have today, what the challenges are and what predictions are being made
about how and when we may get more, whatever that "more" is. The "there" where we want to get is generically known as artificial general intelligence, abbreviated AGI.
Okay, so let's start by looking at how Wikipedia defines this goal. Wikipedia says, artificial general intelligence is a type of artificial intelligence that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks. This contrasts with narrow AI, which is limited to specific tasks.
Artificial Superintelligence ASI, on the other hand, refers to AGI that greatly exceeds human cognitive capabilities. AGI is considered one of the definitions of strong AI.
They say creating AGI is a primary goal of AI research and of companies such as OpenAI and Meta. A 2020 survey identified 72 active AGI research and development projects across 37 countries.
The timeline for achieving AGI remains a subject of ongoing debate among researchers and experts. As of 2023, some argue that it may be possible in years or decades; others maintain it might take a century or longer; and a minority believe it may never be achieved.
Notable AI researcher Geoffrey Hinton has expressed concerns about the rapid progress toward AGI, suggesting it could be achieved sooner than many expect.
There's debate on the exact definition of AGI and regarding whether modern large language models, LLMs, such as GPT-4, are early forms of AGI. Contention exists over whether AGI represents an existential risk.
Many experts on AI have stated that mitigating the risk of human extinction posed by AGI should be a global priority. Others find the development of AGI to be too remote to present such a risk.
AGI is also known as strong AI, full AI, human level AI, or general intelligent action. However, some academic sources reserve the term strong AI for computer programs that experience sentience or consciousness.
In contrast, weak AI or narrow AI is able to solve one specific problem but lacks general cognitive abilities. Some academic sources use weak AI as the term to refer more broadly to any programs that neither experience consciousness nor have a mind in the same sense as humans.
Related concepts include artificial superintelligence and transformative AI. And artificial superintelligence is a hypothetical type of AGI that is much more generally intelligent than humans. While the notion of transformative AI relates to AI having a large impact on society, thus transforming it, for example, similar to the agricultural or industrial revolutions.
A framework for classifying AGI levels was proposed in 2023 by Google DeepMind researchers. They define five levels of AGI: emerging, competent, expert, virtuoso, and superhuman.
For example, a competent AGI is defined as an AGI that outperforms 50% of skilled adults in a wide range of non-physical tasks; and a superhuman AGI, in other words an artificial superintelligence, is similarly defined, but with a threshold of 100%.
They consider large language models like ChatGPT or LLaMA 2 to be instances of the first level, emerging AGI. Okay.
So we're getting some useful language and terminology for talking about these things. The article that caught my eye last week, as we were celebrating the thousandth episode of this podcast, was posted on Perplexity AI, titled "Altman predicts AGI by 2025."
The perplexity piece turned out not to have much meat, but it did offer the kernel of some interesting thoughts and some additional terminology and talking points, so I still want to share it.
Perplexity wrote, OpenAI CEO Sam Altman has stirred the tech community with his prediction that artificial general intelligence, AGI, could be realized by 2025, a timeline that contrasts sharply with many experts who foresee AGI's arrival much later.
Despite skepticism, Altman asserts that OpenAI is on track to achieve this ambitious goal, emphasizing ongoing achievements and substantial funding while also suggesting that the initial societal impact of AGI might be minimal.
In a Y Combinator interview, Altman expressed excitement about the potential developments in AGI for the coming year. However, he also made a surprising claim: that the advent of AGI would have surprisingly little impact on society, at least initially. This statement has sparked debate among AI experts and enthusiasts, given the potentially transformative nature of AGI.
And Altman's optimistic timeline stands in stark contrast to many other experts in the field, who typically project AGI development to occur much later, around 2050. Despite the skepticism, Altman maintains that OpenAI is actively pursuing this ambitious goal, even suggesting that it might be possible to achieve AGI with current hardware.
This confidence coupled with OpenAI's recent $6.6 billion funding round and its market valuation exceeding $157 billion underscores the company's commitment to pushing the boundaries of AI technology.
Achieving artificial general intelligence faces several significant technical challenges that extend beyond current AI capabilities. So here we have four bullet points that outline what AGI needs, and of which there's no sign today. First, common sense reasoning: AGI systems must develop an intuitive understanding of the world, including implicit knowledge and unspoken rules, to navigate complex social situations and make everyday judgments.
Second, context awareness: AGI needs to dynamically adjust behavior and interpretations based on situational factors, environment, and prior experiences. Third, handling uncertainty: AGI must interpret incomplete or ambiguous data,
draw inferences from limited information, and make sound decisions in the face of the unknown. And fourth, continual learning: developing AGI systems that can update their knowledge and capabilities over time without losing previously acquired skills remains a significant challenge.
So one thing that occurs to me as I read those four points, reasoning, contextual awareness, uncertainty, and learning is that none of the AIs I've ever interacted with has ever asked for any clarification about what I'm asking.
That's not something that appears to be wired into the current generation of AI. I'm sure it could be simulated if it would have further raised the stock price of the company doing it. But it wouldn't really matter because it would be a faked question.
like that very old ELIZA pseudo-therapist program from the '70s. You know, you would type into it, "I'm feeling sort of cranky today," and it would reply, "Why do you think you're feeling sort of cranky today?" It wasn't really asking a question. It was just programmed to seem like it was understanding what we were typing in.
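For anyone who never played with it, the trick ELIZA relied on can be captured in a few lines: a pattern table that echoes the user's own words back as a question, with no model of meaning anywhere. This is a toy sketch of the technique, not Weizenbaum's actual script, which used a much larger rule set.

```python
import re

# ELIZA-style reflection: match a template, then parrot the captured
# phrase back inside a canned question. No understanding is involved.
RULES = [
    (re.compile(r"i'?m (feeling .+?)\.?$", re.IGNORECASE),
     "Why do you think you're {0}?"),
    (re.compile(r"i (?:want|need) (.+?)\.?$", re.IGNORECASE),
     "What would it mean to you to get {0}?"),
]

def respond(line: str) -> str:
    for pattern, template in RULES:
        m = pattern.search(line)
        if m:
            return template.format(m.group(1))
    # The classic fallback when nothing matches.
    return "Please tell me more."
```

Feed it "I'm feeling sort of cranky today." and the "therapy" question comes straight from the template, which is exactly the hollowness being described.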
The point I hope to make is that there's a hollowness to today's AI. It's truly an amazing search engine technology, but it doesn't seem to be much more than that to me. There's no presence or understanding behind its answers.
The perplexity article continues saying, overcoming these hurdles requires advancements in areas such as neural network architectures, reinforcement learning, and transfer learning. Additionally, AGI development demands substantial computational resources and interdisciplinary collaboration among experts in computer science, neuroscience, and cognitive psychology.
While some AI leaders, like Sam Altman, predict AGI by 2025, many experts remain skeptical of such an accelerated timeline. A 2022 survey of 352 AI experts found that the median estimate for AGI development was around 2060, also known
as Security Now episode 2860.
90% of the 352 experts surveyed expected to see AGI within 100 years. So 90% expect it to take no longer than 100 years, but the median is by 2060, not next year, as Sam suggests. They wrote that this more conservative outlook stems from several key challenges.
First, the missing ingredient problem. Some researchers argue that current AI systems, while impressive, lack fundamental components necessary for general intelligence. Statistical learning alone may not be sufficient to achieve AGI.
The missing ingredient problem: I think that sounds exactly right. Second, training limitations: creating virtual environments complex enough to train an AGI system to navigate the real world, including human deception, presents significant hurdles.
And third, scaling challenges: despite advancements in large language models, some reports suggest diminishing returns in improvement rates between generations.
These factors contribute to a more cautious view among many AI researchers who believe AGI development will likely take decades rather than years to achieve. OpenAI has recently achieved significant milestones in both technological advancement and financial growth.
The company successfully closed, and here they're saying again, a massive $6.6 billion funding round, valuing it at $157 billion. But who cares? That's just Sam being a good salesman.
They said this round attracted investments from major players like Microsoft, Nvidia, and SoftBank, highlighting the tech industry's confidence in OpenAI's potential. The company's flagship product, ChatGPT, has seen exponential growth, now boasting over 250 million weekly active users.
OpenAI has also made substantial inroads into the corporate sector, with 92% of Fortune 500 companies reportedly using its technologies. Despite these successes, OpenAI faces challenges, including high operational costs and the need for extensive computing power.
The company is projected to incur losses of about $5 billion this year, primarily due to the expenses associated with training and operating its large-language models.
So, when I was thinking about this idea of, you know, we're just going to throw all this money at it and it's going to solve the problem, and oh look, the solution is going to be here next year, the analogy that hit me was curing cancer.
Because there sort of is an example of, you know, "We just had a breakthrough, and this is going to cure cancer." It's like,
no, we don't really understand enough yet about human biology to say that we're going to do that. And, you know, I know that the current administration has had these cancer moonshots, and it's like, okay, have you actually talked to any biologists about this, or do you just think that you can pour money on it and it's going to
do the job? So that's not always the case. To me, this notion of the missing ingredient is the most salient point of all of this. What we may have today has become very good at doing what it does, but it may not be extendable. It may never be what we need for AGI.
But I think that what I've shared so far gives a bit of calibration about where we are and what the goals of AGI are.
I also found a piece in InformationWeek, where the author did a bunch of interviewing and quoting of people, that I want to share just to finish this topic off. It was titled "Artificial General Intelligence in 2025: Good Luck With That."
And it had the teaser: AI experts have said it would likely be 2050 before AGI hits the market. OpenAI CEO Sam Altman says 2025. But it's a very difficult problem to solve.
So they wrote: A few years ago, AI experts were predicting that artificial general intelligence would become a reality by 2050. OpenAI has been pushing the art of the possible, along with big tech. But despite Sam Altman's estimate of 2025, realizing AGI is unlikely soon.
H.P. Newquist, author of The Brain Makers and executive director of The Relayer Group, a consulting firm that tracks the development of practical AI, said:
"We can't presume that we're close to AGI because we really don't understand current AI, which is a far cry from the dream of AGI. We don't know how current AIs arrive at their conclusions,
nor can current AIs even explain to us the processes by which that happens. That's a huge gap that needs to be closed before we can start creating an AI that can do what every human can do.
And a hallmark of human thinking, which AGI will attempt to replicate, is being able to explain the rationale for coming up with a solution to a problem or an answer to a question. We're still trying to keep existing large language models from hallucinating." And I'll just interrupt to say that I think this is the crucial point.
Earlier, I described ChatGPT as being a really amazingly powerful internet search engine. Partly that's because that's what I've been using it to replicate. For my own needs, as I said, it's been a miraculous replacement for a bunch of searching I would otherwise need to do myself.
My point is, this entire current large language model approach may never be more than that. This could be a dead end. If so, it's a super useful dead end. But it might not be the road to AGI at all. It might never amount to more than a super-spiffy search engine.
The InformationWeek article continues: OpenAI is currently alpha testing Advanced Voice Mode, which is designed to sound human, such as pausing occasionally while speaking to draw a breath. It can also detect emotion and nonverbal cues.
This advancement will help AI seem more human-like, which is important, but there's more work to do. And frankly, that's where we begin to get into the category of parlor tricks, in my opinion, like making it seem like more than it is, but it still isn't.
Edward Tian, CEO of GPTZero, which detects generative AI use in text, also believes the realization of AGI will take time. In an email interview with the article's author, Edward said, quote:
The idea behind artificial general intelligence is creating the most human-like AI possible, a type of AI that can teach itself and essentially operate in an autonomous manner. So, one of the most obvious challenges is creating AI in a way that allows the developers to be able to take their hands off eventually, as the goal is for it to operate on its own.
Technology, no matter how advanced, cannot be human. So the challenge is trying to develop it to be as human as possible. That also leads to ethical dilemmas regarding oversight. There are certainly a lot of people out there who are concerned about AI having too much autonomy and control, and those concerns are valid.
The question is how the developers make AGI while also being able to limit its abilities when necessary. Because of all these questions and our limited capabilities and regulations at present, I do not believe that 2025 is realistic."
Current AI, which is artificial narrow intelligence (ANI), performs a specific task well, but it cannot generalize that knowledge to suit a different use case.
Max Li, the CEO of the decentralized AI data provider Oort and an adjunct associate professor in the Department of Electrical Engineering at Columbia University, said, quote: "Given how long it took to build current AI models, which suffer from inconsistent outputs, flawed data sources, and unexplainable biases, it would likely make sense to perfect what already exists rather than start working on even more complex models. In academia, for many components of AGI, we do not even know why it works, nor why it does not work."
To achieve AGI, a system needs to do more than just produce outputs and engage in conversation, which means that LLMs alone won't be enough. Alex Jaimes, chief AI officer at the AI company Dataminr, said in an email interview, quote: "It should also be able to continuously learn, forget, make judgments that consider others, including the environment in which the judgments are made, and a lot more. From that perspective, we're still very far. It's hard to imagine AGI that doesn't include social intelligence, and current AI systems don't have any social capabilities, such as understanding how their behavior impacts others, cultural and social norms, et cetera."
Sergey Kastukevich, the deputy CTO at the gambling software company SOFTSWISS, said, quote: To get to AGI, we need advanced learning algorithms that can generalize and learn autonomously, integrated systems that combine various AI disciplines, massive computational power, diverse data, and a lot of interdisciplinary collaboration.
For example, current AI models, like those used in autonomous vehicles, require enormous data sets and computational power just to handle driving in specific conditions, let alone achieve general intelligence.
LLMs are based on complex transformer models. While they are incredibly powerful and even have some emergent intelligence, the transformer is pre-trained and does not learn in real time.
For AGI, there will need to be some breakthroughs with AI models. They will need to be able to generalize about situations without having to be trained on a particular scenario. A system will also need to do this in real time, just like a human can when they intuitively understand something.
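The distinction drawn above, between a pre-trained model with frozen weights and a system that learns in real time, can be made concrete with a minimal sketch. This is a purely illustrative toy (the class name and lookup-table "model" are invented for this example, not a real transformer): it shows that inference only reads the trained parameters and never updates them, which is why anything the model "sees" in conversation doesn't change what it knows.

```python
# Toy illustration (hypothetical, not a real LLM): a "pre-trained" model whose
# parameters are fixed once training ends. Inference reads the weights but
# never writes them, mirroring how a pre-trained transformer does not learn
# in real time from the prompts it is given.

class ToyPretrainedModel:
    def __init__(self, weights):
        # Frozen after "pre-training": a simple prompt -> answer lookup table.
        self.weights = dict(weights)

    def generate(self, prompt):
        # Inference: consult the fixed weights; unseen prompts get a fallback.
        # Crucially, nothing here mutates self.weights.
        return self.weights.get(prompt, "unknown")

model = ToyPretrainedModel({"capital of France?": "Paris"})
snapshot = dict(model.weights)

model.generate("capital of Sweden?")   # a novel input at inference time
assert model.weights == snapshot       # weights unchanged: no real-time learning
```

A system capable of the real-time generalization described above would have to update or adapt its parameters (or some memory) during use, which is exactly the step this sketch, like a pre-trained transformer, omits.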