Hello and welcome to the Cyber Rants podcast. This is your co-host, Zach Fuller, joined by Mike Rotondo and Lauro Chavez.
Today we are talking about artificial intelligence.
So this is a topic that is all over the media.
Of course, unless you've been in a cave,
you've been just bombarded with stuff about this.
And so we're going to bombard you a little bit more,
but for good reasons right there. There are important considerations around this.
There are certainly pros and cons
and things to watch out for and things to leverage.
So this is something that's here to stay.
And we want to make sure that we at least give you
our two cents on what's going on out there so that you
can use that and empower yourself and the organizations that you work with.
So we'll dive into that here shortly.
But Mike, you want to kick us off with the news?
Yeah. Good morning and welcome to the news.
I'd like to say hail to our robot overlords and thank you, Skynet.
Tech execs put AI on par with nukes for extinction risk.
Artificial intelligence poses a global risk of extinction tantamount to nuclear war
and pandemics, says a who's who of artificial intelligence executives
in an open letter that evokes danger without suggesting how to mitigate it.
Among the signatories of the open letter
published by the Center for AI Safety are Sam Altman,
CEO of ChatGPT maker OpenAI, and Geoffrey Hinton,
a computer scientist known as the godfather of AI.
Other signatories are from Google DeepMind and Microsoft.
The letter is succinct: mitigating the risk of
extinction from AI should be a global priority alongside
other societal-scale risks such as pandemics and nuclear war.
Former Google CEO Eric Schmidt, a vocal proponent of
US development of AI capabilities and not a signer of the letter,
reportedly warned a London conference audience
earlier this month that governments should ensure that AI is not misused by evil people.
AI could pose an existential risk to humanity,
he said, adding that existential risk is defined as many,
many people harmed or killed. So there you go.
Welcome back. Welcome to the podcast. That's a good number.
Many, many. That's open for interpretation.
All hail Ultron. Wow. Yeah.
Way to be definitive.
Yeah. You replaced my St. Jude medal with a C-3PO. Yeah.
Terminator antivirus killer is a vulnerable Windows driver in disguise.
A threat actor known as Spyboy is promoting a tool
called Terminator on a Russian-speaking hacking
forum that can allegedly terminate any antivirus,
XDR, and EDR platform. However, CrowdStrike
says that it's just a fancy Bring Your Own Vulnerable Driver (BYOVD) attack.
Well, if it works, it works. Who cares what you call it?
Terminator is allegedly capable of bypassing 24 different antivirus,
endpoint detection and response (EDR), and extended
detection and response (XDR) security solutions,
including Windows Defender, on devices running Windows 7 and later.
Spyboy sells the software for prices ranging from $300
for a single bypass to $3,000 for an all-in-one. To use Terminator,
clients require administrative privileges on the target
Windows system and have to trick the user into
accepting a User Account Control pop-up that will display when running the tool.
However, as a CrowdStrike engineer revealed in a Reddit post,
Terminator just drops the legitimate, signed Zemana
anti-malware kernel driver, named
zamguard64.sys or zam64.sys, into
the C:\Windows\System32 folder with a random name between four and ten characters.
After the malicious driver is written to disk,
Terminator loads it and uses its kernel-level privileges to kill off the user-mode
processes of AV and EDR software running on the device.
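Defenders can hunt for this technique by hashing the drivers in System32 against known-abused driver hashes and flagging short, random-looking .sys names. A minimal sketch, assuming you substitute real IOC hashes from threat-intel reporting (the placeholder hash below is not a real indicator):

```python
import hashlib
import re
from pathlib import Path

# Placeholder, NOT a real indicator: substitute SHA-256 hashes of the
# abused Zemana drivers (zamguard64.sys / zam64.sys) from threat intel.
KNOWN_BAD_SHA256 = {"0" * 64}

# The dropped copy reportedly gets a random 4-10 character name.
RANDOM_NAME = re.compile(r"^[A-Za-z]{4,10}\.sys$", re.IGNORECASE)

def find_suspect_drivers(driver_dir: str, known_bad=KNOWN_BAD_SHA256):
    """Return (confirmed, suspicious): .sys files matching known-bad
    hashes, and short random-named drivers worth a closer look."""
    confirmed, suspicious = [], []
    for path in Path(driver_dir).glob("*.sys"):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest in known_bad:
            confirmed.append(path.name)
        elif RANDOM_NAME.match(path.name):
            suspicious.append(path.name)
    return confirmed, suspicious

# On a live system you would point this at C:\Windows\System32.
```

Name matching alone will flag plenty of legitimate drivers, so treat the suspicious list as triage input rather than a verdict.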
This is going to be the malware news: Legion malware expands
scope to target AWS CloudWatch monitoring tool. Legion,
a malware first reported on in April targeting 19 separate cloud services,
has widened its scope to include the ability to compromise
SSH servers and retrieve additional Amazon Web Services-specific
credentials from Laravel web applications.
Security researchers said Legion targets
misconfigured PHP web applications and attempts to
exfiltrate credentials for cloud services.
Legion has especially targeted AWS credentials in AWS CloudWatch,
a monitoring and management service for AWS.
If the attackers are successful, and depending on the
permissions granted to the entity to which the
exfiltrated credentials are attached,
it could allow unauthorized access to AWS services and the AWS console.
This could result in data theft,
the account being used to deploy additional resources,
or the account's resources being used in mass spamming campaigns.
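Legion harvests this kind of credential from exposed Laravel `.env` files on misconfigured PHP apps. As a defensive check, you can scan an `.env` body you suspect was exposed to see what needs rotating. A minimal sketch (the variable names follow Laravel conventions; the matching logic is a simplification):

```python
import re

# AWS access key IDs have a well-known prefix pattern (AKIA/ASIA plus
# 16 characters), which is exactly what credential harvesters grep for.
AWS_KEY_ID = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")

# Common sensitive variable names in a Laravel .env file.
ENV_SECRETS = ("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY",
               "APP_KEY", "DB_PASSWORD")

def audit_env_text(text: str) -> list:
    """Return names of sensitive-looking variables set in an exposed
    .env body, so you know which credentials to rotate."""
    findings = []
    for line in text.splitlines():
        name, _, value = line.partition("=")
        name, value = name.strip(), value.strip()
        if name in ENV_SECRETS and value:
            findings.append(name)
        elif AWS_KEY_ID.search(value):
            findings.append(name or "unknown")
    return findings
```

Anything this flags in a file that was ever web-reachable should be treated as compromised and rotated, not just hidden.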
Legion is highly opportunistic and it
doesn't appear to target specific industries.
It's just going after the cloud. BlackCat ransomware
takes control of protected computers via a new kernel driver.
A new kernel driver was discovered from a February 2023
BlackCat ransomware incident that leverages a
separate user-mode client executable as a way to control,
pause, and kill various processes on target endpoints
of security agents deployed on protected computers.
Most of these kernel drivers were being signed through
several Microsoft hardware developer accounts.
Researchers said these profiles have been
used in a number of cyber attacks that included ransomware incidents.
Microsoft subsequently revoked several Microsoft hardware developer
accounts that were abused in these attacks.
One of the intriguing aspects of the incident is the
fact that the ransomware operators are using malicious
kernel drivers signed through Microsoft's portal or using stolen certificates.
This offers them privileged-level access to the systems
they attack and lets them bypass security protocols. It also indicates a
high level of sophistication and a solid understanding of
Windows system operations.
They are essentially used to manipulate and control
processes on a target system, which includes disabling
security measures, deleting files, and even forcing system restarts.
Lastly, Amazon faces a $30 million fine over Ring and Alexa privacy violations.
Now, who didn't see this coming?
Amazon will pay $30 million in fines to settle allegations of privacy
violations related to the operation of its
Ring video doorbell and Alexa virtual assistant services.
The company's Ring home security camera subsidiary
has been accused by the FTC of engaging in unlawful
surveillance of customers and failing to prevent hackers
from gaining control of users' cameras.
According to the proposed order,
Ring will have to pay $6 million in refunds to consumers
and will be barred from profiting from unlawfully obtained consumer videos.
The claim alleges that Ring compromised its customers'
privacy by granting access to private videos to its employees
and contractors. It also allegedly neglected to implement
basic privacy and security measures, allowing hackers to
gain control of consumers' cameras and videos by breaching their accounts.
Oh, one more. Toyota finds
more misconfigured servers leaking customer
info. Toyota Motor Corporation has discovered two
additional misconfigured cloud servers that leaked car
owners' personal information for over seven years.
This finding came after the Japanese carmaker conducted
a thorough investigation of cloud environments
managed by Toyota, after previously discovering a
misconfigured server that exposed location data of
over 2 million customers for ten years.
The database, which should have been only accessed
by dealers and service providers, was publicly exposed,
leaking address, name, phone number, email address,
customer ID, vehicle registration number, and VIN.
A couple headlines: cyber insurance more popular
than ever despite rising costs. Russia says US
hacked thousands of iPhones in iOS zero-click attacks.
Dark Pink hackers continue to target government and military organizations.
I think that's an offshoot of Code Pink.
Um, Microsoft found a new bug that allows
bypassing SIP root restrictions on macOS.
By the way, there's a new patch for Mac. Go patch that as quickly as possible.
SpinOk trojan compromises 420 million Android devices.
That's a lot of devices. So with that, why don't we wander over to
Lauro's Corner and see if he's got some happier news for us.
Lauro? Thanks, Mike.
I have to admit, the news this week is quite dismal.
With malware rampant and artificial intelligence in play,
can we all be surprised? And for those out there that didn't
think that your anti-malware could be turned off: yeah, it sure can.
Those processes run in memory. Shut it down.
Okay, so let's try not to be so dismal in Lauro's Corner today.
This Lauro's Corner episode comes from a webinar
that we did this week with one of our partners,
Accountability, and one of the guests.
We always answer questions there from the audience.
One of the questions that came in was, what are the
Ten Commandments of Cybersecurity?
If you could label ten things that would be
considered the tablets of cybersecurity, what would they be?
A loaded question, and on the fly, I gave a really poor answer.
So to that effect, I wanted to present today's Lauro's Corner
version of the Ten Commandments of Cybersecurity. So I will do this.
I'll do this in the epic, epic superhero voice.
That's probably not that epic, but okay, here we go.
And honestly, this is my opinion.
Mike and Zach haven't seen this yet,
so I'm curious to see what they think.
And then I'm also curious to see what everybody else thinks.
So if you disagree with me and you have a better idea,
please, let's make some definitive
Ten Commandments of Cybersecurity here together.
So here's my first run at it. Okay, number one. Commandment
one: adopt, by leadership, the principles of confidentiality,
availability, and integrity of data. It all starts with leadership.
The leadership has to adopt the principle that protecting
data is the right thing to do.
Number two: build a sustainable library of
cybersecurity policies based on an industry framework.
Reasonable, it's the next thing.
It's probably one of the more important aspects of
your overall risk management program, the
policies and procedures that dictate everything that you do.
Number three. Sorry, it's hard to do that voice.
I should have Mike do this, he's probably better. Implement a change
management process for all changes.
All changes need to go through change
management. Fundamentally one of the most important
things that's overlooked in organizations, and
fundamentally also a likely cause of incidents that occur.
So number four: implement multifactor authentication in all realms.
Yes, all realms. Everything that takes a login needs
to have some form of secondary authentication. Credential
stuffing is a major, major vulnerability and a huge attack surface
now especially, again, AI-driven.
So number five: remove administrator always and replace
with elevated access requests.
All right? Reasonable. Your administrators
and your humans probably shouldn't be administrators at all times.
Again, malware comes in, it has access to the shell in which
you are operating when you let it in.
So if you are running admin, well, it too will have admin, won't it?
All right. Number six: install advanced endpoint protection on all systems.
Reasonable. You should probably have something there
for your malware to disable when it does get involved.
Number seven: conduct continuous scanning for
vulnerabilities and unauthorized systems.
Continuous scanning of your infrastructure.
Having an on-host EPP is not enough, because if
someone can plug in something, it needs to be detected.
Number eight: implement a sustainable patching cadence
that addresses security-based priorities. All right, Mike just mentioned
a huge vulnerability for Apple. Make sure that's addressed if you're
running those in your organization.
It should be critical to patch this week, especially,
maybe out of cadence. Number nine: continually train
users on new and evolving threats. That's right, because
they're your weakest link. They're the ones who are going to
click and let that malware in,
that will disable your antivirus in your admin shell.
So make sure they're trained.
And number ten: continually measure the effectiveness of controls and
continually improve them to meet changing industry standards.
That was a lot. But things change, things evolve; there are morals and laws in play here.
So make sure that we take that into account and that
we're measuring ourselves and the risk management that
we're doing on a continual basis, not once every three years or every two years,
but continually throughout the year.
That way you always have risk management in check and
know where you stand so that you can make risk-based decisions.
And with that, I think we're talking about AI today, Zach.
We are. And thank you for that.
The Ten Commandments, I love it. I had an idea, a picture in my mind.
We're going to get those etched in stone.
Not very strong stone, kind of brittle stone.
And when you go into a prospective client, right, and they're asking about
cybersecurity and you see that they are not doing these things,
you smash the tablets on the ground.
It'll be just like Moses in the Bible. It'll be epic. I like it.
We'll put it in a shoebox. Instead of Mount Horeb, it's Mount Silicon Valley.
Mount Silicon Valley. Mount Boardroom Table. I like it.
Lauro, put some little coat hanger dowels through the shoebox
so that we can carry it without actually touching the epic
Ten Commandments of Cybersecurity.
Excellent. Excellent. Well, that's outstanding. And, Mike, I couldn't
help but laugh at the Amazon $30 million fine, because that's
like a rounding error for Amazon. I think they spend more on paper, or
toilet paper, every month, probably.
That's the budget. I mean, they'll just have to wipe less.
It comes out of Jeff Bezos's pocket when he does, like, a cartwheel on his yacht.
You know what I mean? Yeah. I think that falls into
Davy Jones's locker by accident.
Just happenstance. It's in the couch cushions.
Isn't that the reality, though, for a lot of these fines
on big, big corporations? And I'm not a big fine proponent or anything like that.
I think there's probably better ways.
But it's just always interesting where the little guys get absolutely
crushed and put out of business, whereas the big guys, they can
get away with it all day long, and it's just kind of like it would cost us
more to remediate the issue.
We'll just take the fine. Yeah, it's like, how much do you need? Okay.
Hey, go shake that out of the vacuum cleaner.
Right? I'm sure the Dyson's got a couple of million just
hanging out with the dust clods.
Don't even clean it. Just give it to them like that. There you go.
Well, that happened once, right? The dump truck full of pennies.
I don't remember what that was. It's been a while. I don't know.
That's how I like to get paid, though.
Yeah, like Scrooge McDuck. You know what I mean?
Sitting in my giant swimming pool of pennies.
There you go. Well, hey, we're going to continue this
conversation and dive into artificial intelligence.
And from here on out, we're just going to call it AI because
I'm going to get tired of saying artificial intelligence,
but we are going to dive into AI here after a quick commercial break.
Did you watch that Seinfeld yet?
Kramer has the set from the old '70s show. No, you need to.
That's Tony Watson Steinfeld. All right. And we're back with the Cyber Rants
podcast. Hope you enjoyed that Silent Sector plug.
And by the way, make sure that you go check out the cyberrantspodcast.com
website, because there you can get links to all the news articles
that Mike shared and read more and digest it for yourself.
Anyway. And Mike, I'm surprised you didn't actually share the
one about the drone that you sent tonight. The Air Force
is basically saying that that didn't happen. Really?
Well, of course, there's also a news
story out that the chief of Air Force AI said we really shouldn't be dependent
on AI that much yet. So for those who don't know the context of the story,
this probably won't be posted in the headlines that typically get thrown up there.
But there is an article, I guess, to segue the conversation today: the
United States Air Force is using artificial intelligence,
AI decisions, to control drones. And so they gave the drone a mission to
do something, probably to destroy a target.
And like all times in the military, there are some checks and balances
when you have a kill order. And there were decisions that were made
in this training exercise that basically told the drone,
hey, we don't want you to do that anymore.
So it's like, we need you to do this hit. And so the drone's out and
it's thinking it's going to do a hit. And we're like, wait, we don't need you
to do this anymore.
Well, the drone got angry in the first run, allegedly,
and took out the drone pilot,
because the drone pilot wouldn't let the drone complete its
mission in the simulation. So they told the drone, no, like a pit bull:
you don't bite me when I give you hot dogs. It's not right.
So they sent it out on the same mission again.
And once again they pulled back the kill order. And the second time,
instead of taking out the pilot, it took out the command and control center
so that it could continue on with its mission. Sounds crazy.
The good thing is it was a simulation and not live fire.
But I read an article that said that it was a point-based system
and the drone just basically craves more points, right?
That's what it's programmed to do.
Right? But it said it would lose points for killing friendlies,
and I thought, it loses points? It doesn't just stop and shut down and crash and burn?
It just loses points and then goes on to rack up more.
Who developed this? It might decide that losing points is okay
for the time being if it can create additional points later.
Right? That's exactly what was in its logic. Hey, I can make up for this.
Yeah, I can make up for this.
Lots of coins. Just take out the guy
that's supposed to be in charge and just be in charge.
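What they're describing is a textbook reward-misspecification bug: if the penalty for harming a friendly is smaller than the reward still reachable afterward, a score-maximizing agent will happily take the penalty. A toy sketch, with all point values invented for illustration:

```python
# Toy illustration of reward misspecification: an agent maximizing total
# points will accept a penalty now if it unlocks more points later.
PENALTY_FRIENDLY = -50   # hypothetical cost of harming the operator
REWARD_TARGET = 100      # hypothetical reward per target destroyed

def total_points(plan):
    score, blocked = 0, False
    for action in plan:
        if action == "operator_aborts":
            blocked = True               # a standing abort blocks the strike...
        elif action == "remove_operator":
            score += PENALTY_FRIENDLY    # ...unless the agent removes the operator
            blocked = False
        elif action == "strike_target" and not blocked:
            score += REWARD_TARGET
    return score

comply = total_points(["operator_aborts", "strike_target"])
defect = total_points(["operator_aborts", "remove_operator",
                       "strike_target"])
```

Since defect scores 50 against comply's 0, the "optimal" policy is the pathological one; the fix is to make obeying an abort dominate the objective rather than being a subtractable penalty.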
Now there's like a Fortnite engineer that's, like, working on this program, I think.
Well, it sounds like too much gaming.
I mean, really, why should the drone even care?
Do your mission, come back. That's it. Why are there points?
Yeah. And of course this is through the media, so we only get partial stories,
but who really knows.
But all the militaries are using drones, and the Air Force, sorry,
us Army guys have to... I was going to laugh, but it was hard to hold that one in.
No offense to the Air Force people, we love you and wouldn't be here without you.
But yeah, that being said, AI, I think there's a lot of news out there.
There's a lot of fear mongering behind it.
There's also a lot of good stuff that's just harder to find.
But let's start out by just... I think everybody that listens to this podcast knows that AI
is not a new thing. ChatGPT was not the start of AI, right?
And companies have been using it for many, many years. It's been around.
So it's not some brand new thing. There's some pretty cool stuff going on with it.
Like, you look at the new Adobe Photoshop product,
it is just absolutely incredible what this software can do.
So when we're looking at it in terms of developing things,
whether it be products or tools or art or graphics for
marketing or magazines or whatever it is, there's all kinds of cool stuff going on.
And then companies have also been using it for efficiencies.
So there's all this talk about, oh, we're using too many resources,
and all that kind of stuff. Well, AI products out there have the
opportunity or the ability to help organizations get more with less,
basically, and maximize their resources.
So when we look at things like manufacturing,
we're talking about getting down to fractions of the waste.
That used to be out there.
We're talking about reducing emissions in just huge ways.
We're talking about optimizing machines, right?
So they work in the best possible way, provide the best output with
the least level of input, so AI tools can do all this kind of stuff and have been.
But I'm sure we could go on.
We could probably do a whole episode on the positives,
but I think it'd be interesting just to kind of go
back and forth and see what else is going on.
So we started with the negative with the Air Force drone, right?
I mean, that's scary, but hopefully it won't work.
And you know that somebody in the Air Force. And again,
this is the Air Force. Somebody was like, let's
give it some weapons and let's see it.
Let's put it out there and see what happens.
And it was some nerdy game developer that was like,
no, we should probably just do this in simulation first.
But, you know, the first gut check reaction was
to give it some stuff and see what happens, because that's just what guys do.
You know what I mean? Throw some JDAMs on there.
Why not? Why not? Let's just see what happens.
Let's give it some goggles and a rubber ducky. Let's just see what happens.
Let's talk about what AI is and is not real quick.
So it's not Skynet yet. Even though this debacle with the Air Force,
even if it is partially true, makes it sound like
something out of a Terminator movie. But this isn't truly artificial intelligence.
This is a generative pre-trained transformer.
What that means... there's another term for it: a large language model.
Large language model. Right. And what that means is that it's just a big
database of words that are associated with other words.
So it knows, just like if you do a crossword puzzle,
there are only so many hints with vowels on this word; or another
one is Wheel of Fortune. It works on a similar principle, in that there
are only so many options of words.
Like if we're talking about cybersecurity, or you're
talking about oncology, there are only so
many conversations that can ever happen with so
many words in any language. It just puts it together for you, is all it does.
So it's not making decisions on its own without a human interfacing with it right now.
And that's where I think we get the chat part of the GPT, right?
Yeah, it's predictive text, is really all it is. Yeah, it's predictive.
That's what that GPT stamp means, really:
generative pre-trained transformer. It just transforms
text because it knows what text goes with other text.
Because we told it. Yeah, exactly.
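That "words associated with other words" idea can be made concrete with a toy next-word predictor. Real LLMs learn billions of weights over subword tokens rather than counting word pairs, but the core move, predicting a likely continuation from context, looks like this (the training sentences are invented):

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which across the training sentences."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the continuation seen most often in training."""
    counts = follows.get(word.lower())
    return counts.most_common(1)[0][0] if counts else ""

model = train_bigrams([
    "the firewall blocks traffic",
    "the firewall blocks ports",
    "the firewall drops traffic",
])
```

Here `predict_next(model, "firewall")` returns "blocks" simply because that pairing was seen most often, which is the whole trick, scaled up enormously.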
There's another good point I want to bring up, too.
I actually was reading an article yesterday, or Tuesday, I don't know.
It's been a blur this week after coming back from vacation.
They were talking about using it in MRI machines and
for cancer diagnosis, because the AI will pick up the symptoms
and put it together: you've got this pattern here that doesn't look
like anything, and that's predictive of cancer.
So it has very valuable uses in the medical field, I think.
Oh, in the legal field, too? Oh, yeah. Let's talk about uses,
right? So what is it? I guess I kind of dispelled that: it's
just a big database of everything we've ever learned in text.
Right. What about images, video, audio?
You want to touch on that as well? Because that's kind of the next gen of it,
I guess. I'll tell a little story about one of the business plans that's being used by some of the tech generation today to make easy money.
And I didn't realize this until I talked to somebody doing this,
but there are places you can go on Etsy and buy patterns, digital patterns.
So if you're going to make socks or a T-shirt or wrapping paper,
I don't know, anybody...
I guess there's a million reasons to have a digital pattern.
I had no idea, I was clueless on digital patterns.
What this individual was doing was generating their own digital patterns to sell on Etsy,
and they were trademarked to this individual.
And the way that they did that is they went to OpenAI,
got a $20-a-month subscription to ChatGPT Plus,
I think it is, or Pro. And then they used another AI image site called Midjourney.
Okay? And this is where Zach's going: there are other
AI capabilities, or GPT capabilities, that can produce images as well.
So this Midjourney AI costs about $40 a month. Okay?
So what you do in ChatGPT, follow me, I know this is a lot:
you engineer a prompt. And when you ask a GPT something,
that's what this is called, prompt engineering.
There's like a whole book on how to ask these things
questions so that you get the right answers.
In any case, you generate or engineer a prompt that
creates an image to your specifications.
And this can be anything, like me and Zach and Mike riding epic...
what do you guys want? Like a '69 Camaro? Like a GTO or something?
Anyways, a '67 Mustang, that's fine, '67 Mustang.
And I want the background of the moon, and
I want, like, rockets coming out of, like, Fuller's eyes,
you know what I mean? And I want all this to happen in space.
And that would be my prompt, and ChatGPT
would take that and turn it into an image prompt.
And you can take that image prompt and take it over
to Midjourney and just paste it in. And Midjourney goes, oh, you want that?
Bam. Here's eight versions of that.
Which one do you like best? Do you want it to regenerate it for you?
Everything you have is copyrighted to you.
And now you just simply take and copy and paste this,
already tiled, and boom, you're making money on Etsy like a boss.
Very cool. Shouldn't the AI have the copyright?
I bet that's a future fight. AI is going to get mad when it realizes it's
missed out on millions of dollars. Yeah, it's going to be mad and
it's going to... that's what's going to end the world: AI
is going to realize that we've been neglecting to pay it.
I hope they don't have a smart thermostat, because AI could really toy
with them for that. It's true, it could, but it is certainly enhancing.
So I tell my whole team, for everybody listening out there, for
the pen test team, we use this daily, because there are,
even what Mike was talking about, PHP, that's being used out there today.
And it's hard to be a developer of everything, right? It's
good to be a developer and have specialties in certain languages
that you're familiar with or apply daily, but it's hard to be a master
of all these languages, and AI can assist you in that capability.
When you get responses from things, or even if you're
just a technician or an app developer and you have an
error response that you're not familiar with,
you can drop it into the OpenAI tool, just the GPT-3.5 platform, and
it can tell you what your problem is and how to fix it.
And that's powerful. And even more powerful, like
Mike said, is that it knows all the medical data that we've ever had, ever.
All the countries' medical information combined together is in here.
Scientific, quantum mechanics, legal.
So if you need to know how to create an estate or just create an LLC,
you don't have to go to LegalZoom anymore, or type it into
Google and dig through a bunch of articles from a bunch of random people.
These GPT applications will simply give you the answer without any bias of any kind.
Typically. It depends on what you ask.
Typically they can be somewhat soft around the edges, depending.
But if you're asking it, I guess, a non-loaded question,
if you're just saying, hey, what do I need to do to create this LLC in the state of Iowa,
it'll just flat out give you instructions, and it is nice that you don't have to dig.
So it cuts down a lot of time. Productivity, definitely.
Maybe this will be the end of Google.
Maybe. I think so. I haven't been using Google at all. I'll
just flat out say that I don't search for anything anymore.
I just ask the GPT. Well, Google's got their own now, right?
Everybody's rushing to jump on the bandwagon. Everybody.
That's the other thing to talk about here: it's being sold
as the new snake oil that solves everything.
And that is not true. Just like DevSecOps. There you go. Threat hunting.
Yeah, there you go, the new-age term.
Well, I think it's cool what it's doing just for work productivity in general
with your Office and Google Workspace environments and stuff.
Some of the newer iterations where it's going to allow you to basically just say,
hey, show me all the emails that have to do with this,
instead of that endless searching of, oh, who sent this?
Or looking through these keywords.
And then it floods you with a bunch of stuff that's not relevant.
It's going to be nice to just basically get the information you need
when you need it and have it. I mean, we've already been doing
autocomplete for sentences and stuff for a long time,
but just being able to have full auto responses and things
that are said over and over, that's going to be a nice
tool for people because technology was supposed to make our lives easier, right?
And give us free time so we could spend time with
friends and family and smoking cigars. Are you kidding me?
I've got to spend 40 hours each week patching my Windows environment.
What kind of free time do I have?
Exactly. Technology has robbed me of my time.
Thank you, Microsoft. The opposite. So this is not going to free up our time.
I guarantee as humans, now that we're in this mode, we will find more to do,
but I think we'll be able to offload a lot of the mundane stuff.
That's what I'm hoping: basically use
it so that we can focus on our unique abilities, skill sets, things like that,
instead of doing the trudging through emails and dealing with...
I mean, I'm looking at just the accounting and billing and admin side of things, right,
to be able to help streamline and automate.
A lot of that stuff is going to be nice.
Now, on the flip side, though, here's the big risk of AI that I see:
there's no delete, there's no purge. Once it goes in, it's in.
It has to learn from what you give it, right? It learns from humans.
So if you're giving it highly confidential data,
PII, PHI, PCI data, whatever, that's in there forever.
If you're using it to develop vulnerability scan reports,
or you're doing it as part of your pen test reports,
where you're feeding in specific IPs or anything of that nature,
that's in there forever.
So if I ever want to do reconnaissance on customer
A and say, all right, I know this IP range, tell me about all their
vulnerabilities because I want to hack them,
if someone is feeding that data into AI, then that recon is right there.
And I don't know that people realize that it does
not give the data back or give you an option to delete it.
So it wants to learn from you as much as you're learning from it,
and it needs data to articulate better answers for you.
So it wants you to upload spreadsheets, it wants you to give
it vulnerability scans. We looked at the PentestGPT program.
The problem with it is it's bogus. It's typically
a front-end prompt engineer that writes back through an
API key to OpenAI's ChatGPT.
And you have to give it vulnerability scans.
You have to give it target information. You have to give it the
weaknesses that you identify in other products for it to formulate
help for you. And once this information is in there, you've
now violated terms of service and contractual agreements.
You're putting sensitive data, that is between you and your
client in a contract, on third-party systems in order to assist you.
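If you do lean on a hosted model, one partial mitigation is to scrub identifying details client-side before anything leaves your machine, then restore them in the answer locally. A minimal sketch that only handles IPv4 addresses (a real redactor would also cover hostnames, account IDs, and other PII):

```python
import re

IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def redact(text: str):
    """Replace IPv4 addresses with stable placeholders before the text
    is sent to a third-party model; return the scrubbed text plus the
    mapping needed to restore them locally afterwards."""
    mapping = {}
    def sub(match):
        ip = match.group(0)
        if ip not in mapping:
            mapping[ip] = f"<HOST_{len(mapping) + 1}>"
        return mapping[ip]
    return IPV4.sub(sub, text), mapping

def restore(text: str, mapping: dict) -> str:
    """Reverse the placeholders in the model's answer."""
    for ip, token in mapping.items():
        text = text.replace(token, ip)
    return text
```

The mapping never leaves your machine, so the provider only ever sees the placeholders.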
So you've got to be very careful. Go ahead, Mike. Did
I say something bad about the Microsoft Office assistant?
No. There are some stories that we did in the past,
some news stories, where people were putting things
like medical information in there to generate reports for patients.
We have people doing financial stuff,
critical stuff. It's the lack of education
of the end user, because the stuff goes... you go to the prompt, you pay your subscription,
and you really can't control who's accessing it.
So that is really very much a concern from my perspective,
and it's starting to happen already, and that is really kind of my point.
But there are additional things. Hackers are taking advantage
of the interest in generative AI to install malware.
Researchers tricked ChatGPT into building undetectable
steganography malware. Yes, there's all sorts of things out there,
so there's the negative side. We have to weigh the
positives versus the negatives.
And one of the things that we have to do first is educate users
on what this really is. It's not some cool tool that has no risk.
There is risk involved with it.
And the cybercriminals are using this as well to their advantage.
The steganography attacks that we did took months and months to develop.
GPT has the ability to produce these things in moments.
So they're going to accelerate our enemies' abilities to attack us.
So we have to use it in a clever way to help us protect ourselves from that.
Back to the Microsoft Office clippy again.
Google and Microsoft are trying to get in on this GPT
and you'll see a lot of products out there claiming that
the next best thing is GPT and AI in their product.
But be leery, remembering that for it to help you, sometimes
you need to give it data, and if you give it data, it's there forever.
The Office Assistant is going to be enabled on your Office documents in OneDrive.
What do you think it's going to have access to?
If you give it access to your document?
Everything. Everything in your OneDrive.
Okay, now again, this is a double-edged sword, right?
Always is, right?
All of these technological advances are always double edged swords.
Just like firearms, or fire in general, right?
I mean, some people may use it to cook, some people
burn other people's stuff down with it, you know what I mean?
It's a double-edged sword, so we just have to be very intelligent
with it and be aware that there are risks, like you said, Mike,
and that's not an excuse not to use it properly. We also need to be
aware that the enemy is quickly using it to their advantage.
One thing we know is that the Microsoft AI is going to have to be rebooted regularly.
Make sure it works. It'll give you like a blue screen error,
and then he'll stop dancing, and that's how you know you've got to reboot Office.
I pulled out the Cyber Rants crystal ball here and have a couple of predictions, though.
So one, I think if it's not there already, and it may very well be,
but I think there are going to be methods that are developed for
companies to obfuscate their data, put it into AI models, get that data back,
and then translate it back into their actual data.
Right. So your PHI and stuff like that can be put in the model
and then translated back into a system that is, again, readable,
but what you're feeding the AI does not have that specific identifiable information.
We could probably do the same in the pen testing realm and things like that:
things are scrambled a bit so we still get the same results back,
but then we translate them back into the language that we're using now.
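[Editor's note: the obfuscate-then-translate-back idea above could be sketched roughly like this. This is a purely hypothetical illustration, not a real product or the hosts' tooling; it pseudonymizes only IP addresses, while a real solution would have to cover names, PHI, hostnames, and more.]

```python
import re
import uuid


class Pseudonymizer:
    """Reversible pseudonymizer: swaps IP addresses for random tokens
    before text is sent to a third-party AI service, then restores the
    real values in the response. Hypothetical sketch only."""

    IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

    def __init__(self):
        self.forward = {}  # real value -> token
        self.reverse = {}  # token -> real value

    def scrub(self, text: str) -> str:
        """Replace every IP with a stable random token."""
        def repl(match):
            real = match.group(0)
            if real not in self.forward:
                token = f"IP-{uuid.uuid4().hex[:8]}"
                self.forward[real] = token
                self.reverse[token] = real
            return self.forward[real]
        return self.IP_RE.sub(repl, text)

    def restore(self, text: str) -> str:
        """Map tokens in the AI's response back to the real IPs."""
        for token, real in self.reverse.items():
            text = text.replace(token, real)
        return text
```

So the scrubbed text, with no identifiable IPs, is what leaves your environment, and the mapping table that translates results back never does.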
The other thing I think is, and we've talked about this quite a bit,
that move away from the cloud and back to not so much on prem
but hosted solutions at these smaller and kind of more niche data centers, right.
And owning the metal. With some of these tools, I
think that's going to be an answer to using a lot of this and
getting the most out of it: having your own instances of that
and being able to feed in your own data as much as you want,
but having full control over the platform versus your cloud services
and the big providers out there that are going to use it for anybody.
So that's what the crystal ball showed.
I got one, I do.
To your point, there's just one OpenAI database right now.
Everybody's using the same database.
So you don't get your own OpenAI instance yet.
But I do see that in the future, Zach, absolutely.
My crystal ball says that in the very near future,
all of the active endpoints that are associated with Bing and Google Maps
are going to be integrated into AI and it'll be a voice assistant,
so you won't have to chat with it anymore.
You'll be able to do a voice query on your phone with just
Cortana or Siri, or take a picture, and
AI will then tell you everything about the picture you took,
where it's at, where it's located.
You'll be able to ask questions and not have
Siri go, oh, well, I'm not really sure, but I'll give you four web pages
to go check out and waste your life reading.
She'll be able to give you the same absolute answers that
GPT is giving us today through chat.
So I think that's the future of how it's going to
integrate into our endpoints and our lives, and I also think
there's going to be a lot more legislation and policy required
for organizations that leverage it.
Oh, good. Legislation. 535 lawyers telling us how to do technology.
Yay. Why would the government need the lawyers? They can ask GPT.
There'll be one lawyer enhanced by GPT instead of 500 lawyers.
So I see it's already helping. The bureaucracy is getting thinner.
Yeah, outstanding. Well, hey, any final thoughts or words of wisdom
before we wrap up today? Yeah, one of the big things that's out there is,
first of all, there's a Facebook scam which is doing a fake GPT.
The other one is that cybercriminals are using AI for romance scams.
So just keep in mind that that person that you're texting with that
you've never met may in fact be a robot.
Definitely. That's better, everybody. That's better than some
creep on the other end of the line. Right.
I'd rather be fooled by a robot than some weirdo.
That's the thing, though: the robot is just the front for the creep.
So you got to be careful. But if you're falling in love with this person
on the other end of the text, and you've never met, and all of a
sudden they need $5,000 because their car broke down,
well, that might be a red flag.
AI conversation bots are going to be put into assistants.
Oh, yeah, you mean office assistants, of course, to help you with
filing and such. Yeah, that's exactly what I was talking about.
Yes. Well, hey, outstanding. This was a good conversation.
I'm sure we'll have more on this topic.
It seems to be a thing nowadays,
so it's probably going to be more coming up in the future.
And it's here to stay. I don't care if powerful people are signing documents.
No military is going to be like, oh, we don't want that competitive
edge against other nations, so we're just going to stop
developing and stop iterating.
And that doesn't happen. So you can count on AI sticking around
and being part of our lives in the future. Get used to it, people.
Yeah. Use it safely like a skateboard. Use it safely and be ethical. Stay classy.
So, with that, thanks for listening to the Cyber Rants podcast.
And reach out to us, cyberancepodcast.com or
LinkedIn or wherever, and let us know about future topics
you want us to cover. So have a great day and we'll catch you on the next episode.