ChatGPT is Being Used to Make ‘Quality Scams’

by Caio Rodrigues | March 20, 2023 | Artificial Intelligence

Scams on the internet might get a lot more dangerous now, thanks to fraudsters having unobstructed access to ChatGPT, Techradar reports.

The widely popular AI-powered chatbot ChatGPT continues to make headlines. With its ability to do everything from debugging iframe code to writing complex computer programs, ChatGPT has established AI as the year’s tech buzzword.

Despite its huge popularity and engagement, ChatGPT has raised concerns about ethics and regulation.

A recent report published by cybersecurity researchers at Norton Labs laid out three key ways threat actors could abuse ChatGPT to make internet scams more effective: deepfake content generation, phishing creation, and malware creation.

“Norton Labs is anticipating scammers are also eyeing the capabilities of large language models and testing ways to enhance their cybercrimes to make them more realistic and believable,” stated the report.

The tool’s capacity to produce “high-quality misinformation or disinformation on a large scale” could aid bot farms in intensifying discord more effectively. This could enable malicious actors to effortlessly “instil doubt and manipulate narratives in multiple languages,” according to Norton.

Highly convincing ‘misinformation’

Writing business plans, strategies, and company descriptions in a convincing way is child’s play for ChatGPT. However, the same capability heightens the risk of misinformation, which can easily shade into outright scams.

“Not only is the content generated by ChatGPT sometimes unintentionally incorrect, but a bad actor can also use these tools to intentionally create content used to harm people in some way,” stated the report.

Its ability to generate “high-quality misinformation or disinformation at scale could lead to mistrust and shape narratives in different languages,” Norton Labs warned.

Writing product reviews has become increasingly easy with ChatGPT, and such reviews are hard to trace because the tool generates a different, individual response each time, even from the same prompt. That ability makes “spotting fake reviews and shoddy products” a real challenge.

Worryingly, the tool might also be used for bullying.

“Using these tools in harassment campaigns on social media to silence or bully people is also a possible outcome that would have a chilling effect on speech,” the report notes.

ChatGPT in phishing campaigns

ChatGPT is particularly good at generating human-sounding text in different languages, leaving readers none the wiser as to whether the text was produced by an AI or a human. Even OpenAI, the developer of ChatGPT, cannot reliably identify AI-written text, conceding that “it is impossible to reliably detect all AI-written text.”

The prospect of ChatGPT being used in phishing campaigns is a real one.

“Malicious actors can use ChatGPT to craft phishing emails or social media posts that appear to be from legitimate sources, making it more difficult to detect and defend against these types of threats,” stated the report.

As its popularity increases, a probable corollary is an increase in both the number of phishing campaigns and their sophistication.

The report suggested that “malicious actors can feed ChatGPT with real life examples of non-malicious messages from the companies they want to impersonate and order the AI to create new ones based on the same style with malicious intent.”

Such campaigns could prove highly successful in deceiving individuals into disclosing personal information or sending money to criminal entities. Norton Labs advised consumers to be cautious when “clicking on links or providing personal information.”
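
That advice can be made concrete. What follows is a minimal sketch in Python, not taken from the Norton Labs report, of one heuristic a cautious reader or a simple mail filter could apply: flag any link whose visible text names one domain while the underlying href points to another, a classic phishing tell. It uses only the standard library, and the sample message and the “examp1e-login.xyz” domain are hypothetical.

from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkMismatchChecker(HTMLParser):
    """Collect <a> tags whose visible text names a different domain than their href."""

    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.suspicious = []  # (visible_text, actual_href) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href:
            visible = "".join(self._text).strip()
            href_domain = urlparse(self._href).netloc.lower()
            # Only compare domains when the visible text itself looks like a URL.
            text_domain = ""
            if visible and " " not in visible and "." in visible:
                text_domain = urlparse(visible if "://" in visible else "//" + visible).netloc.lower()
            if text_domain and href_domain and text_domain != href_domain:
                self.suspicious.append((visible, self._href))
            self._href = None

# Hypothetical phishing snippet: the visible text shows a trusted domain, the href does not.
checker = LinkMismatchChecker()
checker.feed('<p>Log in at <a href="http://examp1e-login.xyz/reset">www.example.com</a></p>')
print(checker.suspicious)  # [('www.example.com', 'http://examp1e-login.xyz/reset')]

It is a crude heuristic (it will also flag benign mismatches such as a missing “www.” prefix), but it illustrates exactly the mismatch that well-written, AI-generated phishing copy is designed to distract the reader from noticing.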

ChatGPT can create malware

Generating code and working across different programming languages are part and parcel of ChatGPT’s services. So it is little wonder fraudsters are using it to generate malware.

“With the right prompt, novice malware authors can describe what they want to do and get working code snippets,” according to the report. This poses a serious threat of malware attacks sufficiently advanced to wreak havoc.

“One example is to generate code to detect when a bitcoin wallet address is copied to the clipboard so that it can be replaced with a malicious address controlled by the malware author,” explained the report.
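
The defence against that particular trick is straightforward to sketch. The snippet below is a minimal illustration rather than code from the Norton Labs report, and it assumes the third-party pyperclip package for clipboard access: it polls the clipboard and warns whenever a Bitcoin-address-looking string turns up that is not on a user-maintained allowlist, which is precisely the value a clipboard-swapping trojan would try to substitute. The allowlisted address shown is a hypothetical placeholder.

import re
import time

import pyperclip  # assumed third-party dependency: pip install pyperclip

# Crude patterns for common Bitcoin address formats (legacy/P2SH base58 and bech32).
BTC_ADDRESS = re.compile(r"\b(?:[13][a-km-zA-HJ-NP-Z1-9]{25,34}|bc1[a-z0-9]{25,62})\b")

# Addresses the user has verified out-of-band (hypothetical placeholder value).
ALLOWLIST = {"bc1qexampleaddressonlyforillustration00000"}

def watch_clipboard(poll_seconds: float = 1.0) -> None:
    """Warn whenever the clipboard holds a Bitcoin address that is not on the allowlist."""
    last_seen = ""
    while True:
        clip = pyperclip.paste()
        if clip != last_seen:
            last_seen = clip
            match = BTC_ADDRESS.search(clip)
            if match and match.group(0) not in ALLOWLIST:
                print(f"Warning: clipboard holds an unverified address: {match.group(0)}")
                print("Re-check it against the address you meant to copy before pasting.")
        time.sleep(poll_seconds)

if __name__ == "__main__":
    watch_clipboard()

Checking a pasted address against a known-good copy before sending funds defeats the swap outright, whether or not a monitor like this is running.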

As a result, the availability of such a chatbot is likely to drive an increase in the sophistication of malware.

This article originally appeared on MetaNews.
