The dark side of AI: Here’s how the tech can be used for scams and fraud

When OpenAI launched the groundbreaking tool ChatGPT in November 2022, it changed the world forever, showing that the artificial intelligence revolution is truly upon us. The application of AI has contributed to a number of technological advancements, such as the development of modern computer chips, generative media, and more.

Not everybody is excited about AI, though. Some may see it as a two-sided coin that has just as much potential for harm as it does for progress.

Below, I’ll outline a few of the harms associated with AI and how it can be used for scams, fraud, and other malicious activity.

1. Voice Cloning
Today, AI-driven programs are able to take samples of almost anybody’s voice and recreate them almost perfectly, using common phrases while matching the tone and even the accent of the original samples.

AI voice cloning has already been used in a number of financial crimes.

For example, voice cloning could bypass financial institutions’ voice password authentication systems, allowing scammers to access private bank accounts.

In the U.S. state of Arizona, a scammer used AI to call a parent while impersonating their child. The scammer convinced the parent that their child had been kidnapped and was in serious danger before demanding a US$1 million ransom.

2. Deepfakes And Impersonation
“Deepfakes” are computer-generated videos that can be used to impersonate someone and spread misinformation. While deepfakes have circulated on the internet for years, most have not been especially realistic, and a close observer could usually tell a real video from a fake.

But with AI-generated video and audio cloning, deepfakes have become more realistic than ever.

Some resourceful YouTubers and online course creators are using the technology to help them produce content. However, there could be just as many people using the technology maliciously, creating deepfakes of celebrities and other notable figures that may hurt their reputations.

3. Automated Hacking
One of the most practical applications of AI is for coding. AI can generate entire programs in a fraction of the time it would take a programmer to do so manually. AI can also run through thousands of lines of code in seconds to identify errors.

But AI can also be used for automated hacking. Hackers can use tools such as ChatGPT, for example, to write malicious code or malware.

What would previously require a team of hackers working day and night can now be accomplished by a single AI model. That’s pretty scary.

4. Chatbots and Privacy
Recently, there’s been some concern over the use of chatbots when it comes to accessing private data. Since chatbots such as ChatGPT are still in their “testing” phase, conversations are recorded and used to improve their accuracy and syntax.

Whenever users create an account with OpenAI or use Bing Chat, they agree that their data can be used for development purposes. So users shouldn’t share anything they don’t want to be recorded.

In early April of this year, Canada’s federal privacy commissioner launched an investigation into ChatGPT, based on an allegation that OpenAI is collecting and using personal information without consent.

5. Just How Dangerous Is AI?
Much like the early days of the internet, AI comes with a lot of potential dangers. From deepfakes to automated hacking, the risks could almost be as great as the benefits.

When used responsibly, AI can be an invaluable tool. But it can also be used maliciously by those with the worst of intentions.

While some officials around the world are pursuing stricter regulation, such as the European Union’s proposed Artificial Intelligence Act, the reality is that AI is likely here to stay.

Ultimately, it’s up to you to protect yourself from the potential dangers of AI.

Source: https://www.ctvnews.ca
