AI-powered tools pose a threat to password security, new study finds


Artificial intelligence tools like ChatGPT and Google Bard, and to a degree Microsoft Security Copilot, have opened up new avenues for hackers to run phishing campaigns, steal data, and extract sensitive information, according to a new report from Password Manager.

In a survey of 1,000 cybersecurity professionals, Password Manager set out to find how much of a threat AI-powered tools pose to the “average American.”

AI raises hacking concerns

Key findings from the report include:

  • 56% are “somewhat” or “very” concerned about threat actors using AI-powered tools to hack passwords.
  • 58% are “somewhat” or “very” concerned about people using AI-powered tools to create phishing attacks.
  • 52% say AI has enabled fraudsters to steal sensitive information.
  • 18% say AI phishing scams pose a “high-level” threat both to the average American individual user and to companies.

Commenting on the findings, Marcin Gwizdala, Chief Technology Officer at Tidio, told Password Manager:

“One of the threats we see with AI in general is phishing scams. ChatGPT can easily be mistaken for a human because it communicates seamlessly with users, without spelling, grammar, or verb-tense errors. That’s exactly what makes it a great tool for phishing scams.”

The survey also found that 52% of cybersecurity professionals say AI tools have made it “somewhat” or “very” easy for people to steal sensitive information.

“The threat posed by AI as a tool for cybercriminals is dire,” Steven JJ Weissman, an expert on fraud, identity theft, and cybersecurity, told Password Manager.

In the report, Weissman explains that AI has made phishing scams more convincing:

“In particular, many scams originate from foreign countries where English is not the first language, and this is often reflected in the poor grammar and spelling found in the phishing emails and text messages sent from those countries. But now, using AI, those phishing emails and text messages look far more legitimate.”

Five tips to protect against AI-powered scams

Password Manager subject matter expert Daniel Farber Huang offers five tips on his blog for individuals and businesses to avoid falling victim to cyber-related scams:

  1. Consider that any unsolicited communication – email, text, DM or otherwise – could be a scam, and take basic precautions when reviewing messages.
  2. If there is a compelling reason to respond to an incoming communication, it is safer to contact the sender or organization directly rather than hitting “reply.” Find the official phone number or email from the company’s website and contact them directly to make sure you are dealing with an authorized representative.
  3. Understand that basic bots are used for all types of solicitation and are trained to look like humans, including on sites like LinkedIn.
  4. If possible, consider adding an icon or emoji to your name on social media. For example, LinkedIn allows you to add emojis to your profile name. A real person writing to you is unlikely to type that graphic into a private message, but a bot copying your name will include it automatically, which serves as a red flag that you are being contacted in bulk.
  5. Be aware that voicemails, text messages, and even chat-room conversations can be generated with the goal of making you think you are dealing with a real person and tricking you into revealing personal or sensitive information.
