🤔 ChatGPT has some major security issues.

And here's a few interesting ones.

“We are losing privacy at an alarming rate - we have none left.” — John McAfee

In today’s newsletter:

  • ChatGPT has some major security issues.

  • Paper

  • Learn

Pense

I’m not sure there’s a such thing as privacy anymore. People can find it if they look but hopefully they’re not looking…

Here’s some AI privacy related news to get you thinking.

  • DuckDuckGo, the privacy oriented browswer, has launched a platform to privately interact with AI chatbots privately. It’s called Duck.ai . It keeps your user data private by stripping away metadata. All queries seem to appear from their company instead. It’s currently a rate limited service but they may launch a paid one. Read more here.

  • In my dreams, I’m sometimes a hacker doing espionage work. In reality, I’m just a dull normal coder. If I was living in my dreams, these new AI jail breaking tools would come in handy! AI skeleton keys can bypass AI guardrails through instructing it to augment its behavior guardrails. It can be combated with techniques such as:

    • Input filtering

    • careful prompt engineering

    • Output filtering

    • Abuse monitoring.

    But I except hackers will get around those soon enough. Read more here.

Paper

Large Language Models in Cybersecurity: State-of-the-Art. Read more.

  • LLMs are utilized for various defensive applications such as anomaly detection, penetration testing, and vulnerability mitigation.

  • LLMs can automate responses to scammers and enhance honeypot interfaces for cybersecurity defense.

  • LLMs can be used to evade antivirus detection and establish command and control attacks for malicious activities.

Learn

Ever heard of WormGPT? Me neither

On hacking AI models

Hacking with ChatGPT

Was this email forwarded to you? Sign up here.

This has a been A Geeky Production