We know that the popular ChatGPT AI bot can be used to message Tinder matches. It can also turn into a swankier version of Siri or get basic facts completely wrong, depending on how you use it. Now, someone has used it to make malware.

In a new report by the security company CyberArk (and covered by InfoSecurity Magazine), researchers found that you can trick ChatGPT into creating malware code for you. Worse, that malware can be difficult for cybersecurity systems to detect.
The full report goes into all the nitty-gritty technical details, but in the interest of brevity: it's all about manipulation. ChatGPT has content filters that are supposed to prevent it from providing anything harmful to users, like malicious computer code. CyberArk's researchers ran into those filters early on, but found a way around them.

Basically, all they did was forcefully demand that the AI follow very specific rules (show code without explanations, don't be negative, and so on) in a text prompt. After that, the bot happily spat out malware code as if it were totally fine. Of course, there are plenty of additional steps (the code needs to be tested and validated, for example), but ChatGPT was able to get the ball rolling on writing code with ill intent.
So, you know, watch out for that, I guess. Or just get off the grid and live in the woods. I’m not sure which is preferable, at this point.