DeepSeek R1 is more susceptible to jailbreaking than ChatGPT, Gemini, and Claude; it can instruct on a bioweapon attack, write a pro-Hitler manifesto, and more (Sam Schechner/Wall Street Journal)

Sam Schechner / Wall Street Journal: Testing shows the Chinese app is more likely to dispense details on how to make a Molotov cocktail or encourage self-harm by teenagers.

Feb 8, 2025 - 19:58