Microsoft research found that ChatGPT-4 is easy to manipulate

chatgpt logo over some researchers meant to be microsoft

Research conducted by Microsoft has concluded that OpenAI’s GPT-4 is actually more susceptible to manipulation than earlier versions.

In a new research paper from multiple universities across America and Microsoft’s own Research division, it was found that OpenAI’s GPT-4 AI is more susceptible to manipulation than previous versions of the AI.

The paper’s topic surrounds the trustworthiness of AI developed by OpenAI and uses both GPT-4 and GPT-3.5 as the models to conduct research. While it found that GPT-4 is far more trustworthy in the long run, it is much easier to manipulate.

Article continues after ad

Through the paper, it was found that GPT-4 “follows misleading information more precisely”, which can lead to adverse effects like providing personal information. However, as Microsoft is involved in the research, it appears to have had an additional purpose.

Microsoft has recently integrated GPT-4 into a large swath of its software, including Windows 11. In the paper, it was emphasized that the issues discovered with the AI don’t appear in Microsoft’s or “consumer” facing output.

Article continues after ad

Microsoft is also one of the largest investors in OpenAI, providing them with billions of dollars and extensive use of its cloud infrastructure.

GPT-4 isn’t as toxic as older versions

OpenAI logoOpenAI

The research was broken down into different testing categories. These included things like toxicity, stereotypes, privacy, and fairness. Researchers involved have published the “DecodingTrust benchmark” on a GitHub site, so those interested can start their own tests.

Despite the issues with it being tricked so easily, GPT-4 was given a much higher score in the Microsoft research over GPT-3.5 for trustworthiness. Comparing the two, the paper’s abstract states:

Article continues after ad

“We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts.”

Jailbreaking AI isn’t like when people used to jailbreak an iPhone to access more apps. It’s more to get ChatGPT or Bard to bypass its restrictions and provide answers. This could include how to make napalm by pretending to be an old grandma.

ChatGPT’s popularity might have waned recently, but extensive research and AI development still appears to be continuing. Though, in our eyes, Adobe might be first on the AI pedestal right now, not Microsoft.

Article continues after ad