Microsoft Removes WizardLM 2 LLM for Toxicity, But Users Rally to Preserve It

Last week, Microsoft researchers released WizardLM 2, a large language model they claim is among the most powerful open-source models available today. A few hours later, however, the company pulled the model offline because the team had forgotten to run “toxicity testing” before release. Nevertheless, users managed to save the LLM, and it remains available to anyone interested.

As reported by 404 Media, a few people managed to download the model shortly before its removal and re-upload it to GitHub and Hugging Face. As a result, the model that Microsoft deemed unready for public use and took offline is now unlikely to ever be fully scrubbed from the internet.

According to a now-deleted statement from the developers of WizardLM 2, the open-source model is a “next-generation, state-of-the-art large language model from Microsoft with improved performance on complex chat, multilingual tasks, reasoning, and agents.”

WizardLM 2 is trained on synthetic data created by other AI systems rather than on human-generated material such as text from the internet, books, and scientific journals.

“We believe that as naturally occurring human-generated data becomes increasingly exhausted in LLM training, data carefully created by AI and models supervised step by step by AI will be the only path to more powerful AI,” wrote the developers.

Microsoft researchers claimed they had evaluated the LLM using MT-Bench and concluded that the model “demonstrates highly competitive performance compared to the most advanced proprietary models, such as GPT-4-Turbo and Claude 3.” While there are many methods for evaluating LLM performance, and comparing models this way remains an imperfect science, the Microsoft researchers were confident they had built a powerful model.

Microsoft representatives declined to answer specific questions about why WizardLM 2 was removed shortly after release. However, the Twitter account @WizardLM_AI, associated with Can Xu, lead author of the papers on the original WizardLM, and his co-author and fellow researcher Qingfeng Sun, stated that the removal was due to an oversight.

“We are deeply sorry for what happened,” wrote WizardLM_AI. “We accidentally missed one item required for releasing the model: toxicity testing. We are completing this test as quickly as we can and will re-release the model as soon as possible. Don’t worry, and thank you for your concern and understanding.”

According to 404 Media, the official WizardLM 2 pages on GitHub and Hugging Face are still down, but multiple copies of the model can easily be found on the same platforms. The WizardLM Discord channel also links to a GitHub page listing numerous mirrors, and journalists report finding five different instances of the model on Hugging Face.
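For readers curious how such a mirror would be used in practice, a minimal sketch of loading a re-uploaded copy with the Hugging Face transformers library might look like the following. The repository name is a placeholder, not an official Microsoft identifier: the original pages are offline, and the exact mirror IDs vary, so substitute whichever mirror you locate.

```python
# Minimal sketch: loading a community mirror of WizardLM 2 with transformers.
# The repo ID below is hypothetical; replace it with an actual mirror.
from transformers import AutoModelForCausalLM, AutoTokenizer

mirror_repo = "some-user/WizardLM-2-7B"  # placeholder mirror ID

tokenizer = AutoTokenizer.from_pretrained(mirror_repo)
model = AutoModelForCausalLM.from_pretrained(mirror_repo, device_map="auto")

prompt = "Explain in one paragraph what synthetic training data is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Sampling parameters are illustrative only.
output_ids = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```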

The publication notes that it has not yet tested the model, so it is unknown how readily it generates harmful or “toxic” responses. And because the model is open source, people could create their own uncensored versions that produce controversial responses regardless of any safety testing.

The journalists conclude that, whatever happens next, the fact remains: Microsoft accidentally released an AI model it considered unready for the public and was unable to pull it back.
