Elon Musk’s Grok Uses Black Forest Labs’ Controversial AI Image Generator

On Tuesday night, Elon Musk’s Grok unleashed a wild AI image-generation feature that, much like its chatbot counterpart, has little in the way of safeguards. This new tool allows users to create outlandish images—like a fabricated scene of Donald Trump smoking marijuana on the Joe Rogan show—and upload them directly to the X platform.

Contrary to popular belief, it’s not Musk’s AI company doing the heavy lifting. The brilliant minds behind this audacious feature are actually from a new startup based in Germany—Black Forest Labs. The startup, which came out of stealth mode earlier this month with $31 million in seed funding, was revealed to be working with xAI to power Grok’s image generator using their cutting-edge FLUX.1 model.

Black Forest Labs was founded by Robin Rombach, Patrick Esser, and Andreas Blattmann, all of whom previously contributed to Stability AI’s Stable Diffusion models. Their goal is to make powerful AI image-generation models accessible to a wide audience, and they’ve wasted no time in doing so. Already, the internet is awash with outré images generated by this tool, and it seems Grok’s controversial image generator is just getting started.

Despite Black Forest Labs' stated aim of building trust in the safety of its models, its tools have fueled a deluge of AI-generated images on X. Users can prompt Grok to produce images, such as Pikachu wielding an assault rifle, that Google's and OpenAI's models would refuse outright. The difference is that those companies' generators enforce stringent copyright and content safeguards, which are conspicuously absent from Black Forest Labs' approach.

This lack of safeguards aligns with Musk’s public stance against them. He’s argued that training AI to be “woke”—in other words, incorporating ethical guardrails—actually makes the models less safe. The results are evident: images generated by Grok have no filters, leading to an unregulated flood of content, including deepfake images and misleading headlines. This reckless approach has turned X into a veritable firehose of misinformation.

The collaboration between Grok and Black Forest Labs hasn't been without its hiccups. A particularly glaring issue is the absence of watermarks on AI-generated images, making it difficult to distinguish authentic content from fabrications. The stakes of that gap were made clear earlier this year, when explicit AI-generated deepfake images of Taylor Swift went viral on X.

Furthermore, Black Forest Labs' ambitions extend beyond image generation: the startup has teased that a text-to-video model is in the pipeline, promising more powerful, and likely more controversial, tools in the near future. The existing implementation has already drawn heavy criticism, with social media users and public figures condemning the lax safeguards and highlighting the potential for widespread abuse and misinformation.

Black Forest Labs’ technology is undeniably impressive, but its implementation in Grok raises serious ethical questions. As the debate over AI safeguards continues, one thing is clear: the digital landscape is about to become even more unpredictable.
