Meta’s New Free AI Image Generator is Trained on 1.1 Billion Instagram and Facebook Pics

Meta recently launched “Imagine with Meta AI,” a standalone AI image-generation website. It is built on Meta’s Emu image-synthesis model, which was trained on 1.1 billion publicly visible Facebook and Instagram images.


With this AI model, users can generate unique images from a written prompt. Previously, Meta’s version of this technology was accessible only within its messaging and social networking apps, such as Instagram; now it is available to everyone on the “Imagine with Meta AI” website.

Emu, the AI model developed by Meta, might have been trained using pictures from your Facebook or Instagram accounts.

That said, Meta’s model was trained on only a small subset of its vast photo library: Instagram users alone were uploading over 95 million photos a day as of 2016.

If you want to keep your photos out of Meta’s future AI training runs, set them to private on Instagram or Facebook.

Meta claims to only use publicly available photos for training, but it is always possible for their policy to change in the future.

Meta’s model is skilled at generating photorealistic images, although it falls slightly short of Midjourney in quality. It handles intricate prompts well, outperforming Stable Diffusion XL, but may not match DALL-E 3 in this respect.

However, text rendering is not its strong suit, and its performance in producing different media outputs such as watercolors, embroidery, and pen-and-ink varies.

On a positive note, its portrayal of people is diverse across ethnic backgrounds. Overall, Meta’s model sits about mid-pack in the current landscape of AI image synthesis.

Under The Hood Of The Emu AI Model

Let’s dive into the details of Emu, the AI model powering Meta’s latest AI image-generation features.

According to a research paper published by Meta in September, Emu achieves its impressive image generation capabilities through a unique process called ‘quality-tuning.’

Unlike conventional text-to-image models that rely solely on massive collections of image-text pairs, Emu prioritizes “aesthetic alignment” through a post-training stage that uses a relatively small, curated set of visually compelling images.
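The two-stage idea, broad pre-training on a large noisy dataset followed by a short fine-tune on a small curated one, can be sketched with a toy linear model. This is only an analogy for quality-tuning, not Meta’s actual training code, and every dataset and function name below is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def gd(w, X, y, lr, steps):
    """Plain full-batch gradient descent on least-squares loss."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

# Stage 1: large, noisy "pre-training" set (stands in for billions of pairs).
X_big = rng.normal(size=(1000, 5))
y_big = X_big @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=2.0, size=1000)

# Stage 2: tiny, clean "curated" set with a slightly shifted target
# (stands in for the hand-picked, aesthetically aligned images).
X_small = rng.normal(size=(20, 5))
y_small = X_small @ np.array([1.2, -1.8, 0.7, 0.1, 2.9])

w_pre = gd(np.zeros(5), X_big, y_big, lr=0.05, steps=200)        # pre-training
w_tuned = gd(w_pre, X_small, y_small, lr=0.02, steps=200)        # "quality-tuning"

# The short curated fine-tune pulls the model toward the curated data.
print(mse(w_tuned, X_small, y_small) < mse(w_pre, X_small, y_small))  # True
```

The point of the analogy: the curated stage does not add knowledge so much as shift the model’s outputs toward the preferred distribution, which is what the Emu paper describes at the image-aesthetics level.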

Emu’s core is powered by a vast pre-training dataset consisting of 1.1 billion text-image pairs sourced from Facebook and Instagram.

While the Emu research paper does not explicitly mention the origin of this training data, reports from the Meta Connect 2023 conference reveal that Meta’s President of Global Affairs, Nick Clegg, confirmed the usage of social media posts, including images, as training data for AI models like Emu.

Meta’s approach is unusual among AI companies in that it has access to an enormous amount of image and caption data from its own services.

In contrast, other image-synthesis models often rely on images that have been obtained through unauthorized means from the Internet or licensed from commercial stock image libraries, or sometimes a combination of both.

Interestingly, Meta’s Emu paper appears to be the first publication on a major image-synthesis model that does not include a disclaimer about the model’s potential to generate reality-altering disinformation or otherwise harmful content.

That omission reflects a widespread acknowledgment (or perhaps resignation) that AI image-synthesis models are now commonplace. Whether that is a positive or negative development remains an open question.

Meta is, however, addressing concerns about harmful outputs by implementing filters, and it plans to introduce an invisible watermarking system that will add transparency and traceability to the Meta AI experience.
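Meta has not published how its watermark will work, but the general idea of an invisible watermark can be illustrated with a minimal least-significant-bit (LSB) sketch: a payload is hidden in pixel bits too small to see, then read back by anyone who knows the scheme. Everything below is a hypothetical illustration, not Meta’s system:

```python
import numpy as np

def embed(img: np.ndarray, bits: list[int]) -> np.ndarray:
    """Hide `bits` in the least-significant bits of the first pixels."""
    flat = img.flatten().astype(np.uint8)   # flatten() returns a copy
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b      # clear the LSB, set it to the payload bit
    return flat.reshape(img.shape)

def extract(img: np.ndarray, n: int) -> list[int]:
    """Read back the first `n` hidden bits."""
    return [int(v & 1) for v in img.flatten()[:n]]

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(8, 8, 3), dtype=np.uint8)
payload = [1, 0, 1, 1, 0, 0, 1, 0]          # e.g. a "made by AI" tag

marked = embed(image, payload)
print(extract(marked, len(payload)))        # [1, 0, 1, 1, 0, 0, 1, 0]

# Invisible: no channel value changes by more than 1 out of 255.
print(int(np.max(np.abs(marked.astype(int) - image.astype(int)))) <= 1)  # True
```

Real production watermarks are far more robust than this (they survive cropping, compression, and resizing), but the goal is the same: a machine-readable provenance signal that does not visibly alter the image.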

Additionally, at the bottom of the website, there is a small disclaimer stating that some images may be inaccurate or inappropriate.

The accuracy of images like cats drinking beer is certainly questionable, and the ethics of training on 1.1 billion photos from largely unidentified authors remains a matter of debate. Still, generating such images is undeniably fun.

How much fun may depend on your disposition, and on how concerned you are by the pace at which AI image synthesis is advancing.
