
Meta recently fixed a security flaw in its Meta AI chatbot that could have exposed users’ private prompts and AI-generated responses. This discovery highlights the importance of protecting user data as AI tools become more common on platforms like Instagram and Facebook.
The bug was found by Sandeep Hodkasia, a security researcher and founder of AppSecure, on December 26, 2024. It allowed unauthorized access to private AI prompts and responses from Meta AI users. When users edited their prompts to create text or images, Meta’s servers assigned each prompt a unique number. By changing this number in the browser’s network traffic, Hodkasia could view someone else’s prompts and AI-generated content.
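As a rough illustration of that request pattern (the endpoint, URL, and ID values below are invented placeholders, not Meta's actual API), the manipulation amounts to swapping one numeric ID for another in otherwise normal traffic:

```python
import requests

# Hypothetical sketch of the flaw described above. The endpoint and ID
# values are invented placeholders, not Meta's real API.
BASE_URL = "https://example.com/api/prompts"

def fetch_prompt(session: requests.Session, prompt_id: int):
    """Request a prompt by its numeric ID, exactly as normal traffic would."""
    resp = session.get(f"{BASE_URL}/{prompt_id}", timeout=10)
    return resp.json() if resp.status_code == 200 else None

with requests.Session() as session:
    my_prompt_id = 1000042  # the ID the server assigned to the user's own prompt
    # The flaw: swap in nearby numbers. With guessable IDs and no ownership
    # check on the server, these can resolve to other users' prompts.
    for candidate in range(my_prompt_id - 3, my_prompt_id + 4):
        data = fetch_prompt(session, candidate)
        if data is not None:
            print(candidate, data)
```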
The root cause was that Meta's servers didn't check whether the person requesting the data was actually authorized to see it, a textbook insecure direct object reference (IDOR). The prompt numbers were also easy to guess, so an attacker could have used automated tools to harvest users' private data at scale. Meta fixed the bug on January 24, 2025, and said it found no evidence the flaw was ever abused.
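A minimal sketch of the server-side defenses this implies, assuming a generic Flask-style endpoint and a toy in-memory store rather than Meta's real backend: verify ownership before returning a record, and use unguessable identifiers instead of sequential numbers:

```python
import uuid
from flask import Flask, abort, jsonify

app = Flask(__name__)

# Toy in-memory store standing in for the real backend (an assumption for
# illustration). Keys are random UUIDs, so IDs cannot be enumerated by
# counting up from one's own prompt number.
PROMPTS = {
    str(uuid.uuid4()): {"owner": "alice", "text": "draw a cat astronaut"},
}

def current_user() -> str:
    """Placeholder for real session authentication."""
    return "alice"

@app.route("/api/prompts/<prompt_id>")
def get_prompt(prompt_id: str):
    prompt = PROMPTS.get(prompt_id)
    if prompt is None:
        abort(404)
    # The check that was missing in the reported bug: confirm the requester
    # actually owns the record before returning it.
    if prompt["owner"] != current_user():
        abort(403)
    return jsonify(prompt)
```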
Meta acted quickly after Hodkasia privately reported the bug through its bug bounty program, paying him $10,000 for the responsible disclosure. Meta spokesperson Ryan Daniels told TechCrunch the company found no evidence the bug had been exploited. The fix shows Meta's commitment to improving AI security and protecting user privacy.
This isn’t the first time Meta AI has faced privacy issues. Earlier in 2025, some users unintentionally made their private chatbot conversations public because of confusing sharing settings. These incidents highlight the challenge of keeping AI tools secure as they grow in popularity.
As tech companies like Meta, Google, and OpenAI race to develop AI tools, privacy risks are a growing concern. Bugs like this one could expose sensitive user information, such as personal prompts or AI-generated images and text. This can erode trust in platforms like Instagram and Facebook, where Meta AI is integrated.
The bug also raises questions about data protection regulations. Laws like the EU’s GDPR and U.S. state privacy laws such as California’s CCPA demand strict safeguards for user data. Incidents like this could push regulators to enforce tougher AI privacy rules to prevent data leaks.
Meta’s bug bounty program played a key role in fixing this issue. The program encourages ethical hackers to find and report security flaws in exchange for rewards. By paying Hodkasia $10,000, Meta showed how much it values these efforts. Ethical hacking helps companies fix vulnerabilities before hackers can exploit them, keeping users safe.
| Details | Information |
| --- | --- |
| Bug Discovered | December 26, 2024 |
| Bug Fixed | January 24, 2025 |
| Discovered By | Sandeep Hodkasia, AppSecure |
| Reward | $10,000 (bug bounty program) |
| Impact | Could expose private AI prompts and responses |
| Exploitation | No evidence of malicious use |
| Affected Platforms | Meta AI (used on Instagram, Facebook, and the standalone app) |
Meta is investing heavily in AI to compete with rivals like OpenAI’s ChatGPT and Google’s Gemini. Recent moves, like acquiring the voice startup Play AI, show Meta’s focus on expanding its AI features. However, this bug is a reminder that rapid AI growth must come with strong security measures; without proper safeguards, user trust is at risk.
Experts suggest Meta and other tech companies should regularly audit their AI systems to catch issues early. Moving away from open-source AI models to more controlled systems might also help improve security, as TechCrunch reported.
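One concrete form such an audit can take is an automated cross-user access test. The sketch below assumes a hypothetical staging API, placeholder tokens, and a placeholder response shape; a real audit would target the service's own endpoints:

```python
import requests

# A regression test in the spirit of such audits. The staging URL, tokens,
# and response shape are placeholders, not any real Meta endpoint.
API = "https://staging.example.com/api/prompts"

def test_cross_user_access_is_denied():
    """A prompt created by user A must not be readable with user B's token."""
    alice = {"Authorization": "Bearer ALICE_TOKEN"}  # placeholder credentials
    bob = {"Authorization": "Bearer BOB_TOKEN"}

    # Alice creates a prompt and records the ID the server assigns it.
    created = requests.post(API, json={"text": "hello"}, headers=alice, timeout=10)
    prompt_id = created.json()["id"]

    # Bob requests it directly; anything other than 403/404 is a leak.
    resp = requests.get(f"{API}/{prompt_id}", headers=bob, timeout=10)
    assert resp.status_code in (403, 404)
```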
To protect your data while using Meta AI or similar tools, follow these tips:
- Review your privacy and sharing settings before chatting with the bot.
- Avoid putting sensitive personal information in prompts.
- Keep your apps updated so security fixes reach you promptly.
This incident shows that even big tech companies face challenges in securing AI tools. As AI becomes a bigger part of our lives, companies must prioritize user privacy. Meta’s quick response is a step in the right direction, but ongoing efforts are needed to prevent future leaks.
By working with ethical hackers and improving data protection, Meta can build safer AI tools. This will help maintain user trust and avoid reputational damage as AI continues to grow.
Frequently asked questions:

What was the Meta AI bug?
The bug allowed unauthorized access to private user prompts and AI-generated content in Meta AI. It was caused by a flaw in how Meta’s servers handled prompt numbers, which could be guessed to view other users’ data.

When was the bug fixed?
Meta fixed the bug on January 24, 2025, after it was reported by security researcher Sandeep Hodkasia. The company found no evidence the flaw was exploited and rewarded the researcher $10,000 through its bug bounty program.

Is Meta AI safe to use now?
Yes, Meta fixed the bug, and there’s no evidence it was misused. To stay safe, check your privacy settings, avoid sensitive prompts, and keep your apps updated.

What is Meta’s bug bounty program?
Meta’s bug bounty program rewards ethical hackers for finding and reporting security flaws. It helps the company fix issues before they can be exploited, keeping platforms like Instagram and Facebook secure.

Why does AI privacy matter?
AI privacy matters because tools like Meta AI handle sensitive user data, like prompts and generated content. Leaks can expose personal information, erode trust, and lead to stricter regulations like GDPR.