Published: 27 February 2026 | The English Chronicle Desk | The English Chronicle Online
Meta-owned Instagram has launched an internal investigation into a surge of artificial intelligence-generated profiles and content that disability rights groups say promote fetishistic portrayals of disabled people, prompting concerns about online harm and community safety.
Critics and advocacy organisations raised alarms after numerous AI-created accounts began circulating images and captions that sexualise or objectify individuals with physical disabilities. Disability rights campaigners describe the trend as dehumanising and exploitative, arguing that it reinforces negative stereotypes and can create hostile environments for disabled users who already face disproportionate harassment online.
According to sources familiar with Instagram’s response, the company’s safety and policy teams are reviewing accounts flagged by users and civil society groups to determine whether they violate existing platform standards on hate speech, sexual content, and harmful AI-generated content. Meta did not disclose how many profiles are under scrutiny, but confirmed that action would be taken against accounts found to break policy.
Instagram’s head of public policy said in a brief statement that the platform “does not tolerate content that demeans, objectifies, or exploits individuals on the basis of disability” and that it is working to refine its detection systems to better identify and remove harmful AI-generated imagery and associated accounts. The company added that users can continue to report problematic content directly through the app’s reporting tools.
Disability advocacy organisations have welcomed Instagram’s acknowledgement of the problem but are calling for stronger safeguards. Some groups urge Meta to develop clearer definitions of what constitutes fetishising content targeting disabled people and to prioritise updates to its community guidelines that explicitly address such misuse of AI.
Legal experts note that the rise of generative AI has outpaced many social platforms’ content moderation frameworks, creating gaps that can be exploited by bad actors producing algorithmically generated or altered imagery. “AI’s capacity to fabricate realistic human imagery presents novel challenges,” said one digital rights lawyer. “Platforms need to evolve their terms of service and enforcement mechanisms to protect vulnerable communities.”
The issue has drawn attention internationally. Similar concerns have been raised about AI-generated imagery on other social networks and art platforms, where the combination of automated image synthesis and weak moderation can amplify harmful portrayals of marginalised groups.
Instagram says that continued user feedback is critical to its efforts, and that it plans to engage with disability rights organisations to refine its policies. For disabled users and allies, the debate underscores wider concerns about how AI tools are used on social platforms and the responsibility of tech companies to mitigate harm in digital spaces.


























































































