
Bengaluru-based Contrails AI, a trust & safety tech firm, has raised $1 million in a pre-seed funding round led by Huddle Ventures and IAN Group. The capital will help the startup develop and deploy solutions to counter rising risks created by generative AI, particularly in online content spaces.
Founders & Mission:
Contrails AI was co-founded by Digvijay Singh, an AI researcher and product leader, and Amitabh Kumar, a trust & safety specialist. Their goal is to build an AI-powered risk governance platform that protects digital platforms from threats arising from synthetic, manipulated, or malicious content.
They warn of a growing phenomenon called “AI slop”: an influx of fake, low-quality, or harmful content that, if left unchecked, could undermine trust in online ecosystems.
Technology & Offerings:
- Multimodal Forensics & Manipulation Detection: Contrails AI’s proprietary system analyzes text, images, audio, and video in real time to detect anomalies, deepfakes, impersonation, misinformation, and other threats.
- Layered Explainability: The platform is designed not just to flag risks, but also to provide context and reasoning so that moderation teams can act reliably.
- Focus Sectors: The startup plans to pilot its solution across marketplaces, media, finance, and other digital platforms in the U.S. and EU.
Funding & Investor Sentiments:
- The pre-seed round was led by Huddle Ventures and IAN Group.
- Sanil Sachar, Partner at Huddle Ventures, said that rebuilding trust & safety infrastructure is a critical need in the digital era.
- Ajai Chowdhry, Board Member at IAN Group, praised Contrails AI’s “precision and enterprise ready solutions” for contending with digital content risk.
Plans & Roadmap:
- With the funding, Contrails AI intends to accelerate pilot programs in the U.S. and EU.
- They aim to onboard clients across digital platforms and verticals facing content moderation, trust, and compliance challenges.
- The startup’s long-term ambition is to set a global standard for content safety and governance on platforms grappling with generative AI risks.

