How companies are turning AI on itself to fight fraud

In the early interviews, the candidate sailed through the basic, definition-based questions. But as the screening rounds got more complex, cracks began to appear. He panicked when the interviewers asked him to walk through decisions in a live scenario, and he showed little working knowledge of systems he claimed to have built. Eventually, he admitted he had used an artificial intelligence (AI) tool to rework his resume so it would appear more closely aligned with the role.

Recruiters and verification companies see versions of this story playing out across India’s job market today, as generative AI (GenAI) tools make it easier to create credible resumes, tighten narratives and pass early screening rounds. The example above was cited by Instahyre, a talent acquisition platform.

In a report, identity verification firm IDfy said it found nearly 195,000 white-collar candidates to be high-risk before onboarding, based on 4.9 million verification cases conducted over the past year. Among these, close to 70% of fraud cases were linked to fake or forged credentials. The report highlighted that employment fraud involving fabricated roles, shell employers and manipulated salary records emerged as a bigger issue than fake degrees, largely because it is harder to verify and easier to engineer.

Not too long ago, AI-generated content lived in memes and viral videos of dancing politicians. Now, it has found its way into resumes, identity cards, payslips and claims, with real financial and legal consequences.

Last November, Google introduced Nano Banana Pro, an image generation tool designed to produce realistic and editable media files from simple prompts. Soon after its launch, Bengaluru-based technologist Harveen Chadha demonstrated how the tool could be used to create convincing replicas of Indian identity documents, including Aadhaar and PAN (Permanent Account Number) cards.

As AI-assisted modification becomes easier and more widely accepted, it has also created ambiguity around what is permissible, what is legal and what can be classified as outright fraud.

In the enterprise domain, this ambiguity is enabling new kinds of fraud, from AI-modified fake degrees to fabricated food-delivery claims, and adding the effort of detecting them. The burden of verifying AI-modified documents is moving upstream to the companies that approve hires, process claims, settle disputes or release payments.


This is why sectors as different as banking, insurance, food delivery and recruitment are confronting variations of the same problem. Wherever onboarding, payouts or dispute resolution rely on documents or visual proof, the cost of verification is rising.

Earlier verification

Ashok Hariharan, co-founder and chief executive officer (CEO) of IDfy, says that in the past few years, both the volume and the sophistication of deception have gone up significantly. The levels of fraud also span a large spectrum.



Ashok Hariharan, co-founder and CEO of IDfy.

“Severity varies. Sometimes it’s complete fraud, like claiming to attend an institution they never attended. Sometimes they mess with dates, gaps, titles or salary numbers. These are the most common,” Hariharan says.

For the hiring industry, this means building the capability to identify resumes that aren’t outright fake but fail to hold up under closer scrutiny.

Hariharan says IDfy’s platforms use multiple AI-driven checks to identify anomalies across timelines, salary data, role descriptions and behavioural signals.

“You have to use AI to catch AI fraud now,” he said, adding that this approach allows the company to flag any forged or manipulated records even when documents appear authentic on the surface.
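
IDfy does not disclose the rules behind these checks. As a rough illustration of the idea, the Python sketch below, with invented field names, values and thresholds, flags two of the anomalies Hariharan describes, overlapping employment dates and implausible salary jumps, in a parsed resume:

```python
from datetime import date

# Hypothetical, simplified resume record; a real verification platform would
# combine many more signals (documents, references, behavioural data).
stints = [
    {"company": "Acme Corp", "start": date(2018, 1, 1), "end": date(2021, 6, 30), "salary": 600_000},
    {"company": "Globex", "start": date(2020, 9, 1), "end": date(2023, 3, 31), "salary": 2_400_000},
]

def flag_anomalies(stints, max_jump=2.0):
    """Return flags for overlapping stints and unusually steep salary jumps."""
    flags = []
    ordered = sorted(stints, key=lambda s: s["start"])
    for prev, curr in zip(ordered, ordered[1:]):
        if curr["start"] < prev["end"]:
            flags.append(f"overlap: {prev['company']} and {curr['company']}")
        if prev["salary"] and curr["salary"] / prev["salary"] > max_jump:
            flags.append(f"salary jump above {max_jump}x: {prev['company']} to {curr['company']}")
    return flags

print(flag_anomalies(stints))
# ['overlap: Acme Corp and Globex', 'salary jump above 2.0x: Acme Corp to Globex']
```

In practice, flags like these would only prompt deeper checks, such as document verification or reference calls, not an automatic rejection.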

According to Sarbojit Mallick, co-founder of Instahyre, AI-generated resumes and automated applications are showing up frequently in hiring pipelines.

Sarbojit Mallick, co-founder of talent acquisition platform Instahyre.

The talent acquisition platform shared an example of a software client where a data analyst with a metrics-heavy resume struggled in technical rounds and could not explain the core logic of a project. A background check later revealed that the candidate had only played a minor supporting role, a pattern the company says is becoming increasingly common with AI-polished resumes.

Candidates now submit resumes that closely match job descriptions even when their actual experience is limited or sometimes non-existent.

“Recent surveys of hiring managers show that 59% suspect candidates use AI to misrepresent skills and only 19% feel very confident they can catch every fraudulent application without technological support,” says Mallick, pointing out how detection itself is becoming more technology driven.

Platforms such as Instahyre use structured profile data and check for consistency across skills, experience timelines and engagement behaviour, rather than relying solely on keyword matching.
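
Instahyre has not published its scoring logic. The sketch below, with made-up profile fields, only illustrates the general principle of consistency checking: skills claimed at the top of a profile are compared against the roles where they could plausibly have been used, instead of being matched against a job description’s keywords.

```python
# Toy consistency check with invented field names; not Instahyre's actual method.
profile = {
    "claimed_skills": {"python", "spark", "kubernetes", "airflow"},
    "roles": [
        {"title": "Data Analyst", "skills_used": {"sql", "excel", "python"}},
        {"title": "BI Intern", "skills_used": {"excel", "tableau"}},
    ],
}

def unsupported_skills(profile):
    """Skills claimed on the profile that are never tied to any role held."""
    used = set().union(*(r["skills_used"] for r in profile["roles"]))
    return profile["claimed_skills"] - used

gaps = unsupported_skills(profile)
if len(gaps) / len(profile["claimed_skills"]) > 0.5:  # illustrative threshold
    print("flag for manual review:", sorted(gaps))
# flag for manual review: ['airflow', 'kubernetes', 'spark']
```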

Verification, which used to be a formality after a job offer, is now becoming a risk filter to shortlist candidates before the interview stage.

For companies hiring at scale or under time constraints, even a few misrepresented hires or candidates with inflated skills can lead to significant costs and effort. This makes it crucial for them to invest earlier in the verification process.

Aditi Jha, head of legal and public policy for LinkedIn India, notes that new technology has made it easier and cheaper to fake credibility online. This manifests as AI-generated profiles, messages and other forms of inauthentic behaviour.

Aditi Jha, head of legal and public policy for LinkedIn India.

Jha says LinkedIn has moved toward proactive detection rather than waiting for user complaints. Globally, over 98% of spam or scam content removed in the first half of 2025 was blocked automatically before members encountered it, with the rest reviewed by humans.

“As professionals increasingly rely on digital platforms to find work and build careers, creating a safe, authentic and credible environment isn’t optional—it’s essential,” she said, adding that India is now among LinkedIn’s leading markets for verified members.

Data from Tracxn, a market intelligence platform, shows that this shift is creating an entire trust-infrastructure industry. Since 2020, more than 200 Indian startups have been founded in areas such as identity verification, KYC (know your customer), fraud detection, hiring checks and deepfake detection. Identity verification alone accounted for 111 of these new companies, and fraud detection for another 66. Over the same period, the sector has drawn more than $350 million in funding, with fraud prevention focused on BFSI (banking, financial services and insurance) making up the largest share.

Tampered refunds

Consumer-facing industries such as food delivery are seeing similar incidents.

Last year, a popular social media post described how a customer reportedly got a refund from quick commerce platform Instamart using an AI-modified image.

This incident resonated widely because online delivery platforms have long relied on customer images for verification, assuming those images are real. Image editing tools, however, are now easily accessible and can produce convincing fake evidence for claims.

Swiggy, the parent company of Instamart, did not respond to Mint’s inquiries.

Food delivery platform Zomato acknowledged the issue, saying that while most complaints are made in good faith, the company has started using early-detection models to spot misuse attempts.

“We employ advanced systems to detect rare misuse attempts, including AI-generated images. We’ve already rolled out early-detection models and are scaling these safeguards responsibly,” Aditya Mangla, CEO of Zomato, said in an email.

The food delivery example, though likely a small part of a bigger issue, highlights how AI-generated manipulation can infiltrate everyday transactions.

Karthic Somalinga, vice president of engineering for fulfilment at quick commerce platform Zepto, said the company now combines visual verification with behavioural signals and in-house detection systems to limit misuse.
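
Neither Zomato nor Zepto describes its detection models. As a hedged sketch of how a weak visual signal might be combined with a behavioural one, the snippet below uses the Pillow library to read an image’s EXIF “Software” tag and pairs it with an invented refund-frequency threshold; both signals are easy to evade and would only ever be part of a larger system.

```python
from PIL import Image  # Pillow, used here only to read EXIF metadata

def image_looks_edited(path):
    """Weak heuristic: flag images whose EXIF 'Software' field names an editing tool.

    Missing EXIF proves nothing (most apps strip it) and the field can be forged,
    so this is one signal among many, never a verdict on its own.
    """
    exif = Image.open(path).getexif()
    software = str(exif.get(305, "")).lower()  # 305 is the standard EXIF tag id for 'Software'
    return any(editor in software for editor in ("photoshop", "gimp", "editor"))

def refund_decision(image_path, refunds_last_30_days):
    """Combine the visual signal with a behavioural signal into a rough routing decision."""
    score = 0
    if image_looks_edited(image_path):
        score += 2
    if refunds_last_30_days >= 3:  # illustrative threshold, not any platform's real policy
        score += 1
    return "manual_review" if score >= 2 else "auto_resolve"
```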

As creating fake visual evidence becomes easier, food and grocery platforms that rely on quick resolutions as a competitive advantage must invest in fraud detection and verification systems that were not part of their initial operating model.


Banking, finance chokepoint

Beyond consumer platforms, the stakes of manipulated images used as evidence rise sharply once they enter the country’s financial system, which already carries a heavy fraud burden.

According to data from the Reserve Bank of India, Indian banks reported fraud losses of ₹34,771 crore in fiscal year 2025 (FY25) even though the number of reported cases declined, meaning the financial damage from each fraud has risen.

For banks and insurers, AI-generated deception presents the highest financial risk.

Hariharan points out that the BFSI sector is a key target for most scams as every successful fraud eventually needs a financial endpoint.

Hariharan says the highest risk is in digital onboarding, when a bank account is opened, a wallet activated or a loan issued remotely.


“If someone uses a synthetic identity or manipulated credentials to get through onboarding, that’s where the financial system gets exposed,” he said.

His company, IDfy, serves industries spanning BFSI, consumer and commerce, with clients such as HDFC Bank, Axis Bank, PhonePe and Tata AIG Insurance. Hariharan explains that IDfy’s platform is integrated during onboarding and often serves as the first layer of verification for individuals before any transaction is approved.

“We give them (financial institutions) a report card of an individual. What happens next—whether they reject, ask for more checks or approve—is up to the bank. We don’t make that decision,” he says.

Insurance claims, where reimbursement hinges on documents, involve a complex paperwork trail. As more claims are submitted digitally, manipulated documents, such as altered records or misleading supporting papers, have become harder to detect in fast-moving systems.

“The real issue in insurance lies in these high-volume settings where verification occurs quickly and at scale,” says Hariharan. “Institutions now need to view verification not as a one-time compliance step, but as an ongoing cost of growth.”
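
At this volume the first pass is automated. The sketch below, assuming the relevant fields have already been extracted (for instance via OCR) from two documents in the same claim and using entirely invented names and values, shows the kind of cross-document consistency test an insurer might run before a human ever sees the file:

```python
from datetime import date

# Invented example data: fields extracted from two documents of one claim.
discharge_summary = {"patient": "A. Sharma", "age": 34,
                     "admitted": date(2024, 2, 3), "discharged": date(2024, 2, 7)}
claim_form = {"patient": "Arun Sharma", "age": 37,
              "admitted": date(2024, 2, 3), "discharged": date(2024, 2, 7)}

def cross_check(summary, form):
    """Flag fields that disagree across documents or are internally impossible."""
    flags = []
    if summary["age"] != form["age"]:
        flags.append("age mismatch between discharge summary and claim form")
    if summary["discharged"] < summary["admitted"]:
        flags.append("discharge date precedes admission date")
    if (summary["admitted"], summary["discharged"]) != (form["admitted"], form["discharged"]):
        flags.append("hospitalization dates differ across documents")
    return flags

print(cross_check(discharge_summary, claim_form))
# ['age mismatch between discharge summary and claim form']
```

A mismatch alone does not establish fraud; the same flags can also catch honest claims with inconsistent paperwork.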

Ankush Tiwari, founder of pi-labs, which develops deepfake detection and forensic AI tools, describes GenAI as a tool that exposes weak points in identity-driven systems.

“Whenever a new disruptive technology comes into software, it exposes new threat vectors,” he said, noting that in the early days of generative AI, most attention was on productivity gains, not on potential misuse. pi-labs focuses on deepfake detection by analysing images, audio and video for synthetic manipulation.

“If AI needs an antivirus, this (AI-modified documents) should be one of the things it supports,” Tiwari said. “It’s a continuous challenge. New-generation techniques are coming every few weeks.”

Tiwari pointed out that most financial workflows, from account opening to claims processing, already treat face, voice or video as proof of presence.

Ankush Tiwari, founder of pi-labs.

“In the financial industry, face is an identity. In many countries, audio authentication is also an identity,” he explained.

Live video deepfakes and voice cloning attacks can now bypass these checks and allow fraud without the need to create fake documents.
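
What a defence looks like varies by vendor, and pi-labs does not publish its pipeline. Purely as a sketch of where a synthetic-media check could sit in a remote onboarding flow, the snippet below uses two hypothetical stand-in functions in place of real detection and face-match models:

```python
# Both helpers are hypothetical stand-ins, not real APIs from any vendor.

def detect_synthetic_face(video_frames) -> float:
    """Stand-in: probability that the selfie video is AI-generated."""
    return 0.1  # a real system would run a deepfake-detection model here

def match_face_to_document(video_frames, id_photo) -> bool:
    """Stand-in: whether the live face matches the photo on the ID document."""
    return True  # a real system would run a face-match model here

def onboarding_decision(video_frames, id_photo, threshold=0.7):
    # Run the synthetic-media check before the face match, so a convincing
    # deepfake cannot pass simply by resembling a (possibly also fake) ID photo.
    if detect_synthetic_face(video_frames) > threshold:
        return "reject: suspected synthetic media"
    if not match_face_to_document(video_frames, id_photo):
        return "reject: face mismatch"
    return "proceed to KYC review"

print(onboarding_decision(video_frames=[], id_photo=None))  # proceed to KYC review
```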

“In our country, you can open a bank account without going to the bank. And there are tons of NBFCs (non-banking financial companies) that will give you a loan between ₹5,000 and ₹2 lakh within 15 minutes,” Tiwari says.

In a recent incident reported from Roorkee, cybercriminals used a deepfake audio call mimicking a son’s voice to convince his father to transfer money, defrauding him of ₹6 lakh. The case shows how synthetic voices can exploit trust in financial contexts.

Tiwari admits the industry still lacks clear answers about responsibility when synthetic identity is successfully used.

“If an account is opened using a deepfake or AI-generated ID, who does the liability lie with?” he asked, adding that this uncertainty is one reason many organizations are hesitating rather than taking action.

pi-labs lists Maharashtra Cyber and the Indian Computer Emergency Response Team (CERT-In) among the agencies it collaborates with. It is also working with regulators and major banks to incorporate deepfake detection tools into identity and face authentication systems in the BFSI sector.

For insurance companies, claims often depend heavily on visual and documentary evidence such as hospital records, discharge summaries and identity proofs. Deepfakes or AI-modified images further complicate the identification of authentic proof.

Blind spots in verification

As companies improve fraud checks, disputes over verification are also rising. In one case, Amravati-based Nikhil Gupta had his ₹52,000 reimbursement claim with Star Health Insurance rejected.

The insurer claimed it found multiple issues in the discharge summary, including formatting errors, overwritten dates and mismatched details.

Gupta, who had been insured with Star Health for seven years before filing his first claim in 2024, disputes the insurer’s conclusions. Star Health has said the discharge summary appeared fabricated, citing a superimposed hospital letterhead, inconsistencies in the template, the absence of the insured person’s name, mismatched age details across documents and overwritten dates. Gupta maintains that the documents carried the hospital’s seal and authorized signatures, and says earlier communications from the insurer referred only to “document discrepancies” rather than suspected fraud.


In a written response to Mint, Star Health said it stands by its decision to reject the claim and has referred the matter to its internal ombudsman for an independent review.

This incident shows how tighter fraud controls can sometimes lead to disputed decisions for legitimate policyholders, especially as scrutiny of documents becomes stricter.

The impact extends far beyond simply catching the bad actors. Systems and infrastructure once designed for smooth and fast resolution for consumers now have to go through additional layers of monitoring. While clever fraud may still find gaps, honest users might have to bear the brunt through delayed resolutions, heavy documentation demands and unclear decision making.
