August 12, 2025 — When The Wall Street Journal reported that Google and other tech giants were bringing back in-person interviews, the move was framed as a decisive strike against AI-assisted cheating. Candidates, the reasoning goes, can’t discreetly consult ChatGPT or pipe in code suggestions from a hidden screen if they’re sitting across from a hiring manager.
But will it work?
Certainly, the problem is real. Recruiters are catching more applicants using AI to generate real-time answers during remote interviews, especially in coding tests. Some have even spotted deepfake “candidates” impersonating real people to extract sensitive company information, according to Business Insider.
Google CEO Sundar Pichai insists that at least one in-person round is now essential to confirm “the fundamentals,” while Mike Kyle of Coda Search/Staffing told the Journal that the share of clients demanding on-site interviews has jumped from 5% in 2024 to 30% this year.
Still, critics note that fraud doesn’t vanish in a physical setting—it evolves. A confident candidate can memorize AI-generated solutions ahead of time, or pass an initial in-person screen and outsource the actual work afterward. And as hybrid work becomes the norm, most new hires will eventually settle into remote routines where AI assistance is easy to conceal.
Moreover, in-person requirements can shut out strong candidates who live outside major tech hubs. The pandemic-era expansion of recruitment pools—once hailed for boosting diversity—could quietly reverse if companies lean too heavily on geography-based hiring.
In the end, Google’s policy may make it harder to fake one’s way through the first round. But unless it’s paired with better skills verification, long-term work monitoring, and a culture of trust, AI fraud will simply adapt. As one industry analyst told Axios, “You can close the front door, but the side windows are still wide open.”