Startups come and mostly go, but Builder.ai is an unusual case.
Having raised $445 million from investors including Microsoft Corp. and SoftBank Group Corp., the British firm entered insolvency proceedings this week after a major creditor seized $37 million from its accounts, leaving $5 million in the company's coffers. Builder.ai's former staff told Bloomberg News that it had been inflating its sales figures to investors, forcing the company to lower its sales estimates in March. But that wasn't the only thing it inflated: When I investigated the startup back in 2019, workers told me that the app-building work it credited to artificial intelligence was mostly being done by software developers in Ukraine and India.
The company denied this at the time. (It also later changed its name to Builder.ai from Engineer.ai.) But a spate of other companies has been rapped over the past year for secretly using humans in place of "AI," thanks to crackdowns by the US Securities and Exchange Commission, Department of Justice and Federal Trade Commission. A less egregious approach is to exaggerate how cutting-edge the technology truly is. In both cases, customers and investors take the bait because they don't do proper due diligence, and because the definition of "artificial intelligence" itself is so grey, its underpinnings so difficult for non-technical people to parse, that its sellers can get away with slapping the label on more basic software. Or, at least, they could.
We may finally be turning the page on the ignoble "AI washing" chapter of tech history. For one thing, generative AI has delivered concrete breakthroughs that make the practice of using secret contractors unnecessary. A separate AI firm I investigated five years ago, for instance, hired people to trawl social media because its algorithms couldn't do the job properly. It ordered them to sign nondisclosure agreements and post vague job titles on their LinkedIn profiles. Yet today, large language models can do those jobs, with just a few humans providing oversight rather than doing most of the heavy lifting.
Aggressive pursuit by regulators has also been acting as a deterrent, and the offensive is set to continue under the Trump administration. In January 2025, the SEC settled charges with San Carlos, California-based Presto Automation Inc. for overstating the capabilities of its "AI-powered" voice recognition technology for US drive-thru restaurants like Carl's Jr. and Hardee's. The company claimed that its product eliminated the need for human order-taking, when in reality "the vast majority" of drive-thru orders placed through its system required intervention from human contractors working abroad, according to the regulator. The company told Bloomberg News that it used offsite workers to help train its system.
Many of Silicon Valley's most successful entrepreneurs employ some element of the fake-it-till-you-make-it mantra, but taking it too far with AI increases the risk of getting called out. DoNotPay, a Midvale, Utah-based startup that advertised itself as "the world's first robot lawyer," was fined $193,000 by the FTC in September 2024 for deceptive advertising, since its AI couldn't provide legal services without human intervention. A company spokesman told tech news site Ars Technica that the complaint "relates to the usage of a few hundred customers some years ago (out of millions of people), with services that have long been discontinued." Expect the spotlight to keep shining on such players, with the SEC recently saying it will ramp up scrutiny of how AI is marketed and used by financial firms.
Some healthy skepticism is needed for the biggest players too. Alphabet Inc.’s Google this week announced a slew of jaw-dropping products that, among other things, can create hyper-realistic videos of humans talking and singing. But Google has skirted the boundaries of fakery as well. In December 2023, the company released a video on YouTube of its flagship AI model, Gemini, which suggested that it could “talk” and answer questions about an image in real time. When I asked Google about the video then, the company admitted it had edited the demo, which had been based on still images.
The reckoning for AI washers is long overdue, and the regulatory clampdown could help keep investment flowing toward genuine innovation. But the onus shouldn’t be on regulators alone. Investors and business leaders also need to sharpen their scrutiny of AI sellers to avoid the snake oil, which is unlikely to go away completely. Of course, it can be hard to do proper due diligence in the middle of fierce market pressure. One February 2025 study from independent research firm Tech.co found that 58% of businesses using AI were doing so because of “pressure from competitors.” All the more reason for businesses to beware the exaggerators.