Welcome back to The Prompt,
Chip giant Nvidia said on Monday it will start manufacturing AI supercomputers (machines that can process copious amounts of data and run complex algorithms) entirely within the U.S. for the first time. The announcement comes after President Trump signaled that imported semiconductors would be targeted by tariffs this week and announced a national security trade investigation into chip imports from China. Nvidia said it has already started producing its Blackwell chips at TSMC's plant in Phoenix, Arizona, and plans to work with partners like Foxconn and Wistron to set up other factories in Houston and Dallas. It plans to build robots to operate the facilities, which will be designed using "digital twins," virtual simulations of real-world objects and environments, to get the plants up and running faster. But even setting chips aside, Trump's tariffs could make building AI data centers more expensive, as they rely on raw materials imported from other countries, Forbes reported.
And if you haven’t gotten a chance yet, check out our seventh annual AI 50 list here.
Now let’s get into the headlines.
ETHICS AND LAW
Community colleges across the country are facing an onslaught of enrollments from "bot" students, which sign up for classes by the hundreds to siphon off tens of millions of dollars in state and federal aid money, Voice of San Diego reported. These "bot" students use fake aliases and submit AI-generated homework in order to stay "enrolled" long enough to collect aid. In 2024, about 25% of community college applicants in California were bots.
PEAK PERFORMANCE
Google has trained an AI model that aims to decipher patterns and structures in dolphin sounds, with the goal of understanding their meaning and ascertaining whether dolphins have language. Named DolphinGemma, the 400-million-parameter model is trained on data from the Wild Dolphin Project, a nonprofit that studies and collects data on Atlantic spotted dolphins. The end goal of the project is to build technology that might facilitate two-way interactions between human researchers and dolphins in the ocean.
TALENT RETENTION
AI continues to be a white-hot focus for companies, as does the talent needed to build it. To that end, Google DeepMind makes its employees sign noncompete agreements that can prevent them from joining a rival for up to a year after they stop working at Google, according to Business Insider. Employees continue to be paid during this extended garden leave. Nando de Freitas, a former Google DeepMind director, shared his frustration with the contracts on X: "It's abuse of power, which does not justify any end."
HUMANS OF AI
May Habib, CEO and cofounder of $1.9 billion-valued enterprise AI startup Writer, says she isn't just selling her company's AI software, which lets some 300 companies like Intuit, Salesforce and Uber build AI apps for specific functions across marketing, HR and sales; she's "selling a different way of doing things." The company, featured on the Forbes AI 50 list, is expanding with a new platform for AI "agents," systems that can carry out specific work autonomously. From pitching clients like Visa on its nascent machine learning-based translation software back in 2016 to now training a family of cost-efficient AI models dubbed Palmyra (named after the ancient Syrian city) for the enterprise world, the company's strategy has remained the same: building what its customers want.
DEEP DIVE
In late March, OpenAI added new image generation capabilities to its star product ChatGPT. The update went viral, resulting in a deluge of Studio Ghibli-inspired AI-generated images posted across social media and drawing millions of users to the platform.
But new research from cybersecurity firm Cato Networks has found that ChatGPT can now be tricked into creating a slew of fake documents, including passports, Social Security cards and driver's licenses. It can also be used to spin up convincing counterfeit checks and receipts. OpenAI spokesperson Taya Christianson said "our goal is to give users as much creative freedom as possible," adding that images generated by ChatGPT include C2PA metadata identifying them as AI-generated, and that OpenAI takes action against people who violate the company's usage policies.
Etay Maor, chief security strategist at Cato Networks, who has been studying cyber gangs for the past 20 years, said these forged documents are typically sold on the dark web and have largely been difficult to obtain. But thanks to AI tools like ChatGPT, creating realistic fake documents has become orders of magnitude easier and faster. Documents like passports and driver's licenses are key to verifying a person's identity, and manipulated IDs open the floodgates for criminals to commit financial, insurance and medical fraud. The implications for misuse are wide-ranging, from gaining access to bank accounts to prescription abuse, Maor said. "Not just somebody who's a professional criminal, anybody can do this. And that's what's super troubling about this," he said. In a matter of seconds, he was also able to prompt ChatGPT to create a fake passport of a person who somewhat resembled me.
The use of AI by cybercriminals isn't new. ChatGPT and other AI tools have been used to create malware code, write phishing emails and supercharge cyberattacks. And it isn't just text-generating AI: tools that produce voice, images and video have added extra layers that help cybercriminals carry out complex fraud.
“All these different elements that build trust — style of a person, their visuals, their voice, their official credentials — all these building blocks for trust are disappearing,” Maor said.
WEEKLY DEMO
A startup called InTouch uses AI to call your parents or grandparents to check in on them and have a conversation if you don't have the time, 404 Media reported. The AI can be prompted to bring up and ask questions about certain topics. After the call is over, the person who set it up receives an AI-generated summary of the call and notes about the relative's mood. "The idea of having an AI call your lonely relative because you can't or don't feel like it is dystopian, insulting, and especially non-human, even more so than other AI-based creations," Joseph Cox writes.
MODEL BEHAVIOR
Education secretary Linda McMahon repeatedly confused AI (artificial intelligence) with A1 (the steak sauce brand) while giving a speech at the ASU+GSV Summit in San Diego. The sauce brand seized the moment, sharing an image on Instagram: "You heard her. Every school should have access to A1."