Latest AI News

OpenAI Falls Short of Revenue and User Targets as It Races Toward IPO, WSJ Reports
OpenAI has fallen short of its goals for new users and revenue in recent months, sparking concern among some company leaders over whether it can support its extensive data-center spending, the Wall Street Journal reported on Monday, citing people familiar with the matter.
View

Lovable’s Vibe Coding Platform Is Now Available as an Android and iOS App
Lovable, the popular artificial intelligence (AI)-powered full-stack coding platform, launched its mobile app on Monday. Available on both the Play Store and the App Store, the app brings vibe coding capabilities to smartphones and lets users generate websites and apps on the go. While the mobile app does not come with the full functionality of the website, users can still prompt their ideas, preview the results, and iterate on them. The startup has also added cross-platform support, allowing users to switch between the app and the website without a hitch.
View

Tech Layoffs Surge: March Worst Month in Two Years as Companies Slashed 38,000 Jobs, Layoff Tracker Claims
Tech companies across the globe are working to integrate AI-powered tools into their daily workflows, citing higher efficiency and lower costs. That is an attractive proposition for tech firms, which can widen margins, lift profits, and keep shareholders happy even on flat revenue quarter after quarter. However, this unprecedented AI adoption has led to massive layoffs across the industry. Another phenomenon fueling the dismissals is pandemic-era mass hiring, when headcounts at tech firms ballooned.
View

TSMC Sells Over 1 Mn Arm Shares, Exits Entire Stake
The filing shows a full exit from the Arm investment with a $174 million impact on retained earnings.
View

Lovable Launches Mobile App for iOS & Android, Brings Vibe-Coding to Smartphones
The new app allows users to build AI-powered applications using voice or text prompts while adhering to Apple’s evolving rules for code-generating platforms.
View

OpenAI Brings GPT-5.5 & Codex to AWS as Microsoft Exclusivity Ends
OpenAI models will be accessible via Amazon Bedrock, allowing customers to build applications using existing AWS services and governance systems.
View

Why Developers Are Abandoning GitHub Again
“Github has been down for most of the day. I'm so tired of this. Never been so ready to move on.”
View

GitHub Changes Copilot Pricing To Usage-Based Model Starting June
GitHub said the change is intended to align pricing with actual usage.
View

AWS Launches Desktop AI Assistant That Connects Apps and Data
Amazon Quick connects with tools such as Google Workspace, Microsoft 365, Salesforce, Zoom, Slack, and Jira.
View

Can Gujarat Lure GCC Elites Away from Bengaluru and Hyderabad?
Gujarat is now seeing growing interest from global firms willing to establish primary operations directly in the state.
View

At his OpenAI trial, Musk relitigates an old friendship
The most interesting part of Elon Musk’s testimony Tuesday in his lawsuit against OpenAI wasn’t the charity he claims was stolen from him (we all knew that was coming). It was an old friend. Musk testified that one of his core motivations for co-founding OpenAI was a falling out with Google’s Larry Page over AI safety — specifically, a conversation in which Musk raised the prospect of AI wiping out humanity and Page shrugged it off as “fine,” so long as AI itself survived. Page called Musk a “speciesist” for being “pro human.” Musk called the attitude “insane.” That’s mostly notable given how close the two once were. Fortune included them on its 2016 list of secretly best-friend business leaders; Musk was so comfortable with Page that he regularly crashed at his Palo Alto home. Page once told Charlie Rose that he’d rather give his money to Musk than to charity. The friendship didn’t survive OpenAI. When Musk recruited Google AI star Ilya Sutskever to help launch the company in 2015, Page felt personally betrayed and cut off contact. It’s a story Musk has told before — including to author Walter Isaacson for his bestselling biography of Musk — but Tuesday was the first time he said it under oath. Page hasn’t commented, and it’s worth remembering that everything Musk said was in service of a lawsuit. Still, as recently as 2023 he told tech podcaster Lex Fridman he wanted to patch things up: “We were friends for a very long time.”
View

Google expands Pentagon’s access to its AI after Anthropic’s refusal
Google has granted the U.S. Department of Defense access to its AI for classified networks, essentially allowing all lawful uses, according to multiple news reports. The deal follows Anthropic’s public stand against the Trump administration after the model maker refused to grant the DoD the same terms. The Pentagon wanted unrestricted use of AI, whereas Anthropic wanted guardrails to prevent its AI from being used for domestic mass surveillance and autonomous weapons. Because Anthropic refused those use cases, the DoD branded the model maker a “supply-chain risk” — a designation normally reserved for foreign adversaries. Anthropic and the DoD are now embroiled in a lawsuit, with a judge last month granting Anthropic an injunction against the designation while the case proceeds. Google is the third AI company to try to turn Anthropic’s loss into its own gain. OpenAI immediately signed a deal with the DoD, as did xAI. Google’s agreement includes some language saying that it doesn’t intend for its AI to be used for domestic mass surveillance or in autonomous weapons, The Wall Street Journal reports, which is similar to contract language with OpenAI. But it is unclear whether such provisions are legally binding or enforceable, per the WSJ. Google entered this deal even though 950 of its employees have signed an open letter asking it to follow Anthropic’s lead and not sell AI to the Defense Department without similar guardrails. Google did not respond to a request for comment.
View

Amazon launches an AI-powered audio Q&A experience on product pages
Amazon launched a new AI-powered feature on Tuesday that allows users to ask questions about products and receive conversational audio responses generated in real time. The responses are delivered by what the company calls “AI-powered shopping experts,” which present information in a natural, discussion-style format. The new “Join the chat” feature aims to save customers time by providing key product details without requiring them to scroll through lengthy descriptions or reviews.

The AI pulls together insights about product features, customer feedback, and other relevant information. For example, shoppers can ask questions like whether a coffee maker is suited for beginners or whether a sweater feels itchy based on customer reviews. Rather than giving generic answers, Amazon says the AI builds on previous responses to provide more relevant and helpful information, while also making sure not to repeat anything. This is meant to be a similar experience to speaking with a knowledgeable employee at a store. “Customers can ask questions and actually steer where the conversation goes. Every question they ask influences what comes next, making the experience a conversation customers can join and customize,” the company writes in a blog post.

“Join the chat” is part of a broader experience called “Hear the highlights,” which offers short audio summaries on millions of product pages within the Amazon Shopping app. That feature began testing last May and is currently available in the U.S. However, only select products have audio summaries. To use the feature, customers open a product page in the app and tap the “Hear the highlights” button, located below the product image. From there, they can listen to a brief overview or tap the “Join the chat” icon to ask specific questions via text or voice. The audio can continue playing even as users browse.

The new capability builds on Amazon’s growing lineup of AI-driven shopping tools. These include Rufus, its generative AI assistant that helps customers research products and compare options; Interests, which continuously tracks and surfaces new items aligned with a shopper’s preferences; and “Help me decide,” which suggests products based on a person’s searches, browsing, and shopping history.
View

Amazon is already offering new OpenAI products on AWS
Almost as soon as OpenAI announced that its major investor and cloud partner, Microsoft, no longer has exclusive rights to any of its products, Amazon started gloating. After the revised OpenAI/Microsoft agreement was announced on Monday, Amazon CEO Andy Jassy noted in a tweet that it was a “very interesting announcement.” That agreement cleared the way for AWS to offer OpenAI’s products, an issue that crystallized after OpenAI signed an up-to-$50-billion deal with Amazon. Amazon announced on Tuesday that AWS’s Bedrock service now has OpenAI’s latest models, its code-writing service Codex, and a new product for creating OpenAI-powered AI agents. Bedrock is Amazon’s AI app building and model-choosing service. Amazon is calling the new agent service Bedrock Managed Agents. It is specifically designed to use OpenAI’s reasoning models, offering features like agent steering and security. Amazon promises in its blog post that “this is the beginning of a deeper collaboration between AWS and OpenAI.” And it will certainly be interesting to watch. The Microsoft/OpenAI relationship has reportedly been deteriorating for some time, with each of them finding comfort in the arms of their partner’s biggest rival. OpenAI has turned to AWS and Oracle; Microsoft, to Anthropic. The Redmond-based software giant is also working on a new agent offering powered by Claude.
View

Red Hat’s OpenClaw maintainer just made enterprise Claw deployments a lot safer
On Tuesday, Red Hat principal software engineer Sally O’Malley released a new open source tool called Tank OS to make it easier to deploy and manage OpenClaw agents safely. “This was a fun project that I put together on the weekend that I knew would be a really good fit for AI and where we’re going,” she told TechCrunch, adding that she wanted to give it “to the masses.” Tank OS is geared toward power users looking to run OpenClaw on their own computers and toward IT pros managing fleets of corporate OpenClaw agents. It makes OpenClaw safer and easier to maintain en masse.

Countless people, companies, and startups are already inventing better ways to work with OpenClaw — the open source project that installs an AI agent on a local computer. There is also a growing number of startups building competing claw alternatives that they say are safer (like NanoClaw). What makes O’Malley’s project notable is that she is an OpenClaw maintainer. That means she’s among the select software engineers working with creator Peter Steinberger to decide which features and bugs get worked on. In her case, she focuses on making OpenClaw work better in enterprise use cases, and with Red Hat’s various flavors of the Linux operating system. (While Steinberger was hired by OpenAI, he still leads the independent open source OpenClaw project.)

O’Malley joined OpenClaw because she sees it working to “enable everyone to run AI in a safe way, that’s open,” she said. But she got to thinking about what will happen when OpenClaw invades an enterprise and decided to build a tool for that eventuality. She began with an open source container tool called Podman, created by a colleague at Red Hat. Containers are a way to run apps separately from the underlying computer, with everything the app needs to run bundled together. They can run a Linux app on a Windows or Mac machine, for instance. Podman is a particularly secure way to do this because it’s “rootless,” meaning it doesn’t give the containers any privileges from the underlying machine, Red Hat says.

Tank OS loads OpenClaw onto Red Hat’s Fedora Linux OS in a Podman container and makes that container a bootable image, meaning it will run and launch OpenClaw when you start the computer. Her tool includes everything needed to make OpenClaw useful without human oversight, like state (the part that allows it to remember); the ability to store API keys (the credentials for accessing subscriptions and services); and other features. Users can run multiple Tank OS instances on a machine to do different tasks, never sharing passwords or credentials between them, and no OpenClaw instance can gain access to anything else running on the computer.

While O’Malley knows that the OpenClaw project is working to make the agent safer, she says that “it’s an incredibly powerful application” that can also be “dangerous” if not configured properly. “It’s not a tool that you can use easily unless you do have some sort of technical experience,” she said. Stories abound, such as the Meta AI security researcher whose Claw started deleting all of her work email, or an agent that downloaded all of a user’s WhatsApp DMs in plain text. There’s also a growing crop of malware aimed at OpenClaw users. To be sure, Tank OS isn’t really for techno novices either: you have to be comfortable installing and maintaining software on your computer, she says. Tank OS is also not the only OpenClaw implementation working in containers. NanoClaw, for instance, is doing a similar thing with well-known container company Docker. But Tank OS is intended to be especially useful for IT pros (aka Red Hat’s main customers) who may one day manage fleets of OpenClaw agents on corporate computers. It allows them to update the agents the same way they already manage other containers.
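The bootable-container approach described above can be pictured with Fedora’s bootc tooling: build a container image from a bootable base, layer the agent in, and enable a service so the agent launches at startup. This is a rough illustration only; the install step and the systemd unit name are hypothetical, not Tank OS’s actual build.

```dockerfile
# Hypothetical Containerfile: a bootable Fedora image carrying an agent.
FROM quay.io/fedora/fedora-bootc:latest

# Layer the agent into the image (install step is illustrative).
RUN dnf -y install nodejs npm && npm install -g openclaw

# Ship a systemd unit (hypothetical name) and enable it so the agent
# starts when a machine boots from this image.
COPY openclaw.service /usr/lib/systemd/system/openclaw.service
RUN systemctl enable openclaw.service
```

Because the base is a bootc image, the result can be written to disk and booted directly rather than run as an ordinary container, which is what makes per-instance isolation and fleet-style updates (pull a new image, reboot) possible.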
“My role within OpenClaw is really my interest in it,” O’Malley said. “How it’s going to look scaled out when there are millions of these autonomous agents talking to one another.”
View

BCI startup Neurable looks to license its ‘mind-reading’ tech for consumer wearables
BCI (brain-computer interface) technology — in which neural signals are routed from a person’s head to a computer — was once the stuff of science fiction, but these days the technology represents a competitive corner of the tech industry. One of the companies racing to commercialize BCI is Neurable, which this week announced that it’s looking to license its “mind-reading” technology to consumer wearables. Neurable specializes in “non-invasive” BCI, which distinguishes itself from firms like Neuralink — the Elon Musk-founded startup known for inserting computer chips directly into people’s skulls — in that its product doesn’t require users to undergo brain surgery to enjoy its benefits. Neurable’s technology works through a combination of EEG sensors and signal processing that can scan a user’s brain activity, analyze it with AI, and provide information about a person’s cognitive performance. In December, Neurable raised $35 million in a Series A, which it plans to use to scale the commercialization of its technology. This week, the company announced that, as part of its expansion effort, it is looking to license its technology to a variety of consumer-facing companies. The idea is that mind-reading tech (which can provide detailed data about how a person’s brain works while they’re engaged in various activities) could be integrated into wearables across a number of industries — including health and athletic products, productivity tools, and gaming. “Through Neurable’s licensing platform, OEMs can directly integrate its AI-powered brain-sensing technology into existing hardware, such as headphones, hats, glasses, and headbands, while maintaining full control over product design, user experience, and distribution,” the company said in a press release on Tuesday. Neurable has already fostered partnerships with a number of companies to test out its effectiveness.
This includes HP Inc.’s HyperX, a gaming brand, with which it created a headset designed to help gamers “level up their game play by optimizing focus and performance.” It has also partnered with a company called iMotions, a software platform that specializes in human behavior research, to assist with the company’s research initiatives. In an interview, Neurable’s CEO Ramses Alcaide declined to say what new partnerships the company has in the works, but said that it was seeking to expand its purview across a host of domains. “In the past, we were very specific about our partnerships,” Alcaide said, noting that Neurable tended to home in on a particular company to prove that a unique commercial application was worthwhile. Now that the startup knows expectations can be met on a number of fronts, it is focused on scaling itself, he said. “What we’re doing now is we’re basically saying, like, ‘Hey, we’ve demonstrated that we’re getting great traction’,” Alcaide said. “Like, let’s make this as ubiquitous as heart rate sensors on your wrist, right?” Despite the “non-invasive” label, brain data is arguably a bit more intimate than the information culled from a heart rate sensor, so what kind of privacy protections does a company like Neurable provide? Alcaide said that the company ensures user data is “protected and anonymized.” The company’s privacy policy provides a variety of guidelines for when and how a user’s data might be accessed and used. “We make sure we follow HIPAA standards, like we’ve gone above and beyond where a lot of startups would be at our stage to make sure that we protect the data, we encrypt it, and that we anonymize it,” Alcaide said. Does Neurable leverage a user’s neural data to train its AI software? “We can with user consent, right?” said Alcaide. “But we do it in a very specific way.” That specific way involves asking the user whether their data can be used for the purposes of particular experiments, Alcaide said.
“We are not collecting the data, just training on it willy-nilly,” he said. In other words, this kind of data usage is quite targeted. Alcaide said that his industry is at an “inflection point” — one wherein there finally exists “a real business model in neuro-technology that is scalable.” What comes after that inflection point is the big question.
View

YouTube is testing an AI-powered search feature that shows guided answers
Users often search for recipes and travel plans on YouTube to find videos related to their queries. Now, the video platform will cater to those users with an AI-powered interactive search feature that presents step-by-step results in a mix of text and video. Through this new “Ask YouTube” feature, users can ask questions like “plan a 3-day road trip from San Francisco to Santa Barbara” and get step-by-step results combining text, short videos, and longer videos instead of just video results. The company says it will show videos and relevant video segments with titles and channel details to help users discover new creators. What’s more, users can ask follow-up questions like “Where can I get good coffee?” and get suggestions in a similar style. The feature is available to Premium subscribers in the U.S. who are aged 18 or older. (Interested users will need to opt into this experiment through this URL.) Google noted that it is working on making this feature available to non-Premium users as well. Google has been pushing its AI Mode-style search on multiple surfaces beyond YouTube. The company introduced AI Mode last year, letting users ask multi-part questions and follow-ups. This year, it introduced side-by-side web browsing and product price exploration features for AI Mode. The company also introduced Gemini’s Canvas feature to maintain projects within AI Mode last month. With this new feature test on YouTube, Google could later explore surfacing different kinds of videos along with sponsored placements.
View

Lovable launches its vibe coding app on iOS and Android
Apple’s recent crackdown on vibe-coding apps hasn’t held up Lovable’s launch of its no-code AI app builder, which is now available as a mobile app on Apple and Google’s app stores. The vibe coding startup’s new mobile app is being pitched to would-be app builders as a way to code on the go via voice or text AI prompts that let you capture your ideas as they pop into your head. That means you can kick off Lovable to work on your random app idea from anywhere, letting its agent run autonomously after receiving your input. The new app will also allow you to switch back and forth between your computer and phone to pick up where you left off on a given project and receive notifications when a build is ready for review. The app’s arrival comes shortly after Apple addressed what vibe coding apps can and can’t do on its App Store. The tech giant recently blocked updates to popular vibe coding tools, including Replit and Vibecode, for violations of its developer guidelines. Simply put, Apple wasn’t banning vibe-coding apps themselves, but it won’t allow apps that download new code or change their functionality, as that presents a security risk to end users. (It also means that Apple’s App Review team can’t properly vet the app during the approval process.) Apple also temporarily removed the vibe-coding app Anything from the App Store for similar reasons, but the app returned after making changes earlier this month. To comply with Apple’s rules, the vibe coding apps are no longer able to run their generated apps inside the host app. Instead, those app previews were moved to web browsers. Lovable has also seemingly complied with these rules, as its new app touts the ability to turn ideas into “working websites or web apps.”
View

With $680 Mn in Q1, Indian AI Startups Have Found Their Mojo. But for How Long?
Investors are increasingly backing AI startups that can demonstrate monetisation, customer stickiness, and scalable enterprise use cases.
View

Taylor Swift Files to Trademark Voice, Image to Protect Her Likeness From AI Deepfakes: Report
Musician Taylor Swift has reportedly filed to trademark her voice and image. As per reports, the move comes amid a rise in artificial intelligence (AI)-generated deepfakes that have particularly affected celebrities and public figures. The pop superstar is said to have filed applications to trademark two samples of her audio and an image of her performing on stage during her Eras Tour. The applications are currently under review, and the US Patent and Trademark Office will have to decide whether the audio samples qualify for trademark protection.
View

Otter’s new feature lets users search across their enterprise tools
AI meeting notetaker apps have realized that transcribing meetings and providing summaries alone is not enough to justify their business models and valuations. They now want to act as a full workspace where users bring in data from different sources, search across all of it, and make decisions about their business. Following notetakers like Read AI, Fireflies.ai, and Fathom, Otter is now launching enterprise search by acting as a Model Context Protocol (MCP) client. That means it can connect to and pull data from outside apps and services using a common standard that AI tools are rapidly adopting. Otter has been around for nearly a decade now, but it has been making moves toward becoming an enterprise productivity tool in the last few months. Last October, the company launched a way for organizations to build custom MCPs to access Otter data outside the app. The company’s latest move is more about bringing outside data into the app. With this launch, users can connect their Gmail, Google Drive, Notion, Jira, and Salesforce accounts and query that data along with existing meeting data. The company said that it will soon allow connections with Microsoft Outlook, Teams, SharePoint, and Slack. Users can not only search for data across these tools but can also push meeting summaries to Notion or draft a Gmail message. The company said that it has also redesigned its AI assistant to be consistently present across the whole interface, so users can ask questions anytime. The assistant can understand the context of the screen, such as a particular meeting or a channel, and answer questions accordingly. Meanwhile, most notetakers are following Granola’s lead and allowing for botless meeting capture — recording meetings using a device’s system audio rather than having a bot join the call. Otter said that it brought this feature to the Mac app late last year, and is now launching a Windows app with a similar feature.
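The MCP standard Otter is adopting here is built on JSON-RPC 2.0: a client first asks a connected server which tools it exposes, then invokes one by name. A minimal sketch of the message shapes involved, assuming a hypothetical `gmail_search` tool (this is not Otter’s actual connector or the full protocol handshake):

```python
import json

def mcp_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request of the kind MCP clients send."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return msg

# Step 1: discover the server's tools.
list_tools = mcp_request(1, "tools/list")

# Step 2: invoke one, e.g. a hypothetical Gmail search tool.
call_tool = mcp_request(2, "tools/call", {
    "name": "gmail_search",  # hypothetical tool name
    "arguments": {"query": "meeting notes from last week"},
})

print(json.dumps(call_tool, indent=2))
```

Because every connector speaks this same request shape, a client like Otter can treat Gmail, Notion, or Jira uniformly, which is what makes rapid integration across many services feasible.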
There has been a debate around meeting note-taking with bots (where a bot joins the meeting) or without bots. Otter CEO Sam Liang said that the company’s enterprise customers prefer when a meeting notetaker joins the call. “When we talk to enterprise customers, most of them actually prefer the note taker that joins the Zoom meeting because it provides the transparency. They also prefer the meeting notes to be shared with all the meeting attendees, so that the note is not limited to one person,” he told TechCrunch over a call. Otter said that it has a deduplication feature that prevents a swarm of bots from joining a meeting simultaneously, to avoid situations where there are more bots than humans on a call. Last year, the company said it had 25 million users and $100 million in annual recurring revenue. While the company didn’t provide a new set of financials, it said that the platform now has 35 million users.
View

Vertiv Acquires Liquid-Cooling Specialist to Bolster AI Infrastructure Capabilities
As AI workloads push rack densities beyond the limits of air cooling, operators need integrated thermal solutions that work from the chip to the cooling tower.
View

Is NVIDIA’s Golden Age—and Monopoly—Coming to an End?
The AI chip market is growing fast, but NVIDIA’s dominance is under pressure as big cloud companies build their own chips.
View

The $20 Billion Bet: Why OpenAI-Cerebras Deal is Far More Strategic Than You Think
In the AI infrastructure arms race, speed, capital, and control of the supply chain have become critical. OpenAI and Cerebras are now bound together by all three.
View

Inside GenAI Meetup Hosted by RPTech, an NVIDIA Partner, in Association With AIM in Hyderabad
The session explored how developers can build, fine-tune, and deploy AI models locally without relying entirely on cloud APIs.
View
