Happy Tuesday! Yesterday was a big day—it was Martin Luther King Jr. Day and President Trump was inaugurated for the second time. It was also a great day for Ohio: native son JD Vance became Vice President, and The Ohio State University won the National Championship over Notre Dame.
But AI news hasn't stopped—Trump's already gotten to work, and Deepseek announced another new open-source model, its second in the last 30 days.
Before we get started, some housekeeping news: in case you missed it, this is the first issue of Machine Earnings, a new newsletter & community dedicated to AI's impact on the business world. Here's my piece introducing Machine Earnings, and if you're interested in our community, you can apply here.
Last week, OpenAI launched "scheduled tasks," a feature that, as the name suggests, allows users to set tasks to run at specific times within a ChatGPT chat.
I've been playing around with them and they work great, just as advertised. You can use natural language to set up reminders in a chat, they pop up when they're supposed to, and you get a corresponding email notification too.
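Under the hood, a feature like this boils down to a pretty simple structure. Here's a minimal sketch of how I think about it; the field names and the loop are my own guesses, not OpenAI's actual implementation.

```python
# A rough sketch of what a "scheduled task" reduces to conceptually.
# Field names and structure are my guesses, not OpenAI's implementation.
from dataclasses import dataclass
from datetime import datetime, timedelta
import time

@dataclass
class ScheduledTask:
    prompt: str                            # what the model should do when the task fires
    next_run: datetime                     # when it should fire next
    repeat_every: timedelta | None = None  # None means a one-off reminder

def run_scheduler(tasks: list[ScheduledTask]) -> None:
    """Fire due tasks, reschedule recurring ones, drop one-offs."""
    while tasks:
        now = datetime.now()
        for task in [t for t in tasks if t.next_run <= now]:
            # Stand-in for the chat message plus the email notification.
            print(f"[{now:%H:%M}] running: {task.prompt}")
            if task.repeat_every:
                task.next_run = now + task.repeat_every
            else:
                tasks.remove(task)
        time.sleep(30)

run_scheduler([ScheduledTask("Summarize today's AI headlines",
                             next_run=datetime.now() + timedelta(minutes=1),
                             repeat_every=timedelta(days=1))])
```

The scheduler itself is the boring part. What matters is what the model does when a task fires, which is where this starts to look less like Siri and more like an agent.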
At first glance, it might not sound groundbreaking—after all, even Siri can set reminders. But if you look closer, it's clear that OpenAI is laying the groundwork for a more powerful concept: AI Agents that can handle tasks intelligently and efficiently.
AI Agents have enormous potential, but keeping them narrowly focused is key. In my experience, the most effective Agents are task-based: simple, dedicated to a single function, and designed to produce reliable results. While multi-agent frameworks—where multiple agents collaborate—get a lot of buzz, their results are often inconsistent and hard to replicate, which makes them tricky for businesses to implement at scale. For instance, a company might set up a multi-agent system to troubleshoot customer problems, only to find that different agents offer different solutions to similar issues. That kind of inconsistency leads to confusion for both customers and the human agents who have to clean up after the AI. By contrast, a task-based agent can stick to what it does best, delivering reliable results with much less friction.
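To make that distinction concrete, here's a hypothetical sketch of a task-based setup for the support example. The categories and handlers are made up for illustration; the point is that the same kind of issue always hits the same narrowly scoped agent.

```python
# Hypothetical sketch: one narrowly scoped agent per task, instead of a pool
# of collaborating agents that may each answer the same issue differently.

def reset_password(ticket: dict) -> str:
    # Single responsibility: always produces the same class of answer.
    return f"Sent a password reset link to {ticket['email']}."

def check_refund_status(ticket: dict) -> str:
    return f"Refund for order {ticket['order_id']} is still processing."

# Deterministic routing: the same category always hits the same agent, so
# customers and the humans cleaning up after the AI see consistent answers.
TASK_AGENTS = {
    "password_reset": reset_password,
    "refund_status": check_refund_status,
}

def handle(ticket: dict) -> str:
    agent = TASK_AGENTS.get(ticket["category"])
    if agent is None:
        return "Escalating to a human agent."  # don't improvise outside the task list
    return agent(ticket)

print(handle({"category": "password_reset", "email": "jane@example.com"}))
```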
Scheduled tasks introduce users to the idea of setting up and managing tasks directly within ChatGPT. As more users get comfortable using this feature, OpenAI gathers valuable data about how people structure their workflows. That insight helps refine AI's ability to assist with tasks in a practical, repeatable way.
If you pair this approach with AI Agents, the potential expands dramatically. Imagine having an AI Agent that not only reminds you about an upcoming deadline, but also preps the materials you need in advance—compiling research, drafting an outline, or highlighting key points—so you can hit the ground running. Tasks start out as simple reminders, but over time, they evolve into a cornerstone of how AI Agents operate within ChatGPT. In that sense, tasks might just be the atomic unit of AI Agents, much like tweets are for X. They're the building blocks that power consistent user engagement, enabling the AI to deliver value in a clear, measurable way.
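If tasks really are the atomic unit, you can picture each one as a small record pairing a trigger with the prep work an agent should finish before it fires. This is just my mental model, not anything OpenAI has published:

```python
# My mental model of a task as an "atomic unit": a trigger plus the prep
# steps an agent runs before the deadline. Purely illustrative.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class AgentTask:
    trigger: datetime                 # when the reminder fires
    reminder: str                     # what the user asked to be told
    prep_steps: list[str] = field(default_factory=list)  # work to finish ahead of time

memo_task = AgentTask(
    trigger=datetime(2025, 1, 31, 9, 0),
    reminder="Board memo due Friday",
    prep_steps=[
        "compile research on Q4 results",
        "draft an outline of the memo",
        "highlight key points from last quarter's memo",
    ],
)

# Today a task just fires the reminder; the agent version would also work
# through prep_steps beforehand so the output is waiting when it lands.
for step in memo_task.prep_steps:
    print(f"prep: {step}")
print(f"remind at {memo_task.trigger:%Y-%m-%d %H:%M}: {memo_task.reminder}")
```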
Over the past 18 months, I've spent more time on public market investing. Compared to angel investing, it's often easier: there's real data, proven traction, and less guesswork about a business's viability.
One stock that flew under the radar last year was Soundhound AI. In 2024, its share price soared over 830%, peaking at a gain of more than 1,000%. Soundhound AI, known for its conversational voice AI technology, hit several major milestones that helped drive this incredible performance.
In February 2024, Nvidia invested in Soundhound, sparking a fourfold jump in its stock price. But the main driver was revenue growth. In Q3 2024, Soundhound posted $25.1 million in revenue—an 89% year-over-year increase—and projected $82–85 million in full-year revenue, marking an 82% annual growth rate.
Diversification was key to this success. Soundhound had relied heavily on automotive partnerships, with 90% of 2023 revenue coming from that sector. By 2024, the company expanded into retail, financial services, healthcare, and hospitality. These new verticals now contribute between 5% and 25% of revenue each, reducing dependence on any single industry. Customer concentration also dropped: its largest customer now represents just 12% of revenue, down from 72% in 2023.
The company's August acquisition of Amelia, a leading enterprise AI provider, added more fuel. For $80 million in cash, equity, and debt assumption, Soundhound gained an asset expected to contribute $45 million in recurring revenue in 2025. That acquisition positioned Soundhound to reach a forecasted $150 million in revenue next year.
Despite the strong numbers, the stock's fundamentals warrant caution. With a price-to-sales ratio of 109—much higher than companies like Nvidia—Soundhound's valuation doesn't align with its current performance. As shown in the 6-month chart below, the stock's price overheated and has since started retreating. If it dips below $10/share, I might consider buying again. (Reminder: this isn't investment advice. Do your own research!)
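For a rough sense of what those numbers imply, here's the back-of-envelope math using only the figures cited above. The implied values are my own arithmetic (treating the full-year guidance as the sales base), not company disclosures.

```python
# Back-of-envelope checks using only the figures cited above; implied values
# are my own arithmetic, not company disclosures.
q3_2024_revenue = 25.1          # $M, reported
q3_yoy_growth = 0.89            # 89% year over year
implied_q3_2023 = q3_2024_revenue / (1 + q3_yoy_growth)   # ~ $13.3M

full_year_guidance = 83.5       # $M, midpoint of the $82-85M range
fy_growth = 0.82                # 82% annual growth
implied_fy_2023 = full_year_guidance / (1 + fy_growth)    # ~ $45.9M

price_to_sales = 109
implied_market_cap = price_to_sales * full_year_guidance  # ~ $9.1B on ~$84M of sales

print(f"Implied Q3 2023 revenue: ${implied_q3_2023:.1f}M")
print(f"Implied FY 2023 revenue: ${implied_fy_2023:.1f}M")
print(f"Implied market cap at 109x sales: ${implied_market_cap / 1000:.1f}B")
```

A roughly $9 billion valuation on well under $100 million of sales is the crux of the caution above, regardless of how fast revenue is growing.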
On Inauguration Day, Trump swiftly signed a series of Executive Orders, including one that effectively repealed Biden's 2023 AI Executive Order.
The Biden Administration had pursued ambitious AI regulations that many in the industry considered premature and heavy-handed. The 111-page order mandated stringent safety and transparency protocols, requiring AI companies to share detailed system information with the government and implement measures against algorithmic discrimination. One of the most controversial elements was the invocation of the Defense Production Act (DPA). Originally passed during the Korean War, the DPA's inclusion sparked criticism, with detractors arguing that it represented executive overreach and set a concerning precedent for future AI oversight.
With Biden's framework now reversed, Trump and his newly appointed Crypto & AI Czar, David Sacks, have the opportunity to introduce a lighter-touch regulatory approach. Although the specifics of their framework remain unclear, it's likely to be far less burdensome than the guidelines introduced by the previous administration.
On January 13, Nvidia openly criticized the Biden administration's newly announced export restrictions on advanced AI chips. The company described these measures as "misguided" and cautioned that they could undermine U.S. leadership in artificial intelligence. Nvidia's Vice President of Government Affairs, Ned Finkle, emphasized that the restrictions might stifle innovation and economic growth, stating, "This last-minute Biden Administration policy would be a legacy that will be criticized by U.S. industry and the global community."
The export controls, introduced in the final days of the Biden administration, aim to limit the distribution of advanced AI processors to countries such as China and Russia, citing national security concerns. However, Nvidia argues that these restrictions could inadvertently hinder the global adoption of AI technologies and negatively impact the U.S. economy. The company urged the administration to reconsider the policy, highlighting the potential for it to "set America back, and play into the hands of U.S. adversaries."
In response to the announcement, Nvidia's stock experienced a decline, reflecting investor concerns about the potential impact on the company's revenue and the broader tech industry's growth. The Semiconductor Industry Association echoed Nvidia's sentiments, warning that the export controls could "disrupt supply chains, harm U.S. companies, and lead to unintended consequences."
It's easy to see why Nvidia takes issue with regulations that limit its ability to sell advanced AI chips abroad. But there are real concerns here too. Restricting U.S. companies from selling to countries like China and Russia could inadvertently accelerate foreign competitors' R&D efforts, giving them an opportunity to close the gap in AI hardware. For instance, Deepseek in China has already demonstrated the ability to match U.S. AI products on a shoestring budget. If these restrictions push them to do the same with hardware, the U.S. may find itself enabling competitors rather than maintaining its technological edge.
While Windsurf might not get the same press and mainstream adoration as Cursor, it's just as good an AI code generation platform. I use both Windsurf and Cursor, and there's a ton of debate on places like Reddit around which one is better.
As we all know, more competition fuels better products, and the race for the best coding IDE is no different. Windsurf launched "Wave 2" last week, an upgrade that ships a slew of new features.
These changes are a big deal—I'm particularly excited for the real-time web search. Instead of copying and pasting API docs or open-source links from GitHub, all of that can now happen seamlessly and natively. While Windsurf warns that it could cause you to hit your limits faster due to the number of tokens it uses, leveraging it sparingly can help you build a stronger foundation for your project. I've played around with it a bit, but not enough to give a full review—stay tuned for that later.
Our second issue comes out on Friday—we'll be writing more about Deepseek's newest reasoning model and how the Chinese company has been able to build a model on par with OpenAI's o1 at a fraction of the cost.
See you then!
- Ian Kar