Yahoo Press
Tech stocks today: Micron stock falls after blowout earnings report, Nvidia wraps up GTC event
Nvidia’s (NVDA) developer conference wraps up on Thursday. On Monday, CEO Jensen Huang unveiled the company’s all-new Nvidia Groq 3 chip, an AI chip for space, and the NemoClaw platform for AI agents. Huang said Nvidia has received purchase orders for chips destined for China and forecast that AI chip sales would surpass $1 trillion through 2027, up from a previous outlook of $500 billion in demand through 2026.

That was followed on Tuesday by Microsoft’s (MSFT) announcement that it plans to shake up its AI organization, centralizing its commercial and consumer Copilot teams under a new executive vice president of Copilot, Jacob Andreou.

On Wednesday afternoon, Micron (MU) reported its second quarter earnings after the closing bell, beating expectations on the top and bottom lines and providing Q3 guidance well above estimates, as the AI build-out continues to drive massive demand for memory chips around the world. For the quarter, Micron reported earnings per share (EPS) of $12.20 on revenue of $23.86 billion. Wall Street was anticipating EPS of $9.00 on revenue of $19.7 billion.

Memory, or RAM, is an integral component of data center servers, both in GPU-based systems by Nvidia (NVDA) and in CPU-based systems from the likes of Intel (INTC) and AMD (AMD). The explosion in AI training and inferencing, along with the broader push into agentic AI, is creating a shortage of available memory, raising prices and driving up the cost of consumer and enterprise electronics. In February, market research firm Gartner said the memory shortage will cause PC shipments to drop 10.4% in 2026 and smartphone shipments to decline 8.4%. Read more here.

Nvidia’s (NVDA) chips are known for their general-purpose use.
They can train and run AI models, power robots, and serve as the backbone of self-driving cars. And while the company’s offerings are still the industry standard, upstart chip firms like Cerebras and Groq have begun designing and rolling out processors geared specifically toward running AI models, creating a potential threat to Nvidia’s formidable AI moat. CEO Jensen Huang and company answered those concerns at the company’s GTC event on Monday with a slew of announcements meant to prove Nvidia is the inferencing leader to beat, including the debut of its Groq 3 chip and rack system. “They are evolving in a big way,” TECHnalysis Research founder and chief analyst Bob O’Donnell told Yahoo Finance. Read more here.

Arizona Attorney General Kris Mayes filed criminal charges against Kalshi on Tuesday, accusing the prediction market operator of “illegal gambling” without a license and “election wagering.” This is the first time Kalshi has faced criminal charges, marking an escalation in its legal troubles, though several states have filed civil lawsuits against the company.

"Kalshi may brand itself as a 'prediction market,' but what it's actually doing is running an illegal gambling operation and taking bets on Arizona elections, both of which violate Arizona law," Mayes said. "No company gets to decide for itself which laws to follow."

A Kalshi spokesperson denied the charges and told the New York Times that they are “meritless.” Kalshi also preemptively sued Arizona’s Department of Gaming on March 12.

Scrutiny of prediction markets has intensified alongside the industry’s explosive growth since the Supreme Court struck down a federal ban on sports betting in 2018. According to the blockchain security firm CertiK, trading on prediction markets quadrupled from $15.8 billion in 2024 to about $63.5 billion in 2025, with most of the volume concentrated on Kalshi, Polymarket, and Opinion. (Disclosure: Yahoo Finance has a partnership with Polymarket.)
BMW (BMW.DE) officially unveiled its new i3 EV on Wednesday, a sedan loaded with the automaker’s sport-oriented DNA and a Tesla-fighting 440 miles of estimated range. The new i3, the second model built on BMW’s next-gen Neue Klasse platform, is longer, wider, and taller than the outgoing 3 Series and previews the design of the upcoming gas-powered 3 Series, the standard-bearer in the class. BMW is claiming 900 km of range on Europe’s WLTP cycle, translating to an EPA-estimated 440 miles on a single charge, with charging capability that can add around 250 miles in just 10 minutes. Tesla’s Model 3 Premium has 363 EPA-estimated miles of range, with the Performance coming in lower at 309 miles. Read more here.

Samsung Electronics (005930.KS) and Advanced Micro Devices (AMD) have signed a memorandum of understanding to expand their strategic partnership on memory chip supplies as global chipmakers race to lock in long-term supply partnerships for advanced memory. Reuters reports: Read more here.

Nvidia (NVDA) CEO Jensen Huang said during a press Q&A at Nvidia’s GTC event on Tuesday that the company currently has purchase orders for chips destined for China and is firing up the supply chain to meet that demand. The news comes after the Financial Times reported that the company was stopping production of its H200 chips for the Chinese market and instead shifting to produce Vera Rubin processors for the rest of the world.

Nvidia has been at the center of a tug-of-war between the US and China over whether the US government should allow the country’s most sought-after chips to power AI platforms in China that could end up benefiting that country’s military. Huang successfully lobbied the Trump administration to allow Nvidia’s H200 chips to flow to China, arguing that it was better for China to be reliant on US technology than to push the country to develop its own high-powered processors. But China has balked at the idea, telling companies to use homegrown chips instead.
Nvidia has said it still hasn’t begun shipping its products into the country.

Nvidia (NVDA) CEO Jensen Huang said his projection that the company has line of sight to $1 trillion in revenue through 2027 applies only to sales of its Grace Blackwell and Vera Rubin chips, not the others he debuted during his keynote on Monday. The $1 trillion figure, he explained, was meant to serve as an apples-to-apples comparison to his prior projection, from October, that the company had a throughline to $500 billion in revenue by the end of 2026. But adding the anticipated revenue from its new Vera CPU, Groq 3, and new storage racks will push that amount beyond $1 trillion, he said.

Nvidia generated $215.9 billion in revenue in its fiscal 2026, which ended Jan. 25, a 65% year-over-year increase. The company’s Groq 3 and Vera chips are designed to assist with AI inferencing, or running AI models, and agentic AI. Both inferencing and agentic AI are becoming increasingly important across the AI industry as more users adopt the technology and put AI agents to work in both enterprise and consumer settings.

Microsoft (MSFT) CEO Satya Nadella on Tuesday announced changes within the company’s AI organization that will bring its Copilot efforts more directly under his control within a new group. In a memo to staff, Nadella said Jacob Andreou will be named executive vice president of Copilot, reporting directly to Nadella. Under Andreou, the group will focus on the Copilot experience across consumer and commercial products, driving design, product, growth, and engineering, Nadella said in his note. Andreou had previously reported to Mustafa Suleyman, CEO of Microsoft AI.
Suleyman said Tuesday in a memo that these changes will allow him to “focus all my energy on our Superintelligence efforts and be able to deliver world class models for Microsoft over the next 5 years.”

“Progress at the AI model layer is more critical than ever to our success as a company over the next decade and is foundational to everything we build above it,” Nadella said. “We are doubling down on our superintelligence mission with the talent and compute to build models that have real product impact, in terms of evals, COGS reduction, as well as advancing the frontier when it comes to meeting enterprise needs and achieving the next set of research breakthroughs.”

Earlier this month, Microsoft announced a new Copilot tier that will include access to Anthropic’s Claude alongside the latest generation of OpenAI models. Microsoft stock is down about 17% year to date.

Nvidia (NVDA) announced on Monday that Uber (UBER) will begin rolling out a fleet of Level 4 autonomous vehicles in Los Angeles and San Francisco in 2027 as part of both companies’ broader self-driving efforts. The two firms previously announced their intent to deploy 100,000 vehicles running on Nvidia’s Drive Hyperion self-driving platform at Nvidia’s GTC event in Washington, D.C., in October. But the latest news, unveiled at GTC in San Jose, Calif., provides a timeline for when and where the vehicles will hit the road. According to Nvidia, the service will eventually expand beyond California to 28 cities across four continents. In addition to Uber, Nvidia said Lyft (LYFT), Estonia-based Bolt, and Singapore’s Grab are also using its systems to power their own self-driving capabilities.

Nvidia (NVDA) is moving further into AI software with the launch of its NemoClaw stack for the OpenClaw agent platform.
The service gives companies that use OpenClaw privacy and security controls that Nvidia says make self-evolving autonomous agents “more trustworthy, scalable, and accessible to the world.” OpenClaw, which debuted as Clawd in November 2025 before being renamed Moltbot and finally OpenClaw in January, has taken off thanks to its ability to run AI agents powered by different AI models on users’ machines via apps like WhatsApp, Discord, Slack, and others. It can perform a litany of tasks on your behalf via your computer using your existing data. But the fact that it can control a laptop or desktop and has access to your personal data raises privacy and security concerns. Nvidia’s NemoClaw is meant to address those issues.

While OpenClaw can run on both Mac and Windows systems, Nvidia is positioning its own GeForce RTX platforms as the computers of choice for the service. That includes its RTX Pro-powered workstations, DGX Station, and DGX Spark mini desktop. Read more here.

Nvidia (NVDA) kicked off its GTC event in San Jose, Calif., on Monday, debuting a number of chips and platforms ranging from its all-new Nvidia Groq 3 language processing unit (LPU) to its massive Vera central processing unit (CPU) rack, designed to go head-to-head with offerings from Intel (INTC) and AMD (AMD). All told, Nvidia said it’s rolling out five massive server racks, each serving a different purpose inside AI data centers.

The biggest announcement of the lot, though, is the Nvidia Groq 3 chip. Nvidia announced in December that it had entered into an agreement to license technology from Groq and hired founder Jonathan Ross, president Sunny Madra, and other members of the Groq team as part of a $20 billion deal. Groq’s processors focus on AI inferencing, or running AI models. Inferencing is what happens when you type something into OpenAI’s (OPAI.PVT) ChatGPT, Anthropic’s (ANTH.PVT) Claude, or Google’s (GOOG, GOOGL) Gemini and get a response.
Nvidia’s graphics processing units (GPUs) are multipurpose and can both train and run AI models, but as the AI market shifts toward running models, ensuring the company has a dedicated inferencing chip has become paramount. That’s where Groq 3 comes in.

Yahoo Finance’s Dan Howley reports from on the ground at the GTC event in San Jose, Calif., that Nvidia (NVDA) is taking its AI chips to the next frontier: space. The company revealed its Vera Rubin Space Module, saying the platform is designed for orbital data centers, geospatial intelligence, and autonomous space operations. Read more here.

Nvidia CEO Jensen Huang said the company now sees AI chip demand reaching $1 trillion through 2027. That’s a massive increase from the $500 billion in high-confidence demand and order backlog for Blackwell and Rubin chips that Nvidia projected last year through 2026. “In fact, we are going to be short,” Huang added. “I am certain computing demand will be much higher than that.”

Huang also touted Nvidia’s (NVDA) relationships with cloud service providers such as Google (GOOG), Microsoft (MSFT), Amazon (AMZN), and Oracle (ORCL), saying the AI company is “bringing customers to them.” Huang argued that Nvidia is driving down data-processing costs by increasing scale and speed. “Moore's Law has run out of steam; we need a new approach,” Huang said. “Accelerated computing allows us to take these giant leaps forward, and as you will see later, because we continue to optimize the algorithms … and because our reach is so large and our installed base is so large, we can reduce the computing cost, increasing the scale, increasing the speed for everybody, continuously.” Hyperscalers account for roughly 50% of Nvidia’s data center revenue, which totaled $62.3 billion in the fourth quarter.

Nvidia CEO Jensen Huang began his annual keynote address at a quarter after 2 p.m. ET on Monday.
As my colleague Dan Howley noted in his preview of the event, Huang’s leather-jacket-clad keynotes are usually packed with product launches and updates, and we’ll likely see the same at this year’s event. Huang is expected to speak for about two hours. We’ll be following along and posting updates. You can watch the speech live below.

Nebius (NBIS) stock soared 14% after the AI cloud company announced it struck a new long-term AI infrastructure supply agreement with Meta (META). Nebius will provide Meta with $12 billion worth of neocloud capacity as part of Meta’s deployment of Nvidia's Vera Rubin platform, starting in 2027. Meta has also committed to purchasing additional compute capacity, bringing the total to as much as $15 billion over a five-year period. Last week, Nvidia disclosed a $2 billion investment in Nebius to deploy more than 5 gigawatts of data center capacity by the end of 2030.

Meta stock rose 2.6% on the Nebius news, as well as on reports that it's planning sweeping layoffs that could affect up to 20% of the company as it looks to offset high artificial intelligence costs. The date and extent of the layoffs have yet to be finalized, according to Reuters, but they could mark Meta's largest restructuring since late 2022 and early 2023.

Meta (META) has been working for months to develop a new frontier AI model that can better compete with top-of-the-line offerings from Anthropic (ANTH.PVT), Google (GOOG), and OpenAI (OPAI.PVT), but the effort has reportedly hit a major delay. According to the New York Times, while the new model, code-named Avocado, is better than Google’s previous-generation Gemini 2.5 model, it can’t quite match Google’s Gemini 3. Gemini 3 made a splash when Google debuted it late last year, pushing the company into the leadership position in the AI race and supplanting OpenAI in the eyes of analysts and developers. Now Meta is planning to delay Avocado until at least May.
The company’s AI leaders have also discussed licensing Google’s Gemini model to help run its AI services on an interim basis. Meta has sunk billions into building up its Llama AI models. But after showing off its Llama 4 family in the spring of 2025, the company delayed its flagship Llama 4 Behemoth model. Meta still hasn’t launched the software. CEO Mark Zuckerberg subsequently hired Scale AI CEO Alexandr Wang, investing $14.3 billion into the company.

Nvidia’s (NVDA) GTC 2026, the company’s biggest event of the year, kicks off in San Jose, Calif., on Monday with a keynote from CEO Jensen Huang. The leather-jacket-clad CEO’s keynotes are usually littered with a litany of product launches and updates, and we’ll likely see the same at this year’s event.

Nvidia has been on a dealmaking spree over the past several months, including an agreement with AI inferencing chip designer Groq. That could mean we’ll see a new inferencing chip out of Nvidia using Groq’s technology, or the company could show how it’s integrating Groq’s tech into its own GPUs. There are also rumors Nvidia could debut an all-new platform for AI agents, and you can expect to hear plenty about Nvidia’s various open-source AI models.

The company could also launch its long-rumored laptop processor that would take on AMD’s own offerings. But don’t expect those chips to generate the kind of massive revenue that Nvidia’s GPUs and networking products do. Sales in the company’s gaming segment totaled $22.5 billion in 2025, while its data center business brought in $193.5 billion. Bloomberg reports: Read more here.

Adobe (ADBE) CEO Shantanu Narayen is stepping down after 18 years in the role, the company announced on Thursday in conjunction with its first quarter earnings report. Adobe stock fell on the news. Narayen will leave the post when a successor is found but will stay on as chair of the board afterward.
The company's board also appointed Frank Calderoni, Adobe's lead independent director, as chair of the special committee that will look for CEO candidates both inside and outside the firm.

"I love Adobe and the privilege of leading it has been the greatest honor of my career. I will ensure that I set up Adobe for its next decade of greatness with the right leader and executive team, in partnership with the Board, while continuing to deliver on our FY26 Must Wins," Narayen wrote in an email to employees. "The opportunity in front of us is extraordinary. Together, we are uniquely positioned to lead it — and I remain deeply committed to doing so as we look ahead and prepare to name Adobe’s next CEO. I am more confident than ever that Adobe’s best days are still to come."

For the quarter, Adobe reported earnings per share (EPS) of $6.06 on revenue of $6.39 billion, topping analysts' EPS and revenue estimates of $5.88 and $6.28 billion, respectively. Looking ahead to the second quarter, Adobe forecast revenue of $6.43 billion to $6.48 billion; expectations were for $6.43 billion.