Deutsche Telekom is working to become an AI-first telco in partnership with Google Cloud. DT is re-committing to its long-term collaboration with the hyperscaler, with a focus on integrating AI into the telco’s IT, networks, and business applications to improve operational efficiency and customer experiences, as well as to “lead technical innovation in the telecom industry.” I hope they watch my talk from #MWC25 about how to become an AI-first telco. It’s all about breaking down processes and roles and rebuilding them with AI from the start. A natural effect will be the elimination of a lot of jobs, but also the creation of a lot of value and efficiency. Should be a fun one to watch! 🕵️
OpenAI now has a $300 billion valuation after closing a SoftBank-led $40 billion funding round. SoftBank will put in $10 billion initially, and another $20 billion when OpenAI transitions to a for-profit structure by the end of 2025. Microsoft and other investment firms are contributing the remaining $10 billion. If everything goes through, SoftBank will eclipse Microsoft as OpenAI’s largest single investor. Microsoft CEO Satya Nadella might not mind, given his recent comments about large language models (LLMs) becoming commodities. As I recently blogged, I think AI’s value is in the application layer, too. Is SoftBank the dumb money here? 🧐
ICYMI, my previous employer Optiva ($OPT.TO) is exploring “strategic alternatives” amid insolvency concerns, as disclosed in its fourth-quarter and full-year 2024 financial results. It has $100 million in debt due in about 100 days, and no obvious way to pay it back. Optiva customers: Totogi offers a fast, secure migration path with special pricing, implementation in weeks, and a pay-as-you-go model. Learn what to do when this happens to one of your key suppliers in my Dead Vendor Walking blog, see how Totogi pulled off a fast charging swap-out with Zain Sudan, then contact me to discuss your options.
How does generative AI work, anyway? In “Tracing the thoughts of a large language model,” Anthropic describes new techniques it has developed for examining the internal “thought processes” of its AI model, Claude. There’s also a very long companion paper, and a 2:55 tl;dr video at the top of the post with the highlights. The blog post shares insights into how Claude processes language, plans texts, and reasons about questions, as well as what leads to hallucinations. Fascinating stuff!
Google Cloud Next kicks off tomorrow in Las Vegas. Not going in person? Register to live stream for free. Filter the session library for Industry > Telecommunications to zero in on the most relevant content. The three on my shortlist: “How TELUS future-proofed its GenAI strategy by centralizing data in Google Cloud,” “Turning AI into true operational efficiency: Inside Nokia’s transformation,” and “The age of Agentic AI in telecom” hosted by Brian Kracik, the telco lead at Google Cloud. This last one is a breakout session with leaders from Ericsson, Vodafone, and Verizon talking about how they’re using AI agents in customer experience, field operations, network operations, and more. (Vodafone’s story might be summed up nicely in this short YouTube video.) Make a plan to catch the sessions!
While we’re on the subject, don’t miss this great explainer from Google Cloud on why AI agents need enterprise “truth” to succeed. It’s all about the data—and tools, policies, and processes unique to your company. Just like humans, AI agents need context to work well. At Totogi, we take it a step further and model this into a digital twin of your business with our ontology in BSS Magic. Check it out!
What is an AI hyperscale data center, anyway? This article from RCR Wireless breaks down the five features of AI data centers, and offers three reasons why they matter. Hyperscalers are building these large-scale facilities as fast as they can, and specifically designing them to handle massive AI workloads using thousands of servers, specialized chips, advanced cooling systems, and high-speed networking. They also prioritize sustainability through renewable sources and energy efficiency. If you thought the capex spend was crazy for building out hyperscaler regions, just wait until you see what they spend to win enterprise AI workloads.
All these data centers and all this AI are creating an insatiable appetite for data center cooling tech—and causing a major supply crunch. Providers are racing to expand capacity, but demand keeps outpacing them. Case in point: last year, LiquidStack launched a cooling distribution unit (CDU) that could serve up to a megawatt (MW) of capacity—more than enough to supply Nvidia’s top-tier system, which requires 120 kilowatts (kW) of power per rack. Last month, Nvidia revealed plans for a 600kW rack—that is, more than half a MW! By itself! Everything is scaling up, up, up. Want to learn more about this topic? I’ll be talking all about it with Omdia Cloud for my Earth Day Telco in 20 podcast later this month—stay tuned!
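To see why rack power density wrecks cooling math, here’s a quick back-of-the-envelope sketch. The 1 MW CDU capacity and the 120 kW and 600 kW per-rack figures come from the numbers above; the function itself is just illustrative:

```python
# Back-of-the-envelope: how many whole racks can one cooling
# distribution unit (CDU) serve as per-rack power draw climbs?
CDU_CAPACITY_KW = 1_000  # the ~1 MW CDU mentioned above


def racks_per_cdu(rack_kw: float, cdu_kw: float = CDU_CAPACITY_KW) -> int:
    """Number of whole racks a single CDU can cool at a given power draw."""
    return int(cdu_kw // rack_kw)


print(racks_per_cdu(120))  # today's ~120 kW top-tier rack -> 8 racks per CDU
print(racks_per_cdu(600))  # Nvidia's planned 600 kW rack -> 1 rack per CDU
```

Same CDU, one-fifth the racks: the cooling fleet has to grow roughly as fast as rack power density does, which is exactly the supply crunch in question.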
Alibaba’s entering the LLM chat. It’s reportedly launching Qwen 3, an updated version of its flagship AI model, later this month. And this is after it released Qwen 2.5-Max in late January (days after the DeepSeek release), and Qwen2.5-Omni-7B in late March. Not surprisingly, the biggest provider of public cloud services in China (and the Asia-Pacific region) is working on AI offerings, too. It’s further behind, but trying to catch up. How many other LLMs will it offer its customers? Remember, variety is key with LLMs, as there is no one-size-fits-all approach in AI.