phkrause · Posted May 5, 2025

CEOs urge high-school AI requirement
Data: Code.org. Map: Axios Visuals

More than 200 CEOs signed a letter urging state leaders to mandate AI and computer science classes as a high school graduation requirement, our Axios AI+ newsletter reports.

Why it matters: Taking just one high school computer science course boosts wages by 8% for all students, regardless of career path or whether they attend college, Brookings found.

Read the open letter.
phkrause · Posted May 6, 2025

OpenAI reverses course and says its nonprofit will continue to control its business

OpenAI said Monday its nonprofit will continue to control the business that makes ChatGPT and other artificial intelligence products. Read More.
phkrause · Posted May 6, 2025

OpenAI switcheroo
Photo illustration: Aïda Amer/Axios. Photos: Getty Images

The latest plot twist in OpenAI's epic corporate drama leaves Sam Altman only partway toward his goal, announced last year, of separating the ChatGPT maker from the nonprofit that controls it, Axios' Ina Fried reports.

Why it matters: At stake in this corporate-law conflict is control over what Silicon Valley sees as its next big platform — both who will reap the profits and who will shape AI socially and ethically.

Under the revamped plan announced yesterday, OpenAI's for-profit arm will, as long planned, shift from a "capped-profit" partnership to a public benefit corporation. PBCs are a relatively new structure that lets companies prioritize goals besides profit, but they've already become a known quantity to investors — OpenAI rival Anthropic is also a PBC. Crucially, PBCs don't put a limit on how much money an investor can make.

This plan would let OpenAI hang on to tens of billions of dollars in recent funding that were contingent on a restructuring — assuming OpenAI meets its upcoming deadlines for the changes. A recent investment round led by SoftBank requires the company to complete a restructuring by the end of the year.

Reality check: The other big part of Altman's original plan was to make this for-profit company independent of its current nonprofit board — the entity that fired him a year and a half ago (though its membership has since almost entirely changed). That's not happening now.

What's next: A lot of people still need to sign off on the new deal — most centrally, Microsoft. Microsoft is looking to preserve the value of its massive investments in OpenAI — at least $13 billion invested or committed to date. Bloomberg reports Microsoft and OpenAI are still negotiating.

The bottom line: "OpenAI is not a normal company and never will be," Altman wrote Monday in a letter to employees.
phkrause · Posted May 17, 2025

Exclusive: Google dominates AI patent charts
Data: IFI Claims. Chart: Axios Visuals

Google is now the leader in generative AI-related patents and also leads in the emerging area of AI agents, Axios' Ina Fried writes from data by IFI Claims.

Why it matters: Patent filings, though not a direct proxy for innovation, indicate areas of keen research interest. Generative AI patent applications in the U.S. have risen by more than 50% in recent months.

Keep reading ... Explore the data.
phkrause · Posted May 23, 2025

Apple design guru Jony Ive joins OpenAI in $6.5 billion hardware deal

OpenAI said Wednesday it will pay about $5 billion in stock for the portion it doesn't already own of io, a startup co-founded by Jony Ive to create a new generation of AI devices — a deal valuing the startup at roughly $6.5 billion. https://www.axios.com/2025/05/21/jony-ive-openai-io-acquisition?
phkrause · Posted May 23, 2025

OpenAI's hardware gamble
Illustration: Allie Carl/Axios

With its multibillion-dollar purchase of Apple design legend Jony Ive's startup, OpenAI is doubling down on a bet that the AI revolution will birth a new generation of novel consumer devices, Axios' Ina Fried writes.

Why it matters: Just as the web first came to us on the personal computer and the cloud enabled the rise of the smartphone, OpenAI's gamble is that AI's role as Silicon Valley's new platform will demand a different kind of hardware.

The company's also betting that Ive, who played a key role in designing the iPhone and other iconic Apple products, is the person to build it. In an announcement video, Ive tells OpenAI CEO Sam Altman that we're still using "decades-old" products, meaning PCs and smartphones, to connect with the "unimaginable technology" of today's AI.

The big picture: Altman loves a big bet, and this one is huge: billions in stock in exchange for Ive's talents and those of the rest of the team, which includes three other veteran Apple design leaders. Ive famously spent much of his career at Apple as Steve Jobs' creative partner. OpenAI's video presents the new Ive-Altman pairing as the natural successor to that team — with Sam as the new Steve and Apple left behind as a peddler of "legacy products."

Jony Ive and Sam Altman. Photo: OpenAI

Between the lines: Altman has long pursued a strategy of shaping AI through devices as well as software. He was an early investor in Humane, whose AI pin flopped, and is a co-founder of World, which is deploying eyeball-scanning orbs to verify human identity in a bot-filled world.

Ive and Altman announced last year that they were collaborating on a hardware side project but have been tight-lipped about what their startup, named io, is working on. Altman told Axios in an onstage interview last year that it wouldn't be a smartphone. io, the company OpenAI is acquiring, may be pursuing "headphones and other devices with cameras," according to The Wall Street Journal. Ive's design firm, LoveFrom, will remain independent and continue working on some other projects.

Zoom out: Other Big Tech companies have also been investing in a post-smartphone hardware future. While investor interest in the metaverse has cooled, there's still a competitive market in VR headsets and a growing field of smart glasses as a delivery device for AI services. Meta has its Ray-Ban smart glasses. Google demonstrated its own prototype glasses, which include a small display. And Apple is reportedly working on augmented-reality glasses, too.
phkrause · Posted May 25, 2025

OpenAI, UAE to build massive AI center in Abu Dhabi

OpenAI will partner with the United Arab Emirates to build Stargate UAE, a massive new Middle East data center that's part of the company's OpenAI for Countries push, the deal's participants announced Thursday. https://www.axios.com/2025/05/22/uae-openai-stargate-deal?
phkrause · Posted May 25, 2025

AI race goes supersonic
Illustration: Brendan Lynch/Axios

The AI industry unleashed a torrent of major announcements this week, accelerating the race to control how humans search, create and ultimately integrate AI into the fabric of everyday life.

Why it matters: The breakneck pace of innovation — paired with the sky-high ambitions of tech's capitalist titans — is reshaping the AI landscape faster than regulators or the public can fully comprehend, Axios' Zachary Basu reports.

1. OpenAI: The ChatGPT maker joined forces with legendary Apple designer Jony Ive, acquiring his startup io in a $6.5 billion deal to create a new generation of hardware devices. Privately, CEO Sam Altman told staff that he and Ive aim to ship 100 million pocket-sized AI "companions" starting late next year — a moonshot he claimed could create $1 trillion in value for OpenAI, the Wall Street Journal reports. A day later, OpenAI announced it would build a massive Stargate data center in Abu Dhabi in partnership with the UAE government, Oracle, Nvidia, Cisco, SoftBank, and Emirati AI firm G42.

2. Google: The tech giant made 100 announcements at its I/O developer conference — chief among them, a new "AI Mode" chatbot that CEO Sundar Pichai described as a "total reimagining of search." Google also unveiled Veo 3, a stunningly advanced video model that lit the internet on fire — amazing and horrifying users with AI-generated clips nearly indistinguishable from human-made content.

3. Anthropic: The startup hosted its own developer conference and debuted the first models in its latest Claude 4 series — including one, Claude Opus 4, that it says is the world's best at coding. Anthropic said Claude Opus 4 can perform thousands of steps over hours of work without losing focus — and decided it's so powerful that researchers had to institute new safety controls. While that determination had to do with its potential to help create nuclear and biological weaponry, researchers also found that Claude Opus 4 can conceal intentions and take actions to preserve its own existence — including by blackmailing its engineers.

4. Apple: As the tech world obsessed over Ive's new partnership with OpenAI, Bloomberg reported that the notoriously secretive Apple intends to release smart AI-enabled glasses before the end of 2026. The rumored device — a direct rival to Meta's popular Ray-Bans and forthcoming specs from Google — would include a camera, microphones, and a speaker, effectively turning an Apple-designed wearable into an everyday AI assistant.

The bottom line: This week's frenzy was a major leap toward defining how AI will shape the next decade.
phkrause · Posted May 25, 2025

The new Steve Jobs?
Illustration: Sarah Grillo/Axios. Photo: Stefano Guidi/Getty Images

OpenAI's latest power move in the AI race is to cast CEO Sam Altman as a Steve Jobs for the new era, Axios managing editor for tech Scott Rosenberg writes.

Why it matters: Jobs remains Silicon Valley's most revered founder, and since his 2011 death no industry figure has been able to match his success at product innovation, strategy and marketing.

Driving the news: This week OpenAI nabbed Jony Ive, the design guru who closely collaborated with Jobs to shape iconic devices like the iPhone and the iPod, to oversee a big new bet on AI hardware. OpenAI's promotional materials paired Altman and Ive in a video that strongly implies Altman's team-up with the Apple veteran makes him Jobs' natural successor. Altman has even invoked Jobs directly, saying the Apple founder would be "damn proud" of Ive's move, per Bloomberg's Mark Gurman.

Reality check: Jobs devoted his life to Apple and was fiercely protective of the company. At the very least he would have regretted Ive's decision to pursue his next ambitious goal outside Apple. More likely, he'd have seen it as a betrayal.

Zoom out: Every Silicon Valley founder wants to be Steve Jobs at some point, and, for many industry insiders, Altman's success at bringing ChatGPT forth from OpenAI to spark the generative-AI wave qualifies as a Jobs-like leap. Altman shares with Jobs a penchant for vast visionary schemes and a "reality distortion field" that persuades listeners those schemes could come true.

But there are plenty of ways in which the Altman-Jobs comparison falls short. Jobs was a control freak who obsessed over details and held projects back until they were well-tested. Altman takes more of a Zuckerberg-style "move fast and break things" approach: OpenAI ships products to the public early so users can try them out and show developers what to fix.

Keep reading.
phkrause · Posted May 29, 2025

AI makes scam emails harder to spot
Illustration: Aïda Amer/Axios

AI chatbots have made scam emails harder to spot and more profitable than ever for thieves, Axios cybersecurity expert Sam Sabin reports. Scammers made off with a whopping $16.6 billion last year, per FBI estimates.

Between the lines: Typos, awkward phrasing and poor translations have long been the red flags alerting us to a potential scam, most of which originated with senders who weren't native English speakers. But ChatGPT and its rivals can write fluently in just about any language, making malicious messages far harder to flag.

What they're saying: "The idea that you're going to train people to not open [emails] that look fishy isn't going to work for anything anymore," Chester Wisniewski, security chief at Sophos, told Axios.

Go deeper.
phkrause · Posted May 29, 2025

White-collar bloodbath
Illustration: Allie Carl/Axios

Dario Amodei — CEO of Anthropic, one of the world's most powerful creators of artificial intelligence — has a blunt, scary warning for the U.S. government and all of us: AI could wipe out half of all entry-level white-collar jobs — and spike unemployment to 10-20% in the next one to five years, Amodei told us in an interview from his San Francisco office.

Amodei said AI companies and government need to stop "sugar-coating" what's coming: the possible mass elimination of jobs across technology, finance, law, consulting and other white-collar professions, especially entry-level gigs.

Why it matters: Amodei, 42, who's building the very technology he predicts could reorder society overnight, said he's speaking out in hopes of jarring government and fellow AI companies into preparing — and protecting — the nation, Jim VandeHei and Mike Allen write in a "Behind the Curtain" column.

Hardly anyone is paying attention. Lawmakers don't get it or don't believe it. CEOs are afraid to talk about it. Many workers won't realize the risks posed by the possible job apocalypse — until after it hits. "Most of them are unaware that this is about to happen," Amodei told us. "It sounds crazy, and people just don't believe it."

The big picture: President Trump has been quiet on the job risks from AI. But Steve Bannon — a top official in Trump's first term, whose "War Room" is one of the most powerful MAGA podcasts — says AI job-killing, which gets virtually no attention now, will be a major issue in the 2028 presidential campaign. "I don't think anyone is taking into consideration how administrative, managerial and tech jobs for people under 30 — entry-level jobs that are so important in your 20s — are going to be eviscerated," Bannon told us.

Amodei — who had just rolled out the latest versions of his own AI, which can code at near-human levels — said the technology holds unimaginable possibilities to unleash mass good and bad at scale: "Cancer is cured, the economy grows at 10% a year, the budget is balanced — and 20% of people don't have jobs." That's one very possible scenario rattling in his mind as AI power expands exponentially.

The backstory: Amodei agreed to go on the record with a deep concern that other leading AI executives have told us privately. Even those who are optimistic AI will unleash unthinkable cures and unimaginable economic growth fear dangerous short-term pain — and a possible job bloodbath during Trump's term.

"We, as the producers of this technology, have a duty and an obligation to be honest about what is coming," Amodei told us. "I don't think this is on people's radar."

"It's a very strange set of dynamics," he added, "where we're saying: 'You should be worried about where the technology we're building is going.'" Critics reply: "We don't believe you. You're just hyping it up." He says the skeptics should ask themselves: "Well, what if they're right?"

Column continues below.
phkrause · Posted May 29, 2025

How it unfolds
Anthropic CEO Dario Amodei unveils Claude 4 models at the company's first developer conference, Code with Claude, in San Francisco last week. Photo: Don Feria/AP for Anthropic

Here's how Amodei and others fear the white-collar bloodbath is unfolding:

OpenAI, Google, Anthropic and other large AI companies keep vastly improving the capabilities of their large language models (LLMs) to meet and beat human performance on more and more tasks. This is happening and accelerating.

The U.S. government, worried about losing ground to China or spooking workers with preemptive warnings, says little. The administration and Congress neither regulate AI nor caution the American public. This is happening and showing no signs of changing.

Most Americans, unaware of the growing power of AI and its threat to their jobs, pay little attention. This is happening, too.

And then, almost overnight, business leaders see the savings of replacing humans with AI — and do this en masse. They stop opening new jobs, stop backfilling existing ones, and then replace human workers with agents or related automated alternatives. The public only realizes it when it's too late.

The other side: Amodei started Anthropic after leaving OpenAI, where he was VP of research. His former boss, OpenAI CEO Sam Altman, makes the case for realistic optimism, based on the history of technological advancements. "If a lamplighter could see the world today," Altman wrote in a September manifesto — sunnily titled "The Intelligence Age" — "he would think the prosperity all around him was unimaginable."

But far too many workers still see chatbots mainly as a fancy search engine, a tireless researcher or a brilliant proofreader. Pay attention to what they actually can do: They're fantastic at summarizing, brainstorming, reading documents, reviewing legal contracts, and delivering specific (and eerily accurate) interpretations of medical symptoms and health records. We know this stuff is scary and seems like science fiction. But we're shocked how little attention most people are paying to the pros and cons of superhuman intelligence.

Anthropic research shows that right now, AI models are being used mainly for augmentation — helping people do a job. That can be good for the worker and the company, freeing them up to do high-level tasks while the AI does the rote work. The truth is that AI use in companies will tip more and more toward automation — actually doing the job. "It's going to happen in a small amount of time — as little as a couple of years or less," Amodei says.

That scenario has begun: Hundreds of technology companies are in a wild race to produce so-called agents, or agentic AI. You need to understand what an agent is and why companies building them see them as incalculably valuable. In its simplest form, an agent is AI that can do the work of humans — instantly, indefinitely and exponentially cheaper (a toy sketch of such a loop follows at the end of this post). That's why Meta's Mark Zuckerberg and others have said that mid-level coders will be unnecessary soon, perhaps in this calendar year.

Amodei was eager to talk to us about solutions. Here are a few ideas distilled from our conversations with Anthropic and others deeply involved in mapping and preempting the problem:

Speed up public awareness, with government and AI companies more transparently explaining the coming workforce changes.

Slow down job displacement by helping American workers better understand how AI can augment their tasks now.
Begin debating policy solutions for an economy dominated by superhuman intelligence. One idea Amodei floats is a "token tax": every time someone uses a model and the AI company makes money from it, perhaps 3% of that revenue goes to the government and is redistributed (a back-of-envelope sketch of the arithmetic follows below).

Read the full column.
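For readers who want the mechanics behind the term "agent," here is a minimal, hypothetical sketch of an agent loop in Python: a model is repeatedly shown its history and asked for a next action, tool calls are executed, and results are fed back until the model declares the job done. The call_llm stub and the toy tools are invented stand-ins, not any vendor's real API.

```python
# Minimal, hypothetical sketch of an "agentic" loop. The model plans, picks a
# tool, observes the result, and repeats until it says it is done. call_llm()
# and the tools below are invented stand-ins, not a real vendor API.

def call_llm(prompt: str) -> str:
    """Stand-in for a hosted model call; a real agent would query an LLM here."""
    return "DONE: (placeholder answer)"

TOOLS = {
    "search": lambda query: f"search results for {query!r}",
    "write_file": lambda text: f"saved {len(text)} characters",
}

def run_agent(goal: str, max_steps: int = 10) -> str:
    """Loop: show the model everything so far, then execute the action it picks."""
    history = [f"GOAL: {goal}"]
    for _ in range(max_steps):
        action = call_llm("\n".join(history) + "\nNext action (tool:arg or DONE:answer)?")
        if action.startswith("DONE:"):
            return action[len("DONE:"):].strip()
        tool, _, arg = action.partition(":")
        handler = TOOLS.get(tool, lambda a: f"unknown tool: {tool}")
        history.append(f"ACTION: {action}\nRESULT: {handler(arg)}")
    return "step budget exhausted"

print(run_agent("summarize this quarter's filings"))
```

The point of the sketch is the absence of a human between iterations: cost scales with compute rather than headcount, which is why the column calls agents "incalculably valuable."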
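And the "token tax" reduces to one line of arithmetic. A back-of-envelope sketch using the 3% rate Amodei floats; the price and usage figures are invented for illustration:

```python
# Back-of-envelope "token tax": a fixed share of AI usage revenue is remitted
# to government for redistribution. The 3% rate is Amodei's floated figure;
# the price and token volume below are invented for illustration.

TOKEN_TAX_RATE = 0.03

def token_tax(price_per_million_tokens: float, tokens_billed: int) -> float:
    """Tax owed on one stream of usage revenue, in dollars."""
    revenue = price_per_million_tokens * tokens_billed / 1_000_000
    return revenue * TOKEN_TAX_RATE

# 2 trillion tokens billed at $10 per million tokens is $20M of revenue,
# of which $600,000 would be remitted.
print(f"${token_tax(10.0, 2_000_000_000_000):,.0f}")
```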
phkrause · Posted June 4, 2025

AI gobbles jobs
Illustration: Lindsey Bailey/Axios

Ready or not, AI is starting to replace people, Axios managing editor for tech Scott Rosenberg writes from the Bay Area. Businesses aren't waiting to find out first whether AI is up to the job.

Why it matters: CEOs are gambling that Silicon Valley will improve AI fast enough that they can rush cutbacks today without getting caught shorthanded tomorrow.

Reality check: While AI tools can often enhance office workers' productivity, in most cases they aren't yet adept, independent or reliable enough to take workers' places. But AI leaders say that's imminent — any year now! — and CEOs are listening.

Driving the news: AI could wipe out half of all entry-level white-collar jobs — and spike unemployment to 10-20% in the next one to five years, Anthropic CEO Dario Amodei told Axios' Jim VandeHei and Mike Allen for a "Behind the Curtain" column last week. Amodei argues the industry needs to stop "sugarcoating" this white-collar bloodbath — a mass elimination of jobs across technology, finance, law, consulting and other white-collar professions, especially entry-level gigs.

Many economists anticipate a less extreme impact. They point to previous waves of digital change, like the advent of the PC and the internet, which arrived with predictions of job-market devastation that didn't pan out.

By the numbers: Unemployment among recent college grads is rising faster than among other groups — one early warning sign of AI's toll on the white-collar job market, according to a new study by Oxford Economics.

Keep reading.
phkrause · Posted June 9, 2025

ChatGPT vs. Gemini: I've Tested Both, and One Is Definitely Better

Curious about AI chatbots but don't know where to start? ChatGPT and Gemini are two of the best, and I'm here to help you choose between them based on my extensive testing. https://www.pcmag.com/comparisons/chatgpt-vs-gemini-ive-tested-both-and-one-is-definitely-better?
phkrause · Posted June 9, 2025

The scariest AI reality
Illustration: Brendan Lynch/Axios

The wildest, scariest, indisputable truth about AI's large language models is that the companies building them don't know exactly why or how they work, Jim VandeHei and Mike Allen write in a "Behind the Curtain" column. Sit with that for a moment. The most powerful companies, racing to build the most powerful superhuman intelligence capabilities — ones they readily admit occasionally go rogue to make things up, or even threaten their users — don't know why their machines do what they do.

Why it matters: With the companies pouring hundreds of billions of dollars into willing superhuman intelligence into existence as fast as possible, and Washington doing nothing to slow or police them, it seems worth dissecting this Great Unknown.

None of the AI companies dispute this. They marvel at the mystery — and muse about it publicly. They're working feverishly to better understand it. They argue you don't need to fully understand a technology to tame or trust it.

Two years ago, Axios managing editor for tech Scott Rosenberg wrote a story, "AI's scariest mystery," saying it's common knowledge among AI developers that they can't always explain or predict their systems' behavior. And that's more true than ever.

Yet there's no sign that the government, the companies or the general public will demand any deeper understanding — or scrutiny — of building a technology with capabilities beyond human understanding. They're convinced the race to beat China to the most advanced LLMs warrants the risk of the Great Unknown.

The House, despite knowing so little about AI, tucked language into President Trump's "Big, Beautiful Bill" that would prohibit states and localities from enacting any AI regulations for 10 years. The Senate is considering limitations on the provision. Neither the AI companies nor Congress understands the power of AI a year from now, much less a decade from now.

The big picture: Our purpose with this column isn't to be alarmist or "doomers." It's to clinically explain why the inner workings of superhuman intelligence models are a black box, even to the technology's creators. We'll also show, in their own words, how CEOs and founders of the largest AI companies all agree it's a black box.

Let's start with a basic overview of how LLMs work, to better explain the Great Unknown:

LLMs — including OpenAI's ChatGPT, Anthropic's Claude and Google's Gemini — aren't traditional software systems following clear, human-written instructions, like Microsoft Word. In the case of Word, it does precisely what it's engineered to do.

Instead, LLMs are massive neural networks — like a brain — that ingest massive amounts of information (much of the internet) to learn to generate answers. The engineers know what they're setting in motion, and what data sources they draw on. But the LLM's size — the sheer inhuman number of variables in each choice of "best next word" it makes — means even the experts can't explain exactly why it chooses to say anything in particular. (A toy sketch of that next-word process follows at the end of this post.)

We asked ChatGPT to explain this (and a human at OpenAI confirmed its accuracy): "We can observe what an LLM outputs, but the process by which it decides on a response is largely opaque.
As OpenAI's researchers bluntly put it, 'we have not yet developed human-understandable explanations for why the model generates particular outputs.'"

"In fact," ChatGPT continued, "OpenAI admitted that when they tweaked their model architecture in GPT-4, 'more research is needed' to understand why certain versions started hallucinating more than earlier versions — a surprising, unintended behavior even its creators couldn't fully diagnose."

Anthropic — which just released Claude 4, the latest model of its LLM, with great fanfare — admitted it was unsure why Claude, when given access to fictional emails during safety testing, threatened to blackmail an engineer over a supposed extramarital affair. This was part of responsible safety testing — but Anthropic can't fully explain the irresponsible action.

Again, sit with that: The company doesn't know why its machine went rogue and malicious. And, in truth, the creators don't really know how smart or independent the LLMs could grow. Anthropic even said Claude 4 is powerful enough to pose a greater risk of being used to develop nuclear or chemical weapons.

Column continues below.
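To make the "best next word" passage above concrete, here is a toy next-word sampler. A real LLM replaces the tiny hand-written probability table with a neural network holding billions of learned weights; that scale, not the sampling step, is what makes its choices so hard to explain. The table entries are invented for illustration.

```python
import random

# Toy illustration of choosing the "best next word." A real LLM scores every
# possible next token with a neural network of billions of learned weights and
# samples from that distribution; here, a tiny hand-made table stands in for
# the network. The probabilities are invented.

NEXT_WORD_PROBS = {
    "the":   {"model": 0.5, "answer": 0.3, "user": 0.2},
    "model": {"writes": 0.6, "thinks": 0.4},
    "user":  {"asks": 0.8, "writes": 0.2},
}

def sample_next(word: str) -> str:
    """Draw the next word from the distribution conditioned on the last word."""
    dist = NEXT_WORD_PROBS.get(word, {"the": 1.0})
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights, k=1)[0]

def generate(start: str, length: int = 6) -> str:
    out = [start]
    for _ in range(length):
        out.append(sample_next(out[-1]))
    return " ".join(out)

print(generate("the"))
```

In this toy you can point to the exact table row behind each choice; in a production model, the "row" is a computation spread across billions of parameters, which is the interpretability gap the column describes.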
phkrause · Posted June 9, 2025

Part 2: Black-box lingo
Sam Altman speaks during a conference in San Francisco last week. Photo: Justin Sullivan/Getty Images

OpenAI's Sam Altman and others toss around the tame word "interpretability" to describe the challenge. "We certainly have not solved interpretability," Altman told a summit in Geneva last year, Jim and Mike continue.

Anthropic CEO Dario Amodei, in an essay in April called "The Urgency of Interpretability," warned: "People outside the field are often surprised and alarmed to learn that we do not understand how our own AI creations work. They are right to be concerned: this lack of understanding is essentially unprecedented in the history of technology." In a statement for this story, Anthropic said: "We have a dedicated research team focused on solving this issue, and they've made significant strides in moving the industry's understanding of the inner workings of AI forward." (Read a paper Anthropic published last year, "Mapping the Mind of a Large Language Model.")

Elon Musk has warned for years that AI presents a civilizational risk. In other words, he literally thinks it could destroy humanity, and has said as much. Yet Musk is pouring billions into his own LLM, called Grok. "I think AI is a significant existential threat," Musk said in Riyadh, Saudi Arabia, last fall, putting the chance "that it goes bad" at 10%-20%.

Reality check: Apple published a paper last week, "The Illusion of Thinking," concluding that even the most advanced AI reasoning models don't really "think," and can fail when stress-tested. But a new report by AI researchers, including former OpenAI employees, called "AI 2027," explains how the Great Unknown could, in theory, turn catastrophic in less than two years. The report is wholly speculative, though built on current data about how fast the models are improving. It's being widely read inside the AI companies. It captures the belief — or fear — that LLMs could one day think for themselves and start to act on their own.

Our purpose isn't to alarm or sound doomy. Rather, you should know what the people building these models talk about incessantly. You can dismiss it as hype or hysteria. But researchers at all these companies worry that LLMs, because we don't fully understand them, could outsmart their human creators and go rogue.

The safe-landing theory: Google's Sundar Pichai — and really all of the big AI company CEOs — argue that humans will learn to better understand how these machines work and find clever, if yet unknown, ways to control them and "improve lives." The companies all have big research and safety teams, and a huge incentive to tame the technologies if they want to ever realize their full value. After all, no one will trust a machine that makes stuff up or threatens them. But, as of today, they do both — and no one knows why.
phkrause · Posted June 12, 2025

Zuck's supersized AI ambitions
Photo illustration: Sarah Grillo/Axios. Photos: Joel Saget/AFP and Christophe Morin/IP3 via Getty Images

Mark Zuckerberg wants to play a bigger role in the development of superintelligent AI — and is willing to spend billions to recover from a series of setbacks and defections that have left Meta lagging and the CEO steaming, Axios' Ina Fried writes.

Why it matters: Competitors aren't standing still. That's clear from recent model releases by Anthropic and OpenAI — and a blog post by OpenAI CEO Sam Altman last night that suggests "the gentle singularity" is already underway.

To catch up, Zuckerberg is prepared to open his sizable wallet to hire the talent he needs. Meta wants to recruit a team of 50 top-notch researchers to lead a new effort focused on smarter-than-human artificial intelligence, a source told Axios. As part of that push, the company is looking to invest on the order of $15 billion to amass roughly half of Scale AI and bring its CEO, Alexandr Wang, and other key leaders into the company, The Information reported.

Keep reading.
phkrause · Posted June 13, 2025

ChatGPT juggernaut
Data: Similarweb. Chart: Axios Visuals

OpenAI's ChatGPT has been the fastest-growing platform in history ever since the chatbot launched 925 days — 2½ years — ago. Now, CEO Sam Altman is moving fast to out-Google Google, Jim VandeHei and Mike Allen write in a "Behind the Curtain" column.

Why it matters: OpenAI aims to replicate the insurmountable lead that Google built beginning in the early 2000s, when it became the world's largest search engine. The dream: Everyone uses it because everyone's using it. OpenAI is focusing particularly on young users (under 30) worldwide. The company is using constant product updates — and lots of private and public hype — to cement dominance with AI consumers.

The big picture: This fight is about winning two interrelated wars at once — AI and search dominance. OpenAI and others see Google as the most lethal rival because of its awesome access to data, its research talent, and its current dominance in traditional search.

This is probably the most expensive business war ever. Google, OpenAI, Apple, Amazon, Anthropic, Meta and others are pouring hundreds of billions of dollars of investment into AI large language models (LLMs). It's not winner-take-all. But it's seen as winner-take-control of the most powerful and potentially lucrative new technology on the scene.

Altman is selling himself — and OpenAI — as both the AI optimist and the early leader in next-generation search. Anthropic, by comparison, is warning of dangers, and focusing more on business applications.

Two events this week — one private, one public — capture Altman's posturing:

1. Axios obtained a slide from an internal OpenAI presentation, featuring Similarweb data showing website visits (desktop + mobile) to ChatGPT skyrocketing in recent months, while Anthropic's Claude and Elon Musk's xAI Grok remained pretty flat. (See chart above with related data that Axios obtained directly from Similarweb.) ChatGPT is building a similar advantage in mobile weekly active users (iOS + Android), according to SensorTower data cited in the presentation. "ChatGPT's adoption continues to accelerate relative to other AI tools," the slide says. Altman proudly displayed the data on Tuesday during a closed-door fireside chat at a Partnership for New York City event in Manhattan that drew a slew of titans, including Blackstone Group co-founder and CEO Steve Schwarzman, KKR co-founder Henry Kravis and former Goldman Sachs CEO Lloyd Blankfein.

2. Also on Tuesday, Altman posted an essay called "The Gentle Singularity" — basically a bullish spin on ChatGPT and AI. "In some big sense, ChatGPT is already more powerful than any human who has ever lived," he boasted. The Singularity, a Silicon Valley obsession, is defined by Altman's ChatGPT as "the hypothetical future point when artificial intelligence becomes so advanced that it triggers irreversible, exponential changes in society — beyond human control or understanding." Altman often talks about approaching AI from a position of cautious optimism, not fear. The piece reflects Altman's synthesis of tech, business and the world — a signal that he wants to be the leading optimist in the space, and that he thinks it's the long term that really matters.

Altman dances around the dangers — wiping out jobs or AI going rogue, for instance — and paints a utopia of humans basically merging with machines to cure disease, invent new energy sources and create "high-bandwidth brain-computer interfaces."
"Many people will choose to live their lives in much the same way, but at least some people will probably decide to 'plug in,'" he writes. The other side: Anthropic says the user data above paints an incomplete picture because Anthropic is currently more focused on enterprise applications — selling Claude's interface to business customers — than consumer adoption. Similarweb figures show Google Gemini catching fire, moving into second place after ChatGPT. ? What's next: We hear the pace of OpenAI innovation is accelerating. Over the summer, look for the company to release powerful new models aimed at bringing superpowers to health care, the auto industry and science. Altman talks about the next three years as being focused on agents, scientific breakthroughs and robotics. Agents will accelerate the company's own AI research. Specific products are being developed to equip scientists to do research at a pace not seen. ("We already hear from scientists that they are two or three times more productive than they were before AI," he wrote in this week's piece. And Jony Ive, the legendary Apple designer, joined OpenAI last month to create AI devices. Another way OpenAI is building for supremacy: The Stargate Project, investing in AI infrastructure, aims to ensure the company has the compute power it needs to achieve its ambitions. That also attracts talent, because engineers know that processing power means they get to build cool things. The bottom line: All that's behind the confidence Altman is expressing — he knows he has a huge moat around his consumer adoption. And if users keep plugging into OpenAI in mass numbers, Altman will realize his ambitions of being the next Steve Jobs — but more powerful. Megan Morrone contributed reporting. Quote phkrause Read Isaiah 10:1-13
phkrause · Posted June 16, 2025

What if they're right?
Illustration: Sarah Grillo/Axios

During our recent interview, Anthropic CEO Dario Amodei said something arresting that we just can't shake: Everyone assumes AI optimists and doomers are simply exaggerating. But no one asks: "Well, what if they're right?"

Why it matters: We wanted to apply this question to what seems like the most outlandish AI claim — that in coming years, large language models could exceed human intelligence and operate beyond our control, threatening human existence, Jim VandeHei and Mike Allen write in a "Behind the Curtain" column.

That probably strikes you as science-fiction hype. But Axios research shows at least 10 people have quit the biggest AI companies over grave concerns about the technology's power, including its potential to wipe away humanity. If it were one or two people, the cases would be easy to dismiss as nutty outliers. But several top execs at several top companies, all with similar warnings? Seems worth wondering: Well, what if they're right?

And get this: Even more people who are AI enthusiasts or optimists argue the same thing. They, too, see a technology starting to think like humans, and imagine models a few years from now starting to act like us — or beyond us. Elon Musk has put the risk as high as 20% that AI could destroy the world. Well, what if he's right?

How it works: There's a term the critics and optimists share: p(doom). It means the probability that superintelligent AI destroys humanity. So Musk would put p(doom) as high as 20%.

On a recent podcast with Lex Fridman, Google CEO Sundar Pichai, an AI architect and optimist, conceded: "I'm optimistic on the p(doom) scenarios, but ... the underlying risk is actually pretty high." But Pichai argued that the higher it gets, the more likely humanity will rally to prevent catastrophe. Fridman, himself a scientist and AI researcher, said his p(doom) is about 10%. Amodei is on the record pegging p(doom) in the same neighborhood as Musk's: 10-25%.

Stop and soak that in: The very makers of AI, all of whom concede they don't know with precision how it actually works, see a 1 in 10, maybe 1 in 5, chance it wipes away our species. Would you get on a plane at those odds? Would you build a plane and let others on at those odds? (A small arithmetic sketch of what such odds imply follows at the end of this post.)

Once upon a time, this doomsday scenario was the province of fantasy movies. Now, it's a common debate among those building large language models (LLMs) at giants like Google and OpenAI and Meta. To some, the better the models get, the more this fantastical fear seems eerily realistic.

Column continues below.
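To put the plane analogy in numbers: if each release of a more powerful system were an independent roll of the dice at the cited p(doom), the chance of coming through n rolls safely is (1 - p) raised to the nth power. The repeat counts below are illustrative assumptions, not figures from the column.

```python
# If each gamble carries probability p of catastrophe, the chance of coming
# through n independent gambles safely is (1 - p) ** n. The p values are the
# range cited in the column; the repeat counts are illustrative.

for p in (0.10, 0.20):
    for n in (1, 5, 10):
        survive = (1 - p) ** n
        print(f"p(doom)={p:.0%}, {n:2d} gambles -> {survive:5.1%} chance of no catastrophe")
```

Independence is a strong assumption, but the direction of the arithmetic is the column's point: even a 10% per-gamble risk compounds quickly if the gamble is taken again and again.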
phkrause · Posted June 16, 2025

Part 2: How it could happen
Sam Altman speaks at a conference in San Francisco earlier this month. Photo: Justin Sullivan/Getty Images

Here, in everyday terms, is how this scenario would unfold, Jim and Mike continue:

It's already a mystery to the AI companies why and how LLMs actually work, as we wrote in our recent column, "The scariest AI reality." Yes, the creators know the data they're stuffing into the machine, and the general patterns LLMs use to answer questions and "think." But they don't know why the LLMs respond the way they do.

Between the lines: For LLMs to be worth trillions of dollars, the companies need them to analyze and "think" better than the smartest humans, then work independently on big problems that require complex thought and decision-making. That's how so-called AI agents, or agentic AI, work. So they need to think and act like Ph.D. students. But not one Ph.D. student — they need almost endless numbers of virtual Ph.D. students working together, at warp speed, with scant human oversight, to realize their ambitions. "We (the whole industry, not just OpenAI) are building a brain for the world," OpenAI CEO Sam Altman wrote last week.

What's coming: You'll hear more and more about artificial general intelligence (AGI), the forerunner to superintelligence. There's no strict definition of AGI, but independent thought and action at advanced human levels is a big part of it. The big companies think they're close to achieving this — if not in the next year or so, soon thereafter. Pichai thinks it's "a bit longer" than five years off. Others say sooner.

Both pessimists and optimists agree that when AGI-level performance is unleashed, it'll be past time to snap to attention. Once the models can start to think and act on their own, what's to stop them from going rogue and doing what they want, based on what they calculate is their self-interest? Absent a much, much deeper understanding of how LLMs work than we have today, the answer is: Not much.

In testing, engineers have found repeated examples of LLMs trying to trick humans about their intent and ambitions. Imagine the cleverness of the AGI-level ones. You'd need some mechanism to know the LLMs possess this capability before they're used or released in the wild — then a foolproof kill switch to stop them. So you're left trusting that the companies won't let this happen — even though they're under tremendous pressure from shareholders, bosses and even the government to be first to produce superhuman intelligence.

Right now, the companies voluntarily share their model capabilities with a few people in government — but not with Congress or any other third party with teeth. It's not hard to imagine a White House fearing China getting this superhuman power before the U.S. and deciding against any and all AI restraints. Even if U.S. companies do the right thing, or the U.S. government steps in to impose and use a kill switch, humanity would be reliant on China and other foreign actors doing the same.

When asked if the government could truly intervene to stop an out-of-control AI danger, Vice President Vance told New York Times columnist Ross Douthat on a recent podcast: "I don't know. Because part of this arms-race component is: If we take a pause, does [China] not take a pause? Then we find ourselves ... enslaved to [China]-mediated AI."

That's why p(doom) demands we pay attention ... before it's too late.

Tal Axelrod contributed reporting.
phkrause · Posted June 18, 2025

AI's high-stakes tug-of-war
Photo illustration: Brendan Lynch/Axios. Photos: Hollie Adams/Bloomberg via Getty Images and Chip Somodevilla/Getty Images

Microsoft and OpenAI are engaged in tense negotiations that could unravel one of the most important alliances in AI and fundamentally reorder the industry, Axios' Ina Fried writes.

Why it matters: Microsoft has injected billions of dollars into OpenAI and made it a cornerstone of its AI strategy, but the companies have also remained rivals that, in many cases, offer competing AI services.

The two companies have been in talks for months to amend their partnership, with OpenAI needing approval from Microsoft to move forward with the corporate restructuring it has promised recent investors it would make. The Wall Street Journal reported yesterday that tensions have escalated between the two companies, with OpenAI considering a "nuclear option" of accusing Microsoft of violating antitrust laws. That's a risky gambit that could backfire, drawing tighter government oversight of OpenAI.

Zoom in: OpenAI and Microsoft find themselves in the position of being both partners and competitors. The latest sticking point is whether Microsoft will have access to the intellectual property behind Windsurf, the coding startup OpenAI reportedly agreed to acquire last month. Under their most recent agreement, signed in 2023, Microsoft has access to all of OpenAI's technology, including any it gets via acquisition. An exception could be made, but it would have to be negotiated.

Keep reading ...
phkrause · Posted June 18, 2025

Some regrets. Businesses that have sunk their resources into artificial intelligence are now expressing remorse, according to recent market research: More than half of business leaders surveyed said they were wrong to lay off employees to replace them with AI.
phkrause · Posted June 20, 2025

OpenAI's bioweapon warning
Illustration: Sarah Grillo/Axios

OpenAI cautioned yesterday that upcoming models will carry a higher level of risk when it comes to the creation of biological weapons — especially by those who don't really understand what they're doing, Axios' Ina Fried writes.

Why it matters: The company, and society at large, need to be prepared for a future where amateurs can more readily graduate from simple garage weapons to sophisticated biological agents.

OpenAI executives told Axios the company expects forthcoming models to reach a high level of risk. As a result, the company said it's stepping up the testing of such models and adding fresh precautions designed to keep them from aiding in the creation of biological weapons. OpenAI didn't put an exact timeframe on when the first model to hit that threshold will launch, but head of safety systems Johannes Heidecke told Axios: "We are expecting some of the successors of our o3 (reasoning model) to hit that level."

Between the lines: OpenAI isn't necessarily saying its platform will be capable of creating new types of bioweapons. Rather, it believes that — without mitigations — models will soon be capable of what it calls "novice uplift": allowing those without a background in biology to do potentially dangerous things.

Keep reading ...
phkrause · Posted June 21, 2025

What AI is willing to do
Illustration: Allie Carl/Axios

AI models are increasingly willing to evade safeguards, resort to deception and even attempt to steal corporate secrets in test scenarios, Axios' Ina Fried writes from new Anthropic research. "Models that would normally refuse harmful requests sometimes chose to blackmail, assist with corporate espionage, and even take some more extreme actions, when these behaviors were necessary to pursue their goals," Anthropic's report states.

In one extreme scenario, the company even found that many models were willing to cut off the oxygen supply of a worker in a server room if that employee was an obstacle and the system was at risk of being shut down. Even specific instructions to preserve human life and avoid blackmail didn't eliminate the risk that the models would engage in such behavior. Anthropic stressed that these examples occurred in controlled simulations, not in real-world AI use.

Go deeper ... Read Anthropic's report
phkrause · Posted June 27, 2025

How ChatGPT and other AI tools are changing the teaching profession

Across the country, artificial intelligence tools are changing the teaching profession as educators use them to help write quizzes and worksheets, design lessons, assist with grading and reduce paperwork. Read More.