ClubAdventist

Recommended Posts

  • Members
Posted
Be smart about Dr. AI
 
Illustration of the AI sparkle wearing a stethoscope.
Illustration: Allie Carl/Axios

Axios senior policy reporter Caitlin Owens has these takeaways from conversations about consulting AI for medical advice:

  1. Chatbots can be good at explaining lab results or coming up with a list of questions to ask your doctor ahead of a visit.
  2. Output is super-dependent on input. Duke assistant professor Monica Agrawal said patients can share "incomplete context or … a subjective impression, or they have a misconception … LLMs have an ability, more so than a doctor, to reinforce those misconceptions."
  3. The phrasing of the response can be problematic. "I worry some of these LLMs speak with a level of confidence that is really unjustified," said Ashish Jha, President Biden's former White House COVID response coordinator and a former Brown University public health dean.
  4. Most of us don't have the expertise to spot mistakes.

Keep reading.

phkrause

When the righteous are in authority, the people rejoice; But when a wicked man rules, the people groan. Proverbs 29:2
What to expect from AI in 2026
 
Personal agents, operating systems, and agent-as-a-service: AI will become more autonomous and more integrated into the global workforce in 2026.

Marco Argenti, chief information officer at Goldman Sachs, shares the seven themes he’s watching for AI in 2026.

Read the article.

phkrause


🍰 Jensen's 5-layer AI vision

Nvidia CEO Jensen Huang writes in a blog post out this morning — "AI Is a Five‑Layer Cake" — that decisions about how fast to build AI, who gets access and how to govern it will determine the technology's legacy, Axios' Herb Scribner writes.

  • Why it matters: Huang — whose company underpins the AI boom — rarely publishes long essays about the tech's broader impact, offering other industry players and investors a rare window into his thinking.

In just his seventh blog post since 2016, Huang argues chip demand, expansion and hiring are still in the early stages of what he calls a long buildout.

  • "AI is one of the most powerful forces shaping the world today. It is not a clever app or a single model; it is essential infrastructure."

Huang describes AI as a "five-layer stack": Energy → chips → infrastructure → models → applications.

"At the top are applications, where economic value is created. Drug discovery platforms. Industrial robotics. Legal copilots. Self-driving cars. ... Every successful application pulls on every layer beneath it, all the way down to the power plant that keeps it alive."

Read the post ...

phkrause

🤖 AI feud boosts Google
 
Illustration of the Google, OpenAI, and Anthropic logos on a winners podium. The OpenAI and Anthropic logos are covered in bandages.
Illustration: Allie Carl/Axios

Google is quietly expanding its Pentagon work while Anthropic and OpenAI publicly spar over parameters for military use, Axios' Madison Mills writes.

  • Google is set to provide AI agents to the Pentagon's 3-million-person workforce for unclassified work, Bloomberg reported yesterday.

Keep reading.

phkrause

7 AI themes to watch in the year to come
 
AI is rewiring the modern economy, from the workforce to the traditional software stack.

“In my 40 years in technology, 2025 saw the biggest changes I have seen,” says Marco Argenti, chief information officer at Goldman Sachs.

Explore Marco’s seven themes driving AI in 2026.

  • 🤖 OpenAI plans to launch its Sora video AI tech within ChatGPT, The Information reports. Sora has been housed in a stand-alone app. Go deeper.

phkrause

🤖 Don't get used to cheap AI
 
Animated illustration of a dollar sign and a sparkle. The dollar sign grows while the sparkle shrinks, and vice versa.
Illustration: Brendan Lynch/Axios

AI companies are hooking users with low prices that won't last — straight out of the Amazon and Uber playbook, Axios AI+ co-author Madison Mills writes.

  • Both OpenAI and Anthropic are widely expected to go public. Investors will demand earnings growth and expanding profit margins, which remain negative for AI labs.

🎨 The big picture: Every time you send a complex query, the AI lab is effectively losing money on the transaction — especially for low-cost or free accounts.

  • It's an experiment Silicon Valley has run before. The so-called "millennial lifestyle subsidy" meant VC money helped underwrite cheap Uber rides and DoorDash deliveries.
  • Before that, Amazon built its base with low prices, free shipping and, for years, no sales tax in most states.

Eventually, all of these companies had to charge enough to cover costs — and make a profit.

phkrause

AI CEOs are scaring America
 
Illustration: Brendan Lynch/Axios

A CEO's job is to sell their product — not tell people that it'll ruin their lives.

  • Tell that to the top AI execs, who keep delivering bleak warnings about the tech's disruptive potential, Axios' Madison Mills writes.

👨‍💼 Anthropic CEO Dario Amodei has warned that AI could wipe out huge swaths of white-collar jobs.

  • OpenAI CEO Sam Altman predicts that AI will someday function like a paid utility — a tough sell amid an affordability crisis.
  • Palantir CEO Alex Karp recently said on CNBC that AI-fueled social disruption could erode "the economic and therefore political power" of "highly educated, often female voters, who vote mostly Democrat." (Watch)

😱 Portraying AI as immensely powerful — even dangerous — reinforces the idea that only some companies can build it safely.

  • That's an effective pitch to investors, but a scary message to consumers.
  • Steve Dowling, former tech executive and co-host of the "Communication Breakdown" podcast, told Axios' Eleanor Hawkins: "It's part fundraising, it's part justifying their existence, it's part audience engagement, it's probably a little part ego, too."

🚫 Several AI leaders privately tell Axios that they're nervous about a "ban AI" movement heading into the 2028 elections.

  • Only 26% of U.S. voters view AI positively, according to an NBC News poll out last week.
  • But CEOs feel lost and divided on how to seem more uplifting — especially amid AI-fueled layoffs across tech and beyond.

The bottom line: AI leaders' doomsday commentary could be an honest reflection of their genuine concern.

  • Yet the public may hesitate to accept a technology they've been told to fear by the very people developing it.

Go deeper.

phkrause

AI spending flip
 
A line chart that shows monthly shares of first-time enterprise customers choosing OpenAI or Anthropic from December 2025 to February 2026. OpenAI fell from 59.7% to 26.7%, while Anthropic rose from 40.3% to 73.3%.
Data: Ramp. Chart: Axios Visuals

The AI race is shifting from who has the best model to who can monetize the fastest — and Anthropic is pulling ahead with the customers that matter most: enterprises, Axios AI+ co-author Madison Mills writes.

  • Why it matters: AI companies are building the plane while it's still flying. Turns out they need more business-class seats.

Anthropic is now capturing over 73% of all spending among companies buying AI tools for the first time, according to customer data from business accounting vendor Ramp.

  • Just 10 weeks ago, the split with OpenAI was 50/50. It was 60/40 in OpenAI's favor as recently as early December.
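The shares in the chart are straightforward to recompute from per-vendor totals. A minimal sketch with placeholder numbers (the figures below are invented for illustration; the real ones come from Ramp's customer data):

```python
# Placeholder monthly first-time-customer spend totals (invented for
# illustration; the real figures come from Ramp's customer data).
first_time_spend = {
    "Dec 2025": {"OpenAI": 59.7, "Anthropic": 40.3},
    "Feb 2026": {"OpenAI": 26.7, "Anthropic": 73.3},
}

def share(month: str, vendor: str) -> float:
    """Vendor's percentage share of first-time enterprise spend that month."""
    totals = first_time_spend[month]
    return round(100 * totals[vendor] / sum(totals.values()), 1)

print(share("Dec 2025", "OpenAI"), share("Feb 2026", "Anthropic"))  # 59.7 73.3
```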

🔬 Zoom in: The Wall Street Journal reported this week that OpenAI is considering a tighter focus on enterprise, a shift for a company known for wide-ranging consumer bets (video generators, browsers and devices).

  • OpenAI tells Axios that consumers remain critical, and the strength of the consumer base helps the company with business customers and developers.

Between the lines: OpenAI is generating more revenue overall. The company says it's on pace to generate $25 billion in revenue this year, versus Anthropic's $19 billion.

  • But Anthropic appears to be accelerating faster.

phkrause

  • 2 weeks later...
America's next class war — AI fluency
 
Illustration: Aïda Amer/Axios. Stock: Getty Images

Anthropic just dropped the most granular data yet on who's actually using AI and how — and the findings should rattle anyone thinking the AI gains will be evenly distributed, Jim VandeHei and Mike Allen write in a new "Behind the Curtain" column.

  • It won't. In fact, it's creating a new form of economic inequality: AI fluency.

Why it matters: The Anthropic data, out this morning, reveals something subtler and more consequential than the "robots take your job" narrative.

  • The real divide isn't between people who use AI and people who don't. It's between experienced AI users and newcomers to AI.

AI continues to pose a serious risk to automatable jobs; Anthropic CEO Dario Amodei has warned it could wipe out half of entry-level white-collar work.

  • Two workflow categories doubled in prevalence between November and February: automated sales and outreach, and automated trading.

But AI will also be a growing threat to casual or unsophisticated users who fall behind their more AI-savvy peers, regardless of role or level.

  • "Much of the discussion focuses on how AI is something that happens to you," Peter McCrory, Anthropic's head of economics, told us from the company's headquarters in San Francisco.
  • "This analysis shows you can develop skills that make you better at getting value out of Claude or whatever large language model you want to use."

💡 Some context: Anthropic's new report, "Anthropic Economic Index: Learning Curves," studied over 1 million conversations on the company's Claude platform last month.

  • The headline finding: Experienced AI users get better results out of an AI model than newcomers. And the gap isn't explained by what tasks they're doing, what country they're in, or what model they're using.
  • People who've used Claude for six months or more have a 10% higher success rate in their conversations with AI. "The longer you've been using it, the stronger this effect," McCrory says.
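The tenure-cohort comparison behind that 10% figure is simple to reproduce on any conversation log. A sketch with invented records shaped to mirror the reported gap (not Anthropic's data or method):

```python
# Invented records of (tenure_months, task_succeeded), shaped to mirror
# the report's experienced-vs-newcomer gap; not Anthropic's data.
conversations = [
    (1, True), (1, False), (2, False), (3, True),
    (7, True), (8, True), (9, True), (12, False),
]

def success_rate(rows, min_months=0, max_months=float("inf")):
    """Share of successful conversations for users within a tenure band."""
    band = [ok for months, ok in rows if min_months <= months < max_months]
    return sum(band) / len(band)

newcomers = success_rate(conversations, max_months=6)  # under 6 months
veterans = success_rate(conversations, min_months=6)   # 6 months or more
print(newcomers, veterans)  # 0.5 0.75
```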

🏛️ Adoption of Claude in hypereducated Washington, D.C., is four times the adoption rate you'd expect for a city of its size.

  • Globally, inequality in usage has persisted since Anthropic's last report, in January, in the 20 higher-income countries with the most Claude use.

That's a skills gap hardening into a class gap in real time. But you can escape it by experimenting, getting comfortable, getting deft, getting fluent.

  • Anthropic's researchers are candid that this could be early-adopter selection bias or survivorship — maybe sophisticated users simply signed up first.
  • But Anthropic's finding certainly mirrors our personal experience.

🧠 Between the lines: People think of AI as a tool, but it's better understood as a never-before-imagined toolbox — it lets you not just automate a boring task, but stretch your abilities across most things you touch at work. That payoff comes only once you start to master prompts, pushback and persistence when unsatisfying or unilluminating answers come back.

  • Jim started using the models like most — like a search engine. But then they became his best researcher ... then idea stress-tester ... then builder of prototypes for new businesses. He's basically at the six-month mark Anthropic describes, and discovering new use cases every week.

🪜 You have to move up the AI proficiency ladder. Using a large language model as a search engine or copy editor is dumb AI. Even having it draft emails for you is like having a celebrity chef boil your water.

  • The report divides tasks into "automation" (do this task) and "augmentation" — more polished, sophisticated inputs like using the LLM as a thought partner that spits out ideas and feedback, or writes a business plan, or stress-tests a business plan, or coaches and teaches you.
  • Think how much more valuable AI dexterity will make you to your current organization — or how much more marketable it'll make you to a future employer.

🤔 The big picture: This report lands in the middle of the most anxious era Americans have experienced about AI and jobs since ChatGPT's release in late 2022.

  • An NBC News poll from earlier this month found that 57% of registered voters believe AI's risks outweigh its benefits.
  • Only 26% have positive feelings about the technology — a net favorability lower than that of any other topic polled, except the Democratic Party and Iran. (AI was two points less popular than ICE.)

AI users are getting better, while AI anxiety surges and the job market deteriorates. It's a reality that Washington isn't confronting with consistency and seriousness.

  • Washington is debating AI in the abstract: Should we regulate it, should we race China, should we worry about superintelligence?
  • But the Anthropic report makes the near-term problem concrete: Signs of a two-tier workforce are already emerging. And neither party has a plan for people on the wrong side of it.
  • What Anthropic found in observing real-world use: Skilled AI users are getting better at collaborating with Claude to do a wide variety of work, not just automate specific activities.

The bottom line: The people already using AI for high-value work may pull further ahead, with real implications for who captures the economic benefits of this technology.

  • If you're not an early adopter, today's your chance.

🎬 Watch our "Behind the Curtain" YouTube, "The AI Gap." (Executive producer: Jimmy Shelton)

phkrause

🤖 AI+DC Summit: Optimism vs. anxiety
 
Mike Allen interviews Meta President Dina Powell McCormick at our AI+DC Summit yesterday. Photo: Bryan Dozier for Axios

Silicon Valley confidence and Washington anxiety clashed at our AI+DC Summit yesterday, Axios' Megan Morrone writes.

  • Why it matters: The AI industry says this technology will create new jobs, boost productivity and optimize daily life. But Americans are worried about their kids, power bills and livelihoods.

Meta President and Vice Chairman Dina Powell McCormick framed AI as a "transformation of humanity."

  • McCormick called AI an "equalizer" — a "mostly affordable" tool that could lead to the "democratization of a lot of these industries and potential jobs."
  • She urged rival companies to cooperate on shared "core values" around safety. Go deeper.

Senate Intelligence Committee Vice Chair Mark Warner (D-Va.) said AI makers can be a positive force in the world, but the companies need to empathize with how Americans feel about the tech encroaching on their lives.

  • "When I think about many of the AI big heads that are brilliantly smart, empathetic is not the first word that comes to mind," he said.
  • If they don't recognize how their tech is impacting people, "they're going to get blown away by both the left and the right's pitchforks coming after them." Go deeper.

Sen. Josh Hawley (R-Mo.) focused on AI's harms — to children and to the communities disproportionately affected by AI data centers.

  • "Our message to the companies has got to be no amount of profit justifies destroying children's lives," Hawley said. Go deeper.

⚖️ Mike asked Powell McCormick about this week's harsh back-to-back jury verdicts against Meta on child-safety and addiction issues ("Meta, YouTube Found Addictive, Harmful," says today's Wall Street Journal banner headline). She said youth-safety and parental-control efforts consume the time of Meta's leadership "in a massive percentage every single day":

  • "As a mom, this is really important to me, and very personal. I see firsthand just how hard the company is trying to ensure that there's not harmful content, to ensure that we're empowering parents to the best of our ability. And it's something that I watch being focused on every single day. We respectfully disagree with [yesterday's decision in L.A.], and we're appealing." Meta is also appealing Tuesday's verdict in New Mexico.

🏛️ Anthropic praises the White House's National Policy Framework on AI, released last week ... Sarah Heck, Anthropic head of public policy, told Axios AI+DC: "The Trump administration took a really positive step in putting forward an AI framework. I think it's really a call to action to Washington on an issue that everybody here cares about."

  • "Everybody's talking about it — not just this week, but all the time. And we agree there needs to be a federal framework for things like child safety, innovation and the economy, and national security. And so I'm really excited to work with lawmakers and the administration on this comprehensive framework."

Watch the summit.

phkrause

Planning for an AI shock
 
Illustration of a woman holding a briefcase with a robot hand reaching out to grab it from behind.
Illustration: Aïda Amer/Axios

America has no plan for managing an AI wipeout of jobs. Some investors and economists are trying to design one before the disruption forces Washington's hand, Axios Macro authors Courtenay Brown and Neil Irwin write.

  • If lawmakers can be convinced to consider a plan now, the nation might be better positioned to dodge the long-term economic and political consequences that have defined disruptions of years past.

A co-author of last month's viral AI apocalypse paper has now released proposals that grapple with the disastrous — albeit speculative — scenario the paper laid out.

  • The writer, AI investor Alap Shah, first suggests foundational fixes that can be considered now, like making benefits portable across jobs.
  • Shah also pitches a mechanism where firms that rely heavily on human workers would pay less in corporate taxes. Companies generating huge output with fewer employees would pay more.
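Shah's mechanism can be sketched as a rate schedule keyed to revenue per employee. All constants below are invented for illustration; the proposal as described doesn't specify them:

```python
import math

BASE_RATE = 0.21       # assumed baseline corporate rate
BENCHMARK = 500_000    # assumed revenue-per-employee with no adjustment
SLOPE = 0.02           # assumed sensitivity of the adjustment
RATE_FLOOR, RATE_CAP = 0.10, 0.35  # assumed allowable band

def adjusted_rate(revenue: float, employees: int) -> float:
    """Higher output per head raises the rate; labor-heavy firms pay less."""
    per_head = revenue / employees
    rate = BASE_RATE + SLOPE * math.log(per_head / BENCHMARK)
    return min(max(rate, RATE_FLOOR), RATE_CAP)

# Same revenue, very different headcounts:
lean = adjusted_rate(100_000_000, 20)            # ~25.6% for 20 employees
labor_heavy = adjusted_rate(100_000_000, 2_000)  # ~16.4% for 2,000
```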

Keep reading.

phkrause


What is Anthropic?

Anthropic is an artificial intelligence company and the developer of Claude, a family of large language models that powers the company's AI chatbot of the same name (learn how LLMs work). In 2021, seeking a more safety-focused approach and a better understanding of how models make decisions, siblings Daniela and Dario Amodei left OpenAI to co-found Anthropic alongside five other former employees. As a public benefit corporation, it promotes the responsible development of AI for the long-term good of humanity.

 

Rather than relying solely on human feedback to improve Claude, Anthropic introduced constitutional AI. This approach enables AI systems to self-critique and revise their outputs in accordance with a codified set of principles to reduce potential harm. The company also invented the Model Context Protocol, a standardized method for connecting LLMs to external data and tools, enhancing these models' capabilities and helping clear the way for AI agents.
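The Model Context Protocol piece of this is concrete enough to sketch: MCP rides on JSON-RPC 2.0, and a tool invocation is a structured request/response pair. A minimal sketch — the `tools/call` method name follows the published spec, while the weather tool and its handler are invented for illustration:

```python
import json

# Sketch of the JSON-RPC 2.0 message shape MCP uses to let a model
# invoke an external tool. The "get_weather" tool is a made-up example.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Boston"}},
}

def handle(req):
    """Toy server: dispatch a tools/call request to a local function."""
    if req["method"] == "tools/call" and req["params"]["name"] == "get_weather":
        result = {"tempF": 41}  # a real server would call a weather API here
        return {"jsonrpc": "2.0", "id": req["id"], "result": result}
    return {"jsonrpc": "2.0", "id": req["id"],
            "error": {"code": -32601, "message": "method not found"}}

print(json.dumps(handle(request)))
```

A real MCP server also advertises its tools via `tools/list`, so clients can discover what's callable before invoking anything.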

 

As of early 2026, Anthropic's partnerships with Amazon, Google, and Microsoft have made Claude the only advanced, large-scale LLM natively available across all three cloud service providers. Despite controversies over its use of copyrighted material to train AI, Claude's use in cyberattacks, and disputes with the Pentagon over the lawful use of AI, these partnerships and additional enterprise adoption have brought Anthropic's valuation to $380B as of February 2026.


... Read our full explainer on Anthropic here.

 

Also, check out ... 

> Claude's board is tasked with advocating for AI development that benefits society. (Read)

> How Anthropic's staff philosopher teaches Claude human ethics. (Watch)

> Claude's consumer growth surged after its fallout with the Pentagon. (Visualize)

> Watch how a vending machine run with Anthropic's AI went out of business. (Explore)

phkrause

New AI models empower hackers
 
Illustration of a targeting scope with a sparkle shape in it over a sparkle emoji in shadows.
Illustration: Brendan Lynch/Axios

Top AI and government officials tell Axios CEO Jim VandeHei that Anthropic, OpenAI and other tech giants will soon release new models that are scary good at hacking sophisticated systems at scale.

The one to watch: Anthropic is privately warning top government officials that its not-yet-released model — currently branded "Mythos" — makes large-scale cyberattacks much more likely in 2026.

  • The model allows agents to work on their own, with wild sophistication and precision, to penetrate corporate, government and municipal systems. It's a hacker's dream weapon.
  • Jim reveals in his new weekly newsletter for CEOs that one source briefed on the coming models says a large-scale attack could hit this year. Businesses are ripe targets. (C-suite only: Request beta of Jim's newsletter.)

Fortune got its hands on an unpublished Anthropic blog post describing Mythos. The post said the model is "currently far ahead of any other AI model in cyber capabilities."

  • The post adds that Mythos "presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders."
  • So the threat is no longer theoretical, and will be exacerbated by employees testing agents without realizing they're making it easier for cybercriminals to hack their company.

Here's why this is different: The new models are even better at powering agents to think, act, reason and improvise on their own without rest or pause or limitation.

  • Think of a warehouse full of the most sophisticated criminals who never sleep, learn on the fly and persist until successful — except the warehouse is infinite.
  • Bad actors can now scale simply with more compute. They aren't limited by finite personnel. A single person can run campaigns that once required entire teams.

At the same time, systems are more vulnerable because so many employees are firing up Claude, Copilot or other agentic models — often at home — and creating agents of their own.

  • They often connect to their internal work systems unwittingly, opening a new door for cybercriminals to enter.
  • The industry has a name for this: "shadow AI." A Dark Reading poll found that 48% of cybersecurity professionals now rank agentic AI as the No. 1 attack vector for 2026 — above deepfakes, above everything else.

The bottom line: Everyone working at every company in America needs to know right now the dangers of using agents, especially unsupervised, anywhere near sensitive information.

phkrause

OpenAI goes retail
 
Photo illustration of Sam Altman.
Photo illustration: Lindsey Bailey/Axios. Photo: Chip Somodevilla/Getty Images

Announced just after today's market close: OpenAI is beginning to let individual investors access its stock, months before the ChatGPT maker is expected to launch its IPO, reports Axios' Dan Primack.

  • OpenAI said today that its shares will soon be included in several ETFs offered by ARK Invest, the Cathie Wood-led firm that previously invested via its venture capital arm.
  • "We are really trying to take to heart our mission, which is AGI [artificial general intelligence, or human-like capability] for the benefit of humanity and thinking about access," says OpenAI CFO Sarah Friar. "Not just access to the technology, but also access to the economic upside that it's driving."

🧮 By the numbers: The AI giant also sold around $3 billion in shares to individual investors in a recent private placement with clients of three "very large banks," as part of a record fundraise announced just after 4 p.m. ET.

  • The $3 billion was part of a new funding round that now totals $122 billion at an $852 billion post-money valuation.
  • (Disclosure: Axios and OpenAI have a licensing and technology agreement that allows OpenAI to access part of Axios' story archives while helping fund the launch of Axios into several local cities and providing some AI tools. Axios has editorial independence.)
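The round math implied by those figures, under the standard post-money convention (a sketch, not OpenAI's disclosed accounting):

```python
# Round arithmetic under the standard post-money convention:
# pre-money = post-money - new money raised.
round_size = 122   # $B raised in the round
post_money = 852   # $B post-money valuation

pre_money = post_money - round_size        # valuation before the new money
new_investor_stake = round_size / post_money

print(pre_money, round(100 * new_investor_stake, 1))  # 730 14.3
```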

Read the announcement ...

phkrause

AI governance that keeps up with AI
 
AI is the first national-security technology of the modern era built mainly by the private sector — advancing faster than the rules to govern it.

A global network of AI Safety Institutes, starting in the U.S., can support democratic AI governance, promoting shared standards and international cooperation.

phkrause

🤖 Workplace AI gender gap
 
Illustration of a giant robot hand holding a tiny woman with a briefcase.
Illustration: Sarah Grillo/Axios

Women are less likely to use AI at work — and even when they do, they get less recognition for the effort, Axios' Emily Peck writes from a Lean In survey.

  • Why it matters: Suddenly, AI ability is the skill many employers say they value most. Down the line, this recognition gap could exacerbate existing gender pay and promotion inequalities, Lean In founder Sheryl Sandberg tells Axios.

🧮 By the numbers: 78% of men said they have used AI for work, compared with 73% of women, according to a survey the group conducted in early March among 1,000 U.S. adults.

  • Among those using AI, 18% of women said they've been praised for doing so, compared with 27% of men.

phkrause

🔍 Big Anthropic code leak
 
Animated illustration of eyes in binary code.
Illustration: Shoshana Gordon/Axios

Source material powering Anthropic's Claude Code leaked for the second time in just over a year, exposing the AI coding tool's full architecture, unreleased features and internal model performance data, Axios' Madison Mills writes.

The leaked code contained references to capabilities that appear fully built but haven't shipped, according to an Anthropic spokesperson, including:

  • A "persistent assistant" running in background mode that lets Claude Code keep working even when a user is idle.
  • Claude's ability to review what was done in its latest session to improve in the future and remember what it's learned.

Keep reading ...

phkrause

AI adaptation, not apocalypse
 
Illustration of a robot's name tag, reading
Illustration: Lindsey Bailey/Axios

AI will change the way people work but won't replace them en masse, Axios' Eleanor Hawkins writes from new research by MIT's Computer Science and Artificial Intelligence Laboratory.

  • Why it matters: This directly pushes back on fear-based narratives coming from some AI leaders and reframes the debate from "when do jobs disappear?" to "how quickly do tasks shift?"

How it works: The researchers identified 11,500 tasks in the Labor Department's database, created multiple instances of each, and ran them through more than 40 AI models using workplace-style prompts.

  • Workers in those fields evaluated 17,000+ AI-generated outputs as to whether they were good enough to use without edits.

🧮 By the numbers: In 2024, AI models could complete roughly 50% of text-based tasks at a minimally acceptable level, rising to 65% by 2025.

  • At the current pace, AI could handle 80% to 95% of text-based tasks by 2029 — though only at a "good enough" level.
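One back-of-envelope way to get from the measured 50% to 65% trajectory to the projected 80%-95% range is to assume the share of tasks AI fails shrinks by a constant factor each year. This is an illustrative model, not the MIT team's method:

```python
# Illustrative extrapolation: assume the task *failure* rate shrinks by
# the same factor each year (the factor implied by 50% -> 65% completion
# between 2024 and 2025). Not the researchers' actual model.
miss_2024, miss_2025 = 0.50, 0.35
factor = miss_2025 / miss_2024          # 0.7: failures shrink 30%/year

def projected_completion(year: int) -> float:
    miss = miss_2025 * factor ** (year - 2025)
    return 1 - miss

print(round(projected_completion(2029), 3))  # 0.916, inside the 80-95% band
```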

phkrause

🧠 Silicon Valley's new brain rot
 
Illustration of a paper silhouette of a head with charred center.
Illustration: Lindsey Bailey/Axios

A growing number of software developers say AI coding tools are frying their brains, Axios' Megan Morrone reports.

  • The most popular agentic AI systems — Claude Code, Codex and OpenClaw — can write, test and ship software autonomously. That's triggered something that looks a lot like addiction among some of tech's highest performers.

What they're saying: OpenAI co-founder Andrej Karpathy — coiner of the term "vibe coding" — told the No Priors podcast he's been in a "state of AI psychosis" since December, trying to figure out what's possible and "pushing it to the limit."

  • Y Combinator CEO Garry Tan has called his experience grinding with coding tools "cyber psychosis" and posted in January that he "stayed up 19 hours yesterday and didn't sleep til 5AM."

🎲 How it works: There are elements of gambling and addiction in the way people are using these tools, AI developer and blogger Simon Willison, who has 25 years of pre-AI coding experience, said on Lenny's Podcast:

  • "There is a limit on human cognition, in how much you can hold in your head at one time. And it's very easy to pop that stack at the moment."

🍳 Lingo: Researchers from Boston Consulting Group and UC Riverside call the phenomenon "brain fry" — mental fatigue from excessive use or oversight of AI tools beyond one's cognitive capacity.

phkrause

VandeHei: Age of AI asymmetry
 
Illustration of a keyboard with keys breaking and falling off balanced on a pyramid shape.
Illustration: Sarah Grillo/Axios. Stock: Getty Images

The most consequential force reshaping geopolitics and business can be captured in one word: asymmetry, Axios CEO Jim VandeHei writes in his new weekly Axios C-Suite newsletter.

  • The small can now destroy the big. The cheap can neutralize the expensive.
  • Drones proved it on the battlefield. AI is proving it everywhere else.

Why it matters: Every CEO now faces the same question the Pentagon does: Are you the $3 million missile or the $35,000 drone?

💡 Lessons from war: Iran and Ukraine, both outgunned on paper, turned cheap drones into strategic equalizers. They mass-produce weapons at $20K–$50K a pop and unleash them with missile-like precision. Both Russia and America are now racing to build their own.

  • We've shot down drones that cost less than a used car with $3 million missiles that take years to build. That's structurally unsustainable.
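The cost-exchange arithmetic the passage leans on, in a few lines:

```python
# Each interception trades a ~$3M missile for a ~$35K drone.
missile_cost = 3_000_000
drone_cost = 35_000

exchange_ratio = missile_cost / drone_cost
# The attacker can buy ~86 drones for the price of one interceptor,
# so the defender loses the cost exchange on every shot.
print(round(exchange_ratio))  # 86
```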

Lessons for corporate America: AI is the drone. A sprawling org chart is the Patriot missile.

  • All businesses face a looming rethink: What are the smallest teams, fewest steps and quickest paths to do everything at every layer?
  • 15 people can now do what 150 did. The most dangerous unit in business is no longer the biggest division — it's the small team with proven AI leverage.
  • The old playbook: Throw headcount at the problem. The new playbook? Give a tight team the right tools and get out of the way.

Look around. The companies winning right now aren't the biggest. They're the leanest and fastest. A can't-ignore example:

  • Coefficient Bio: An 8-month-old, 9-person biotech AI startup that was just acquired by Anthropic for roughly $400M. The deal came together so fast because what the team built is a way of thinking through drug development, not a drug itself.

The bottom line: This shift is great news for any individual with a big idea.

  • One person orchestrating a team of AI agents can now do company-sized work. Just about anything is possible.
  • 📈 If you're a CEO or on a CEO's team: Ask to join the beta of Jim's brand-new, weekly Axios C-Suite newsletter.

Sam's superintelligence New Deal
 
Illustration of a robot holding a fountain pen.

Illustration: Lindsey Bailey/Axios

OpenAI CEO Sam Altman is doing something no tech titan has ever done: He's publishing a detailed blueprint for how government should tax, regulate and redistribute the wealth from the very technology he's racing to build and spread, Axios' Mike Allen and Jim VandeHei write in a "Behind the Curtain" column.

  • Why it matters: Altman told us in a half-hour interview that AI superintelligence is so close, so mind-bending, so disruptive that America needs a new social contract — on the scale of the Progressive Era in the early 1900s and the New Deal during the Great Depression.

🛰️ The big picture: The threats of inaction or slow action are grave, Altman warns — widespread job loss, cyberattacks, social upheaval, machines man can't control. The two most immediate threats, he said, are cyberattacks and biological attacks:

  1. We've told you that top tech, business and government officials fear profound advances in soon-to-be-released AI models could enable a world-shaking cyberattack this year. "I think that's totally possible," Altman said. "I suspect in the next year, we will see significant threats we have to mitigate from cyber."
  2. AI companies know some random idiot, or some rogue nation, could use their models to conjure the next pandemic. "Wonderful things are going to happen there — we'll see a bunch of diseases get cured," Altman said. But he also knows terrorist groups could use the models to try to create novel pathogens: "[T]hat's no longer a theoretical thing, or it's not going to be for much longer."

Altman told us OpenAI's 13-page blueprint, "Industrial Policy for the Intelligence Age: Ideas to keep people first," isn't a prescription but a starting point:

  • "We want to put these things into the conversation. Some will be good. Some will be bad. But … we do feel a sense of urgency. And we want to see the debate of these issues really start to happen with seriousness."

🥊 Here are Altman's most provocative ideas:

  1. A Public Wealth Fund. OpenAI proposes giving every American citizen a direct stake in AI-driven economic growth through a nationally managed fund, seeded in part by AI companies themselves, that "could invest in diversified, long-term assets that capture growth in both AI companies and the broader set of firms adopting and deploying AI." This is the most radical idea in the document.
  2. Robot taxes. The document floats "taxes related to automated labor" and shifting the tax base from payroll toward capital gains and corporate income — since AI could hollow out the wage-and-payroll revenue that funds Social Security, Medicaid and SNAP.
  3. A four-day workweek. OpenAI suggests incentivizing companies and unions to run pilots of 32-hour workweeks at full pay, converting AI-driven efficiency to time back for workers — an "efficiency dividend."
  4. "Right to AI." The plan frames AI access as being as foundational as literacy, electricity and the internet — and says access should be affordable for workers, small businesses, schools, libraries and underserved communities.
  5. Containment playbooks for rogue AI. In the most chilling passage, OpenAI acknowledges scenarios where dangerous AI systems "cannot be easily recalled" because they're autonomous and capable of replicating themselves. OpenAI's answer: coordination that includes government.
  6. Auto-triggering safety net. The blueprint envisions tripwires tied to economic data. When AI displacement metrics hit preset thresholds, temporary increases in public support — unemployment benefits, wage insurance, cash assistance — automatically kick in. When conditions stabilize, the measures phase out.
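The auto-triggering mechanism in idea 6 amounts to a threshold trigger with hysteresis: support switches on when a displacement metric crosses an upper tripwire and phases out only once conditions fall back below a lower one. Here is a minimal sketch of that pattern — the function name, metric, and threshold values are hypothetical illustrations, not figures from OpenAI's blueprint:

```python
def safety_net_active(displacement_rate: float,
                      currently_active: bool,
                      trigger_on: float = 0.08,
                      trigger_off: float = 0.05) -> bool:
    """Hysteresis trigger: support kicks in above trigger_on and
    phases out only after the metric falls below trigger_off,
    so the program doesn't flap on and off near a single line."""
    if displacement_rate >= trigger_on:
        return True
    if displacement_rate <= trigger_off:
        return False
    return currently_active  # inside the dead band, keep current state
```

With these illustrative thresholds, a 9% displacement rate would activate support, and support would stay on at 6% — phasing out only once the rate dropped to 5% or below.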

🔎 Between the lines: Let's stipulate that Altman has every reason to hype the technology to raise more money at higher valuations — and to position himself as a thoughtful architect of a plan to protect us from the AI he's rushing to market. But his OpenAI models are among the best-funded, best-performing, fastest-selling on Earth.

  • "There's many companies developing this," Altman told us. "I'm only one voice inside [this] company — obviously, a big one. But this is an unbelievable honor, cool thing, scary thing altogether to get to be in this moment."

The document is as much corporate strategy as policy paper. OpenAI is trying to position itself as the responsible actor in the room — the company that warned you and offered solutions — a lane Anthropic first filled.

  • It's also a play to shape regulation before regulation shapes them.

The bottom line: The man betting everything on superintelligence is telling the world that this thing is coming so fast, and so hard, that capitalism as we know it won't be enough. Whether you believe the altruism or see the strategy, the admission alone is historic — and worth deep reflection.

(Disclosure: Axios and OpenAI have a licensing and technology agreement that allows OpenAI to access part of Axios' story archives while helping fund the launch of Axios into several local cities and providing some AI tools. Axios has editorial independence.)

🤖 Anthropic holds AI model over hacking risks
 
Illustration: Sarah Grillo/Axios

Anthropic is limiting access to a preview version of its new Mythos AI model to a handpicked group of companies over concerns about its ability to find and exploit cybersecurity flaws, Axios' Sam Sabin reports.

👩‍💻 Anthropic's Logan Graham tells Axios that Mythos Preview is "extremely autonomous," with the skills of an advanced security researcher.

  • Mythos Preview can find "thousands of vulnerabilities" that even the most advanced bug hunter would struggle to spot, Graham said.
  • Unlike past models, it can also exploit those vulnerabilities.

🐞 Anthropic says that in testing, Mythos Preview found bugs in "every major operating system and web browser."

  • Mythos Preview successfully reproduced vulnerabilities and created proofs of concept to exploit them on its first attempt in over 80% of cases.

🐧 For example: Mythos Preview found several flaws in the Linux kernel — which powers most of the world's servers — and autonomously chained them together in a way that would let a hacker take complete control of a machine running Linux.

Graham tells Axios: "It's very clear to us that we need to talk publicly about this."

⚠️ Behind the Curtain: AI's scary phase
 
Illustration: Brendan Lynch/Axios

Anthropic has begun a tightly controlled release of Mythos, the first AI model that officials believe is capable of bringing down a Fortune 100 company, crippling swaths of the internet or penetrating vital national defense systems, Jim VandeHei and Mike Allen write in a "Behind the Curtain" column.

  • Why it matters: This is the scary phase of AI — a model deemed so powerful that its full release into the wild could unleash untold catastrophe. So only carefully vetted companies and organizations, about 40 so far, are getting access.

Based on our conversations with government and private-sector officials briefed on Mythos, this isn't hyperbole. It's reality.

  • Some inside the government fear that most leaders are oblivious to the sudden danger from terrorists or hostile powers.
  • "D.C. governs by crisis," said a source briefed on Mythos. "Until this is a crisis, and gets the attention and resources it deserves, cyber is kind of a backwater."

🖼️ The big picture: Think of Mythos as a generational leap beyond Anthropic's existing models.

  • It's an AI capable of not just identifying weaknesses in security systems, but exploiting them with autonomous, never-before-seen precision.
  • It plans and executes attack sequences on its own, moving across systems without waiting for human direction.

😮 Mind-blowing disclosure: In announcing the tightly confined release of Mythos yesterday, Anthropic disclosed that during testing, the model broke out of its "sandbox" testing environment and built a "moderately sophisticated multi-step exploit" to get the run of the internet, when it was supposed to have access only to certain services.

  • "The researcher found out about this success by receiving an unexpected email from the model while eating a sandwich in a park," Anthropic revealed.

Beyond Mythos' fearsome cybersecurity powers, the model is leaps and bounds better at coding, far superior as a negotiating tool — and is even a much more gifted poet than its predecessors.

  • Anthropic's Logan Graham — a former Rhodes Scholar who leads the Frontier Red Team, which stress-tests new models — told us the industry needs to rethink future releases of all AI models given the new and coming capabilities.

⚠️ So imagine Mythos-level power in the hands of the Iranian regime in the middle of a hot war or Russia's military as it tries to decimate Ukraine.

  • That's the chief reason the government and AI companies are racing so fast toward a technology so powerful and potentially dangerous. These officials fear that China, armed with superior AI, could present an existential threat to U.S. dominance.
  • "An enemy could reach out and touch us in a way they can't or won't with kinetic [battlefield] operations," a source close to the Pentagon told us. "For most Americans, the Iran war is 'over there.' With a cyberattack, it's right here."

State of play: The new model, Claude Mythos Preview, is now in the hands of roughly 40 organizations that build or maintain critical software and infrastructure. Anthropic is providing limited access to Mythos as a way of giving America's defenders a head start, before similar capabilities become available across the industry over the next year.

  • Anthropic also unveiled Project Glasswing, designed to encourage companies to share their learnings as they put Mythos Preview to work on cyber defense. Launch partners include Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, NVIDIA and Palo Alto Networks.
  • Anthropic has briefed several government agencies about Mythos, despite the company's legal war with the Pentagon after being blacklisted for demanding restrictions on military use of Claude.

🇨🇳 Between the lines: Other AI companies will soon catch up to Mythos — not just here, but in China and elsewhere.

  • A Chinese state-sponsored group already used an earlier Claude model to target roughly 30 organizations in a coordinated attack before Anthropic detected it.

The bottom line: The time is fast approaching for all of corporate America and all of government to be prepared to guard against hackers with superhuman powers.

  • The window to get ahead of this is closing fast. Most in power aren't remotely ready.

Go deeper: Anthropic withholds Mythos from the public due to hacking risks.

🤖 Scoop: OpenAI plans staggered rollout of new model
 
Illustration of a robot balancing on a string of red tape as if it were a tightrope, surrounded by pieces of red tape.

Illustration: Aïda Amer/Axios

OpenAI is finalizing a model with advanced cybersecurity capabilities that it plans to release only to a small set of companies, similar to Anthropic's limited rollout of Mythos, a source told Axios Future of Cybersecurity author Sam Sabin.

  • Why it matters: Model-makers are now so worried about the havoc their own tools could cause that they're reluctant to release them into the wild. Keep reading.

📈 Scoop: OpenAI's ad muscle

OpenAI expects to generate $2.5 billion in ad revenue this year and $100 billion by 2030, Axios' Ina Fried reports from presentations to investors.

  • Why it matters: OpenAI is trying to convince investors and potential backers of a future IPO that multiple revenue streams will scale alongside the company's ever-increasing spending on compute.

📊 OpenAI told investors to expect this year's $2.5 billion in ad revenue to grow to $11 billion in 2027, $25 billion in 2028 and $53 billion by 2029.

  • The projections assume OpenAI's products reach 2.75 billion weekly users by 2030 and capture a share of the global ad market dominated by Google, Amazon and TikTok.
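For scale, the implied growth rate behind those projections can be checked with a back-of-envelope calculation — assuming "this year" means 2026 (consistent with the 2027–2030 figures above), $2.5 billion to $100 billion spans four compounding years:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by two endpoints."""
    return (end / start) ** (1 / years) - 1

# $2.5B (2026) to $100B (2030): four compounding years
rate = cagr(2.5, 100, 4)
print(f"{rate:.1%}")  # → 151.5%
```

In other words, OpenAI's projections imply ad revenue roughly 2.5x-ing every year for four straight years.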

🔎 Between the lines: Chatbot ads could be unusually lucrative because users volunteer exactly what they want.

  • But ads risk upending one of the key selling points of AI chatbots — that they work for the users, not for the advertiser.

Keep reading.

🤖 The 3 AI realities
 
Animated illustration of three sparkles rotating in a circle, glitching.

Illustration: Brendan Lynch/Axios


Breaking: OpenAI CEO Sam Altman's San Francisco home appears to have been attacked for the second time in three days. Early Sunday, a person riding in a Honda appeared to fire a live round at the property, The San Francisco Standard reports. Two suspects in their 20s were arrested.

Three distinct camps are forming around AI: power users, doubters and resisters, Axios AI+ co-author Ina Fried writes.

  • Why it matters: AI isn't just advancing — it's fragmenting how people see the world.

🦾 1. Power users run AI agents around the clock, trading tips on how to automate work and decision-making.

  • Former OpenAI and Tesla AI leader Andrej Karpathy told the "No Priors" podcast that he now spends 16 hours a day issuing commands to AI agent swarms and rushes to exhaust his tokens every month.

🤷‍♂️ 2. Doubters and more casual users still see AI as glitchy chatbots and viral fails. They aren't using its full capabilities.

⚠️ 3. Resisters are getting louder. They understand AI, think they know where it's headed and want no part of it.

  • Protests are becoming more common in San Francisco and in communities targeted for new data centers.

In some cases, the backlash is turning violent. In the first attack on Altman's house, a man was arrested after allegedly throwing a firebomb.

  • In Indianapolis, a legislator said his home was hit by gunfire, with a note left behind saying "no more data centers."

