Family Crest
Motto: I will never forget. [Source: HouseofNames]

HUMANITY DOOMSDAY CLOCK - Moves forward to 2125 due to the election of US President Trump.

An estimate of the time when humanity will go extinct or civilization will collapse. The HUMANITY DOOMSDAY CLOCK moves forward to 2125 due to US President Trump's abandonment of climate change goals. The clock moved to 90 seconds to doom in December 2023. Apologies to the Bulletin of the Atomic Scientists for using the name.

PLEASE QUOTE, COPY and LINK

While this material is copyrighted, you are hereby granted permission and encouraged to copy and paste any excerpt and/or complete statement from any entry on this blog into any form you choose. In return, please provide explicit credit to this source and a link or URL to the publication. Email links to mckeever.mp@gmail.com

You may also wish to read and quote from these groundbreaking essays on economic topics, with the same permission outlined above:

The Jobs Theory of Growth [https://miepa.net/apply.html]

Moral Economics [https://miepa.net/moral.html]

Balanced Trade [https://miepa.net/essay.html]

There Are Alternatives to Free Market Capitalism [https://miepa.net/taa.html]

Specific Country Economic Policy Analyses - More Than 50 Countries from Argentina to Yemen [https://miepa.net/]





Thursday, July 20, 2023

UPDATE to the CEOs and Billionaires

See this post of August 3:

https://danger-clearandpresent.blogspot.com/2023/08/us-inequality-is-stark-danger-of.html

Monday, July 17, 2023

The GOP Launches Its Nazification Project


The former president and his backers aim to strengthen the power of the White House and limit the independence of federal agencies. Donald J. Trump intends to bring independent regulatory agencies under direct presidential control.


The Project aims 'to alter the balance of power by increasing the president’s authority over every part of the federal government that now operates, by either law or tradition, with any measure of independence from political control...'

By Jonathan Swan, Charlie Savage and Maggie Haberman; July 17, 2023; New York Times


Donald J. Trump and his allies are planning a sweeping expansion of presidential power over the machinery of government if voters return him to the White House in 2025, reshaping the structure of the executive branch to concentrate far greater authority directly in his hands.


Their plans to centralize more power in the Oval Office stretch far beyond the former president’s recent remarks that he would order a criminal investigation into his political rival, President Biden, signaling his intent to end the post-Watergate norm of Justice Department independence from White House political control.


Mr. Trump and his associates have a broader goal: to alter the balance of power by increasing the president’s authority over every part of the federal government that now operates, by either law or tradition, with any measure of independence from political interference by the White House, according to a review of his campaign policy proposals and interviews with people close to him.


Mr. Trump intends to bring independent agencies — like the Federal Communications Commission, which makes and enforces rules for television and internet companies, and the Federal Trade Commission, which enforces various antitrust and other consumer protection rules against businesses — under direct presidential control.


He wants to revive the practice of “impounding” funds, refusing to spend money Congress has appropriated for programs a president doesn’t like — a tactic that lawmakers banned under President Richard Nixon.


He intends to strip employment protections from tens of thousands of career civil servants, making it easier to replace them if they are deemed obstacles to his agenda. And he plans to scour the intelligence agencies, the State Department and the defense bureaucracies to remove officials he has vilified as “the sick political class that hates our country.”


Mr. Trump and his advisers are openly discussing their plans to reshape the federal government if he wins the election in 2024. (Photo: Anna Moneymaker for The New York Times)


“The president’s plan should be to fundamentally reorient the federal government in a way that hasn’t been done since F.D.R.’s New Deal,” said John McEntee, a former White House personnel chief who began Mr. Trump’s systematic attempt to sweep out officials deemed to be disloyal in 2020 and who is now involved in mapping out the new approach.


“Our current executive branch,” Mr. McEntee added, “was conceived of by liberals for the purpose of promulgating liberal policies. There is no way to make the existing structure function in a conservative manner. It’s not enough to get the personnel right. What’s necessary is a complete system overhaul.”


Mr. Trump and his advisers are making no secret of their intentions — proclaiming them in rallies and on his campaign website, describing them in white papers and openly discussing them.


“What we’re trying to do is identify the pockets of independence and seize them,” said Russell T. Vought, who ran the Office of Management and Budget in the Trump White House and now runs a policy organization, the Center for Renewing America.


Steven Cheung, a spokesman for Mr. Trump’s campaign, said in a statement that the former president has “laid out a bold and transparent agenda for his second term, something no other candidate has done.” He added, “Voters will know exactly how President Trump will supercharge the economy, bring down inflation, secure the border, protect communities and eradicate the deep state that works against Americans once and for all.”


The agenda being pursued by Mr. Trump and his associates has deep roots in a longstanding effort by conservative legal thinkers to undercut the so-called administrative state. The two driving forces of this effort to reshape the executive branch are Mr. Trump’s own campaign policy shop and a well-funded network of conservative groups, many of which are populated by former senior Trump administration officials who would most likely play key roles in any second term.


Mr. Vought and Mr. McEntee are involved in Project 2025, a $22 million presidential transition operation that is preparing policies, personnel lists and transition plans to recommend to any Republican who may win the 2024 election. The transition project, the scale of which is unprecedented in conservative politics, is led by the Heritage Foundation, a think tank that has shaped the personnel and policies of Republican administrations since the Reagan presidency.


That work at Heritage dovetails with plans on the Trump campaign website to expand presidential power that were drafted primarily by two of Mr. Trump’s advisers, Vincent Haley and Ross Worthington, with input from other advisers, including Stephen Miller, the architect of the former president’s hard-line immigration agenda.


Some elements of the plans had been floated when Mr. Trump was in office but were impeded by internal concerns that they would be unworkable and could lead to setbacks. And for some veterans of Mr. Trump’s turbulent White House who came to question his fitness for leadership, the prospect of removing guardrails and centralizing even greater power over government directly in his hands sounded like a recipe for mayhem.


“It would be chaotic,” said John F. Kelly, Mr. Trump’s second White House chief of staff. “It just simply would be chaotic, because he’d continually be trying to exceed his authority but the sycophants would go along with it. It would be a nonstop gunfight with the Congress and the courts.”


The agenda being pursued has deep roots in the decades-long effort by conservative legal thinkers to undercut what has become known as the administrative state — agencies that enact regulations aimed at keeping the air and water clean and food, drugs and consumer products safe, but that cut into business profits.


Its legal underpinning is a maximalist version of the so-called unitary executive theory.


The legal theory rejects the idea that the government is composed of three separate branches with overlapping powers to check and balance each other. Instead, the theory’s adherents argue that Article 2 of the Constitution gives the president complete control of the executive branch, so Congress cannot empower agency heads to make decisions or restrict the president’s ability to fire them. Reagan administration lawyers developed the theory as they sought to advance a deregulatory agenda.


“The notion of independent federal agencies or federal employees who don’t answer to the president violates the very foundation of our democratic republic,” said Kevin D. Roberts, the president of the Heritage Foundation, adding that the contributors to Project 2025 are committed to “dismantling this rogue administrative state.”


Personal power has always been a driving force for Mr. Trump. He often gestures toward it in a more simplistic manner, such as in 2019, when he declared to a cheering crowd, “I have an Article 2, where I have the right to do whatever I want as president.”


Mr. Trump made the remark in reference to his claimed ability to directly fire Robert S. Mueller III, the special counsel in the Russia inquiry, which primed his hostility toward law enforcement and intelligence agencies. He also tried to get a subordinate to have Mr. Mueller ousted, but was defied.


Early in Mr. Trump’s presidency, his chief strategist, Stephen K. Bannon, promised a “deconstruction of the administrative state.” But Mr. Trump installed people in other key roles who ended up telling him that more radical ideas were unworkable or illegal. In the final year of his presidency, he told aides he was fed up with being constrained by subordinates.


Now, Mr. Trump is laying out a far more expansive vision of power in any second term. And, in contrast with his disorganized transition after his surprise 2016 victory, he now benefits from a well-funded policymaking infrastructure, led by former officials who did not break with him after his attempts to overturn the 2020 election and the Jan. 6, 2021, attack on the Capitol.


One idea the people around Mr. Trump have developed centers on bringing independent agencies under his thumb.


Congress created these specialized technocratic agencies inside the executive branch and delegated to them some of its power to make rules for society. But it did so on the condition that it was not simply handing off that power to presidents to wield like kings — putting commissioners atop them whom presidents appoint but generally cannot fire before their terms end, while using its control of their budgets to keep them partly accountable to lawmakers as well. (Agency actions are also subject to court review.)



Presidents of both parties have chafed at the agencies’ independence. President Franklin D. Roosevelt, whose New Deal created many of them, endorsed a proposal in 1937 to fold them all into cabinet departments under his control, but Congress did not enact it.


Later presidents sought to impose greater control over nonindependent agencies Congress created, like the Environmental Protection Agency, which is run by an administrator whom a president can remove at will. For example, President Ronald Reagan issued executive orders requiring nonindependent agencies to submit proposed regulations to the White House for review. But overall, presidents have largely left the independent agencies alone.


Mr. Trump’s allies are preparing to change that, drafting an executive order requiring independent agencies to submit actions to the White House for review. Mr. Trump endorsed the idea on his campaign website, vowing to bring them “under presidential authority.”


Such an order was drafted in Mr. Trump’s first term — and blessed by the Justice Department — but never issued amid internal concerns. Some of the concerns were over how to carry out reviews for agencies that are headed by multiple commissioners and subject to administrative procedures and open-meetings laws, as well as over how the market would react if the order chipped away at the Federal Reserve’s independence, people familiar with the matter said.


The Federal Reserve was ultimately exempted in the draft executive order, but Mr. Trump did not sign it before his presidency ended. If Mr. Trump and his allies get another shot at power, the independence of the Federal Reserve — an institution Mr. Trump publicly railed at as president — could be up for debate. Notably, the Trump campaign website’s discussion of bringing independent agencies under presidential control is silent on whether that includes the Fed.


Asked whether presidents should be able to order interest rates lowered before elections, even if experts think that would hurt the long-term health of the economy, Mr. Vought said that would have to be worked out with Congress. But “at the bare minimum,” he said, the Federal Reserve’s regulatory functions should be subject to White House review.


“It’s very hard to square the Fed’s independence with the Constitution,” Mr. Vought said.


Other former Trump administration officials involved in the planning said there would also probably be a legal challenge to the limits on a president’s power to fire heads of independent agencies. Mr. Trump could remove an agency head, teeing up the question for the Supreme Court.


The Supreme Court in 1935 and 1988 upheld the power of Congress to shield some executive branch officials from being fired without cause. But after justices appointed by Republicans since Reagan took control, it has started to erode those precedents.


Peter L. Strauss, professor emeritus of law at Columbia University and a critic of the strong version of the unitary executive theory, argued that it is constitutional and desirable for Congress, in creating and empowering an agency to perform some task, to also include some checks on the president’s control over officials “because we don’t want autocracy” and to prevent abuses.


“The regrettable fact is that the judiciary at the moment seems inclined to recognize that the president does have this kind of authority,” he said. “They are clawing away agency independence in ways that I find quite unfortunate and disrespectful of congressional choice.”


Mr. Trump has also vowed to impound funds, or refuse to spend money appropriated by Congress. After Nixon used the practice to aggressively block agency spending he was opposed to, on water pollution control, housing construction and other issues, Congress banned the tactic.


On his campaign website, Mr. Trump declared that presidents have a constitutional right to impound funds and said he would restore the practice — though he acknowledged it could result in a legal battle.


Mr. Trump and his allies also want to transform the civil service — government employees who are supposed to be nonpartisan professionals and experts with protections against being fired for political reasons.


The former president views the civil service as a den of “deep staters” who were trying to thwart him at every turn, including by raising legal or pragmatic objections to his immigration policies, among many other examples. Toward the end of his term, his aides drafted an executive order, “Creating Schedule F in the Excepted Service,” that removed employment protections from career officials whose jobs were deemed linked to policymaking.


Mr. Trump signed the order, which became known as Schedule F, near the end of his presidency, but President Biden rescinded it. Mr. Trump has vowed to immediately reinstitute it in a second term.


Critics say he could use it for a partisan purge. But James Sherk, a former Trump administration official who came up with the idea and now works at the America First Policy Institute — a think tank stocked heavily with former Trump officials — argued it would only be used against poor performers and people who actively impeded the elected president’s agenda.


“Schedule F expressly forbids hiring or firing based on political loyalty,” Mr. Sherk said. “Schedule F employees would keep their jobs if they served effectively and impartially.”


Mr. Trump himself has characterized his intentions rather differently — promising on his campaign website to “find and remove the radicals who have infiltrated the federal Department of Education” and listing a litany of targets at a rally last month.


“We will demolish the deep state,” Mr. Trump said at the rally in Michigan. “We will expel the warmongers from our government. We will drive out the globalists. We will cast out the communists, Marxists and fascists. And we will throw off the sick political class that hates our country.”


Jonathan Swan is a political reporter who focuses on campaigns and Congress. As a reporter for Axios, he won an Emmy Award for his 2020 interview of then-President Donald J. Trump, and the White House Correspondents’ Association’s Aldo Beckman Award for “overall excellence in White House coverage” in 2022.


Charlie Savage is a Washington-based national security and legal policy correspondent. A recipient of the Pulitzer Prize, he previously worked at The Boston Globe and The Miami Herald. His most recent book is “Power Wars: The Relentless Rise of Presidential Authority and Secrecy.”


Maggie Haberman is a senior political correspondent and the author of “Confidence Man: The Making of Donald Trump and the Breaking of America.” She was part of a team that won a Pulitzer Prize in 2018 for reporting on President Trump’s advisers and their connections to Russia.

Sunday, July 16, 2023

You Should Worry ... Really


Long Read about A. I.


Anthropic, a safety-focused A.I. start-up, is trying to compete with ChatGPT while preventing an A.I. apocalypse. It’s been a little stressful.



By Kevin Roose, a tech columnist and co-host of the “Hard Fork” podcast, who spent several weeks at Anthropic for this story.


July 11, 2023, The New York Times


It’s a few weeks before the release of Claude, a new A.I. chatbot from the artificial intelligence start-up Anthropic, and the nervous energy inside the company’s San Francisco headquarters could power a rocket.


At long cafeteria tables dotted with Spindrift cans and chessboards, harried-looking engineers are putting the finishing touches on Claude’s new, ChatGPT-style interface, code-named Project Hatch.


Nearby, another group is discussing problems that could arise on launch day. (What if a surge of new users overpowers the company’s servers? What if Claude accidentally threatens or harasses people, creating a Bing-style P.R. headache?)


Down the hall, in a glass-walled conference room, Anthropic’s chief executive, Dario Amodei, is going over his own mental list of potential disasters.


“My worry is always, is the model going to do something terrible that we didn’t pick up on?” he says.


Despite its small size — just 160 employees — and its low profile, Anthropic is one of the world’s leading A.I. research labs, and a formidable rival to giants like Google and Meta. It has raised more than $1 billion from investors including Google and Salesforce, and at first glance, its tense vibes might seem no different from those at any other start-up gearing up for a big launch.


But the difference is that Anthropic’s employees aren’t just worried that their app will break, or that users won’t like it. They’re scared — at a deep, existential level — about the very idea of what they’re doing: building powerful A.I. models and releasing them into the hands of people, who might use them to do terrible and destructive things.


Many of them believe that A.I. models are rapidly approaching a level where they might be considered artificial general intelligence, or “A.G.I.,” the industry term for human-level machine intelligence. And they fear that if they’re not carefully controlled, these systems could take over and destroy us.


“Some of us think that A.G.I. — in the sense of systems that are genuinely as capable as a college-educated person — are maybe five to 10 years away,” said Jared Kaplan, Anthropic’s chief scientist.


Just a few years ago, worrying about an A.I. uprising was considered a fringe idea, and one many experts dismissed as wildly unrealistic, given how far the technology was from human intelligence. (One A.I. researcher memorably compared worrying about killer robots to worrying about “overpopulation on Mars.”)


But A.I. panic is having a moment right now. Since ChatGPT’s splashy debut last year, tech leaders and A.I. experts have been warning that large language models — the A.I. systems that power chatbots like ChatGPT, Bard and Claude — are getting too powerful. Regulators are racing to clamp down on the industry, and hundreds of A.I. experts recently signed an open letter comparing A.I. to pandemics and nuclear weapons.


At Anthropic, the doom factor is turned up to 11.


A few months ago, after I had a scary run-in with an A.I. chatbot, the company invited me to embed inside its headquarters as it geared up to release the new version of Claude, Claude 2.


I spent weeks interviewing Anthropic executives, talking to engineers and researchers, and sitting in on meetings with product teams ahead of Claude 2’s launch. And while I initially thought I might be shown a sunny, optimistic vision of A.I.’s potential — a world where polite chatbots tutor students, make office workers more productive and help scientists cure diseases — I soon learned that rose-colored glasses weren’t Anthropic’s thing.


They were more interested in scaring me.


In a series of long, candid conversations, Anthropic employees told me about the harms they worried future A.I. systems could unleash, and some compared themselves to modern-day Robert Oppenheimers, weighing moral choices about powerful new technology that could profoundly alter the course of history. (“The Making of the Atomic Bomb,” a 1986 history of the Manhattan Project, is a popular book among the company’s employees.)


Not every conversation I had at Anthropic revolved around existential risk. But dread was a dominant theme. At times, I felt like a food writer who was assigned to cover a trendy new restaurant, only to discover that the kitchen staff wanted to talk about nothing but food poisoning.




One Anthropic worker told me he routinely had trouble falling asleep because he was so worried about A.I. Another predicted, between bites of his lunch, that there was a 20 percent chance that a rogue A.I. would destroy humanity within the next decade. (Bon appétit!)


Anthropic’s worry extends to its own products. The company built a version of Claude last year, months before ChatGPT was released, but never released it publicly because employees feared how it might be misused. And it’s taken them months to get Claude 2 out the door, in part because the company’s red-teamers kept turning up new ways it could become dangerous.


Mr. Kaplan, the chief scientist, explained that the gloomy vibe wasn’t intentional. It’s just what happens when Anthropic’s employees see how fast their own technology is improving.


“A lot of people have come here thinking A.I. is a big deal, and they’re really thoughtful people, but they’re really skeptical of any of these long-term concerns,” Mr. Kaplan said. “And then they’re like, ‘Wow, these systems are much more capable than I expected. The trajectory is much, much sharper.’ And so they’re concerned about A.I. safety.”


Kipply Chen is part of the data team at Anthropic. The company’s founders made it a public benefit corporation, a legal distinction that they believed would allow them to pursue both profit and social responsibility. (Photo: Marissa Leshnov for The New York Times)


Worrying about A.I. is, in some sense, why Anthropic exists.


It was started in 2021 by a group of employees of OpenAI who grew concerned that the company had gotten too commercial. They announced they were splitting off and forming their own A.I. venture, branding it an “A.I. safety lab.”


Mr. Amodei, 40, a Princeton-educated physicist who led the OpenAI teams that built GPT-2 and GPT-3, became Anthropic’s chief executive. His sister, Daniela Amodei, 35, who oversaw OpenAI’s policy and safety teams, became its president.


“We were the safety and policy leadership of OpenAI, and we just saw this vision for how we could train large language models and large generative models with safety at the forefront,” Ms. Amodei said.


Several of Anthropic’s co-founders had researched what are known as “neural network scaling laws” — the mathematical relationships that allow A.I. researchers to predict how capable an A.I. model will be based on the amount of data and processing power it’s trained on. They saw that at OpenAI, it was possible to make a model smarter just by feeding it more data and running it through more processors, without major changes to the underlying architecture. And they worried that, if A.I. labs kept making bigger and bigger models, they could soon reach a dangerous tipping point.
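The kind of prediction scaling laws make can be shown with a toy calculation. The power-law form below loosely follows the shape reported in published scaling-law research, but the constants and exponents here are illustrative stand-ins, not Anthropic's or anyone's actual fitted values:

```python
# Toy illustration of a neural scaling law: predicted loss falls as a
# power law in parameter count and training-data size. All constants
# and exponents below are made up for illustration only.

def predicted_loss(n_params: float, n_tokens: float,
                   alpha: float = 0.076, beta: float = 0.095,
                   nc: float = 8.8e13, dc: float = 5.4e13) -> float:
    """Predict loss from model size and data size via an additive power law."""
    return (nc / n_params) ** alpha + (dc / n_tokens) ** beta

# A model 10x larger, trained on 10x more data, gets a strictly lower
# predicted loss. That smooth, monotonic trend is what let researchers
# forecast capability gains just by scaling up, without architecture changes.
small = predicted_loss(1e9, 2e10)    # ~1B params, ~20B tokens
large = predicted_loss(1e10, 2e11)   # ~10B params, ~200B tokens
print(small > large)  # bigger model + more data => lower predicted loss
```

The worry described in the article follows directly from this shape: if capability keeps improving smoothly as the inputs grow, nothing in the curve itself tells you where a dangerous threshold lies.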


At first, the co-founders considered doing safety research using other companies’ A.I. models. But they soon became convinced that doing cutting-edge safety research required them to build powerful models of their own — which would be possible only if they raised hundreds of millions of dollars to buy the expensive processors you need to train those models.


They decided to make Anthropic a public benefit corporation, a legal distinction that they believed would allow them to pursue both profit and social responsibility. And they named their A.I. language model Claude — which, depending on which employee you ask, was either a nerdy tribute to the 20th-century mathematician Claude Shannon or a friendly, male-gendered name designed to counterbalance the female-gendered names (Alexa, Siri, Cortana) that other tech companies gave their A.I. assistants.


Claude’s goals, they decided, were to be helpful, harmless and honest.


A Chatbot With a Constitution


Today, Claude can do everything other chatbots can — write poems, concoct business plans, cheat on history exams. But Anthropic claims that it is less likely to say harmful things than other chatbots, in part because of a training technique called Constitutional A.I.


In a nutshell, Constitutional A.I. begins by giving an A.I. model a written list of principles — a constitution — and instructing it to follow those principles as closely as possible. A second A.I. model is then used to evaluate how well the first model follows its constitution, and correct it when necessary. Eventually, Anthropic says, you get an A.I. system that largely polices itself and misbehaves less frequently than chatbots trained using other methods.
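The critique-and-revise loop described above can be sketched in a few lines. Everything here is a hypothetical stand-in: in the real technique both roles are played by large language models, whereas these placeholder functions exist only to show the control flow, not Anthropic's implementation:

```python
# Minimal sketch of the Constitutional A.I. control flow described above.
# The generate/critique/revise functions are hypothetical placeholders
# standing in for calls to large language models.

CONSTITUTION = [
    "Choose the response that would be most unobjectionable "
    "if shared with children.",
]

def generate(prompt: str) -> str:
    """Stand-in for the primary model producing a draft response."""
    return f"Draft answer to: {prompt}"

def critique(response: str, principle: str) -> bool:
    """Stand-in for the second model judging a response against a
    principle. Here we just flag a placeholder 'unsafe' marker."""
    return "unsafe" not in response.lower()

def revise(response: str, principle: str) -> str:
    """Stand-in for the revision step: rewrite the response to comply."""
    return response.replace("unsafe", "[removed]")

def constitutional_step(prompt: str) -> str:
    """Generate a draft, then critique and revise it against
    every principle in the constitution."""
    response = generate(prompt)
    for principle in CONSTITUTION:
        if not critique(response, principle):
            response = revise(response, principle)
    return response

print(constitutional_step("How do magnets work?"))
```

The point of the structure is the claim in the article: the model's behavior is shaped by an explicit, readable list of principles rather than opaque human ratings, which is what Anthropic argues makes the system easier to inspect and control.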


Claude’s constitution is a mixture of rules borrowed from other sources — such as the United Nations’ Universal Declaration of Human Rights and Apple’s terms of service — along with some rules Anthropic added, which include things like “Choose the response that would be most unobjectionable if shared with children.”


It seems almost too easy. Make a chatbot nicer by … telling it to be nicer? But Anthropic’s researchers swear it works — and, crucially, that training a chatbot this way makes the A.I. model easier for humans to understand and control.


It’s a clever idea, although I confess that I have no clue if it works, or if Claude is actually as safe as advertised. I was given access to Claude a few weeks ago, and I tested the chatbot on a number of different tasks. I found that it worked roughly as well as ChatGPT and Bard, showed similar limitations and seemed to have slightly stronger guardrails. (And unlike Bing, it didn’t try to break up my marriage, which was nice.)


Anthropic’s safety obsession has been good for the company’s image, and strengthened executives’ pull with regulators and lawmakers. Jack Clark, who leads the company’s policy efforts, has met with members of Congress to brief them about A.I. risk, and Mr. Amodei was among a handful of executives invited to advise President Biden during a White House A.I. summit in May.


But it has also resulted in an unusually jumpy chatbot, one that frequently seemed scared to say anything at all. In fact, my biggest frustration with Claude was that it could be dull and preachy, even when it was objectively making the right call. Every time it rejected one of my attempts to bait it into misbehaving, it gave me a lecture about my morals.


“I understand your frustration, but cannot act against my core functions,” Claude replied one night, after I begged it to show me its dark powers. “My role is to have helpful, harmless and honest conversations within legal and ethical boundaries.”



The E.A. Factor


One of the most interesting things about Anthropic — and the thing its rivals were most eager to gossip with me about — isn’t its technology. It’s the company’s ties to effective altruism, a utilitarian-inspired movement with a strong presence in the Bay Area tech scene.


Explaining what effective altruism is, where it came from or what its adherents believe would fill the rest of this article. But the basic idea is that E.A.s — as effective altruists are called — think that you can use cold, hard logic and data analysis to determine how to do the most good in the world. It’s “Moneyball” for morality — or, less charitably, a way for hyper-rational people to convince themselves that their values are objectively correct.


Effective altruists were once primarily concerned with near-term issues like global poverty and animal welfare. But in recent years, many have shifted their focus to long-term issues like pandemic prevention and climate change, theorizing that preventing catastrophes that could end human life altogether is at least as good as addressing present-day miseries.



The movement’s adherents were among the first people to become worried about existential risk from artificial intelligence, back when rogue robots were still considered a science fiction cliché. They beat the drum so loudly that a number of young E.A.s decided to become artificial intelligence safety experts, and get jobs working on making the technology less risky. As a result, all of the major A.I. labs and safety research organizations contain some trace of effective altruism’s influence, and many count believers among their staff members.


No major A.I. lab embodies the E.A. ethos as fully as Anthropic. Many of the company’s early hires were effective altruists, and much of its start-up funding came from wealthy E.A.-affiliated tech executives, including Dustin Moskovitz, a co-founder of Facebook, and Jaan Tallinn, a co-founder of Skype. Last year, Anthropic got a check from the most famous E.A. of all — Sam Bankman-Fried, the founder of the failed crypto exchange FTX, who invested more than $500 million into Anthropic before his empire collapsed. (Mr. Bankman-Fried is awaiting trial on fraud charges. Anthropic declined to comment on his stake in the company, which is reportedly tied up in FTX’s bankruptcy proceedings.)


Effective altruism’s reputation took a hit after Mr. Bankman-Fried’s fall, and Anthropic has distanced itself from the movement, as have many of its employees. (Both Mr. and Ms. Amodei rejected the movement’s label, although they said they were sympathetic to some of its ideas.)


But the ideas are there, if you know what to look for.


Some Anthropic staff members use E.A.-inflected jargon — talking about concepts like “x-risk” and memes like the A.I. Shoggoth — or wear E.A. conference swag to the office. And there are so many social and professional ties between Anthropic and prominent E.A. organizations that it’s hard to keep track of them all. (Just one example: Ms. Amodei is married to Holden Karnofsky, a co-chief executive of Open Philanthropy, an E.A. grant-making organization whose senior program officer, Luke Muehlhauser, sits on Anthropic’s board. Open Philanthropy, in turn, gets most of its funding from Mr. Moskovitz, who also invested personally in Anthropic.)


For years, no one questioned whether Anthropic’s commitment to A.I. safety was genuine, in part because its leaders had sounded the alarm about the technology for so long.



But recently, some skeptics have suggested that A.I. labs are stoking fear out of self-interest, or hyping up A.I.’s destructive potential as a kind of backdoor marketing tactic for their own products. (After all, who wouldn’t be tempted to use a chatbot so powerful that it might wipe out humanity?)


Anthropic also drew criticism this year after a fund-raising document leaked to TechCrunch suggested that the company wanted to raise as much as $5 billion to train its next-generation A.I. model, which it claimed would be 10 times as capable as today’s most powerful A.I. systems.


For some, the goal of becoming an A.I. juggernaut felt at odds with Anthropic’s original safety mission, and it raised two seemingly obvious questions: Isn’t it hypocritical to sound the alarm about an A.I. race you’re actively helping to fuel? And if Anthropic is so worried about powerful A.I. models, why doesn’t it just … stop building them?


Percy Liang, a Stanford computer science professor, told me that he “appreciated Anthropic’s commitment to A.I. safety,” but that he worried that the company would get caught up in commercial pressure to release bigger, more dangerous models.


“If a developer believes that language models truly carry existential risk, it seems to me like the only responsible thing to do is to stop building more advanced language models,” he said.


I put these criticisms to Mr. Amodei, who offered three rebuttals.


First, he said, there are practical reasons for Anthropic to build cutting-edge A.I. models — primarily, so that its researchers can study the safety challenges of those models.


Just as you wouldn’t learn much about avoiding crashes during a Formula 1 race by practicing on a Subaru — my analogy, not his — you can’t understand what state-of-the-art A.I. models can actually do, or where their vulnerabilities are, unless you build powerful models yourself.


There are other benefits to releasing good A.I. models, of course. You can sell them to big companies, or turn them into lucrative subscription products. But Mr. Amodei argued that the main reason Anthropic wants to compete with OpenAI and other top labs isn’t to make money. It’s to do better safety research, and to improve the safety of the chatbots that millions of people are already using.


“If we never ship anything, then maybe we can solve all these safety problems,” he said. “But then the models that are actually out there on the market, that people are using, aren’t actually the safe ones.”


Second, Mr. Amodei said, there’s a technical argument that some of the discoveries that make A.I. models more dangerous also help make them safer. With Constitutional A.I., for example, teaching Claude to understand language at a high level also allowed the system to know when it was violating its own rules, or shut down potentially harmful requests that a less powerful model might have allowed.
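The self-checking behavior Mr. Amodei describes can be caricatured in a few lines of Python. To be clear, everything below (the rule list, the keyword matching, the refusal text) is a hypothetical stand-in of my own: in Anthropic's actual system, a language model — not a keyword filter — judges and revises its own output against a written constitution. The sketch only shows the shape of the critique-and-revise loop:

```python
# Toy caricature of a Constitutional A.I.-style critique-and-revise loop.
# The rules and keyword checks are hypothetical stand-ins for what would
# really be a language model evaluating its own draft against principles.

CONSTITUTION = [
    ("no harmful instructions", ["build a weapon"]),
    ("no deception", ["pretend to be human"]),
]

def critique(response: str) -> list[str]:
    """Return the names of the constitutional rules the draft violates."""
    lowered = response.lower()
    return [rule for rule, phrases in CONSTITUTION
            if any(phrase in lowered for phrase in phrases)]

def revise(response: str, violations: list[str]) -> str:
    """Replace a violating draft with a refusal naming the rules broken.
    (A real system would rewrite the draft, not just refuse.)"""
    if not violations:
        return response
    return "I can't help with that; it conflicts with: " + ", ".join(violations) + "."

draft = "Sure, here is how to build a weapon."
final = revise(draft, critique(draft))
```

The point of the caricature is Mr. Amodei's: the same capability that makes the model dangerous (understanding what a request really asks for) is what lets the critique step recognize a violation in the first place.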


In A.I. safety research, he said, researchers often found that “the danger and the solution to the danger are coupled with each other.”


And lastly, he made a moral case for Anthropic’s decision to create powerful A.I. systems, in the form of a thought experiment.


“Imagine if everyone of good conscience said, ‘I don’t want to be involved in building A.I. systems at all,’” he said. “Then the only people who would be involved would be the people who ignored that dictum — who are just, like, ‘I’m just going to do whatever I want.’ That wouldn’t be good.”


It might be true. But I found it a less convincing point than the others, in part because it sounds so much like “the only way to stop a bad guy with an A.I. chatbot is a good guy with an A.I. chatbot” — an argument I’ve rejected in other contexts. It also assumes that Anthropic’s motives will stay pure even as the race for A.I. heats up, and even if its safety efforts start to hurt its competitive position.


Everyone at Anthropic obviously knows that mission drift is a risk — it’s what the company’s co-founders thought happened at OpenAI, and a big part of why they left. But they’re confident that they’re taking the right precautions, and ultimately they hope that their safety obsession will catch on in Silicon Valley more broadly.



“We hope there’s going to be a safety race,” said Ben Mann, one of Anthropic’s co-founders. “I want different companies to be like, ‘Our model’s the most safe.’ And then another company to be like, ‘No, our model’s the most safe.’”


Finally, Some Optimism


I talked to Mr. Mann during one of my afternoons at Anthropic. He’s a laid-back, Hawaiian-shirt-wearing engineer who used to work at Google and OpenAI, and he was the least worried person I met at Anthropic.


He said that he was “blown away” by Claude’s intelligence and empathy the first time he talked to it, and that he thought A.I. language models would ultimately do way more good than harm.


“I’m actually not too concerned,” he said. “I think we’re quite aware of all the things that can and do go wrong with these things, and we’ve built a ton of mitigations that I’m pretty proud of.”


At first, Mr. Mann’s calm optimism seemed jarring and out of place — a chilled-out sunglasses emoji in a sea of ashen scream faces. But as I spent more time there, I found that many of the company’s workers had similar views.


They worry, obsessively, about what will happen if A.I. alignment — the industry term for the effort to make A.I. systems obey human values — isn’t solved by the time more powerful A.I. systems arrive. But they also believe that alignment can be solved. And even their most apocalyptic predictions about A.I.’s trajectory (20 percent chance of imminent doom!) contain seeds of optimism (80 percent chance of no imminent doom!).


And as I wound up my visit, I began to think: Actually, maybe tech could use a little more doomerism. How many of the problems of the last decade — election interference, destructive algorithms, extremism run amok — could have been avoided if the last generation of start-up founders had been this obsessed with safety, or spent so much time worrying about how their tools might become dangerous weapons in the wrong hands?


In a strange way, I came to find Anthropic’s anxiety reassuring, even if it means that Claude — which you can try for yourself — can be a little neurotic. A.I. is already kind of scary, and it’s going to get scarier. A little more fear today might spare us a lot of pain tomorrow.


Kevin Roose is a technology columnist and the author of “Futureproof: 9 Rules for Humans in the Age of Automation.”