It is incredible how far the Overton window has moved on this issue.
When I graduated in 2007, it was common for tech companies to refuse to let their systems be used for war, and it was an ordinary thing when some of my graduating classmates refused to work at companies that did let their systems be used for war. Those refusals were on moral grounds.
Now Anthropic wants to have two narrow exceptions, on pragmatic and not moral grounds. To do so, they have to couch it in language clarifying that they would love to support war, actually, except for these two narrow exceptions. And their careful word choice suggests that they are either navigating or expect to navigate significant blowback for asking for two narrow exceptions.
> it was an ordinary thing when some of my graduating classmates refused to work at companies that did let their systems be used for war. Those refusals were on moral grounds.
(spoiler alert)
Wasn't this one of the plot points of the Val Kilmer movie Real Genius? They had to trick the students into creating a weapon by siloing them off from each other and having them build individual but related components? How far we've fallen! Nobody has to take ethics during undergrad anymore I guess...
Reminds me of the story of a woman working for a research lab to improve the computer-controlled automatic emergency landings of planes with total power failure.
... or so she was told.
She was unknowingly designing glide-bomb avionics.
I feel like these stories are apocryphal. I mean, I can't say for certain that no US DoD research program used subterfuge to trick the performers into working on The Most Racist Bomb. But I can say that in 20 years I've never seen a dearth of people ready, willing, able, and actively participating with full knowledge that they are creating The Fastest Bomb and The Sneakiest Bomb and The Biggest Bomb Without Actually Going Nuclear.
IDK, maybe it's different outside the National Capital Region. But here, you could probably shout "For The Empire" as a toast in the right bars and people wouldn't think you were joking.
What? I'm not questioning whether the weapons research actually happened. I'm questioning the sincerity of people claiming they didn't know what they were doing. I've seen plenty of weapons programs. They aren't a secret to the people working on them. My point is, the government doesn't need to lie to researchers or even pay them very well to get them to develop weapons because there are plenty of intelligent-enough people willing to do it almost for free.
If "This doesn't fit into my mental model, so everyone else must be lying" is how you deal with things you didn't personally experience, do what you have to.
You must be joking. Which values, set by who? Jobs the marketer, Ellison the tyrant, or Gates the sociopath?
Please, spare us. They built a surveillance state masquerading as marketing companies and banal products. Don't play remember when if you don't actually remember.
Values relating to mistrust of the military (as per the context of the post I responded to) as well as values relating to ownership of the tech you bought and of personal privacy.
Get off your high horse and stop talking down to a person you don't know. Take your anger out on someone else.
Hard to say for sure. In that instance I can only reasonably speak for myself. So far at least, the evidence suggests the more I have, the more distracted I get by new projects.
If LLMs are indeed a game changer professionally, you kind of need to pick one.
Personally, I loathe seeing power shift towards mega corporations like that, away from being able to run your own computer with free software, but it feels like the economics are headed that way in terms of productivity.
Yes, and even of their two exceptions, only one is on moral grounds. They don't want to provide tools for autonomous killing machines because the technology isn't good enough, yet. Once that 'yet' passes, they will be fine supplying that capability. Anthropic is clearly the better company over OpenAI, but that doesn't mean they are good. 'Lesser evil' is the correct term here for sure.
Hypothetically if we had a choice between sending in humans to war or sending in fully autonomous drones that make decisions on par with humans, the moral choice might well be the drones - because it doesn't put our service members at risk.
Obviously, anyone who has used LLMs knows they are not on par with humans. There also needs to be an accountability framework for when software makes the wrong decision. Who gets fired if an LLM hallucinates and kills people? Perhaps Anthropic's stance is to avoid liability if that were to happen.
The danger is that we won't be sending these fully-autonomous drones to 'war', but anytime a person in power feels like assassinating a leader or taking out a dissident, without having to make a big deal out of it. The reality is that AI will be used, not merely as a weapon, but as an accountability sink.
> Hypothetically if we had a choice between sending in humans to war or sending in fully autonomous drones that make decisions on par with humans, the moral choice might well be the drones - because it doesn't put our service members at risk.
I guess let the record state that I am deeply morally opposed to automated killing of any kind.
I am sick to my stomach when I really try to put myself in the shoes of the indigenous peoples of Africa who were the first victims of highly automatic weapons, "machine guns" or "Gatling guns". The asymmetry was barbaric. I do hope that there is a hell, simply so that those who made the decision to execute those peoples en masse have a place to rot in eternal hellfire.
To even think of modernizing that scene of inhumane depravity with AI is despicable. No, I am deeply opposed to automated killing of any kind.
What do you mean, "hallucinates and kills people"? Killing people is the thing the military is using them for; it's not some accidental side effect. It's the "moral choice" the same way a cruise missile is — some person half a world away can lean back in their chair, take a sip of coffee, click a few buttons and end human lives, without ever fully appreciating or caring about what they've done.
The people that actually target and launch these things do think about what they have done. It is the people ordering them to do it that don't. There is a difference, I hope.
War is not moral. It may be necessary, but it is never moral. The best choice is to fight, at every turn, anything that makes war easy. Our adversaries will go the autonomous route, or likely already have. We should be doing everything we can to put major blockers on this, similar to the efforts to block chemical, biological and nuclear weapons. The logical end of autonomous targeting and weapons is near-instant mass killing decisions. So at a minimum we should think of autonomous weapons as being in a similar class to those, since autonomy is a weapon of mass destruction. But we currently don't think that way, and that is the problem.
Eventually, unfortunately, we will build these systems, but it is weak to argue that the technology isn't ready right now and that is why we won't build them. No matter when these systems come online there will be collateral damage, so there will be no right time from a technology standpoint. Anthropic is making that weak argument, and that is primarily what I am dismissive of. The argument that needs to be made is that we aren't ready as a society for these weapons. The US government hasn't done the work to prove it can handle them. The US people haven't proven we are ready to understand their ramifications. So, in my view, Anthropic shouldn't be arguing that the technology isn't ready; no weapon of war is ever clean, and your hands will be dirty no matter how well you craft the knife. Instead, Anthropic should be arguing that we aren't ready as a society, and that is why they aren't going to support these weapons.
I think it's the opposite. The human cost of war is part of what keeps the USA from getting into wars more than it already is - no politician wants a second Vietnam.
If war is safe to wage, then it just means we'll do it more and kill more people around the globe.
Isn't this the moral hazard of war as it becomes more of a distance sport? That powerful governments can order the razing of cities and assassinate leaders with ease?
We need to do it because our enemies are doing it, in any case.
I do not think that anyone but the US and Israel has assassinated leaders in the last 30 years. I also question their autonomous drone advancement. Russia and China did not have the means to help Venezuela, and they do not have the means to help Iran.
It came later than I anticipated, but it did come after all. There is a reason companies like 9mother are working like crazy on various ways to mitigate those risks.
The flip side is that it's very unlikely AI will become that good any time soon, so it'll always remain a means to hold out. Especially since nobody has explicitly defined what "good enough" entails.
If you graduated in 2007, your classmates were born around 1985. Their parents were mostly born in the mid 50s to the mid 60s and came to political consciousness either during the Vietnam War or immediately thereafter. No war since has been even close to as unpopular or frankly as salient. It’s the passing out of cultural relevance of that war that you are noticing.
> No war since has been even close to as unpopular or frankly as salient.
Iraq.
Spoiler alert, a bunch of the current ones are going to be seen similarly too.
Also keep in mind when making comparisons that the Vietnam war was not unpopular with Americans at the beginning, and many people justified it all throughout, using language that will be similar to observers of later wars.
Correct that there was no Iraq generation because there was no draft and numbers were way smaller. Vietnam had over half a million troops at the height of that war. Iraq had under 170k.
But the war was still deeply unpopular. There is a reason America did the extraordinary - to that point - and elected its first Black president.
The economic toll will be greater with these wars than Vietnam.
I'm a decade older so maybe I missed the memo but I think you'll have a hard time naming tech companies that actually refused to work with the military, which were large enough and important enough to be in danger of selling something to the military (i.e. not Be Inc. or Beenz.com)
Clearly, all of the traditional big leagues were lined up to take the Army's money. IBM, Control Data, Cray, SGI, and HP all viewed weapons research as a major line of business. DEC was the default minicomputer of the DoD and Sun created features to court the intelligence community including the DoD "Trusted Workstation". Sperry Rand defined "military industrial complex".
But tell me, what would you like your country to do when conflicts arise over want of natural resources? Would you want your country to just give up the resource your people depend on, or maybe split it 50/50?
Do you believe it will always be possible to settle on a solution in a peaceful way that works for everyone?
Since Pearl Harbor, the US has done 100% of the attacking on other people's soil; even 9/11 was a response to what the US did in other countries. So it's fair to say that defending your country is very different from giving weapons to the Department of War, to conduct war, and most likely to supply them to its Middle East ally, who will also use them to start wars and kill civilians and children.
If the country wages wars for bad reasons, that is another problem that should probably be fixed elsewhere, or you should leave that country and go somewhere whose government you can fully get behind.
> defending your country
I am afraid that this does not always have to be an incoming attack. What if some country has a resource that your country badly needs, without which your people will suffer badly, and imagine the same is true for the other country? How much of a hit to your economy and quality of life are you willing to sustain before you ask your government to go out there and get the required resource by force?
I totally get that war is profitable, and most wars cannot be justified. But ideas like this sound like sabotaging your own country and thus your own existence.
> they have to couch it in language clarifying that they would love to support war, actually,
Yes they do because they are trying to sell to the Department of War.
No one made Anthropic try to be a military contractor. It’s pretty much the definition of being a military contractor that your product helps to kill people.
For almost all of history, including recent history, tech and military went together. Whether compound bows, or spears or metallurgy.
Euler used his math to develop artillery tables for the Prussian army.
von Neumann helped develop the atom bomb.
The military played a huge role in creating Silicon Valley.
However, to people who grew up in the mid to late 90s, it is easy to miss that that period was a major aberration. You had serious people talking about the end of history. You had John Perry Barlow's utterly naive Declaration of Independence of Cyberspace which looks more and more naive every year.
When people (myself included FWIW) warn about the dangers of American imperialism, it's because:
1. As President Eisenhower said in his farewell address in 1961 [1], every dollar spent on the military-industrial complex is a dollar not spent on schools or houses or hospitals or bridges;
2. Every American company with sufficient size eventually becomes a defense contractor. That's really what's happened with the tech companies. They're moving in lockstep with the administration on both domestic and foreign policy;
3. The so-called "imperial boomerang" [2]. Every tactic, weapon and strategy used against colonial subjects is eventually used against the imperial core, e.g. [3]. Do you think it's an accident that US police forces have become increasingly militarized?
The example I like to give is China's high speed rail. China started building HSR only 20 years ago and now has over 32,000 miles of HSR tracks taking ~4M passengers per day. The estimated cost for the entire network is ~$900B. That's less than the US spends on the military every year.
I really wonder what Steve Jobs would've done were he still alive. Tim Apple has bent the knee and kissed the ring. Would Steve Jobs have done the same? I'm not so sure. He may well have been ousted (again) because of it.
Then again, I think Steve Jobs was the only Silicon Valley billionaire not in a transhumanist polycule with a more than even chance of being in the files.
> I really wonder what Steve Jobs would've done were he still alive. Tim Apple has bent the knee and kissed the ring. Would Steve Jobs have done the same? I'm not so sure. He may well have been ousted (again) because of it.
Given that Steve Jobs was best friends with Larry Ellison, I’d say he wouldn’t have bent the knee because he would’ve been standing hand in hand with Trump, just like Larry.
The Overton window has not shifted, at least not among rank-and-file tech workers. There was very loud and vocal internal opposition to building and selling weapons[0]. They all lost the argument in the boardrooms because the US government writes very big checks. But I am told they are very much still around.
CEOs are bound to sociopathically amoral behavior - not by the law, but by the Pareto-optimal behavior of the job market for executives. The law obligates you to act in the interests of the shareholders, but it does not mandate[1] that Line Go Up. That is a function of a specific brand of shareholder that fires their CEOs every 18 months until the line goes up.
In 2007, Big Tech had plenty of the consumer market to conquer, so they could afford to pretend to be opposed to selling to the military. But the game they were playing was always going to end with them selling to the military. Once they were entrenched they could ignore the no-longer-useful-to-us-right-now dissenters, change their politics on a dime, and go after the "real money".
[0] Several of the sibling comments are mentioning hypothetical scenarios involving dual-use technologies or obfuscated purposes. Those are also relevant, but not the whole story.
[1] There are plenty of arguments a CEO could use to defend against a shareholder lawsuit that they did not take a particularly short-sighted action. Notably, that most line-go-up actions tend to be bad long-term decisions. You're allowed to sell low-risk investments.
CBC news (canadian outlet) released an investigation on this yesterday, and found:
> While the facility was functioning as a school, CBC News has confirmed a previous New York Times report stating the building was once part of an Islamic Revolutionary Guard Corps (IRGC) base.
Around 10 years ago, in college, I had a very ambitious classmate in calculus class who wanted to go to DARPA and work on robotics. I asked if he was thinking it through solely from a technical perspective or considering the ethics side as well. Clearly, he didn't understand the question, so I asked directly: what if the code you write, or the autonomous machine you contribute to, is used for killing? His response: that's not my problem.
After spending a couple of years studying in the US, I came to the conclusion that executives and board members in industry don't care about society or humans. Even universities don't push students towards critical thinking and ethics; it has all turned into vocational training, turning humans into thinking tools.
Around the same time, at Harvard, I attended a VR innovation week, and the last panel discussion of the day was on ethics and law, led by a law professor, a journalist, and a moderator, and attended by a handful of people. I asked why founders, CEOs, or developers weren't part of the discussion or in attendance. The moderator responded that they couldn't find any qualified enough to take part. The discussion was basically: how do the products companies build affect society? Laws aren't the founders' problem, that's what lawyers are for, and ethics - who cares, right?
This frenzy, this rat race towards the next billion-dollar company at any cost, has torn down the fabric of society to the individual thinking level; or more like not thinking, just wanting and needing.
> Our most important priority right now is making sure that our warfighters and national security experts are not deprived of important tools in the middle of major combat operations.
> we had been having productive conversations with the Department of War over the last several days, both about ways we could serve the Department that adhere to our two narrow exceptions, and ways for us to ensure a smooth transition if that is not possible.
Why are people leaving OpenAI when this is Anthropic's stance?
Are their two narrow requirements enough to draw the ethical boundary people are comfortable with?
What’s a “warfighter?” Do they come from the “Gulf of America?” We used to call them servicemen or service members. Emphasizing they served the people. I guess that’s too effeminate for our roided up and ironically hyper-insecure Secretary of Defense.
There are so many inference providers not working for the Department of War. Even Alibaba, and sure, China has lots of issues, but they are not bombing anyone right now, if that's your first priority. Or else smaller US/European/Asian companies with a pure civilian focus. The SOTA open-weights models they serve are perfectly suitable for coding and chat. I run a local Qwen3.5-122B-A10B-NVFP4 instance and it writes entire Android apps from scratch, and that's a midsized model.
Can you give a list of high quality alternatives? Morally speaking i would put China on par with the US if not worse (due to their ongoing Uyghur genocide). I will check out Qwen3 but would be interested in others.
Frankly it’s a shitshow all around.
The truth is that nobody gives a fuck about this. They have no moral qualms, just practical.
And these are the people that should bring us the future.
Man what a depressing scenario.
The Department of Defense was named as such after the detonations of the atomic bombs over Hiroshima and Nagasaki.
We - as humanity - collectively recognized the weight of our creation, and decided to walk it back.
Discussing "AI alignment" in the same breath as aligning with a "Department of War" (in any country) is simply not an intellectually sound position.
None of the countries we've attacked this year poses an existential threat to humanity. In contrast, striking first and pulling Europe, Russia, and China into a hot war beginning in the Middle East surely poses a greater collective threat than bioweapons, sentient AI, or the other typical "AI alignment" concerns.
Why aren’t there more dissidents among the researcher ranks?
Among those who would resist, half would've done so outwardly by now and been fired, the other half would be hiding their activity. In both cases we wouldn't be hearing about them now.
> Why aren’t there more dissidents among the researcher ranks?
Because they’ve likely all lost faith in humanity watching Trump get reelected and now just want to get rich and hope to insulate their families from the reality we’re all living in.
"We both want a docile American public who go along with our desires so we can achieve goals that may be contrary to the interests of the American public."
Would love to enumerate those commonalities. Run by a psychopath? Commitment to violent lethality? Burning billions of dollars for uncertain goals? (ok there's one)
Dario, you are making a conscious choice to start developing autonomous AI weapons. That is what all of this is about, that is what you have offered to work with the DoW towards. Your red line is not that autonomous AI weapons are inherently wrong, potentially an existential threat to humanity and should be banned via treaty like chemical and biological weapons; rather you believe Claude is just not there yet and you want to help close the gap.
Do you have plans to work on a kinder, gentler form of domestic mass surveillance as well? Or will you simply leave it up to others to disguise the eventual turning of your foreign surveillance models inwards towards the United States themselves?
After hearing Palmer Luckey's argument for the name change[0], I tend to think it's a good change.
Some of his arguments:
It used to be called the Department of War, and it had a better track record with regard to foreign conflict under that name than it did under the DoD name.
Department of War is a more honest name; Department of Defense is a somewhat newspeak term, although "Department of Peace" would be worse.
It's harder to seek funding for "war" than it is to seek funding for "defense".
If you ask someone, "Do you want to spend money on education or war?", you will get a different answer than if you ask, "Do you want to spend money on education or defense?"
The problem with this argument is that the _original_ Department of War is now called the Department of the Army, which existed alongside the Department of the Navy. Besides, it’s a moot point unless Congress actually changes the name.
It'll be very interesting to see how this case gets resolved - in court and in the court of public opinion. I believe it's incredibly important and I hope they prevail.
As much as Trump and Hegseth would like it to be called the Department of War, it still takes an act of Congress to change the name of the Department of Defense. No reason to call it by anything else until that happens.
I think one of the weaknesses of rationalism and effective altruism is that they try to make a clean break from the common-law legal reasoning that the government, and thus corporations, operate on. While I find rationalism a useful lens, the fact is that the common-law legal framework is totally dominant, and so these deontological arguments, made rationally, collapse very quickly when translated to the dominant framework.
Not everything has to be a conspiracy or some 4D chess business move. Dario is a morally motivated person and regretted the tone that was being conveyed in that memo, so he apologized.
What a world we live in now where private companies are apologising for the "tone" of their speech while official representatives of the government daily express blatant lies and misrepresentations without the slightest fear of consequence.
It really is incredibly sad that what was one of the most respected countries in the world has descended to this - an utter mockery of a functioning democracy.
The OpenAI astroturfers jumped on this one. Their only interest is in trying to spin Anthropic as not meaningfully better to dissuade people from switching, not to get people to drop both companies altogether.
DoD still has not meaningfully moved to the DoW moniker, to me it represents the most fascist tendency, to make announcements and presume that’s enough to change the truth on the ground. The legal entity one contracts with is DoD. Going along with “DoW” is signal to me that a party has capitulated to the most absurd form of governance.
Pragmatically, it's for the best to use its preferred name instead of legal name when sucking up to the department and Trump to try to get back in good graces.
I don't think AGI depends on Anthropic's survival, and frankly, right now, I'd rather have someone say clearly: "They cannot stomach the existence of someone telling them 'No' or adhering to moral principles. Like spoiled children, they can't hear the former and are terrified by the latter, because it might expose them to the condemnation they deserve."
A long time ago I worked for a company whose software, I learned, was being sold to help target people during the Iraq war. I quit because I cannot support building software that kills people.
This is a message to the people working on that line of business at Anthropic: you don't have to do it, you can quit. If you are helping this insane administration conduct war on Iran, quit. You don't need to have that kind of blood on your hands.
I saw someone's hypothesis that a generative model was used to help classify buildings to decide what to bomb, and that the girls' school was misclassified. If this was an Anthropic model, I can imagine what it feels like to be a worker there in that line of business.
I've also quit a job where the products I was working on were meant to be deployed to CBP to hunt down immigrants. It's a nice gesture, but it won't stop these companies. They just hired someone else without an ethical backbone and continued the project like nothing happened.
Tech leadership is rotten to the core, and that can't be fixed by individuals making a stand.
At a technical level, I don't believe they're specifically working on targeting anyone. They're providing a general-purpose API that Palantir is presumably using to build the target-finding software.
I imagine that's why the implementation got so far along before this blew up. Someone at Anthropic talked with someone at Palantir and they had a "you did what? Did you read the contract terms" moment, and that was after it went into production.
You got me wondering, so I checked to see how much Anthropic's bribed Trump so far. According to Dario, Trump has been soliciting bribes, but they refused to pay, and the contract "renegotiation" is retribution:
"Amodei claimed that tensions between his company and the Trump administration stem partly from the firm’s refusal to financially support Trump and its approach to AI regulation and safety issues."
The internal memo did read as fairly unhinged and political, which is not the message Dario likes to present. I'm glad he addressed this. It was unprofessional and unhelpful - even if Sam Altman is, in fact, a disgusting lunatic.
The one where he accuses Trump of retaliating against Anthropic after failing to solicit a bribe?
That should be the headline here. We know Trump personally made $4B last year, and we know he's been using the full power of the US gov't to retaliate against people that don't "support" him.
Come 2029, when there's an opportunity for the corruption trials to start, this sort of behavior needs to be front of the public mind, both at the top, and throughout his network of appointees.
"As we wrote on Thursday, we are very proud of the work we have done together with the Department, supporting frontline warfighters with applications such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more."
When I was living in SF, we had lived in the same apartment for 5 years and then our landlord sold the building. The new owner was doing a condo-conversion and so we got 'evicted' (in reality he paid us a small sum of money to move out since evictions are complex there).
My partner and I were both employed, we were going to be fine (although paying much higher rent) but there was this visceral, "The place that we thought was home is being taken and there's nothing we can do about it" unease in the pit of my stomach that stuck with me for months and months.
This really feels the same as that really unpleasant time.
My, the world has changed.
>But don’t get distracted by that; I didn’t know at the time.
Caleb Hearth: "Don't Get Distracted" https://calebhearth.com/dont-get-distracted
They're not. But if it makes you feel better to believe that, everyone has their own coping mechanism.
Aside from that, there are a lot more people in tech now. It grew too fast to maintain all the values it had back in the '00s and earlier.
there's a corrupting force we're not coming to terms with here
This is what baffles me when I see people flocking to them for subscriptions based on these events.
Personally, I loathe seeing power shift towards mega corporations like that, away from being able to run your own computer with free software, but it feels like the economics are headed that way in terms of productivity.
Obviously anyone who has used LLMs know they are not on par with humans. There also needs to be an accountability framework for when software makes the wrong decision. Who gets fired if an LLM hallucinates and kills people? Perhaps Anthropic's stance is to avoid liability if that were to happen.
I guess let the record state that I am deeply morally opposed to automated killing of any kind.
I am sick to my stomach when I really try to put myself in the shoes of the indigenous peoples of Africa who were the first victims of highly automatic weapons, "machine guns" or "Gatling guns". The asymmetry was barbaric. I do hope that there is a hell, simply so that those who made the decision to execute those peoples en masse have a place to rot in eternal hellfire.
To even think of modernizing that scene of inhumane depravity with AI is despicable. No, I am deeply opposed to automated killing of any kind.
Eventually, unfortunately, we will build these systems, but it is weak to argue that the technology isn't ready right now and that is why we won't build them. No matter when these systems come online there will be collateral damage, so there will be no right time from a technology standpoint. Anthropic is making that weak argument, and that is primarily what I am dismissive of.

The argument that needs to be made is that we aren't ready as a society for these weapons. The US government hasn't done the work to prove it can handle them. The US people haven't proven we are ready to understand their ramifications. So, in my view, Anthropic shouldn't be arguing that the technology isn't ready; no weapon of war is ever clean, and your hands will be dirty no matter how well you craft the knife. Instead, Anthropic should be arguing that we aren't ready as a society, and that is why they aren't going to support these weapons.
If war is safe to wage, then it just means we'll do it more and kill more people around the globe.
We need to do it because our enemies are doing it, in any case.
Iraq.
Spoiler alert, a bunch of the current ones are going to be seen similarly too.
Also keep in mind when making comparisons that the Vietnam war was not unpopular with Americans at the beginning, and many people justified it all throughout, using language that will be similar to observers of later wars.
Not in same ballpark. There’s no Iraq generation the way there’s a Vietnam one.
> Spoiler alert, a bunch of the current ones are going to be seen similarly too.
No they won’t. The lack of a draft and mass domestic casualties dramatically changes the picture. Especially on the saliency axis.
But the war was still deeply unpopular. There is a reason America did the extraordinary - to that point - and elected its first black president.
The economic toll will be greater with these wars than Vietnam.
https://en.wikipedia.org/wiki/15_February_2003_Iraq_War_prot...
Clearly, all of the traditional big leagues were lined up to take the Army's money. IBM, Control Data, Cray, SGI, and HP all viewed weapons research as a major line of business. DEC was the default minicomputer of the DoD and Sun created features to court the intelligence community including the DoD "Trusted Workstation". Sperry Rand defined "military industrial complex".
For every company that stands on values, there is another that will do some shady shit for a dollar.
https://www.google.com/maps/@37.6735255,-122.389804,3a,31.2y...
I don't want wars.
But tell me, what would you like your country to do when conflicts arise due to want of natural resources? Would you want your country to just give up that resource your people depend on, like, maybe 50/50?
Do you believe it will always be possible to settle on a solution in a peaceful way that works for everyone?
> defending your country
I am afraid that this does not always have to be an incoming attack. What if some country has a resource that your country badly needs, without which your people will suffer badly, and imagine the same is true for the other country. How much of a hit to the economy and QoL are you willing to sustain before you ask your government to go out there and get the required resource by force?
I totally get that war is profitable, and most wars cannot be justified. But ideas like this sound like sabotaging your own country and thus your own existence.
Yes they do because they are trying to sell to the Department of War.
No one made Anthropic try to be a military contractor. It’s pretty much the definition of being a military contractor that your product helps to kill people.
Watch as the same people pushing for war today will pretend they were always against it 10 years from now.
I guess we're just doomed to repeat the same cycles.
No. Your tech experience was an aberration.
For almost all of history, including recent history, tech and military went together - whether compound bows, spears, or metallurgy.
Euler used his math to develop artillery tables for the Prussian army.
von Neumann helped develop the atom bomb.
The military played a huge role in creating Silicon Valley.
However, to people who grew up in the mid to late 90s, it is easy to miss that that period was a major aberration. You had serious people talking about the end of history. You had John Perry Barlow's utterly naive Declaration of Independence of Cyberspace which looks more and more naive every year.
1. As President Eisenhower said in his farewell address in 1961 [1], every dollar spent on the military-industrial complex is a dollar not spent on schools or houses or hospitals or bridges;
2. Every American company with sufficient size eventually becomes a defense contractor. That's really what's happened with the tech companies. They're moving in lockstep with the administration on both domestic and foreign policy;
3. The so-called "imperial boomerang" [2]. Every tactic, weapon, and strategy used against colonial subjects is eventually used against the imperial core, eg [3]. Do you think it's an accident that US police forces have become increasingly militarized?
The example I like to give is China's high speed rail. China started building HSR only 20 years ago and now has over 32,000 miles of HSR tracks taking ~4M passengers per day. The estimated cost for the entire network is ~$900B. That's less than the US spends on the military every year.
I really wonder what Steve Jobs would've done were he still alive. Tim Apple has bent the knee and kissed the ring. Would Steve Jobs have done the same? I'm not so sure. He may well have been ousted (again) because of it.
Then again, I think Steve Jobs was the only Silicon Valley billionaire not in a transhumanist polycule with a more than even chance of being in the files.
[1]: https://www.archives.gov/milestone-documents/president-dwigh...
[2]: https://en.wikipedia.org/wiki/Imperial_boomerang
[3]: https://www.amnestyusa.org/blog/with-whom-are-many-u-s-polic...
Given that Steve Jobs was best friends with Larry Ellison, I’d say he wouldn’t have bent the knee because he would’ve been standing hand in hand with Trump, just like Larry.
CEOs are bound to sociopathically amoral behavior - not by the law, but by the Pareto-optimal behavior of the job market for executives. The law obligates you to act in the interests of the shareholders, but it does not mandate[1] that Line Go Up. That is a function of a specific brand of shareholder that fires their CEOs every 18 months until the line goes up.
In 2007, Big Tech had plenty of the consumer market to conquer, so they could afford to pretend to be opposed to selling to the military. But the game they were playing was always going to end with them selling to the military. Once they were entrenched they could ignore the no-longer-useful-to-us-right-now dissenters, change their politics on a dime, and go after the "real money".
[0] Several of the sibling comments are mentioning hypothetical scenarios involving dual-use technologies or obfuscated purposes. Those are also relevant, but not the whole story.
[1] There are plenty of arguments a CEO could use to defend against a shareholder lawsuit that they did not take a particularly short-sighted action. Notably, that most line-go-up actions tend to be bad long-term decisions. You're allowed to sell low-risk investments.
> While the facility was functioning as a school, CBC News has confirmed a previous New York Times report stating the building was once part of an Islamic Revolutionary Guard Corps (IRGC) base.
https://www.cbc.ca/news/world/iran-school-bombing-investigat...
Assuming AI was used for finding targets, perhaps the training data was out of date?
After spending a couple of years studying in the US, I came to the conclusion that executives and board members in industry don't care about society or humans. Even universities don't push students towards critical thinking and ethics; it has all turned into vocational training, turning humans into thinking tools.
At the same time, at Harvard, I attended VR innovation week, and the last panel discussion of the day was on ethics and law, led by a law professor, a journalist, and a moderator, and attended by a handful of people. I inquired why founders, CEOs, or developers weren't part of the discussion or in attendance. The moderator responded that they couldn't find any qualified enough to take part. The discussion was basically: how do the products companies build affect society? Laws aren't the founders' problem, that's what lawyers are for, and ethics - who cares, right?
This frenzy, this rat race towards the next billion-dollar company at any cost, has torn down the fabric of society to the individual thinking level; or more like not thinking, just wanting and needing.
> we had been having productive conversations with the Department of War over the last several days, both about ways we could serve the Department that adhere to our two narrow exceptions, and ways for us to ensure a smooth transition if that is not possible.
Why are people leaving openAI when this is Anthropic's stance? Are their two narrow requirements enough to draw the ethical boundary people are comfortable with?
Why wouldn’t you move your dollars to someplace incrementally better?
Their statement doesn't make it sound like they are incrementally better; they are bending over backwards to keep working for war.
We - as humanity - collectively recognized the weight of our creation, and decided to walk it back
Discussing "AI alignment" in the same breath as aligning with a "Department of War" (in any country) is simply not an intellectually sound position
None of the countries we’ve attacked this year pose an existential threat to humanity. In contrast, striking first and pulling Europe, Russia, and China into a hot war beginning in the Middle East surely poses a greater collective threat than bioweapons, sentient AI, or the other typical “AI alignment” concerns
Why aren’t there more dissidents among the researcher ranks?
Because they’ve likely all lost faith in humanity watching Trump get reelected and now just want to get rich and hope to insulate their families from the reality we’re all living in.
"We both want a docile American public who go along with our desires so we can achieve goals that may be contrary to the interests of the American public."
This is not the forbidden love story I would've asked for.
Do you have plans to work on a kinder, gentler form of domestic mass surveillance as well? Or will you simply leave it up to others to disguise the eventual turning of your foreign surveillance models inwards towards the United States themselves?
"Palantir's Maven uses Anthropic's Claude code, sources say."
https://www.reuters.com/technology/palantir-faces-challenge-...
It is always astonishing that the reviled mainstream press is more critical than hackers these days.
To me the most Orwellian thing is everyone using the newspeak name for the DoD.
Some of his arguments:
It used to be called the Department of War, and it had a better track record with regard to foreign conflict under that name than it did under the DoD name.
Department of War is a more honest name; Department of Defense is a somewhat newspeak term, although "Department of Peace" would be worse.
It's harder to seek funding for "war" than it is to seek funding for "defense". If you ask someone, "Do you want to spend money on education or war?", you will get a different answer than if you ask, "Do you want to spend money on education or defense?".
[0] Palmer Luckey talking to Mike Rowe about the name change: https://youtu.be/dejWbn_-gUQ?t=1007
Calling it the Department of Defense implies a system of laws, checks and balances which no longer exists.
What a world we live in now where private companies are apologising for the "tone" of their speech while official representatives of the government daily express blatant lies and misrepresentations without the slightest fear of consequence.
It really is incredibly sad that what was one of the most respected countries in the world has descended to this - an utter mockery of a functioning democracy.
Posted here: https://news.ycombinator.com/item?id=47195085
This is a message to the people working in that line of business at Anthropic. You don't have to do it; you can quit. If you are helping this insane administration conduct war on Iran, quit. You don't need to have that kind of blood on your hands.
I saw someone's hypothesis that a generative model was used to help classify buildings to decide what to bomb, and that the girls' school was misclassified. If this was an Anthropic model, I can imagine what it feels like to be a worker there in that line of business.
Tech leadership is rotten to the core, and that can't be fixed by individuals making a stand.
I imagine that's why the implementation got so far along before this blew up. Someone at Anthropic talked with someone at Palantir and they had a "you did what? Did you read the contract terms" moment, and that was after it went into production.
https://news.ycombinator.com/item?id=47269649
"Amodei claimed that tensions between his company and the Trump administration stem partly from the firm’s refusal to financially support Trump and its approach to AI regulation and safety issues."
That should be the headline here. We know Trump personally made $4B last year, and we know he's been using the full power of the US gov't to retaliate against people that don't "support" him.
Come 2029, when there's an opportunity for the corruption trials to start, this sort of behavior needs to be front of the public mind, both at the top, and throughout his network of appointees.
May we all see better times.
When I was living in SF, we had lived in the same apartment for 5 years and then our landlord sold the building. The new owner was doing a condo-conversion and so we got 'evicted' (in reality he paid us a small sum of money to move out since evictions are complex there).
My partner and I were both employed, we were going to be fine (although paying much higher rent) but there was this visceral, "The place that we thought was home is being taken and there's nothing we can do about it" unease in the pit of my stomach that stuck with me for months and months.
This really feels the same as that really unpleasant time.
Be proud of that!