When my sister and I played Monopoly as kids, we had lost the manual, so whenever we didn't like the outcome of something, we would make up rules about what was right. Technically, then, it was very easy to stay compliant while still doing well, because we could rewrite the rules.
Also, since I was older, I feel like I was able to get away with those redefinitions a lot more often…
The big reason it's "obvious" when tech megacorps do it is that big tech is new to the game and doesn't have an existing regulatory capture system already up and running and legitimized the way medicine, civil engineering, energy, agriculture, chemicals, etc., do.
If this were 3M making nasty stuff for Northrop to put in bombs and drop on brown people, or Exxon scheming up something bad in Alaska, or bulldozing a national park for solar panels, or some other legacy BigCo doing slimy things that are in the interests of them and the government but against the interest of the public, they'd have 40 years of preexisting trade group publications, bought-and-paid-for academic and media chatter, etc. They could point to all of that and say "look, this is fine, because the stuff we paid into in advance to legitimize these sorts of things as they come up says it is" (though obviously they'd use very different words).
> If this were 3M making nasty stuff for Northrop to put in bombs and drop on brown people or Exxon scheming up something bad in Alaska or bulldozing a national park for solar panels or some other legacy BigCo doing slimy things that are in the interests of them and the government but against the interest of the public they'd have 40yr of preexisting trade group publications, bought and paid for academic and media chatter, etc, etc, they could point to and say "look, this is fine because the stuff we paid into in advance to legitimize these sorts of things as they come up says it is" though obviously they'd use very different words.
My friend, this paragraph needed some periods. I could not follow what you were trying to say - but it seemed interesting enough to consider retyping.
If you add to that the very broad view of what the current administration considers "legal" (as in "pretty much anything we want to do"), I can understand feeling uneasy as a Google employee...
working to directly advance a product used substantially to oppress people via surveillance or war crimes, when you have many other choices, is immoral. easy.
Correct. It depends. For example, it might depend on what the collaboration is likely to result in. Perhaps it would be more likely to be moral if there were some boundaries in place, like "no mass domestic surveillance" or "no fully autonomous weapons".
Because the US government currently believes it is legal to blow up civilian drug traffickers and wage war without congressional approval. So at some point, yes, collaboration is immoral.
The US military has deployed fully autonomous weapons since at least 1979, and potential adversaries are now doing the same. For better or worse, that ship has sailed.
Sure, a dumb bomb is a fully autonomous weapon once it's launched. But let's be real: an LLM making decisions on who to target and when and where to launch munitions represents a meaningful change in our concept of autonomous weapons.
So we are wrong to express any opposition or desire to maybe raise the bar here? Aren't we supposed to be "the good guys"? Or should we just accept a role as the menace of the world, wildly throwing our weight around whenever we have an unscrupulous president?
I'm dripping with sarcasm here, but as far as I know that's actually what macho Pete believes. He believes he blew those girls to hell with god's own fury. Fuck you, Pete, fuck you.
Who said otherwise? Clearly it’s about facilitating specific acts by the government. Why are y’all acting like it was so wildly broad? No one said “working with the government is inherently immoral.”
Idk about morality, but it’s certainly a way to stop dystopian mass surveillance nightmares if everyone capable of building one refuses.
So if you live in the US and don’t want one government agency in the US to have this power (that is ambiguous under current law), one way you can try to avoid it is by refusing to sell it to them and urging others to do the same.
It's a long shot, sure, but it certainly seems more effective than hoping the legislature wakes up and reins in the executive these days.
Given most governments' policies and direct engagement in all kinds of monstrosities over the last millennia, there is really no reason to limit the case to the USA, indeed.
Thankfully Russia, China, etc. have the same qualms as we do in the United States and will refuse to send their brightest engineers to work on weapons so they don't become "morally compromised"!!!
I don't know if you're being sarcastic (sounds like you are!), but indeed a lot of engineers left Russia after the war in Ukraine started, as they didn't want to be drafted and didn't want to contribute to the war effort in some way, even indirectly. Of course, many stayed or even willingly helped. See how many engineers from Iran work abroad too, for moral and other reasons.
The point is - this happens everywhere, it's not just some weird western thing.
Greenland, 100%. It’s imperative that the U.S. controls Arctic shipping lanes and doesn’t allow Russia or China to dominate them. However that has to happen is entirely justified.
Iran is different; it's only a national security interest to Israel, and the U.S. wants to maintain a relationship in the ME and with Mossad. I disagree with those things strongly, but they're doing it whether they use Gemini or not.
This government doesn't GAF what is "lawful" and what isn't. Was what happened to Pretti and Good in Minneapolis lawful? Would you work for ICE/CBP with no qualms at all?
See also the new national sport of hunting for fishing boats off the South American coast. Is that "lawful?"
And yes, since you went there: everything the Nazis did was "lawful." To the extent it wasn't "lawful," they made it "lawful."
All of those things are lawful and 100% justified, yes. Don't attack law enforcement with a deadly weapon, whether it's a vehicle or gun.
ICE is objectively more effective at protecting American citizens and interests than any conflict in Iraq or Afghanistan ever was.
Retrofitted "fishing boats" packed full of narco-terrorists and fentanyl being shipped to the US are entirely lawful to blow sky-high once they're in international waters.
> Don't attack law enforcement with a deadly weapon, whether it's a vehicle or gun.
How do you attack law enforcement with a gun while on your knees, with your arms pinned behind you and the gun is holstered? It's interesting how we can watch the same video, and some people only see what they are told to see.
Every gun training course will tell you that the person with the weapon has a duty to deescalate and remove himself from a volatile situation. Alex Pretti did the exact opposite.
Police are operating in an emergency environment with limited info.
They know that:
Someone showed up armed to a protest, became violent, and hit an LEO. They removed a gun from his waistband, and then a gunshot went off because of Sig's faulty strikers.
They don't know if he has a second weapon in his front waistband; they're responding quickly to a man who placed himself in this situation via violence.
I wish he hadn’t been stupid enough to place himself in that position, but it was 100% justified to neutralize a potential threat to citizens and law enforcement.
> Any AI researcher who continues to work here is morally compromised.
Arguably it's exactly the opposite. In the same way we ask billionaires to pay their taxes because the regulatory regime is what allowed them the structure to make their billions in the first place, the national security of the country the AI researchers are in is what allows them to make a vast salary to work on interesting, leading edge capabilities like AI. They should feel obligated to help the military.
It's funny to me how many of the progressive people I know and am friends with who work at these AI companies are from marginalized demographics (trans, gay, Latino, Black).
They still have faded Bernie stickers on their cars, they're No Kings organizers, "fuck SF, I'm in the East Bay for life, fuck tech" - and yet you all make seven figures Monday through Friday by supporting the death of society and democracy.
I don't dare say anything, though, because "money is money" and the Bay is expensive... but I do sure as shit judge every single person I know who joined OAI, Anthropic, Google, and Meta.
I mean no harm in saying what I said; I love my friends. I just can't stomach the hypocrisy; it's what these companies are preying on and feeding off of.
My friends are incredibly bright and good at what they do; it's why they all have the roles they have. It makes me sad (and frustrated) knowing they are lured in by enough money dangling in front of them to make them swallow their souls and identity, fuelling the fire in the same breath.
I have a deep amount of respect and gratitude for my friends (and anyone else) who choose to work at non-profits and more ethical, mission-based companies for less. I hate how much these AI companies and roles are offering people; it's completely forced lots of gifted people into a war machine.
Do you suspect there is any chance they are fully independent adult human beings with full agency, who have looked at the pros and cons and chosen to make the choices they did with clear eyes? Do you think there's any context that might square their choices with their own internal principles without making them hypocrites? I mean these as real questions. For "friends you love" you really seem to take a dim view of their intelligence.
I'll be honest and say it's made me question and reposition some of my friendships with a number of these friends. Some joined well before we knew how negatively AI would affect society, some have joined in recent years because they were offered 2x their already high comp package, and others will take any job they can get (whom, admittedly, I judge far less, as I know they just need to survive in a HCOL city).
My dim view is more of the AI companies being absurdly overvalued, with too much money to know what to do with, which feeds down into compensation packages, which lure in "innocent" individuals who can't say no. It's not been a healthy market to be vulnerable in; most companies outside AI just aren't getting the same funding and can't compete at all - and it's a shit storm.
I made another comment above. People contain multitudes. Different contexts, different choices, not everyone is in a box defined by the viewer's world view. You can't really know what's going on with someone else, in their heads, in their context, so give them some grace. Instead, this person's "friends" are "hypocrites" who were "lured" into their choices. It's very condescending. I am suggesting the poster re-examine their own views on other people in light of this.
This all works if you assume that any action the government takes must be “lawful”. The assumption here is that the Pentagon is obeying the law and any unlawful use would go through normal reporting / violation channels - same as any illegal order or violation or whistleblower report.
The Pentagon does not want Google or anyone else deciding what they can and cannot use their AI for. They’re saying we won’t break the law, and that should be enough for you - pinky swear!
And that seems to be enough for Google. Though I might request some agentic auditing capability to verify, rather than taking them at their word.
Next step: is Google FEDRAMP’d yet for this and for classified enclaves? Or do they also go through Palantir’s AI vehicle?
That's presumably the trick, and it's not a subtle one; it's why the article puts it in quotes in the headline. Google gets to claim that it stood up for principles because it boldly insisted that the government obey the law, and the government will claim that whatever it decides to do is lawful. It's the same as what OpenAI did, except not handled buffoonishly.
And since the court has no way to physically force anything - that's the executive branch's function (it's right there in the name) - "lawful" has no meaning whatsoever if it's the executive branch that wants to break the law.
No, it doesn't at all. Private corporations shouldn't be telling the government what it can and can't do. That's the job of the people. You want private corporations overriding your vote?
Especially concerning given how creative the executive branch can be when it comes to what laws mean. With little oversight, it seems guaranteed that it will be used for unlawful activities (despite whatever tortured argument some lawyer will have put into a memo somewhere).
Please! That ship sailed a long time ago. Sure, tell your congressman, who is most likely bribed (lobbying is bribing, let's use the real words) by the same companies to accept the deal. The courts can try, but who is going to enforce it when the people above say that it's fine?
It kind of reminds me of a mix of Skynet in Terminator and Minority Report. But nowhere near as interesting. More annoying than anything else.
I am kind of mad at James Cameron here. Skynet was evil but interesting. Real life controlled by Google is evil but not interesting - it is flat-out annoying.
The classified aspect is probably the most concerning. How can I write my representative (and expect a form letter response six weeks later) if I don't know what I'm objecting to or even if I should be objecting?
How well does this hold up under legal scrutiny when previous actions indicate that the Pentagon would retaliate against Google if they didn't accept this "lawful use only" farce?
Could Google back out of this agreement later by arguing that they were coerced?
Not trying to suggest that Google would be opposed to doing evil, but curious about how solid this agreement would be in practice.
I don't get why this is always such a controversial topic. Should we be decrying Microsoft for selling the DoD/DoW Microsoft Office? They could use Excel or PowerPoint to plan a strike package.
Having your work used by the govt in ways you disagree with feels similar to having your taxes used in ways you disagree with.
When you pay taxes you have no say in the bombs acquired with them or where they are dropped. The latter, though, doesn't seem to provoke the same pushback.
You answered your own implicit question. You have a choice who you sell your work to; you don't have a choice what your taxes do. Seems pretty straightforward why the former elicits more pushback. The government forces you to pay taxes; it doesn't force you to build it tools of surveillance or weapons.
> We remain committed to the private and public sector consensus that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight.
And so starts the lying to our faces. The public and private (from your own employees!) consensus is that it should not be used for those things at all, regardless of "human oversight."
So the rest of the world is fine to spy on; it's the domestic part they don't agree with. So go on, destroy lives all around the world, helping the powers that be build the fascist state. It's fine to use Gemini to tell what building to blow up; it's fine for Gemini to wrongly identify people and cause hundreds or thousands of deaths by telling the military who to attack.
It's pretty funny how these guys are all becoming some kind of internet version of, like, Halliburton. It seems pretty desperate. B2C and B2B applications didn't pan out, I guess?
It's one of the two identified uses for AI that are profitable today: writing code and blowing up schools. They are desperate to show the market that the technology is something more than a money pit.
Doubtful it will even get that far; the DoJ will simply draft an appropriate fig-leaf memo with a predetermined conclusion and the government will plow on ahead.
They simply say they have that memo. Who knows whether they even drafted it for real? And if anyone starts looking, Gemini can quickly draft one itself. Nice!
Lawful is meaningless in the context of the Trump administration. Should Google waver (which they won't), they'll be declared a supply chain risk or otherwise bullied into submission.
Google holds immense power in their position. Trump can make their life very difficult, but Google can make life for Trump very difficult as well. They have no need to kneel; they are choosing to.
Capital and Big Tech have always been opportunistic enablers, not principled actors. Corporate Values have always been nothing but internal propaganda. "Don't be evil", what a farce.
Unsurprising from Google, but still bad. If Google has no right to object to a particular use, this is equivalent in practice to "any use, lawful or not".
Any AI researcher who continues to work here is morally compromised.
For a long time it was legal (and probably still is) for the US to torture enemy combatants. It was never ethical.
Because the US government currently believes it is legal to blow up civilian drug traffickers and wage war without congressional approval. So at some point, yes, collaboration is immoral.
Hey, I think I'm starting to get how this organized religion thing works. Maybe I'll join a few to make sure I go to allllll the good places
"Our enemies would have no qualms building a weapon that will end life on earth! We better build it first because we're the good guys!"
The point is - this happens everywhere, it's not just some weird western thing.
It can also mean facilitating a militaristic surveillance state.
Not necessarily the same things, and at some point we might have to choose whose side we're on.
In extremis, were the people working for Pol Pot just good patriots with no moral culpability?
We could surely at least agree that there are cases where working for the military of your home country doesn't fully excuse your actions.
In fact, I think international tribunals have existed which operated on just those principles.
You propose that other governments' militaries would not be so compromising. Seems reasonable.
But the question then becomes, what is the operative distinction between the two?
The operative distinction is that "lawful use" in the United States of America does not mirror Nazi Germany in even the slightest way.
The courts can intervene later, but they can't un-bomb a hospital.
> The classified deal apparently doesn’t allow Google to veto how the government will use its AI models.
Seems concerning?
question as old as time itself
So Google can't tell the government it needs a warrant to perform a search? Google can't sue over something the government did?
It's Google's product they want to buy.
Congress and the courts obviously.
If you think there's a hole in the law, tell your congressman; don't, for some reason, try to put Google or any AI company above the government.
The first is fully neutered. The second is far too slow.
"Nothing unlawful" needing to be in the contract is inherently concerning, as it's typically the default, assumed state of such a thing.
Having your work used by the govt in ways you disagree with feels similar to having your taxes used in ways you disagree with.
When you pay taxes you have no say in the bombs acquired with them or where they are dropped. The latter, though, doesn't seem to provoke the same pushback.
Vote in elections, local and general.
BTW, I am not making a judgement call on the AI usage issue itself, just saying that this and taxes are more equivalent than it might seem.
https://en.wikipedia.org/wiki/Torture_Memos
"When the president does it, that means that it is not illegal." - Richard Nixon