I did something similar to a local company here in Spain. Not medical, but a small insurance company. Believe it or not, yes, they vibecoded their CRM.
I sent them an email and they threatened to sue me. I was a bit in shock at such a dumb response, but I guess some people only learn the hard way, so for starters I filed a report with the AEPD (Spain's data protection agency), which is known to be brutal.
I've also sent them a burofax demanding the removal of my data from their systems just last Friday.
A similar thing happened to me back in the day when Wi-Fi was still new.
I joined an open network and it turned out to be a law firm. All their computers were on a Samba network with full C: drives shared.
I wrote README.txt files on their drives telling them about the issue, but after some time it was still the same.
Then I went directly to the place to talk to them and also with the idea I could land my first job fixing that mess. But... They got incredibly angry with me, since they claimed they had some very good and expensive contractors taking care of their computers and network, and that I had basically broken in.
At one point I worked as a customer support agent, outsourced to Apple via another company. Apple forced us to use some very outdated browser UIs, basically for filling in forms, across maybe 4-5 different services in some cases. The machines this outsourcing company gave us were, of course, Apple computers, fairly locked down.
But one thing they hadn't locked down was installing extensions in Safari, and since I had some development chops from coding a bunch in my free time, I saw the opportunity to write a tiny extension that saved me a ton of time by merely copy-pasting stuff into the right forms and so on. Basically making the whole thing more efficient for me.
Everything was great, until the person next to me saw I had something different. Cautiously eager, I let them try the extension too; they loved it, and without thinking about it, spread it to other people on our team. Eventually, the manager and the IT team picked up on what was going on and said they'd investigate whether I could maybe start doing those kinds of things full-time instead of being a support agent, and just focus on tooling.
Fast forward two weeks: I get called into a meeting. Apparently someone in the company had been "stealing" CC numbers from customers on calls, and since they didn't think they'd found the person who did it (or something like that), the person known for "doing stuff to the computers" was the next possible suspect, and they fired me right there.
Eventually this firing led me to find my first actual programming job, so I'm not too mad about it, but it really shows how out of touch lots of companies and people are when it comes to how computers actually work.
Nice. I wish more countries had something like that. Many of these organizations are lethargic and have to be forced into action by civilian efforts or the press.
AEPD are well known, even in the rest of the world. They have a different strategy compared to other countries. Ireland's DPC are also heavy handed, but focus on large companies mostly.
France's CNIL is also not bad. They are particularly hard against things like "you accidentally sign up for x y z services when only wanting to sign up to service A".
GDPR in the EU is also miles ahead of what the US has, or at least of what the US has been enforcing for a long time.
That's wonderful! Most of Europe's GDPR/data protection authorities are completely worthless and seem to constantly side with big corps.
Only when they start to side with the people, actually fining businesses billions and billions, will things start to change. I hope we'll see this happen in Europe at large, and not only in a few countries.
> That's wonderful! Most of Europe's GDPR/data protection authorities are completely worthless and seem to constantly side with big corps.
AFAIK, most of them seem to act at least every now and then, judging by https://www.enforcementtracker.com/. Are there any specific countries you're thinking of here?
Particularly, Romania, Italy and Spain seem to have had lots of cases.
People building these apps often have no idea about various data privacy rules.
I am part of a forum with many small business owners. One particular owner has been gung-ho about how he built his entire business app using vibe coding. And my first reaction was - All the power to him. It’s his business and he is free to do so.
But then came the question of data privacy rules, and he had no clue. This was concerning because the impact went beyond his business. His response when the oversight was pointed out was that being ignorant of the law was enough to save him. Still, he went to one of the vibe coding Reddit subs to get help. Then he came back fuming because devs on Reddit asked him to hire real developers. He believes that these developers are delusional and a dying breed, and that AI is so far ahead that developers are going to be dead in a year's time.
I'm also curious how much effort it would be to set up some OWASP tools with an agent and crawl for company tools. I'm sure I'm not the first one to think of this, but for local businesses it would give a solid rep, I suppose.
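If anyone wants to try, OWASP ships a containerized baseline scanner that an agent could drive. A sketch (image name and entrypoint from memory; verify against the current ZAP docs, and only scan targets you are authorized to test):

```shell
# ZAP baseline scan: passively spiders the target and reports common
# misconfigurations to report.html. Names are assumptions, not gospel.
docker run --rm -t ghcr.io/zaproxy/zaproxy:stable \
    zap-baseline.py -t https://example.com -r report.html
```

The baseline scan is passive by default, which keeps it on the right side of "looking" rather than "attacking", though authorization still matters.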
I have a feeling that next year's theme will be security. People have turned off their brain when it comes to tech.
> [burofax is] a service that allows you to send a document with certified proof of delivery and confirmation of the date of receipt, and this confirmation has legal validity
Meanwhile on LinkedIn… every sales bozo with zero technical understanding is screaming at the top of their virtual lungs that everything must be done with AI and that it is the solution to every layoff, every economic problem, everything.
It is just a matter of time before something really, really bad happens.
That worked for years and traveled tens of thousands of kilometers until the disaster. They were also quite aware of the risks associated with a hydrogen airship and did all sorts of mitigations to avoid them.
Compared to that vibe coding has no such qualities.
It looks like bad stuff is already happening. "Really bad" is a bit scary if you qualify that as a threat to life or livelihood. Let's see what the next generation of models brings to this equation.
I think vibe-coding is cool, but it runs into limits pretty fast (at least right now).
It kinda falls apart once you get past a few thousand lines of code... and real systems aren't just big, they're actually messy: shitloads of components, services, edge cases, things breaking in weird ways. Getting all of that to work together reliably is a different game altogether.
And you still need solid software engineering fundamentals. Without understanding architecture, debugging, tradeoffs, and failure modes, it's hard to guide or even evaluate what's being generated.
Vibe-coding feels great for prototypes, hobby projects, or just messing around, or even some internal tools in a handful of cases. But for actual production systems, you still need real engineering behind it.
As of now, I'm 100% hesitant to pay for, or put my data on systems that are vibe-coded without the knowledge of what's been built and how it's been built.
There are all kinds of memory hacks, tools that index your code, etc.
The thing I have found that makes things work much better is, wait for it... Jira.
Everyone loves to hate on Jira, but it is a mature platform for managing large projects.
First, I use the Jira Rovo MCP (or CLI, I don't wanna argue about that) to have Claude Code plan and document my architecture, features, etc. I then manually review and edit all of these items. Then, in a clean session (or many), have it implement and document decisions in comments, etc. Everything works so much more reliably for large-ish projects like this.
When I first started doing this in my solo projects it was a major, "well, yeah, duh," moment. You wouldn't ask a human dev to magically have an entire project in their mind, why ask a coding agent to do that? This mental model has really helped me use the tools correctly.
edit: then there is context window management. I use Opus 4.6 1M all the time, but if I get much past 250k usage, that means I have done a poor job of starting new sessions. I never hit the auto-compact state. It is a universal truth that LLMs get dumber the more context you give them.
I think everyone should implement the context status bar config to keep an eye on usage.
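For reference, I mean something like this in ~/.claude/settings.json. A sketch: the `statusLine` field name and shape are from memory, so check the current Claude Code docs, and the command here stands in for whatever script prints your context stats:

```json
{
  "statusLine": {
    "type": "command",
    "command": "my-statusline-script.sh"
  }
}
```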
But even spec-first, using Opus 4.6 with plan mode, the output is merely good, not great. It isn't bad though, and the fixes are often minor, but you _have_ to read the output to keep the quality decent. Notably, I found that LLMs dislike removing code that doesn't serve an active purpose. Completely dead code they remove, but if the dead code has tests that still call it, it stays.
And small quality stuff. Just yesterday it used a static method where a class method was optimal. A lot of very small stuff I used to call my juniors out on during reviews.
On the other hand, it used an elegant trick to make the code more readable, but failed to use the same trick elsewhere for no reason. I'm not saying it's bad: I probably wouldn't have thought of it by myself, and would have kept the worse solution. But even when Claude is smarter than I am, I still have to review it.
(All the discourse around AI did wonders for my imposter syndrome though)
Doesn't require Jira but yes, specification-first is the way to get better (albeit still not reliably good) results out of AI tools. Some people may call this "design-first" or "architecture-first". The point is really to think through what is being built before asking AI to write the implementation (i.e. code), and to review the code to make sure it matches the intended design.
Most people run into problems (with or without AI) when they write code without knowing what they're trying to create. Sometimes that's useful and fun and even necessary, to explore a problem space or toy with ideas. But eventually you have to settle on a design and implement it - or just end up with an unmaintainable mess of code (whether it's pure-human or AI-assisted mess doesn't matter lol).
I used to manually curate a whole set of .md files for specs, implementation logs, docs, etc. I operated like this for a year. In the end, I realized that I was rolling my own crappy version of Jira.
One of the key improvements for me when using Jira was that it has well defined patterns for all of these things, and Claude knows all about the various types of Jira tickets, and the patterns to use them.
Also, the spec driven approach is not enough in itself. The specs need sub-items, linked bug reports and fixes. I need comments on all of these tickets as we go with implementation decisions, commit SHAs, etc.
When I come back to some particular feature later, giving Claude the appropriate context in a way it knows how to use is super easy, and is a huge leap ahead in consistency.
I know I sound like some caveman talking about Jira here, but having Claude write and read from it really helped me out a lot.
It turns out that dumb ole Jira is an excellent "project memory" storage system for agentic coding tools.
It absolutely falls apart more often than not. And requires even better engineering practices than before, because people are just accepting the code changes without understanding the technical debt created by them. On this I agree.
There are models that can be run locally. This morning I tested Gemma 4 running on 128 GB of RAM. It was very slow, like 20 minutes to refactor something instead of 20 seconds, but it seems to be as capable as the paid models that run on an expensive cloud subscription in one of these hated data centers. And no data is uploaded to them.
I suggest actually using Claude Code and making a sample app with it. It absolutely can make apps even if you don't know any fundamentals. I think it can work up to 20k LOC from my experience. You do need a human to give feedback, but not someone who understands software principles.
I saw something very similar a few months ago. It was a web app vibe coded by a surgeon. It worked, but they did not have an index.html file in the root web directory, and they would routinely zip up all of the source code (which contained all the database connection strings, API credentials, AWS credentials, etc.) and place the backup in the root web directory. They would also dump the database to that folder (for backup). So web browsers that went to https://example.com/ could see and download all the backups.
The quick fix was a simple, empty index.html file (or setting the -Indexes option in the apache config). The surgeon had no idea what this meant or why it was important. And the AI bots didn't either.
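For anyone copying that quick fix: in Apache the directory-listing kill switch is one directive. A sketch; adjust the path to your vhost (or put `Options -Indexes` in an .htaccess for the web root):

```apache
# Stop Apache from auto-generating an index page when no index.html
# exists, so stray backups in the web root are no longer listed.
<Directory "/var/www/html">
    Options -Indexes
</Directory>
```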
The odd part of this to me was that the AI had made good choices (strong password hashes, reasonable DB schema, etc.) and the app itself worked well. Honestly, it was impressive. But at the same time, they made some very basic deployment/security mistakes that were trivial. They just needed a bit of guidance from an experienced devops security guy to make it Internet worthy, but no one bothered to do that.
Edit: I do not recommend backing up web apps on the web server itself. That's another basic mistake. But they (or the AI) decided to do that and no one with experience was consulted.
interesting, so the ai got the hard stuff right. password hashing, schema design, fine. it fumbled the stuff that isn't really "coding" knowledge, feels more like an operational intuition? backup folder sitting in web root isn't a security question, it's a "have you ever been burned before" question, and surgeon hadn't. so they didn't ask and the model didn't cover it, imo that's the actual pattern. the model secures exactly what you ask about and has no way of knowing what you didn't think to ask. an experienced dev brings a whole graveyard of past mistakes into every project. vibe coders bring the prompt
This is what I’m noticing. At my workplace, we have 3 or 4 non-devs “writing” code. One was trying to integrate their application with the UPS API.
They got the application right, and began stumbling with the integration - created a developer account, got the API key, but in place of the application's URL, they had input "localhost:5345" and couldn't get that to work, so they gave up. They never asked the tech team what was wrong, never figured out that they needed to host the application. Some fundamental computer literacy is the missing piece here.
I think (maybe hopeful) people will either level up to the point where they understand that stuff, or they will just give up. Also possible that the tools get good enough to explain that stuff, so they don’t have to. But tech is wide and deep and not having an understanding of the basic systems is… IMO making it a non-starter for certain things.
Maybe this is what's missing in the prompt? We've learned years ago to tell the AI they're the expert principal 100x software developer ninja, but maybe we should also honestly disclose our own level of expertise in the task.
A simple "I'm a professional surgeon, but sadly know nothing about making software" would definitely make the conversation play out differently. How? Needs to be seen. But in an idealized scenario (which could easily become real if models are trained for it), the model would coach the (self-stated) non-expert users on the topics it would ordinarily assume the (implicitly self-stated) expert already knows.
The fix is to not let users download the credentials. In fact, ideally the web server wouldn't have access to files containing credentials, it would handle serving and caching static content and offloading requests for dynamic content to the web application's code.
Disabling auto-indexing just makes it harder to spot the issue. (To clarify, also not a bad idea in principle, just not _the_ solution.) If the file is still there and can be downloaded, that's strictly something which should not be possible in the first place.
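A server-side deny rule gets closer to that "not possible in the first place" property than hiding the listing does. An Apache 2.4 sketch; the extension list is illustrative:

```apache
# Refuse to serve archives, dumps, and env files even if someone drops
# them into the web root again.
<FilesMatch "\.(zip|tar|gz|sql|bak|env)$">
    Require all denied
</FilesMatch>
```

Though, as you say, the real fix is keeping credential files and backups out of the document root entirely.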
Agent-Native DevOps tools are probably necessary. There should be no reason they would do it manually.
How I see it happening: agents like CC have built-in skills for deployment and use building blocks from either AWS or other simpler providers. Payment through OAuth and seamless checkout.
Software engineering is looking more and more like it needs a professional body in each country, and accreditation and standards. Ie it needs to grow up and become like every other strand of engineering.
Gone should be the days of “I taught myself so now I can [design software in a professional setting / design a bridge in a professional setting].” I’m not advocating gatekeeping - if you want to build a small bridge at the end of your garden for personal use, go for it. If you want to build a bridge in your local town over a river, you’re gonna need professional accreditation. Same should be true for software engineering now.
I agree with that and stand by these words. If people want to call it gatekeeping, so be it. Programming, software engineering if you will, is a serious discipline, and this craze needs to stop. Software building should be regulated and properly accredited as any serious activity.
As the sibling pointed out, there are already plenty of laws about, for example, handling of personally identifiable data. Somehow there is a lack of awareness, perhaps what is needed is a couple of high-profile convictions (which can't be too far off).
One of the key functions of a professional body is to ensure all members are aware of existing and new laws, standards and codes of practice. And to ensure different grades of engineer are aware of different levels of the standards. And that sector-specific laws and standards are accredited accordingly.
High profile convictions are not a good way of dealing with this. Not in the short or long term. Sure they have an impact, and laws should be enforced, but that’s not a substitute for managing the industry properly.
Nothing would be more effective at killing open source and commercial software businesses than requiring everyone who writes and ships software to users, directly or indirectly (e.g. via an open-source library), to have a License To Program from a Software Licensing Organization.
> aware of existing and new laws, standards and codes of practice
Yeah, because software business is not at all ruled by fads.
1997: you have to follow Extreme Programming (XP) or you don't get your license
2000: you now have to use XML for everything or you don't get your license
2002: you now have to follow Agile or you don't get your license
2025: you now have to write everything in Rust or you don't get your license
I think the problem is that the person described had no idea what they were doing even in their own professional capacity. They needed to know about patient data management, but they didn't.
The way I see it, if they didn't even realize that they are doing something they shouldn't, they wouldn't have even known they need accreditation, even if that was required. Unless we restricted access to gazillions of tools without it of course.
I think it'll work itself out over time as what AI is/isn't and what data privacy means is discussed more. I'd leave accreditation entirely out of it, because we cannot even agree on what are the actual best practices or if they matter.
Professional bodies act as nothing more than gatekeepers and rent seekers for things of this nature. Anyone can write software, but not everyone writes security-minded software.
We already have laws in place, and certifications that help someone understand if a given organization adheres to given standards. We can argue over their validity, efficacy, or value.
The infrastructure, laws, and frameworks exist for this. More regulation and bureaucracy doesn't help when the current state isn't enforced.
There’s a reason why many professions have professional bodies and consolidated standards - from medicine to accountancy, actuarial work, civil engineering, aerospace, electronic and electrical engineering, law, surveying, and so many more.
In most of those professions, it is a crime or a civil violation to offer services without the proper qualifications, experience and accreditation from one of the appropriate professional bodies.
We DO NOT have this in software engineering. At all. Anyone can teach themselves a bit of coding and start using it in their professional life.
Analogous to law, you can draft a contract by yourself, but if it goes wrong you have a major headache. You cannot, however, offer services as a solicitor without proper qualifications and accreditation (at least in the UK). Yet in software engineering, not only can we teach ourselves and then write small bits of software for ourselves, we can then offer professional services with no further barriers or steps.
The mishmash of laws we have around data and privacy is not professional standards, nor is it accreditation. We don't have the framework or laws around this. And I am not aware of the USA (at the federal level), Europe (or its member states), China, Russia, India, etc. having this.
For example, the BCS in the UK is so weak that although it exists, exceedingly few professional software engineers are even registered with them. They have no teeth. There’s no laws covering any of this stuff. Just good-ol’ GDPR and some sector-specific laws here and there trying to keep people mildly safe.
> There’s a reason why many professions have professional bodies and consolidated standards - from medicine to accountancy, actuarial work, civil engineering, aerospace, electronic and electrical engineering, law, surveying, and so many more.
Professional bodies = gatekeeping. The existence of the body means that entry to the thing it surrounds will be barred to others.
It means financial barriers & "X years of experience required" that actual programmers rightfully decry.
Caveat: When it comes to anything that will affect physical reality, & therefore the physical safety of others, the standards & accreditations then become necessary.
NOTE ON CAVEAT: Whilst *most* software will fall under this caveat, NOT ALL WILL. (See single-player offline video games)
To create a blanket judgement for this domain is to invite the death of the hobbyist. And you, EdNutting, may get your wish, since Google's locking down Android sideloading because they're using your desires for such safety as a scapegoat for further control.
The ability to build your own tools & apps is one of the rightfully-lauded reasons why people should be able to learn about building software, WITHOUT being mandated to go to a physical building to learn.
To wall off the ability for people to learn how computers work is a major part of modern computer illiteracy that people cry & complain about, yet seem to love doing the exact actions that lead to the death of computer competency.
Professional bodies are a necessary form of gatekeeping for practicing the craft of software engineering professionally.
You are then bringing a whole host of other issues that are related in nature but not in practice:
* Locking down of Android ecosystem
* Openness of education
* Remote teaching
* Remote or online examination
etc.
Professional bodies don't wall off the ability to learn nor to tinker at home, nor even to prototype or experiment (depending on scale and industry).
You can't conflate all these issues into one thing and say "we don't want this". It's a disingenuous way to argue the matter.
You don't want some gatekeeping on who will be doing surgery on you? You do, obviously, and being able to pursue medical malpractice claims is a good thing if there is a problem.
Why don't you want the software engineer building your pacemaker or your medical CRM (or any other job where your immediate security is engaged) to have the same kind of verification and consequences for their actions?
It's mostly a problem of required regulations, so no, we don't want mandatory gatekeeping on surgeons, as that is, for example, leading to doctor shortages.
It's fine to set up voluntary standards and choose surgeons you think live up to those
So we want to enable more people to be able to create, for example, pacemakers, because of things like Linus's law: "Given enough eyeballs, all bugs are shallow". If we exclude "non-professionals" from the process of creating "professional" products, we tend to get less participation in the process of innovation and therefore less innovation.
1. 99.999999% of software is not equivalent to "doing surgery" so doesn't need gatekeeping. I work on free, open-source PDF reader SumatraPDF. What kind of authorization should I get and from whom to ship this software to people?
2. pacemakers and other medical devices have to get approval from the government. So that's covered.
medical CRM software is covered by medical privacy laws, which do what you say you want (criminalize "bad" software) but in reality are a giant set of rules, many idiotic, that make health care more expensive for no benefit at all.
Adulterated food products, shoddy construction that burns like paper or crumbles in an earthquake, snake-oil medicine, etc. are well attested in underdeveloped nations and in history, at scales far above what we see in societies with the kinds of professional bodies we're talking about.
That said, the reality is that this safety comes at a cost, both monetary and in terms of “gatekeeping.” And many people would be fine (on paper) increasing risk 0.05% in exchange for 20% cut in costs or allowing disruption of established entities. But those 0.05% degradations add up quickly and unexpectedly.
Equating gatekeeping of professional bodies with grifting suggests you have no experience of why we have professional bodies in medicine or accountancy or civil engineering (to give just a few examples).
There are already laws and standards in almost every country. In this particular example, the people completely ignored all the privacy and data protection laws.
>> Software engineering is looking more and more like it needs a professional body in each country, and accreditation and standards.
Doesn't help much: accounting needs accreditation and standards, but that doesn't prevent competition at the level of some 100 accountants per job. The only way you prevent that is by limiting numbers, like lawyers do, in which case connections and nepotism matter and you basically get a hereditary aristocratic caste.
> All "access control" logic lived in the JavaScript on the client side, meaning the data was literally one command away from anyone who looked
This takes the cake!
This is a typical example of someone using coding agents without being a developer: AI used without understanding can be a huge risk if you don't know what you're doing.
AI used for professional purposes (not experiments) should NOT be used haphazardly.
And this also opens up a serious liability issue: the developer has the perception of being exempt from responsibility, and that leads to enormous risks for the business.
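To make the risk concrete, here's a toy sketch (hypothetical data and function names) of the difference between a backend with no check, which is what client-side-only "access control" amounts to, and one that enforces the check server-side, where a curl user can't bypass it:

```python
# Toy backend data store (hypothetical).
PATIENTS = {
    "alice": {"record": "alice-notes"},
    "bob": {"record": "bob-notes"},
}

def backend_no_auth(requested_id):
    # What a client-side-only design amounts to: the server returns any
    # record to any caller; the JavaScript "check" never runs for curl.
    return PATIENTS.get(requested_id)

def backend_with_auth(requested_id, session_user):
    # Minimal server-side enforcement: the check lives where the
    # attacker cannot delete or skip it.
    if session_user != requested_id:
        raise PermissionError("forbidden")
    return PATIENTS[requested_id]
```

The browser UI may hide Bob's record from Alice either way; only the second version actually prevents the download.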
Claude Code, opencode, etc. are brute-force coding harnesses that literally use bash tools plus a whole bunch of vague prompting (skills, AGENT.md, MCP, and all that stuff) to nudge the model probabilistically into desirable behavior.
Without engineering specialized harnesses that control workflows and validate output, this issue won't go away.
We're in the wild west phase of LLM usage now, where problems emerge that shouldn't exist in the first place and are being solved at entirely the wrong layer (outside the harness) or with entirely the wrong tools (prompts).
The problem isn't AI; the problem is the lack of an intelligent person somewhere in this whole situation. Way before AI, I saw a medical company create a service where the frontend would tell the backend what SQL queries to execute.
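For anyone who hasn't seen that pattern, a minimal sketch (hypothetical schema) of why "frontend sends SQL" is unfixable, versus the backend owning the query and the client supplying only values:

```python
import sqlite3

# Hypothetical schema standing in for the medical service's data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO patients VALUES (?, ?)",
                 [(1, "Alice"), (2, "Bob")])

def run_client_sql(sql):
    # The broken design: whatever string the frontend sends gets run.
    # Any visitor can SELECT (or DROP) anything.
    return conn.execute(sql).fetchall()

def get_patient(patient_id):
    # The sane design: the backend owns the query; the client only
    # supplies a value, passed as a bound parameter.
    return conn.execute(
        "SELECT name FROM patients WHERE id = ?", (patient_id,)
    ).fetchall()
```

With the first function there is nothing to sanitize: the attacker already holds the whole query.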
Every other field that's figured out high-stakes failure modes eventually landed on the same solution: make sure two people who understand the details are looking at it. Pilots have copilots, surgeons have checklists, and nuclear plants have independent verification. Software was always the exception, because when it broke it mostly just broke for you. Vibe coding is not going to change that equation; it merely removes the one check that existed before, namely that the people who wrote the code understood what was going on. Now that's gone too.
We do have code reviews for pull requests, but on average I would guess there is a great amount of complacency there. I suppose the old proper QA phase was the best answer we had, but that is expensive and slow.
fwiw i know tobias and it's very very unlikely he made this up.
my guess is it's intentionally vague to not leak any information about the culprit, which i guess is fair.
It's pure BS. If you read that blog post and think "this definitely happened", let alone "wow, this is interesting", then I have a monorail to sell you.
> Technical Background
> The entire application was a single HTML file with all JavaScript, CSS, and structure written inline. The backend was a managed database service with zero access control configured, no row-level security, nothing. All "access control" logic lived in the JavaScript on the client side, meaning the data was literally one curl command away from anyone who looked.
> All audio recordings were sent directly to external AI APIs for transcription and summarization.
> There was more, but this is already enough to get the idea.
Hmmmm... interesting. Now that I have the "Technical Background" I know for sure that this medical app was 100% vibe coded by a Medical Practice in the Real World and exists! (TM)
It's unlikely that any LLM tasked with a prompt involving medical records would fail to automatically address separation of concerns. The type of data involved is the worst-case scenario. One JS file is also the worst-case scenario. This is why it may feel manufactured. If it is true, they truly deserve to be put on blast.
I can 100% imagine prompts that would even feel natural that would never hint at any medical background of the data being processed. Could be as simple as using customer instead of patient.
Given the subject matter, it would be highly unethical to reveal the name of the company before verifying it was indeed fixed. I'd be wary of getting sued.
The first time I stumbled onto a big security vulnerability (exposed stripe/aws/play store keys. I was poking around an API a web app was using, and instead of hitting /api/v1, if you just hit /api it served them. I wasn't trying to do anything malicious), the very first thing I did was contacted a security researcher friend to ask about covering my ass while performing responsible disclosure.
You hear too much about people being persecuted for trying to point out security vulnerabilities. (Guess they haven't heard about "don't shoot the messenger").
(It turned out fine after finally managing to speak with someone. Had to ring up customer service and say "look, here are the last digits of your stripe private key. Please speak with an engineer". Figuring out how to talk with someone was the difficult thing)
yeah, keeping it vague makes sense to protect the place if it's still online, but the whole thing doesn't really make sense?
The timelines mentioned are weird: did he speak to them before they built it, or after? It's not that clear; he mentions they mentioned watching a video.
> The entire application was a single HTML file with all JavaScript, CSS, and structure written inline.
This is not my experience of how agents tend to build at all. I often _ask_ them to do that, but their tendency is to use a lot of files and structure
> They even added a feature to record conversations during appointments
So do they have the front-desk laptop in the doctor's room? Or were they recording conversations anyway, and now they feed them into the system afterwards?
> All "access control" logic lived in the JavaScript on the client side, meaning the data was literally one curl command away from anyone who looked.
Also definitely not the normal way an agent would build something. Security flaws, yes, but this sounds more like someone who just learnt coding, or the most upvoted post of all time on r/programminghorror, not really AI.
Overall I'm skeptical of the claims made in this article until I see stronger evidence (not that I'm supporting using slop for a medical system in general).
I don't know what to make of the article. At first I thought it seemed like a made-up LinkedIn story; it's too crazy to be discussed in such a casual manner. Ultimately I don't know; maybe it was vague for a specific reason. One thing I'd find odd is that whoever developed it didn't get stuck on CORS issues, if everything was done client-side against those services, and that they managed to set up API keys and subscriptions everywhere while still making mistakes like this. And no mention of leaked API keys and credentials, which there must have been on the client side, right?
> Everything that could go wrong, did go wrong.
Then this claim seems a bit too much, since what could have gone more wrong is malicious actors discovering it, right? Did they?
Maybe I have trouble believing that a medical professional could be that careless and naive in such a way, but anything could happen.
I guess another thought is: if they built it, why would they share the URL with the author? Was the author like "Ooh cool, let me check that out", and they just gave the URL without auth? Because if it worked as it was supposed to, it should have just shown a login screen, right? That's the weirdest part to me, I suppose.
> The timelines mentioned are weird - he spoke to them before they built it? Or after? It's not that clear, he mentions they mentioned watching a video.
I took all that to mean she had explained the history of it to the author, but it had already been written and deployed. It is worded a little weirdly. It's also translated from German; I don't know if that is a factor or not.
> The timelines mentioned are weird - he spoke to them before they built it? Or after? It's not that clear, he mentions they mentioned watching a video.
Yeah, although I didn't comment, I found this weird as well. The chronology was vague and ill-defined. He went to a doctor's office and the receptionist mentioned vibe coding their patient records system unprompted?
> A few days later, I started poking around the application.
What!? How... was there even a web-facing component to this system? Did the medical practice grant you access for some reason?
Yeah I'm back to calling bullshit. What a load of crap. Whole post probably written by an LLM.
Having experience working with medical software, I call BS on this article as presented, unless it was some minimal support app. When you deal with patient records, there's so much local law, communication, billing rules, and other detail baked in that you CANNOT vibe-code an app to handle even 1% of it. Your staff would rebel and your records would completely fall apart. Even basic things like appointment bookings have a HISTORY, and it's a full-blown room-scheduling system that multiple people with different roles have to deal with (reception and providers). It takes serious time to even reverse engineer the database of existing apps, and you first have to know how to access the database itself. Then you'll see many magic IDs and will have to reverse engineer what they mean. (Yes, LLMs are good at reverse engineering too, but you need some reference data and you can't easily automate that.)
I have decompiled database updaters to get the root password for the local SQL Server instance with extremely restricted access rules. (can't tell you which one...) I have also written many applications auto-clicking through medical apps, because there's no other way to achieve some batch changes in reasonable time. I have a lot of collateral knowledge in this area.
Now for the "unless it was some minimal support app": you'll see lots of them, and they existed before LLMs as well. They're definitely not protecting patient data as much as other systems. If the story is true in any way, it's probably this kind of helper that solves one specific use case that other systems cannot. For example, I'm working on an app which handles some large vaccination events and runs alongside the main clinic management application. But accidentally putting that online, accessible to everyone, with actual patient data imported, would be hard-to-impossible for a non-dev to achieve.
For the recording and transcription, there are many companies doing that at the moment and it would be so much easier to go with any of them. They're really good quality these days.
I don't think you read the article very carefully. The timeline is that he met a person, and that person told him that they had vibe-coded an app after having seen a video. He then investigated the app.
> On my last visit i actually casually discussed their IT system with a doctor.
Oh right, cool. Did it have a public-facing web-portal that you were able to "investigate" and that "Thirty minutes in, I had full read and write access to all patient data".
The level of credulity in these comments is immense.
No, they were complaining about using an expensive, overly complicated third-party system when they need only basic features, like keeping text records about visits and prescriptions and sending invoices to health insurers.
And in some practices you get direct access to your data as a patient.
I mean, the story might be fake, obviously, but it's definitely plausible.
Yeah, sure. As a matter of course, every time I visit any health provider I discuss with the medical receptionist the software they use, the challenges the business as a whole faces, and the tensions between insurers and third parties.
Things that absolutely 100% happen every time I, a tech guy, go to the doctor/physiotherapist etc., etc. These are discussions that are happening.
I know, through personal acquaintance, of at least one boutique accounting firm that is currently vibe-building their own CRM with Lovable. They have no technical staff. I can't begin to comprehend the disasters that are in store.
Generally, why build your own CRM? ERP and other resource-planning systems I get, because you can tailor those to your back office. But for a CRM you mostly need reliability.
What would a responsible on-boarding flow for all of these tools look like?
> Welcome to VibeToolX.
> By pressing Confirm you accept all responsibility for user data stewardship as regulated in every country where your users reside.
Would that be scary enough to nudge some risk analysis on the user's part? I am sure that would drop adoption by a lot, so I don't see it happening voluntarily.
We require someone with a professional engineering designation from an accredited engineering body to sign off and approve before a building can be built. If it is found to have structural issues later, that person can be directly liable and can lose their license to operate. Why this is not the case with health software I cannot explain. Every time I propose this, the only argument I receive against it is from people who are mad that their field might dare to apply the same regulation every other field has.
Oh man, I have gone off on rants about software "engineering" here in the past.
My first office job was as an AutoCAD/network admin at a large Civil and Structural engineering firm. I saw how seriously real engineering is taken.
When I brought up your argument to my FAANG employed sibling, he said "well, what would it take to be a real software engineer in your mind!??"
My response was, and always will be: "When there is a path to a software Professional Engineer stamp, with the engineer's name on it, which carries legal liability for gross negligence, then I will call them Software Engineers."
People like to make this point, but traditional engineering has the opposite problem: insanely overwrought processes and box-checking that exists for no reason and slows everything down to a snail's pace. Yes, there are safety-critical parts, but they are surrounded by a ton of bullshit.
It's also absurd to think that there is no company which does genuine software "engineering". If you break ads at Google/Meta, streaming at Netflix, etc there are massive consequences. They are heavily incentivized to properly engineer their systems.
The main thing that governs whether time is spent to well-engineer something is if there is incentive to do it. In traditional engineering that incentive is the law (Getting council approval, not getting sued, etc). In software engineering that incentive is revenue.
Totally agree - not just medical software either. See replies to my other comment threads. Software engineers really don’t like the idea that they might have to show they can perform at a certain standard to be able to work as a software engineer.
Typically arguments come up:
“that’s gatekeeping” - yes, for good reason!
“Laws already exist” - yeah, and that’s not the same as professional accreditation, standards and codes of practice! Different thing, different purpose. Also the laws are a mishmash and not fit for purpose in most sectors.
I think the issue here is less about AI misbehaving and more about people doing things they should not be doing without thinking too hard about the consequences.
There are going to be a lot of accidents like this because it's just really easy to do. And some people are inevitably going to do silly things.
But it's not that different from people doing stupid things with Visual Basic back in the day. Or responding to friendly worded emails with the subject "I love you". Putting CDs/USB drives in work PCs with viruses, worms, etc.
That's what people do when you give them useful tools with sharp edges.
I'd argue that back in the Visual Basic/Delphi day, there was a minimum level of competence needed AND, more importantly, apps didn't have as much attack surface because they weren't exposed to the internet.
Is there anybody making some framework where you declare the security intentions as code (for each CRUD action) and which agents can correctly do and unit test? I have seen a Lovable competitor's system prompt have 24 lines of "please consider security when generating select statements, please consider security when generating update statements..." since it expects to dump queries here and there.
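What I have in mind would look roughly like this; a hypothetical sketch with made-up names, not an existing framework:

```python
# Hypothetical sketch: security intentions declared as data, enforced in
# one chokepoint, so an agent can generate or modify queries without
# touching authorization logic, and the policy table itself is testable.

POLICIES = {
    # (entity, action) -> set of roles allowed
    ("patient", "read"):   {"doctor", "nurse"},
    ("patient", "create"): {"doctor"},
    ("patient", "update"): {"doctor"},
    ("patient", "delete"): set(),          # nobody; soft-delete only
    ("invoice", "read"):   {"doctor", "billing"},
}

def authorize(role: str, entity: str, action: str) -> bool:
    # Deny by default: unknown entity/action pairs are rejected.
    return role in POLICIES.get((entity, action), set())

# Unit tests run against the declarations, not the generated SQL:
assert authorize("doctor", "patient", "read")
assert not authorize("billing", "patient", "read")
assert not authorize("doctor", "patient", "delete")   # hard deletes forbidden
assert not authorize("doctor", "unknown", "read")     # default deny
```

Postgres row-level security is probably the closest off-the-shelf version of this idea: the policy lives in the database, so even a generated `SELECT` can't bypass it.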
The takeaway is to vet new companies one is dealing with - even just calling them up and asking if they've AI generated any system which deals with customer/patient data.
This is going to get more common (state sponsored hackers are going to have a field day)
It's reminiscent of the 90s, where every middle manager had dragged and dropped some boxes on some forms, and could get a salesman to sell it, without a care in the world for what was going on behind the scenes.
Until something crashed and recovery was needed, of course.
Or someone starts with an Excel spreadsheet just to "keep track of a few things". Then before they know it, it has become a critical part of the business but too monolithic and unorganised to be usable.
I believe there are various dimensions to vibe coding. If you work with an existing codebase, it is a tool to increase productivity. If you have domain-specific knowledge (in this case, a patient management system), you can build better systems.
Otherwise, you end up simulating production. A lot of the non-technical folks building products with AI vibe coding are basically building product simulations. It looks like a product and functions like a product, but behind the scenes you can poke holes.
I interviewed some years ago for an AI related startup.
After looking at the live product, the first thing I saw was their prod DB credentials and OpenAI API key publicly sent in some requests...
Bad actors will be having a lot of fun these days
We don’t blame companies selling 3D Design software or 3D printers or mortar and cement, or graph paper and pencils. When people abuse those tools and build huts or houses or bridges that fall down, we usually blame the user for not having appropriate professional qualifications, accreditation, and experience. (Very occasionally we blame bugs in simulation software tools).
AI is a tool. It’s not intelligent, and it works at a much bigger scale than bricks and mortar, but it’s still just a tool. There’s lots we can blame AI companies for, but abuse of the tool isn’t a clear-cut situation. We should blame them for misleading marketing. But we should also blame users (who are often highly intelligent - eg doctors) for using it outside their ability. Much like doctors are fed up of patients using AI to try to act like doctors, software engineers are now finding out what it’s like when clients try to use AI to act like software engineers.
I largely agree, but if a company sold cement explicitly claiming that they will replace every job in the entire construction industry, that the cement is able to plan, verify, and build on its own, without supervision, and that any layperson can now create PhD level bridges with that cement without any input from or verification by professionals, some liability would definitely fall on the company selling that cement under these pretenses.
I have my doubts about the story. I consulted on a medtech project in the recent past in a similar space, and at various points different individuals vibe-coded[0] not one but three distinct, independent prototypes of a system like the article describes, and none of them was anywhere near that bad. On the frontend, you'd have to work pretty hard to force SOTA LLMs to give you what is being reported here. Backend-side, there's plenty of proper turn-key systems to get you started, including OSS servers you can just run locally, and even a year ago, SOTA LLMs knew about them and could find them (and would suggest some of them).
I might be biased by my experience, because we actually cared about GDPR and AI act and proper medical data processing, and I've spent my fair share of time investigating the options that exist. Still, I'm struggling to imagine how one could possibly screw it up anywhere near as what the article described. Like, I can't think of a way to do it, to the point I might need to ask an LLM to explain it to me.
--
[0] - Not as a means of developing an actual product, but solely to see if we can, plus it was easier to discuss product ideas while having some prototypes to click around.
To me it just sounds like eventually someone will figure out how to make vibecoding more reasonably secure (with prompts to have apps reviewed for security practices?),
unless cybersecurity is such a dynamic practice that we can't create automated processes that are secure.
Essentially a question of what can be done to make vibecoding "secure enough".
The worst blunder I made was when I explored cloud resources to improve the product's performance.
I created a GCP project (my-app-dev) for exploring how to scale up the cloud service.
I added several resources to mock production, like compute instances/Cloud SQL/etc., then populated the data and ran several benchmarks.
I changed the specs, number of instances and replicas, and configs through gcloud command.
But for some reason, at one point codex asked to list all projects; I couldn't understand why, but it seemed harmless, so I approved the command.
    $ gcloud projects list
    PROJECT_ID   NAME    PROJECT_NUMBER
    my-app-test  my app  123456789012
    my-app-dev   my app  234567890123   <- the dev project I was working on
    my-app       my app  345678901234   <- the production (I know it's a bad name)
And after this, for whatever reason it changed the target project from the dev (my-app-dev) to the production (my-app) without asking or me realizing.
Of course I checked every command.
I couldn't YOLO while working on cloud resources, even in dev environment.
But I focused on the subcommands and their content, and didn't even think it had changed the project ID along the way.
It continued to suggest more and more aggressive commands for testing, and I approved them mindlessly...
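One cheap guardrail against exactly this failure: don't rely on the gcloud config's active project at all, and pass the project explicitly on every command (`--project` is a standard global gcloud flag), so a silent project switch by the agent can't redirect commands at production. A sketch:

```shell
# Check which project the ambient config currently points at:
gcloud config get-value project

# Pin every command to the intended project, regardless of config:
gcloud compute instances list --project=my-app-dev
```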
This is part of the reason deployments to production cloud environments should:
1. Only be allowed via CI/CD
2. All infra should be defined as code
3. Any deployment to production should be a delayed process that also has a human-approval step in the workflow (at least one, if not more)
(Exactly where that review step is placed depends on your organisation - culture, size, etc.)
And anyone that does need to touch production should do so from an isolated VM with temporary credentials. Developers shouldn't routinely have production access from their terminal. This last aspect is easy and cheap to set up on AWS. I presume it's also possible in Google Cloud.
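As a sketch of the human-approval step (point 3): GitHub Actions, for example, supports this via protected environments. Mark the deploy job with `environment: production` and configure required reviewers on that environment in the repo settings, and the job blocks until a human approves. (Assuming GitHub Actions here; other CI systems have equivalents.)

```yaml
# Sketch: the deploy job pauses until a reviewer approves the
# "production" environment (reviewers are set in repo settings).
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production   # human-approval gate lives here
    steps:
      - uses: actions/checkout@v4
      - run: ./deploy.sh      # hypothetical deploy script
```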
There’s another version of the Mythos narrative that reads like:
AI companies realized that all this vibe coding has released a shitstorm of security vulnerabilities into the wild and so unless they release a much better model to fix that mess they’ll be found out and nobody will touch AI coding with a 100ft pole for the next 15 years. This article points more towards this narrative.
The only thing that helps is deleting the database. Every day. Until the thing goes down because the 'developer' thinks he has a bug he can't find.
It's nothing new; Dunning-Kruger existed long before AI entered the coding realm.
Several years ago I ran into an American company which consulted with me. They had 4000 paying customers and had rolled out their own billing solution accepting crypto, PayPal, and Stripe.
They had problems with payments going missing; I migrated them to WHMCS with hardening, and they never had any issues after.
Now people may laugh at WHMCS, but use the right tool for the job.
If you need a battle-tested billing solution, WHMCS counts: it supports VAT, taxes, reporting/accounting, and pretty much everything you'll get wrong if you try to do it all yourself.
Too bad there isn't a battle-tested open-source solution for this.
AI empowers bullshitters but for sure they existed before. The guys who do something quickly and are gone before it starts to fall over. It often works because everyone is impressed with them and the problems that arise are seen as the fault of whoever is left to clean up the mess. You can probably detect my bitterness :-D
Kinda crazy but hopefully the future holds a Clippy-esque thing for people who don’t know to set up CI, checkpoints, reviews, environments, etc. that just takes care of all that.
It sorta should do this anyway given that the user intent probably wasn’t to dump everyone’s data into Firebase or whatever.
I personally would like this as well since it gets tiring specifying all the guardrails and double-checking myself. Using this stuff feels too much like developing a skill I shouldn’t need while not focusing on real user problems.
This problem is unrelated to CI and dev practices etc, this is about trusting the output of generative AI without reading it, then using it to handle patient data.
Vibe coding is just a bad idea, unless you’re willing and able to vet the output, which most people doing it are not.
Fully agentic development is neat for scripts and utilities that you wouldn't have the time to write otherwise, where you can treat it as input/output and check both.
In these cases you don’t necessarily care too much about the code itself, as long as it looks reasonable at a glance.
Since it's a .ch domain, I believe it's in Switzerland.
In Germany we have the DSGVO (GDPR), and you can report it too.
If a breach happens, you have to inform all your customers.
If it's a first offense and you tried your best, the punishment is not that harsh, but since this is medical information they should have known better.
Switzerland is very liberal in terms of business-oriented regulation, to the point that you could throw a New Year's party in a closed cellar without emergency exits, not to mention fire-suppression systems, and burn people alive there.
Some people only care about actual consequences. Download all the data and send it, in the post on a flash drive, to the GDPR regulator's office and another copy to the medical licensing board because why not.
Don't blame the AI for what is clearly gross human negligence. It's like renovating your entire house and then acting surprised when the pipes burst because you used duct tape as a permanent fix.
At least part of the negligence is about the people who knowingly promote AI without also promoting knowledge of the limitations. Those who post stories about vibe-coding XXX in a week and don't bother to point out that they have no idea if it's not a piece of crap, waiting to explode, because there's no way they could have tested it properly in a week let alone read the mountains of code produced.
There's a hype machine working and lots of people riding on it.
That's what is meant by human negligence. There will always be a hype about something and that is not an excuse to have a devil may care attitude on any work being done
Negligence depends on what you believe to be true. If you're being told "this is possible and the AI will do it properly you don't have to worry" then it's not negligence really - on the part of the person who believes what they are told.
For the rest of us it is about being put under pressure by managers who don't understand whether to believe what you say or what they read about vibe coding on some linked-in post. As far as they are concerned you're not the authority and some hype-ster is.
> "this is possible and the AI will do it properly you don't have to worry" then it's not negligence really
Then that's a lack of due diligence, and if any manager is forcing you to ignore that, you should report them to the compliance team. You cannot blame everyone else and bear no responsibility for your actions. If you decide to vibe code blindly and ignore all the laws and standards, then that was your decision and you decided to turn a blind eye.
Are duct tape manufacturers and their investors constantly hyping about how duct tape is the future, and how it is making professional plumbing obsolete?
I assume you haven't seen those advertisements where they put duct tape on everything and present it as a universal solution. There will always be hype about something in this world, and that is not an excuse to jump on the bandwagon unless you're braindead.
Usually they would just use an off the shelf product and extend it, so they wouldn’t produce the absolute horror story described in the article, no.
I’m not even sure what your last comment means, are you contending that it is a good thing this company violated multiple laws with sensitive patient data?
> Usually they would just use an off the shelf product and extend it
AI does the same thing an agency or dev would do. Those vibe coding platforms have a template for these things which is usually Vite + React with Supabase for the backend, the same as a dev might use because surprise the LLM trained on the dev's work.
OP's point is that you're not guaranteed a good outcome hiring an agency or solo dev either, in fact I would say you're almost guaranteed a bad outcome either way.
If a consultant made the same mistakes I'd expect the consultant to be held accountable, not the client business that hired the consultancy - they knew they didn't have the requisite skills and so outsourced to an "expert" (and therefore can't be judged for not knowing how to secure their software since they did everything possible)
In this case the "client" is fully liable for the security issues.
It is possible. If you select a consultancy that you know nothing about, and they know nothing about programming and vibe-coded it for you... and maybe you don't even have a contract to hold them responsible, and maybe they don't really have a company either... then I can imagine something like this.
It is physically possible for a consultant to write bad code. But you'd hope that a consultant could understand that medical data is extremely important to keep secure, and actually write it to have some level of security
I left the place quickly...
Fast forward two weeks: I get called into a meeting. Apparently someone in the company had been "stealing" CC numbers from customers on the calls, and since they didn't think they'd found the person who did it (or something like that), the person known for "doing stuff to the computers" was the next possible suspect, and they fired me right there.
Eventually this firing led me to find my first actual programming job, so I'm not too mad about it, but it really shows how out of touch lots of companies and people are when it comes to how computers actually work.
Nice. I wish more countries had something like that. Many of these organizations are lethargic and have to be forced into action by civilian efforts or the press.
France's CNIL is also not bad. They are particularly hard on things like "you accidentally sign up for x, y, z services when only wanting to sign up for service A".
GDPR in the EU is also miles ahead of what the US has, or at least what it has been enforcing for a long time.
Also, generally, very, very, VERY slow. The massive fines you hear about are usually for behaviour _years_ ago.
Only when they start to side with the people, actually fining businesses billions and billions, will things start to change. I hope we'll see this happen in Europe at large, and not only in a few countries.
AFAIK, most of them seem to act at least every now and then, judging by https://www.enforcementtracker.com/. Are there any specific countries you're thinking of here?
Particularly, Romania, Italy and Spain seem to have had lots of cases.
I am part of a forum with many small business owners. One particular owner has been gung-ho about how he built his entire business app using vibe coding. And my first reaction was - All the power to him. It’s his business and he is free to do so.
But then came the question of data privacy rules, and he had no clue. This was concerning because the impact went beyond his business. His response when the oversight was pointed out to him was that being ignorant of the law was enough to save him. Still, he went to one of the vibe coding Reddit subs to get help. Then he came back fuming because devs on Reddit asked him to hire real developers. He believes that these developers are delusional and a dying breed, and that AI is so far ahead that developers are going to be dead in a year's time.
I have a feeling that next year's theme will be security. People have turned off their brain when it comes to tech.
I think that having paper documentation will be safer very soon :)
It is just a matter of time before something really, really bad happens.
Compared to that, vibe coding has no such qualities.
It kinda falls apart once you get past a few thousand lines of code... and real systems aren't just big, they're actually messy: shitloads of components, services, edge cases, things breaking in weird ways. Getting all of that to work together reliably is a different game altogether.
And you still need solid software engineering fundamentals. Without understanding architecture, debugging, tradeoffs, and failure modes, it's hard to guide or even evaluate what's being generated.
Vibe-coding feels great for prototypes, hobby projects, or just messing around, or even some internal tools in a handful of cases. But for actual production systems, you still need real engineering behind it.
As of now, I'm 100% hesitant to pay for, or put my data on systems that are vibe-coded without the knowledge of what's been built and how it's been built.
The thing I have found that makes things work much better is, wait for it... Jira.
Everyone loves to hate on Jira, but it is a mature platform for managing large projects.
First, I use the Jira Rovo MCP (or cli, I don't wanna argue about that) to have Claude Code plan and document my architecture, features, etc. I then manually review and edit all of these items. Then, in a clean session, or many, have it implement, document decisions in comments etc. Everything works so much more reliably for large-ish projects like this.
When I first started doing this in my solo projects it was a major, "well, yeah, duh," moment. You wouldn't ask a human dev to magically have an entire project in their mind, why ask a coding agent to do that? This mental model has really helped me use the tools correctly.
edit: then there is context window management. I use Opus 4.6 1M all the time, but if I get much past 250k usage, that means I have done a poor job in starting new sessions. I never hit the auto-compact state. It is a universal truth that LLMs get dumb the more context you give them.
I think everyone should implement the context status bar config to keep an eye on usage:
https://code.claude.com/docs/en/statusline
And small quality stuff. Just yesterday it used a static method where a class method was optimal. A lot of very small stuff I used to call my juniors on during reviews.
On the other hand, it used an elegant trick to make the code more readable, but failed to use the same trick elsewhere for no reason. I'm not saying it's bad: I probably wouldn't have thought of it myself, and would have kept the worse solution. But even when Claude is smarter than I am, I still have to oversee it.
(All the discourse around AI did wonders for my imposter syndrome, though.)
Most people run into problems (with or without AI) when they write code without knowing what they're trying to create. Sometimes that's useful and fun and even necessary, to explore a problem space or toy with ideas. But eventually you have to settle on a design and implement it - or just end up with an unmaintainable mess of code (whether it's pure-human or AI-assisted mess doesn't matter lol).
One of the key improvements for me when using Jira was that it has well defined patterns for all of these things, and Claude knows all about the various types of Jira tickets, and the patterns to use them.
Also, the spec driven approach is not enough in itself. The specs need sub-items, linked bug reports and fixes. I need comments on all of these tickets as we go with implementation decisions, commit SHAs, etc.
When I come back to some particular feature later, giving Claude the appropriate context in a way it knows how to use is super easy, and is a huge leap ahead in consistency.
I know I sound like some caveman talking about Jira here, but having Claude write and read from it really helped me out a lot.
It turns out that dumb ole Jira is an excellent "project memory" storage system for agentic coding tools.
The quick fix was a simple, empty index.html file (or setting the -Indexes option in the apache config). The surgeon had no idea what this meant or why it was important. And the AI bots didn't either.
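For the record, the Apache side of that fix is a one-liner (a sketch; adjust the directory path to match your vhost):

```apache
<Directory /var/www/html>
    # Stop Apache from generating a browsable directory listing
    # when no index file (index.html, index.php, ...) exists
    Options -Indexes
</Directory>
```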
The odd part of this to me was that the AI had made good choices (strong password hashes, reasonable DB schema, etc.) and the app itself worked well. Honestly, it was impressive. But at the same time, they made some very basic, trivially avoidable deployment/security mistakes. They just needed a bit of guidance from an experienced devops security guy to make it Internet-worthy, but no one bothered to do that.
Edit: I do not recommend backing up web apps on the web server itself. That's another basic mistake. But they (or the AI) decided to do that and no one with experience was consulted.
They got the application right, and began stumbling with the integration - created a developer account, got the API key, but in place of the application's URL, they had entered “localhost:5345” and couldn't get that to work, so they gave up. They never asked the tech team what was wrong, never figured out that they needed to host the application. Some fundamental computer literacy is the missing piece here.
I think (maybe hopeful) people will either level up to the point where they understand that stuff, or they will just give up. Also possible that the tools get good enough to explain that stuff, so they don’t have to. But tech is wide and deep and not having an understanding of the basic systems is… IMO making it a non-starter for certain things.
A simple "I'm a professional surgeon, but sadly know nothing about making software" would definitely make the conversation play out differently. How? Needs to be seen. But in an idealized scenario (which could easily become real if models are trained for it), the model would coach the (self-stated) non-expert users on the topics it would ordinarily assume the (implicitly self-stated) expert already knows.
Disabling auto-indexing just makes it harder to spot the issue. (To clarify, also not a bad idea in principle, just not _the_ solution.) If the file is still there and can be downloaded, that's strictly something which should not be possible in the first place.
How I see it happening: agents like CC have built-in skills for deployment and use building blocks from either AWS or other, simpler providers. Payment through OAuth and seamless checkout.
This should be standardised
Gone should be the days of “I taught myself so now I can [design software in a professional setting / design a bridge in a professional setting].” I’m not advocating gatekeeping - if you want to build a small bridge at the end of your garden for personal use, go for it. If you want to build a bridge in your local town over a river, you’re gonna need professional accreditation. Same should be true for software engineering now.
Should be the same everywhere. Anyone can be a coder, but not everyone is an engineer
High profile convictions are not a good way of dealing with this. Not in the short or long term. Sure they have an impact, and laws should be enforced, but that’s not a substitute for managing the industry properly.
> aware of existing and new laws, standards and codes of practice
Yeah, because software business is not at all ruled by fads.
1997: you have to follow Extreme Programming (XP) or you don't get your license
2000: you now have to use XML for everything or you don't get your license
2002: you now have to follow Agile or you don't get your license
2025: you now have to write everything in Rust or you don't get your license
etc., etc.
The way I see it, if they didn't even realize that they are doing something they shouldn't, they wouldn't have even known they need accreditation, even if that was required. Unless we restricted access to gazillions of tools without it of course.
I think it'll work itself out over time as what AI is/isn't and what data privacy means is discussed more. I'd leave accreditation entirely out of it, because we cannot even agree on what are the actual best practices or if they matter.
We already have laws in place, and certifications that help someone understand if a given organization adheres to given standards. We can argue over their validity, efficacy, or value.
The infrastructure, laws, and framework exist for this. More regulation and bureaucracy doesn't help when the current state isn't enforced.
In most of those professions, it is a crime or a civil violation to offer services without the proper qualifications, experience and accreditation from one of the appropriate professional bodies.
We DO NOT have this in software engineering. At all. Anyone can teach themselves a bit of coding and start using it in their professional life.
Analogous to law, you can draft a contract by yourself, but if it goes wrong you have a major headache. You cannot, however, offer services as a solicitor without proper qualifications and accreditation (at least in the UK). Yet in software engineering, not only can we teach ourselves and then write small bits of software for ourselves, we can then offer professional services with no further barriers or steps.
The mishmash of laws we have around data and privacy are not professional standards, nor are they accreditation. We don’t have the framework or laws around this. And I am not aware of the USA (federal level) or Europe (or member states) or China or Russia or India or etc having this.
For example, the BCS in the UK is so weak that although it exists, exceedingly few professional software engineers are even registered with them. They have no teeth. There’s no laws covering any of this stuff. Just good-ol’ GDPR and some sector-specific laws here and there trying to keep people mildly safe.
Professional bodies = gatekeeping. The existence of the body means that entry to the field it surrounds will be barred to others.
It means financial barriers & "X years of experience required" that actual programmers rightfully decry.
Caveat: When it comes to anything that will affect physical reality, & therefore the physical safety of others, the standards & accreditations then become necessary.
NOTE ON CAVEAT: Whilst *most* software will fall under this caveat, NOT ALL WILL. (See single-player offline video games)
To create a blanket judgement for this domain is to invite the death of the hobbyist. And you, EdNutting, may get your wish, since Google's locking down Android sideloading because they're using your desires for such safety as a scapegoat for further control.
https://keepandroidopen.org/
> We DO NOT have this in software engineering.
THIS IS A GOOD THING. FULLSTOP.
The ability to build your own tools & apps is one of the rightfully-lauded reasons why people should be able to learn about building software, WITHOUT being mandated to go to a physical building to learn.
Walling off the ability for people to learn how computers work is a major driver of the modern computer illiteracy that people cry and complain about, yet they seem to love doing the exact things that lead to the death of computer competency.
You are then bringing in a whole host of other issues that are related in nature but not in practice: locking down of the Android ecosystem, openness of education, remote teaching, remote or online examination, etc.
Professional bodies don't wall off the ability to learn nor to tinker at home, nor even to prototype or experiment (depending on scale and industry).
You can't confuse all these issues into one thing and say "we don't want this". It's a disingenuous way to argue the matter.
imo this is sold as "keeping people safe" but in practice it's really a gatekeeping grift that increases friction and prevents growth
Why don't you want the software engineer building your pacemaker or your medical CRM (or any other job where your immediate security is engaged) to have the same kind of verification and consequences for their actions?
It's fine to set up voluntary standards and choose surgeons you think live up to those
So we want to enable more people to create, for example, pacemakers, because of things like Linus's law: "Given enough eyeballs, all bugs are shallow." If we exclude "non-professionals" from the process of creating "professional" products, we tend to have less participation in the process of innovation and therefore get less innovation.
2. pacemakers and other medical devices have to get approval from the government. So that's covered.
Medical CRM software is covered by medical privacy laws, which do what you say you want (criminalize "bad" software) but in reality are a giant set of rules, many idiotic, that make health care more expensive for no benefit at all.
That said, the reality is that this safety comes at a cost, both monetary and in terms of “gatekeeping.” And many people would be fine (on paper) increasing risk 0.05% in exchange for 20% cut in costs or allowing disruption of established entities. But those 0.05% degradations add up quickly and unexpectedly.
I mean, people could voluntarily try to create rules of thumb they think are valuable and could try to popularize them
I don't think that requires further restrictive actions
Doesn't help much. Accounting needs accreditation and standards, but that doesn't prevent competition at the level of some 100 accountants per job. The only way to prevent that is by limiting numbers, like lawyers do, in which case connections and nepotism matter and you basically get a hereditary aristocratic caste.
I guess we'd better get used to going back to being peasants working shit jobs barely above starvation, since that's what the future of capitalism seems to bring: https://realityraiders.com/fringewalker/irreverent-humor/mon...
This is the top!
This is a typical example of someone using coding agents without being a developer: AI used without understanding can be a huge risk if you don't know what you're doing.
AI used for professional purposes (not experiments) should NOT be used haphazardly.
And this also opens up a serious liability issue: the developer has the perception of being exempt from responsibility and this also leads to enormous risks for the business.
Claude, opencode, etc. are brute-force coding harnesses that literally use bash tools plus a whole bunch of vague prompting (skills, AGENT.md, MCP, and all that stuff) to nudge them probabilistically into desirable behavior.
Without engineering specialized harnesses that control workflows and validate output, this issue won‘t go away.
We‘re in the wild west phase of LLM usage now, where problems emerge that shouldn’t exist in the first place and are being solved at the entirely wrong layer (outside of the harness) or with the entirely wrong tools (prompts).
But in any case it's so lacking in detail, and so brief, as to be uninteresting; it might as well be fake.
> Somebody "vibecodes" medical app/system. The app was insecure. Personal info leaked.
Okay cool.
It's a rarely updated personal blog, not a daily tabloid story.
> Technical Background
> The entire application was a single HTML file with all JavaScript, CSS, and structure written inline. The backend was a managed database service with zero access control configured, no row-level security, nothing. All "access control" logic lived in the JavaScript on the client side, meaning the data was literally one curl command away from anyone who looked.
> All audio recordings were sent directly to external AI APIs for transcription and summarization.
> There was more, but this is already enough to get the idea.
Hmmmm... interesting, now that I have the "Technical Background" I for sure know that this medical app was 100% vibe coded by a Medical Practice in the Real World and exists! (TM)
You hear too much about people being persecuted for trying to point out security vulnerabilities. (Guess they haven't heard about "don't shoot the messenger").
(It turned out fine after finally managing to speak with someone. Had to ring up customer service and say "look, here are the last digits of your stripe private key. Please speak with an engineer". Figuring out how to talk with someone was the difficult thing)
The timelines mentioned are weird - he spoke to them before they built it? Or after? It's not that clear, he mentions they mentioned watching a video.
> The entire application was a single HTML file with all JavaScript, CSS, and structure written inline.
This is not my experience of how agents tend to build at all. I often _ask_ them to do that, but their tendency is to use a lot of files and structure
> They even added a feature to record conversations during appointments
So they have the front-desk laptop in the doctor's room? Or were they recording conversations anyway, and now they feed them into the system afterwards?
> All "access control" logic lived in the JavaScript on the client side, meaning the data was literally one curl command away from anyone who looked.
Also definitely not the normal way an agent would build something - security flaws yes, but this sounds more like someone who just learnt coding or the most upvoted post of all time on r/programmerhorror, not really AI.
Overall I'm skeptical of the claims made in this article until I see stronger evidence (not that I'm supporting using slop for a medical system in general).
> Everything that could go wrong, did go wrong.
Then this claim seems a bit too much, since what could have gone more wrong is malicious actors discovering it, right? Did they?
Maybe I have trouble believing that a medical professional could be that careless and naive in such a way, but anything could happen.
I guess another thought is... if they built it, why would they share the URL with the author? Was the author like "Ooh cool, let me check that out," and they just gave the URL without auth? Because if it worked as it was supposed to, it should have just shown a login screen, right? That's the weirdest part to me, I suppose.
I took that all to mean she had explained the history of it to the author, but it had already been written and deployed. It is worded a little weirdly. It's also translated from German; I don't know if that is a factor or not.
Copy-pasted and then dropped into the hosting folder, sweet web 1.0 style.
Yeah, although I didn't comment, I found this weird as well. The chronology was vague and ill-defined. He went to a doctor's office and the receptionist mentioned vibe coding their patient records system unprompted?
> A few days later, I started poking around the application.
What!? How... was there even a web-facing component to this system? Did the medical practice grant you access for some reason?
Yeah I'm back to calling bullshit. What a load of crap. Whole post probably written by an LLM.
I have decompiled database updaters to get the root password for the local SQL Server instance with extremely restricted access rules. (can't tell you which one...) I have also written many applications auto-clicking through medical apps, because there's no other way to achieve some batch changes in reasonable time. I have a lot of collateral knowledge in this area.
Now for the "unless it was some minimal support app": you'll see lots of them, and they existed before LLMs as well. They're definitely not protecting patient data as much as other systems. If the story is true in any way, it's probably this kind of helper that solves one specific use case that other systems cannot. For example, I'm working on an app which handles some large vaccination events and runs alongside the main clinic management application. But accidentally putting that online, accessible to everyone, and having actual patient data imported would be hard-to-impossible to achieve for a non-dev.
For the recording and transcription, there are many companies doing that at the moment and it would be so much easier to go with any of them. They're really good quality these days.
There are plenty of small medical practices with 1-2 doctors and a front desk.
On my last visit i actually casually discussed their IT system with a doctor.
Oh right, cool. Did it have a public-facing web-portal that you were able to "investigate" and that "Thirty minutes in, I had full read and write access to all patient data".
The level of credulity in these comments is immense.
I mean the story might be fake obviously, but is definitely plausible.
These are things that absolutely happen every time I, a tech guy, go to the doctor, physiotherapist, etc. These discussions are happening.
"Your claimed experience is different than my experience so you are lying"?
Don't try to bring social justice warrior talk onto a tech forum please.
you also claim that I am lying.
Are you willing to put money on being proven wrong? On it being normal to have a tech discussion with your doctor in Switzerland?
> Welcome to VibeToolX.
> By pressing Confirm you accept all responsibility for user data stewardship as regulated in every country where your users reside.
Would that be scary enough to nudge some risk analysis on the user's part? I am sure that would drop adoption by a lot, so I don't see it happening voluntarily.
My first office job was as an AutoCAD/network admin at a large Civil and Structural engineering firm. I saw how seriously real engineering is taken.
When I brought up your argument to my FAANG-employed sibling, he said "well, what would it take to be a real software engineer in your mind!??"
My response was, and always will be: "When there is a path to a software Professional Engineer stamp, with the engineer's name on it, which carries legal liability for gross negligence, then I will call them Software Engineers."
It's also absurd to think that there is no company which does genuine software "engineering". If you break ads at Google/Meta, streaming at Netflix, etc there are massive consequences. They are heavily incentivized to properly engineer their systems.
The main thing that governs whether time is spent to well-engineer something is if there is incentive to do it. In traditional engineering that incentive is the law (Getting council approval, not getting sued, etc). In software engineering that incentive is revenue.
Typically these arguments come up:
“that’s gatekeeping” - yes, for good reason!
“Laws already exist” - yeah, and that’s not the same as professional accreditation, standards and codes of practice! Different thing, different purpose. Also the laws are a mishmash and not fit for purpose in most sectors.
There are going to be a lot of accidents like this because it's just really easy to do. And some people are inevitably going to do silly things.
But it's not that different from people doing stupid things with Visual Basic back in the day. Or responding to friendly worded emails with the subject "I love you". Putting CDs/USB drives in work PCs with viruses, worms, etc.
That's what people do when you give them useful tools with sharp edges.
This is going to get more common (state sponsored hackers are going to have a field day)
It's reminiscent of the 90s, where every middle manager had dragged and dropped some boxes on some forms, and could get a salesman to sell it, without a care in the world for what was going on behind the scenes.
Until something crashed and recovery was needed, of course.
The piper always needs to be paid.
https://archive.ph/GsLvt
https://web.archive.org/web/20260331184500/https://www.tobru...
Edit: the archive.ph one works for me :)
Otherwise, you end up simulating production. A lot of the non-technical folks building products with AI vibe coding are basically building product simulations. It looks like a product and functions like a product, but behind the scenes you can poke holes.
Does the company that willingly sells the polymorphic virus editor bear any responsibility, or does it all fall on the unaware vibe coder?
AI is a tool. It’s not intelligent, and it works at a much bigger scale than bricks and mortar, but it’s still just a tool. There’s lots we can blame AI companies for, but abuse of the tool isn’t a clear-cut situation. We should blame them for misleading marketing. But we should also blame users (who are often highly intelligent - eg doctors) for using it outside their ability. Much like doctors are fed up of patients using AI to try to act like doctors, software engineers are now finding out what it’s like when clients try to use AI to act like software engineers.
I might be biased by my experience, because we actually cared about GDPR and AI act and proper medical data processing, and I've spent my fair share of time investigating the options that exist. Still, I'm struggling to imagine how one could possibly screw it up anywhere near as what the article described. Like, I can't think of a way to do it, to the point I might need to ask an LLM to explain it to me.
--
[0] - Not as a means of developing an actual product, but solely to see if we can, plus it was easier to discuss product ideas while having some prototypes to click around.
unless cybersecurity is such a dynamic practice that we can't create automated processes that are secure
Essentially a question of what can be done to make vibecoding "secure enough"
I created a GCP project (my-app-dev) for exploring how to scale up the cloud service. I added several resources to mock production, like compute instances/Cloud SQL/etc., then populated the data and ran several benchmarks.
I changed the specs, number of instances and replicas, and configs through gcloud command.
But for some reason, at one point codex asked to list all projects; I couldn't understand why, but it seemed harmless, so I approved the command. And after this, for whatever reason, it changed the target project from dev (my-app-dev) to production (my-app) without asking and without me realizing. Of course I checked every command; I couldn't YOLO while working on cloud resources, even in the dev environment. But I focused on the subcommands and their content and didn't even think it had changed the project ID along the way.
It continued to suggest more and more aggressive commands for testing, and I approved them mindlessly...
It took a shamefully long time to realize codex was actually operating on production, so I had effectively DDoSed and SQL-injected production... Fortunately, it didn't do anything irreversible. But it was one of the most terrifying moments in my career.
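One cheap guardrail against exactly this failure mode is to screen every proposed command against a project allow-list before approving it. A minimal sketch in Python (the project IDs come from the story above; the flag parsing is deliberately naive and only a starting point, not a substitute for real IAM separation):

```python
# Projects the agent is ever allowed to touch; production is never in this set.
ALLOWED_PROJECTS = {"my-app-dev"}

def check_gcloud_command(args):
    """Raise PermissionError unless the gcloud invocation explicitly
    targets an allow-listed project via --project."""
    for i, arg in enumerate(args):
        if arg == "--project" and i + 1 < len(args):
            project = args[i + 1]
        elif arg.startswith("--project="):
            project = arg.split("=", 1)[1]
        else:
            continue
        if project not in ALLOWED_PROJECTS:
            raise PermissionError(f"refusing to touch project {project!r}")
        return True
    # No explicit --project flag: the command would fall through to
    # whatever `gcloud config` currently points at, so refuse it.
    raise PermissionError("no explicit --project flag; refusing ambiguous command")
```

The key design choice is failing closed on ambiguity: a command without an explicit `--project` is rejected, because the whole incident hinged on the implicit default silently changing.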
1. Only be allowed via CI/CD
2. All infra should be defined as code
3. Any deployment to production should be a delayed process that also has a human-approval step in the workflow (at least one, if not more)
(Exactly where that review step is placed depends on your organisation - culture, size, etc.)
And anyone that does need to touch production should do so from an isolated VM with temporary credentials. Developers shouldn't routinely have production access from their terminal. This last aspect is easy and cheap to set up on AWS. I presume it's also possible in Google Cloud.
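As a sketch of the human-approval step: most CI systems support a manual gate. In GitHub Actions it falls out of environment protection rules (the workflow names and deploy script here are made up, and the "production" environment must be configured with required reviewers in the repository settings):

```yaml
# .github/workflows/deploy.yml (sketch)
jobs:
  deploy-production:
    runs-on: ubuntu-latest
    # The job pauses here until a configured reviewer approves, because
    # the "production" environment has required reviewers (and optionally
    # a wait timer) set in the repo settings.
    environment: production
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/deploy.sh   # hypothetical deploy entrypoint
```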
https://news.ycombinator.com/item?id=47707681
AI companies realized that all this vibe coding has released a shitstorm of security vulnerabilities into the wild and so unless they release a much better model to fix that mess they’ll be found out and nobody will touch AI coding with a 100ft pole for the next 15 years. This article points more towards this narrative.
Several years ago I ran into an American company that consulted with me. They had 4000 paying customers and had rolled out their own billing solution accepting crypto, PayPal, and Stripe.
They had problems with payments going missing; I migrated them to WHMCS with hardening, and they never had any issues after that.
Now people may laugh at WHMCS, but use the right tool for the job.
If you need a battle-tested billing solution, WHMCS does count: it supports VAT, taxes, reporting/accounting, and pretty much everything you'd get wrong trying to build it all yourself.
Too bad there aren't battle-tested open-source solutions for this.
It sorta should do this anyway given that the user intent probably wasn’t to dump everyone’s data into Firebase or whatever.
I personally would like this as well since it gets tiring specifying all the guardrails and double-checking myself. Using this stuff feels too much like developing a skill I shouldn’t need while not focusing on real user problems.
Vibe coding is just a bad idea, unless you’re willing and able to vet the output, which most people doing it are not.
It says quite a lot about where we are with ai tooling that none of the big players have “no need to review, certified for market X” offerings yet.
In these cases you don’t necessarily care too much about the code itself, as long as it looks reasonable at a glance.
Someone with the right mindset needs to be there providing guidance and architectural input.
And even then that's not enough. Something like a super extensive testing set like in SQLite is the best we can do.
Let's really hope they learned from their mistakes.
There's a hype machine working and lots of people riding on it.
For the rest of us it is about being put under pressure by managers who don't understand whether to believe what you say or what they read about vibe coding on some linked-in post. As far as they are concerned you're not the authority and some hype-ster is.
Then that's a lack of due diligence, and if any manager is forcing you to ignore that, you should report them to the compliance team. You cannot blame everyone else and bear no responsibility for your own actions. If you decide to vibe code blindly and ignore all the laws and standards, then that was your decision; you decided to turn a blind eye.
Lack of security theater is a good thing for most businesses
I’m not even sure what your last comment means, are you contending that it is a good thing this company violated multiple laws with sensitive patient data?
AI does the same thing an agency or dev would do. Those vibe coding platforms have a template for these things, usually Vite + React with Supabase for the backend, the same as a dev might use because, surprise, the LLM trained on devs' work.
OP's point is that you're not guaranteed a good outcome hiring an agency or solo dev either, in fact I would say you're almost guaranteed a bad outcome either way.
In this case the "client" is fully liable for the security issues.
> All "access control" logic lived in the JavaScript on the client side, meaning the data was literally one curl command away from anyone who looked.
They are not the same thing.
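To make the contrast concrete: server-side access control means the database itself refuses the query, no matter what the client-side JavaScript does. A Postgres row-level-security sketch (table, column, and setting names invented for illustration):

```sql
-- Enable RLS so the table denies all access except where a policy allows it
ALTER TABLE patients ENABLE ROW LEVEL SECURITY;

-- Each authenticated practice may only see its own patients;
-- a bare curl with no authenticated context gets zero rows.
CREATE POLICY practice_isolation ON patients
    USING (practice_id = current_setting('app.practice_id')::uuid);
```

With this in place, stripping the client-side checks changes nothing: the "one curl command" returns an empty result instead of the whole table.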