ChatGPT Health is a marketplace, guess who is the product?

(consciousdigital.org)

251 points | by yoaviram 2 days ago

21 comments

  • bwb 14 hours ago
    ChatGPT has made a material difference in my ability to understand health problems, test results, and to communicate with doctors effectively. My wife and I were talking last night about how helpful it was in 2025. I hope that it continues to be good at this.

    I want regulators to keep an eye on this and make smart laws. I don't want it to go away, as its value is massive in my life.

    (One example, if you are curious: I've been doing rehab for a back injury for about 10 years. I worked with a certified trainer/rehab professional for many years and built a program to keep me as pain-free as possible. I rebuilt the entire thing with ChatGPT/Gemini about 6 weeks ago, and I've had less pain than at any other point in my life. I spent at least 12 hours working with AI to test and research every exercise, and I've got some knowledge to help guide me, but I was amazed by how far it has come in 12 months. I ran the results by a trainer to double-check it was well thought out.)

    • hackitup7 22 minutes ago
      I've had a similar positive experience and I'm really surprised at the cynicism here. You have a system that is good at reading tons of literature and synthesizing it, which then applies basic logic. What exactly do the cynics think that doctors do?

      I don't use LLMs as the final say, but I do find them pretty useful as a positive filter / quick gut check.

    • trollbridge 14 hours ago
      This sounds like excellent evidentiary material for a future insurer or government health provider to decide you're uninsurable, not eligible for a job, and so on.

      And the great thing about it is that you already signed all your rights away for them to do this exact thing, when we could have had an open world with open models run locally instead, where you got to keep your private health information private.

      • simianwords 13 hours ago
        Can you explain the exact way in which this is possible? It's not legal to deny someone a job based on health, nor to deny them insurance.
        • Flatterer3544 13 hours ago
          And how would you know what they base their hiring upon? You would just get a generic automated response.

          You would not be privy to their internal processes, and thus far would not be able to prove wrongdoing. You would just have to hope for a new Snowden, and that the wrongdoings found would actually be punished this time.

          • bko 9 hours ago
            I don't get it. If you're medically unfit for a job, why would you want the job?

            For instance, if your job is to be on your feet all day and you can barely stand, then that job is not for you. I have never met employers so flush with candidates that they just randomly choose to exclude certain people.

            And if it's insurance, there's a group rate. The only variables are which of the offered plans the employee chooses (why make a plan available if you don't want people to pick it?) and family size. It's illegal to discriminate based on family size, and that can add up to $10k extra on the employer side. But there are downsides to hiring young single people too, so things may balance out.

            • jjmarr 11 minutes ago
              Very large employers don't have a group rate. The insurance company administers the plan on behalf of the company according to pre-agreed rules, and the company itself covers all claim costs, which depend on each employee's health situation.

              Read your policy!

            • zopa 5 hours ago
              Usually it's one or two job responsibilities among many that you can do, but not the way everyone else does them. The ADA requires employers to make reasonable accommodations, and some employers don't want to.

              So it's less "the job requires you to stand all day," and more "once a week or so they ask you to make a binder of materials, and the hole puncher they want you to use dislocates your hands" (true story). Or it's a desk job, but you can't get from your desk to the bathroom in your wheelchair unless they widen the aisles between desks (hypothetical).

            • rafterydj 7 hours ago
              I believe existing laws carve out exceptions for medical fitness for certain positions for this very reason. If I may step back for a second: the reason privacy laws exist is to protect people from bad behavior by employers, health insurers, etc.

              If we circumvent those privacy laws, through user licenses or new technology, we remove those protections from normal citizens. The bad behavior we already decided as a society to ban can then be perpetrated again, perhaps under a fresh new word to dodge the old laws.

              If I understand your comment, you are essentially wondering why those old laws existed in the first place. I would suggest racism and other systemic issues, plus differences in insurance premiums, are more than enough to justify the existence of privacy laws. Take a normal office job versus a manual-labor-intensive one: there is no reason at all that health conditions should impact the former. The idea of not being hired because I have a young child, or a health condition, that would raise the group rate from the insurer, passing the cost to my employer (which it would be in their interest to avoid), is a terrible thought. And it happened before, and we banned that practice (or did our best to).

              All this to say: I believe HIPAA helps people, and if ChatGPT is being used to partially or fully facilitate medical decision-making, it should be bound by strict laws preventing the release of that data, regardless of its existing user agreements.

              • throwup238 5 hours ago
                > I believe existing laws carve out exceptions for medical fitness for certain positions for this very reason.

                It's not just medical; there's a broad carve-out called "bona fide occupational qualifications". If there's a good reason for it, hiring antidiscrimination laws allow exceptions.

          • Aurornis 5 hours ago
            This fails the classic conspiracy-theory test: any company practicing this would have to be large enough to afford orchestrating a chain of illegal transactions to get the data, develop a process for using it in hiring, and routinely act upon it.

            The continued secrecy of the conspiracy would then depend on every person involved in orchestrating this privacy violation and illegal hiring scheme keeping it secret forever: nobody ever leaking it to the press, no disgruntled employees emailing their congresspeople, no concerned citizens slipping a screenshot to journalists, both during and after their employment with the company.

            To even make this profitable, the data would have to be secretly sold to a lot of companies and continuously updated to stay relevant: giant databases of your secret ChatGPT queries sold continuously in volume, with all employees at the sellers, the buyers, and the users of this information keeping it perfectly quiet, never leaking anything.

            • drawnwren 3 hours ago
              It doesn't, though. As an aside, I have been using a competitor to ChatGPT health (nori) for a while now, and I have been getting an extreme number of targeted ads about HRV and other metrics that the app consumes. I have been collecting health metrics through wearables for years, so there has been no change in my own search patterns or beliefs about my health; I just thought AI + health data was cool.
          • simianwords 12 hours ago
            Do corporations use my Google searches as data when hiring me?
            • well_ackshually 12 hours ago
              Do you have any proof they don't? Do you have any proof the "AI system" they use to filter out candidates doesn't "accidentally" access that data? Are you willing to bet that Google, OpenAI, Anthropic, and Meta won't sell access to that information?

              Also, in some cases they absolutely do. Try to get hired at Palantir and see how much they know about your browsing history. Anything related to national security or requiring clearances gets you investigated.

              • linkregister 10 hours ago
                The last time I went through the Palantir hiring process, the effort on their end was almost exclusively on technical and cultural fit interviews. My references told me they had not been contacted.

                Calibrating your threat model against this attack is unlikely to give you any alpha in 2026. Hiring at tech companies and in government is much less deliberate than your mental model supposes.

                The current extent of background checks is an API call to Checkr. This is simply to control hiring costs.

                As a heuristic, building a threat model from speculated information is unlikely to yield a helpful framework.

                • bossyTeacher 5 hours ago
                  >the effort on their end was almost exclusively on technical and cultural fit interviews

                  How could you possibly know whether they use other, undisclosed methods as part of recruitment? You are assuming Palantir would behave ethically. Palantir, the company that will never win awards for ethics.

              • raw_anon_1111 11 hours ago
                As if any company that did that is a company I would want to work for.

                For instance, back when I was interviewing at startups and other companies where I was going to be a strategic hire, I would casually mention how much I enjoyed spending time on my hobbies and with my family on the weekend, so companies wouldn't even extend an offer if they wanted someone "passionate" who would work 60 hours a week and be on call.

                • two_tasty 11 hours ago
                  I certainly understand this perspective.

                  But is it really so hard to imagine a world where your individual choice to "opt out" and work only for companies that don't use that info is a massive detriment to your individual life? It doesn't have to be every single company doing it for you to have no _practical_ choice about it (if you want to make market rate for your services).

                  • raw_anon_1111 11 hours ago
                    I live my life by the "Ben Kenobi" principle. I'm 51; when things go completely to shit, I'll just go out and live as a hermit somewhere.
                    • ares623 6 hours ago
                      Ah the ol’ “fuck you got mine” approach
                      • raw_anon_1111 5 hours ago
                        Exactly what am I supposed to do? I vote for politicians who talk about universal healthcare, universal child care, public funding of college education and trade schools, etc.

                        But the country, and the people who could most benefit from it, are more concerned with whatever fake outrage Fox News comes up with, an anti-woke something or other.

                        So yeah, if this is the country America wants, I'm over it. I've done my bit.

                        While other people just talk about leaving the country, we are seriously doing research: we are going to spend a month and a half outside of the US this year, and I've already looked at post-retirement residency requirements in a couple of countries, including the one we are visiting.

                    • cindyllm 6 hours ago
                      [dead]
              • ffsm8 11 hours ago
                [flagged]
            • smsm42 11 hours ago
              Probably not directly; that would be too vulnerable. But they could hire a background-check company, which could pay a data aggregator to check whether you searched for some forbidden words, and then feed the results into a threat model...
            • Aurornis 4 hours ago
              No they do not.

              Anyone who has worked in hiring for any big company knows how much goes into ensuring hiring processes don't accidentally touch anything that could be construed as illegal discrimination. Employees are trained, policies and procedures are documented, and anyone who even accidentally says or does anything that comes too close to possibly running afoul of hiring laws will find themselves involved with HR.

              The idea that these same companies also have a group of people buying private search information or ChatGPT conversations for individual applicants from somewhere (which nobody can link to) and then secretly making hiring decisions based on what they find is silly.

              The arguments come with the usual array of conspiracy-theory defenses, like "how can you prove it's not happening?", or claims that it's well documented that it's happening, though nobody can ever link to that documentation.

            • Aurornis 5 hours ago
              I'm kind of amazed that so many people in this comment section believe their Google searches and ChatGPT conversations are being sold and used.

              Under this conspiracy theory, they'd have to be available for sale somewhere, right? Yet no journalist has ever picked up the story? Nobody has ever blown the whistle that their company was buying Google searches and denying applicants for searching for naughty words?

              • nyrikki 4 hours ago
                Google "doesn't sell your data" but RTB leaks that info, and the reason no one is called out for "buying Google searches and denying applicants for searching for naughty words" is because it is trivial to make legal.

                It is well documented in many many places, people just don't care.

                Google can claim that it doesn’t sell your data, but if you think that the data about your searches isn't being sold, here is just a small selection of real sources.

                https://www.iccl.ie/wp-content/uploads/2022/05/Mass-data-bre...

                And it isn't paranoia; consumer surveillance is a very real problem, and one of the few paths to profitability for OpenAI.

                https://techpolicy.sanford.duke.edu/data-brokers-and-the-sal...

                https://stratcomcoe.org/cuploads/pfiles/data_brokers_and_sec...

                https://www.ftc.gov/system/files/ftc_gov/pdf/26AmendedCompla...

                https://epic.org/a-health-privacy-check-up-how-unfair-modern...

                • Aurornis 4 hours ago
                  > and the reason no one is called out for "buying Google searches and denying applicants for searching for naughty words" is because it is trivial to make legal.

                  Citation needed for a claim of this magnitude.

                  > It is well documented in many many places, people just don't care.

                  Yes, please share documentation of companies buying search data and rejecting candidates for it.

                  Like most conspiracy theories, there are a lot of statements about this happening and being documented but the documentation never arrives.

                  • nyrikki 3 hours ago
                    Like most cults, you ignore direct links with cites from multiple government agencies, but here is another.

                    https://www.upturn.org/work/comments-to-the-cfpb-on-data-bro...

                    > Most employers we examined used an ATS capable of integrating with a range of background screening vendors, including those providing social media screens, criminal background checks, credit checks, drug and health screenings, and I-9 and E-Verify.29 As applicants, however, we had no way of knowing which, if any, background check systems were used to evaluate our applications. Employers provided no meaningful feedback or explanation when an offer of work was not extended. Thus, a job candidate subjected to a background check may have no opportunity to contest the data or conclusions derived therefrom.30

                    If you are going to ignore a decade of research, etc., I can't prove it to you.

                    > The agency found that data brokers routinely sidestep the FCRA by claiming they aren't subject to its requirements – even while selling the very types of sensitive personal and financial information Congress intended the law to protect.

                    https://www.consumerfinance.gov/about-us/newsroom/cfpb-propo...

                    > Data brokers obtain information from a variety of sources, including retailers, websites and apps, newspaper and magazine publishers, and financial service providers, as well as cookies and similar technologies that gather information about consumers’ online activities. Other information is publicly available, such as criminal and civil record information maintained by federal, state, and local courts and governments, and information available on the internet, including information posted by consumers on social media.

                    > Data brokers analyze and package consumers’ information into reports used by creditors, insurers, landlords, employers, and others to make decisions about consumers

                    https://files.consumerfinance.gov/f/documents/cfpb_fcra-nprm...

                    And that CFPB proposal was withdrawn:

                    https://www.consumerfinancialserviceslawmonitor.com/2025/05/...

                    Note screen shots of paywalled white papers from large HR orgs:

                    https://directorylogos.mediabrains.com/clientimages/f82ca2e3...

                    Image from here:

                    https://vendordirectory.shrm.org/company/839063/whitepapers/...

                    But I am betting you come back with another ad hominem, so I will stay in the real world while you ignore it. Enjoy having the last word.

                    • rendaw 1 hour ago
                      You keep straying from the question. The question was: who has access to Google searches? RTB isn't Google searches. Background screening isn't Google searches. Social media isn't Google searches. Cookies aren't Google searches. Etc.

                      Every link you provided is about tangential things. They're bad, yes, but they're not Google searches. Provide a link where some individual says, "Yes, I know what so-and-so searched for last Wednesday."

            • anal_reactor 12 hours ago
              Not yet. But Google itself would ask you for your resume if you happened to search for a lot of things related to programming.
              • LPisGood 12 hours ago
                Yes, I remember a friend who interned there a couple of times showed me that. One of the searches was "list comprehension python", and the Google website would split in two and give you some really fun coding challenges. I did a few; if you got 4(?) right, you got a guaranteed interview, I think. I intended to come back and spend a lot of time on an additional one, but I never did. Oops
                • anal_reactor 12 hours ago
                  I think I only did three or something, and I didn't hear back from them. Honestly, my view of Google is that they aren't as cool as they think they are. My current position allows me to slack off as much as I want, and it's hard to beat that, even if they offer more money (they won't in the current market).
              • Aurornis 5 hours ago
                "Ask you for your resume" is a funny way of saying "Show an advertisement to invite people to apply for a job"
          • purrcat259 9 hours ago
            > And how would you know what they base their hiring upon?

            GDPR Request. Ah wait, regulation bad.

        • DetectDefect 12 hours ago
          > It’s not legal to be denied jobs based on health.

          There is a vast gap between what is illegal and what is actually actionable in a court of law, something well understood by those with power.

        • trollbridge 12 hours ago
          How are you ever going to prove this?

          You just get an automated denial from the ATS, based on the output of an AI inference engine.

        • joe_the_user 4 hours ago
          The ADA made it illegal to discriminate against job seekers for health conditions, and Obamacare made it illegal to base coverage and rates on pre-existing conditions.

          What are the chances those bills last long in the current administration and supreme court?

      • matt3D 9 hours ago
        These strawman arguments lack nuance.

        If a person can use AI to lead a noticeably better life, something that may have been impossible previously due to economic circumstance, then the first-order benefits outweigh the second-order drawbacks.

        I'm not disputing what you're saying, I just think that treating it like a zero-sum game every time the conversation comes up shows an immense amount of privilege.

        You, me, the parent commenter: we're all dying; we don't have time to optimise for the best outcome.

        • newyankee 1 hour ago
          There is also no easy way to build a perfect health AI without giving up some privacy. There will always be risks, but this is why I think China might overtake everyone else in healthcare AI, at the least.
      • beepbopboopp 5 hours ago
        This is an argument against the general data-collection internet, NOT ChatGPT.
      • carlosjobim 3 hours ago
        What do you consider the purpose of life to be? To me being in good health is immensely more important than health insurance, a government health plan, or a job.

        I know that neither health insurers nor any government agency nor anybody else has even 0.0000000000000001% as much interest in my health, well-being, and survival as I do.

        When it is a matter of my health and my life, I care as much about what an insurer or employer thinks as I would care about what the Ayatollah of Iran thinks. Or what you think. I.e., those opinions are without any value at all.

        • paulryanrogers 3 hours ago
          Most of us cannot afford to pay the full cost of healthcare for an emergency or major intervention. Medical bankruptcy is an increasingly common phenomenon.

          So if insurers can cut you off based on your ChatGPT queries or test results then you may find yourself in serious debt, homeless, without medical care, etc

          • carlosjobim 2 hours ago
            Money is imaginary. Health is health. Sacrificing your health and your life in order to hedge against a completely hypothetical situation is not a dignified existence.

            Bankruptcy is of course much preferable to not having your health. Even having to argue this is bizarre. We are not human batteries in the Matrix. Our purpose is not to please institutions or destroy our own lives for fear of hypothetical situations.

            Don't you understand that you only have one life and one body? That's it. You have 70 or 80 years with one body and one mind. That is the only thing that matters.

      • SecretDreams 14 hours ago
        System working as intended!
      • lotsofpulp 13 hours ago
        If an insurer is able to reduce (or recoup) costs from higher-risk insureds, then the remaining insureds benefit from lower premiums.

        If the goal is providing subsidies (i.e. wealth transfers), then insurance is not the way to do it. That is the government’s role.
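
        A toy version of that premium arithmetic (a sketch with made-up numbers):

          # Pool of 100 insureds: 90 low-risk expecting $1,000/yr in claims,
          # 10 high-risk expecting $10,000/yr.
          low_cost, high_cost = 90 * 1_000, 10 * 10_000
          print((low_cost + high_cost) / 100)      # flat premium: $1,900

          # If the insurer can recoup or avoid half of the high-risk cost,
          # the premium for everyone remaining falls:
          print((low_cost + high_cost / 2) / 100)  # $1,400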

        • alpinisme 13 hours ago
          Insurance that is maximally responsive to patient health changes in terms of cost (i.e., making healthier people pay less) ends up being an inefficient way of just having people pay for their healthcare directly.

          And it naturally means the people with highest premiums are the least likely to be able to afford it (the elderly, the disabled, those with chronic conditions that make them less likely to maintain high earning jobs steadily, etc)

          • Workaccount2 12 hours ago
            The obvious retort to this is:

            "If I focused on my health, ate clean and exercised daily, why should I also be subsidizing Billy "video-games-are-my-exercise" fatass's chronic health conditions?"

            This is why there is a hyperfixation on shifting blame away from (failing) individuals. The logic breaks when Billy has to admit he just hates exercising.

            And yes, before you comment, I know "maybe Billy has (condition outside all control) so it's not on him". Please, see what I just said in the previous statement.

            • alpinisme 10 hours ago
              In some respects, the ideal world is one in which everyone’s premiums are tied to a free and easy Apple Watch-like device that silently tracks exercise, blood sugar at a frequency that can tell when you ate a big dessert, air quality (and the presence of smoke or pollution), blood alcohol content, whether you are in speeding cars, whether you are participating in dangerous sports, etc. Such a system would directly confront individuals with the cost of their behaviors in an economic way, probably leading many or even most people to improve their habits in the aggregate.

              But such a system comes at other costs that most people intuitively feel infringes on core values they have.

              Edit to add: this system would actually have some great advantages over a "pre-existing conditions" tax. Under that, you pay low rates until you have diabetes, all during the time you are leading the unhealthy lifestyle; but once you have it, you are not rewarded for starting to exercise, eating healthy, and getting it under control. In the hypothetical scenario above, you'd be punished economically during the period you were building bad habits, and you would be able to restore sane costs after course correction.

              • Terr_ 3 hours ago
                There's a similar phenomenon when people grouse about paying taxes for "roads I don't even use." Even if we assume zero indirect benefits, the billing infrastructure necessary to truly achieve that goal would create a creepy panopticon of constant surveillance.

                This is difficult to convey to certain brands of self-styled libertarians.

            • duskdozer 11 hours ago
              It also fails to take into account the fact that eating clean and exercising daily doesn't eliminate your risk of getting cancer at age 40 or having your car's brakes fail randomly.
              • Workaccount2 8 hours ago
                It's dumb to create an insurance program using anecdotes.

                The system can accommodate (and frankly, it is the ideal many people strive for) some health nut fighting a long, drawn-out cancer battle at 41. It's rare enough to be noise in the giant money payout pool.

                Obesity and its litany of health effects are not rare, and next to age, they are the dominant signal drawing money from the pool.

            • OverTheTetons 12 hours ago
              Is the obvious retort to this:

              "I don't think we should play arbiter of who has and hasn't lived a healthy enough life to deserve healthcare"?

              • Analemma_ 11 hours ago
                Yeah, I think what Workaccount2 is not realizing is that there's no bottom to "you have higher risk factors, why should I pay for you?", and so once you start down that way you may not like where it ends up. Some hobbies have higher injury rates, why should I pay for your health care if you choose to play those? Some parts of the country have lower life expectancies, why should I pay for your health care if you choose to live there?
                • Workaccount2 11 hours ago
                  The actual realization, which usually comes years after the realization that there is no bottom, is that there is no top either.

                  The battle along the spectrum of privatizing gains (lower healthcare premiums for a healthy lifestyle - high premiums for unhealthy lifestyle) vs socializing losses (paying $20/mo to get $1200/mo of care - paying $1200/mo for $0/mo of care) is constant and boundless in either direction.

                  • ben_w 11 hours ago
                    But there is a bound in both directions?

                    On one end, it's "national insurance": functionally equivalent to fully tax-funded healthcare like the NHS, or the German system with several providers competing but regulated to near-identical results (though moreso, as the UK and Germany also have private care); on the other end, it's the absence of insurance.

                    • Workaccount2 8 hours ago
                      This is a macro problem larger than health insurance, and it exists everywhere, from employee bonuses, high school group project grades, and handicap parking to gas prices and Everest summits.

                      Those might all seem wildly disconnected, but they all have systems of unfair allocation to compensate for unequal outcomes.

                      Generally, national healthcare programs are entirely dependent on young healthy people paying into the system despite rarely needing it, and then hopefully enough of them dying quick deaths or having multiple children to cover their costs. These rebalancing systems are artificial, and humans are generally terrible at managing them.

            • ImPostingOnHN 11 hours ago
              The entire purpose of health insurance is spreading risk across a wide and diverse risk pool.

              > why should I also be subsidizing Billy "video-games-are-my-exercise" fatass's chronic health conditions?"

              Nobody is asking you to: enrolling in insurance is a choice in the USA.

              Also, replace "chronic health conditions" with "unavoidable inherited genetic risk factors". We don't want Billy to be screwed for life just because he was born to a suboptimal combination of parents.

              • Workaccount2 11 hours ago
                The most cataclysmic thing that could happen to healthcare would be chronically healthy individuals creating their own health insurance.
                • ImPostingOnHN 10 hours ago
                  Not really (in fact, that doesn't even make sense), but in any case, I think you replied to the wrong post. Your reply doesn't seem to have anything to do with the post it replied to.
                  • Workaccount2 7 hours ago
                    You said enrolling in insurance was a choice. If everyone with a BMI under 22 who did at least an hour of cardio a week had their own health insurance club, they would be doing great, with cheap premiums and great care, while everyone else drowned in medical debt as their plans collapsed.

                    The choice right now is all or nothing. There is no choice for healthy people to only share a plan with other healthy people. If there was, everyone else, especially the least healthy, would be totally screwed.

                    • ImPostingOnHN 7 hours ago
                      I am still having trouble understanding how that pertains to the post to which it replied. Here are those two points again:

                      1. In the US, enrolling in health insurance is currently a choice (I can't tell whether your hypothesis about healthy people agrees or disagrees with this point).

                      2. We don't want Billy to be screwed for life just because he was born to a suboptimal combination of parents.

                      Would you mind clarifying the connection here, please?

                      • Workaccount2 5 hours ago
                        1. I'm saying the choice is a faux one. Healthy people will have dramatically lower medical bills throughout their lives, so why shouldn't they get discounted health insurance? Which leads to your second point, which I already addressed in my initial post:

                        2. >This is why there is a hyperfixation on shifting blame away from (failing) individuals. The logic breaks when Billy has to admit he just hates exercising.

                        • ImPostingOnHN 4 hours ago
                          > Healthy people will have dramatically lower medical bills throughout their life, so why shouldn't they get discounted health insurance?

                          I never said they shouldn't. It's just a topic totally unrelated to whether or not people are forced to buy insurance, and I'm not interested in discussing that new, unrelated topic.

                          > Billy has to admit he just hates exercising.

                          This logic breaks down when Billy actually exercises and eats just fine, but was nonetheless born to a suboptimal combination of parents.

            • FireBeyond 10 hours ago
              > "If I focused on my health, ate clean and exercised daily, why should I also be subsidizing Billy "video-games-are-my-exercise" fatass's chronic health conditions?"

              Then why are you not asking your insurer why they cover a lot less preventative health care and other options? For example, Kaiser flat-out refuses to prescribe GLP-1s for weight loss; other insurers are the same with gym subsidies or not covering nutritionists.

              But they'll happily pay for your gastric bypass.

          • IanCal 6 hours ago
            > Insurance that is maximally responsive to patient health changes in terms of cost (ie making healthier people pay less) ends up being an inefficient way of just having people pay for their healthcare directly.

            That's true for predictable costs, but not true for unpredictable ones - which is the point of most insurance (housing, car, etc). The point and use of insurance is to move risk to entities that can bear it.

            Utility is non-linear in money, and so you easily have situations where spending X times more on something "costs" you more than X times as much, measured in how useful the money is to you.

            Typically, as you have more money, each further dollar doesn't provide as much benefit as the last (sometimes things are lumpy: the difference between "not quite enough to pay rent" and "just enough to pay rent" is huge, but broadly this holds). Going from $1,000 to $10,000 is more impactful than going from $1,001,000 to $1,010,000.

            That means that moving the other way, each additional dollar spent has a greater personal cost to you.

            Therefore, sharing unlikely but high expenses can mean that your expected cost is the same (if there's no profit/middleman) or a bit higher, but your expected personal cost is lower.
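
            A quick worked example of that concavity argument, as a sketch with made-up numbers and log utility:

              import math

              u = math.log                 # concave utility: each extra dollar helps less
              wealth, loss, p = 100_000, 80_000, 0.01
              premium = 1_000              # more than the expected loss (0.01 * 80_000 = 800)

              # Uninsured: small chance of a catastrophic hit
              eu_uninsured = (1 - p) * u(wealth) + p * u(wealth - loss)
              # Insured: pay the premium with certainty, never take the big hit
              eu_insured = u(wealth - premium)

              print(eu_uninsured)  # ~11.4968
              print(eu_insured)    # ~11.5029: higher, despite overpaying on average

            Even with a premium above the expected loss, the certain small payment beats the small chance of a catastrophic one.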

        • monooso 13 hours ago
          Not a US citizen, so a genuine question: do US health insurance companies have a track record of passing on such savings to consumers?

          That has not been my impression as an outside observer.

          • zdragnar 13 hours ago
            Ostensibly, the Affordable Care Act was supposed to reduce the average family's premiums by $2,500 a year.

            When that didn't happen, the story changed: that number became how much more premiums would otherwise have risen.

            Insurance premiums have only gone up as far as I can remember, though there's a ton of variables at play here. Inflation is an obvious one, plus continual introduction of more and more costly treatments- biologic injections, cancer therapies and so forth. The unfortunate increase in obesity rates in my lifetime (along with all the health complications) has been a significant contributor as well.

            It all adds up.

            • tzs 10 hours ago
              > Insurance premiums have only gone up as far as I can remember, though there's a ton of variables at play here.

              An interesting thing about rising health costs is that they have risen at roughly similar rates in most first-world countries for the last 50+ years.

              For example in 1990 the UK, FR, and US were paying 2.0, 2.2, and 2.6 times their 1980 costs per capita. By 2000 that was 4.1, 4.1, and 4.2. By 2018 (the last year I had data for when I calculated this a few years ago) it was 10.6, 7.5, and 10.2.

              Here's the 2000-to-2018 increase for those and some others: DE 2.1, FR 1.8, CA 2.0, IT 1.7, JP 2.6, UK 2.6, US 2.3.

              When politicians in the US talk about rising health care costs they usually put the blame on recent policies from opposing politicians. That so many first world countries with so many different health care systems all have seen similar increases for the last 50+ years suggests that it is due to something they all have in common and that government policy doesn't affect it much.

            • alright2565 12 hours ago
              The individual mandate part of the ACA was the part designed to reduce premiums. You need healthy participants in any health insurance scheme to subsidize unhealthy people.

              That was eliminated by a Republican bill, the Tax Cuts and Jobs Act of 2017.

              • zdragnar 11 hours ago
                Yes, and that was years after the ACA took full effect and the rates did not decrease.

                Most people get insurance through their employer, and most employer plans (at medium to large companies) are self funded by the company and merely administered by insurance companies.

                That means the healthy participants had no effect on those plans whatsoever. Even at its peak, the individual mandate had only cut the number of uninsured by half, and the effect on rates was negligible.

              • kjkjadksj 11 hours ago
                FWIW, the state of California has its own individual mandate.
            • lotsofpulp 8 hours ago
              In 2010, it was already known that the ratio of old to young was increasing and the proportion of doctors was decreasing.

              Prices were always going to increase.

          • vjvjvjvjghv 9 hours ago
            "passing on such savings to consumers"

            Absolutely not. They inflate prices by 200% and then give you 20% "savings" back. The whole idea of a health insurance company as publicly traded corporation is totally insane. They are designed to extract maximum profit from wherever they can get. The is no incentive to save money for patients. Any savings go to shareholders.

            • lotsofpulp 8 hours ago
              There are plenty of health insurers that are not publicly traded, and in fact are non profit, and yet they have the same premiums as the publicly traded ones. See Kaiser, Providence, Cambia, Regence, and the dozens of other BCBS affiliated plans.

              If your claims were true, then the publicly traded businesses would have no customers.

              • Buttons840 5 hours ago
                > There are plenty of health insurers

                Yeah. And they all ultimately have to be paid by my health premiums.

                When I visit my parents and they have the TV on, and I see three 90-second advertisements for prescription drugs every single commercial break, I remind myself that, no matter what, we have to keep funding these commercials. Whatever the US decides to do for healthcare, I guarantee these commercials will continue to be paid for.

                I also guarantee that pharmacy company executives, and insurance company executives will continue to make millions of dollars a year. We have to keep paying them as well.

                The huge bureaucracy of insurance workers who decide what is and isn't approved, that all will have to be paid for as well.

                I used to work for a company that did background checks on doctors. We had different customers in every state; every state had its own companies and its own system for maintaining and verifying doctors' licenses. These different companies in each state have to be paid. I made good money as a programmer doing background checks for these various companies, and my paycheck also ultimately came from your medical premiums.

                I think we need to stop and appreciate the patriotic duty we all have to pay high premiums and medical costs, because every time we do we're propping up a huge portion of US workers. -- Just kidding. The truth is a lot of companies need to go out of business before things get cheaper.

              • vjvjvjvjghv 5 hours ago
                I would argue the non profit insurers are as profit oriented as the publicly traded ones.
          • butvacuum 13 hours ago
            You're correct. UHC is so hated because they're a "pharmacy benefits manager": an organization that exists solely to make your life so miserable you give up on getting your medication.
            • FireBeyond 10 hours ago
              Aetna has its own scumminess. Want the convenience of 90-day refills? You have to use their mail-order service. They'll refuse to authorize >30-day supplies of medication through any other pharmacy.
              • DANmode 10 hours ago
                These comments (on UHC, Aetna) always strike me the same way as complaints about the lack of an inflight meal, baggage fees, or lack of WiFi on, say, Spirit Airlines.

                You looked at the list of insurers (or of jobs offering health insurance carriers), selected the cheapest or second-cheapest option, and you're surprised they're harder to work with? How?

                • FireBeyond 8 hours ago
                  Oh yes, because the vast majority of employers in the US say "pick whatever healthcare plan you want from whatever carrier you want, we'll make it work" and not "You'll get what you're given and be glad for it".

                  Even in tech spaces with money to throw around, that just means that maybe your partner and dependent's premiums will be covered/negligible, or that your deductible will be low.

                  You're still going to be fucked by their policies, though.

                  • DANmode 6 hours ago
                    You’ve highlighted why I do not work for the vast majority of employers in the US.

                    Pay me enough in cash to secure my own wellness with whatever organizations I choose, or: next, please.

          • lotsofpulp 13 hours ago
            Health insurance companies in the US are not allowed to deny coverage, and can only price premiums based on age (highest premium capped at 3x the lowest), location, and tobacco use.

            https://www.healthcare.gov/coverage/pre-existing-conditions/

            https://www.healthcare.gov/how-plans-set-your-premiums/
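
            To make those rating rules concrete, here is a minimal sketch of the premium structure they imply (illustrative base rate and factors only; real age curves and geographic factors are filed per state):

              def aca_premium(base_rate, age_factor, geo_factor, tobacco_factor):
                  # Only age, location, and tobacco use may vary the price.
                  # Age factors are capped at a 3:1 ratio and tobacco at 1.5:1;
                  # health status may not be used at all.
                  assert 1.0 <= age_factor <= 3.0 and 1.0 <= tobacco_factor <= 1.5
                  return base_rate * age_factor * geo_factor * tobacco_factor

              print(aca_premium(300, 1.0, 1.1, 1.0))  # young non-smoker: ~$330
              print(aca_premium(300, 3.0, 1.1, 1.5))  # older smoker: ~$1485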

            Health insurance premiums in the US are more tax than insurance. Insurers also have low single-digit profit margins with less-than-desirable shareholder returns (many are non-profit in the first place), so they don't have much room to lower premiums without also reducing healthcare expenses.

            The insurance business in general is very competitive and not very profitable, so an insurer that tries to collect outsized premiums will usually suffer a loss of business.

            • monooso 13 hours ago
              > The insurance business in general is very competitive and not very profitable...

              Knowing several Americans, and how much they pay for health insurance (and are still required to pay for some things "out of pocket"), this is incredible to me. And that's before you even get to the process of making a claim.

              Such a broken system.

        • SpicyLemonZest 13 hours ago
          We agree that insurance is not the right way to handle health as a product, since some people predictably need much more medical treatment than others. But it’s how the US has chosen to do it, so we have to do it in a way that works. Correctly identifying a systemic issue won’t pay your medical bills.
          • lotsofpulp 13 hours ago
            That is not how the US has chosen to do it. The ACA prohibits using anything other than age, location, and tobacco use for premium pricing, and the ACA prohibits denying coverage (resulting in a wealth transfer from healthy to sick).

            Even the age rating factor is capped at 3, so there are also massive wealth transfers from young to old.

            Mathematically, health insurance premiums in the US are more tax than insurance premium.

            • trollbridge 12 hours ago
              The ACA doesn't prevent some magical fairy-dust AI from pricing premiums, though, which is currently all the rage amongst insurers. (Not because AI will be accurate or anything, but because it offers a completely opaque pricing process.)
              • avalys 11 hours ago
                Why not? If the only factors allowed to be used in setting premiums are age, location, and smoking status, then those are the only parameters that could be input to an AI model, no?
                • kyboren 10 hours ago
                  Warning: I am not in this industry and the below is speculation:

                  AIUI the idea is to predict the "correct" price for an individual premium, Y, which is restricted to being the result of some insurance model function, f(), that is itself restricted to the domain of age (A), location (L), and smoking status (S):

                    Y = f(A, L, S)
                  
                  My impression was that this would handicap insurers' natural desire to price premiums individually and have a smoothing effect on prices across the population.

                  But why is location useful for insurers to price premiums? I assume because healthcare has different costs in different locations, and different utilization rates: People living in coal mining towns or near oil refineries may be expected to use more healthcare and thus cost more to insure.

                  Thus, you can imagine insurers building a price map (like a heat map) overlay for the state/country, plotting each applicant on it, and checking the "color" there as part of their model function. So they are effectively embedding out-of-band information (prices and utilization rates for locations) into the model function f() itself.

                  What "AI", or large-parameter Deep Neural Networks, fundamentally change here is:

                     - They can approximate any effectively computable function, the same class of functions that a programmer can write and execute on a computer[0].
                  
                     - They can be scaled up to an arbitrarily large number of parameters, i.e. an arbitrarily precise approximation limited only by the number of parameters and the amount and quality of model training data.
                  
                     - Absolutely critically: They are programmed implicitly, through brute-force training on input-output pairs, rather than explicitly, with some programmer writing an explicit series of instructions to compute the function.
                  
                  This last point is probably the most important.

                  Large insurers previously had sophisticated models for f() hand-built by math whizzes, but they were limited in the amount of out-of-band information they could encode into that function by the limited cognitive and programmatic capacity of a human team.

                  But now with DNNs you can scalably encode unlimited out-of-band information into the function f(), while also totally obscuring how you're computing that location-based price adjustment.

                  The result, in extremis, is that f() is not some fancy algorithm cooked up by a few dozen math whizzes. Instead f() becomes a fancy database, allowing the tuple (A, L, S) to act merely as an index to an individualized premium Y, which defeats the entire purpose of restricting the domain of their model function.

                  [0]: https://en.wikipedia.org/wiki/Universal_approximation_theore...
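
                  As a toy illustration of that "function as database" point (purely hypothetical premiums, with scikit-learn's MLPRegressor standing in for the insurer's DNN):

                    from sklearn.neural_network import MLPRegressor

                    # Hypothetical training pairs: (age, location_id, smoker) -> premium,
                    # where the target premiums were derived offline from factors the
                    # model is not allowed to take as inputs.
                    X = [[30, 0, 0], [30, 1, 0], [55, 0, 1], [55, 1, 1]]
                    y = [310.0, 495.0, 880.0, 1240.0]

                    # An over-parameterized net can memorize every training tuple, so
                    # f(A, L, S) degenerates into a lookup table keyed by (A, L, S).
                    f = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=20_000,
                                     random_state=0).fit(X, y)
                    print(f.predict([[30, 1, 0]]))  # ~495 once the net has memorized the table

                  The regulator sees only the restricted domain (A, L, S); the forbidden information lives in the weights.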

                  • avalys 5 hours ago
                    That's not how courts and laws work. If you cheat and use other, illegal factors to compute a premium for each person, and then create an AI model that effectively looks up your illegally calculated premium for each person by their location, they're going to reach the obvious conclusion: you are calculating premiums using illegal factors.
                    • kyboren 3 hours ago
                      Like I said: It's only useful to have location in your model's domain if you have side-channel information embedded in the model function itself about what those location data mean for the correct premium price. What we're talking about here is just a way to embed much more information in your model function than a human reasonably could.

                      Given the magnificent ability of DNN models to serve as obfuscatory black boxes, and the general techno-ignorance of legislators and regulators, I suspect that "AI laundering" your violations is actually a very effective way to juke all sorts of laws and regulations.

                      But both of us are just speculating. If you have insider industry knowledge or can point to regulatory guidance and/or enforcement actions in this area that corrects or confirms my understanding, I would love to read about it.

    • class3shock 3 hours ago
      Or it's a placebo effect.

      And if it didn't work out and made you worse, or, god forbid, the advice caused you to get seriously injured, then what? ChatGPT won't take any responsibility.

      I have so many issues with our current health system, but the alternative is not an unreliable search tool that takes no responsibility for the information it provides.

      • yonaguska 3 hours ago
        In a similar vein, I have recurring back issues due to a spinal problem. I gave the issue to ChatGPT and it gave me almost all of the exercises I had been given years ago by a chiropractor. It's nowhere near a replacement for having someone coach me through the movements, though.
    • JKCalhoun 13 hours ago
      I also think health (and car-problem diagnosis) are excellent tasks for LLMs.

      The you-are-the-product thing, and privacy, have me wondering when Apple will step in and provide LLM health in a way we can trust.

      I know I say that and face the slings and arrows of those distrusting Apple, but I still believe they're the one big company out there that knows there is money in being the one guy who doesn't sell your data.

    • narmiouh 13 hours ago
      I don't think one can deny the benefits here. The detractors here are like: don't build a sidewalk because someone may trip and fall, or don't plant trees in your front yard because of what happened to the Texas governor.

      Most would likely agree that everything needs a balanced approach; completely bashing a service as evil and advocating that people stay away, versus claiming the service is flawless (which the OP isn't doing, btw), is not a balanced position either.

      Think different doesn't have to mean think extreme.

    • nitwit005 1 hour ago
      If you'd been doing the rehab for 10 years, what did you need exactly? It seems like you should have had a decade to ask whatever questions you wanted.
    • jsheard 14 hours ago
      On the other hand, sometimes you end up like this guy. Are you feeling lucky?

      https://arstechnica.com/health/2025/08/after-using-chatgpt-m...

      • zaptheimpaler 10 minutes ago
        No, there's a difference between radically changing your diet and changing up your stretch/strength routine. You don't just "end up" like one of them; you can evaluate that the downside risk of the latter is much lower and try it safely, while recognizing that an extreme diet might not be so safe to try without any professional guidance.
      • vjvjvjvjghv 13 hours ago
        You could also list plenty of horror stories where people went to medical professionals and got screwed over. There is this myth that people can go to doctors and get perfect attention and treatment. Reality is far from that.
        • datsci_est_2015 13 hours ago
          There’s the concept of “personal advocacy” when receiving healthcare. Unfortunately, you’ll only get the best outcomes if you continually seek out treatment with diligence and patience.

          But framing it as a “myth [of] perfect attention and treatment” sounds a bit like delegitimizing the entire healthcare industry in a way that makes me raise my eyebrow.

          • vjvjvjvjghv 9 hours ago
            "But framing it as a “myth [of] perfect attention and treatment” sounds a bit like delegitimizing the entire healthcare industry in a way that makes me raise my eyebrow."

            It doesn't delegitimize the whole industry. It points out real problems. A lot of patients are not given enough attention and don't get the correct treatment, because the doctors didn't listen but rushed through things.

            • datsci_est_2015 9 hours ago
              I was criticizing the rhetoric, not the sentiment. I’m skeptical of an argument when it flies too close to what I associate with irrationality and pseudoscience, especially considering what’s happened in medicine over the past 5 years.

              The “myth [of] perfect attention and treatment” is an easy strawman for grifters and conmen to take advantage of: see RFK Jr.

          • cm2012 11 hours ago
            I'd say the healthcare industry works hard but is probably operating at something like 20% of its possible productivity due to systemic issues.
            • datsci_est_2015 9 hours ago
              How do you measure productivity? Profit per employee has probably never been higher, as PE and other rent-seeking leeches (residency caps) have wrapped their fingers around the throat of the industry.

              Positive outcomes per patient are probably also higher, due to research and technology advances. So many lives are saved that would have been written off just a decade or two ago (e.g., spina bifida).

              But I agree with you that there’s a hypothetical universe where seeking healthcare as an American doesn’t suck, I just don’t know if “productive” is the right word to describe it.

          • willparks 10 hours ago
            Yes, there's been tension between personal advocacy and the system for a long time. Doctors roll their eyes when a patient mentions they self-diagnosed on WebMD. LLMs will accelerate self-diagnosis immensely. This has the potential to help patients, but it is just a starting point; of course, it should be verified by actual trained doctors.
            • datsci_est_2015 9 hours ago
              Agreed, I described it elsewhere in this thread as K-shaped outcomes.
        • arter45 12 hours ago
          A big part of the legal implications of LLMs and AI in general is about accountability.

          If you are treated by a human being and it goes sideways, you can sue them and/or the hospital. Granted, you may not always win, and it may take some time, but there is some chance.

          If you are "treated" by an LLM and it goes sideways, good luck trying to sue OpenAI or whoever is running the model. It's not a coincidence that LLM providers are trying to put disclaimers and/or claims in their ToS that LLM advice is not necessarily good.

          Same goes for privacy. Doctors and hospital are regulated in a way that you have a reasonable, often very strong, expectation of privacy. Consider doctor-patient confidentiality, for example. This doesn't mean that there is no leak, but you can hold someone accountable. If you send your medical data to ChatGPT and there is a leak, are you going to sue OpenAI?

          The answer in both cases is: yes, you should probably be able to sue an LLM provider. But because LLM providers have a lot of money (way more than any hospital!), are usually global (jurisdiction could be challenging), and often say themselves that LLM advice is not necessarily good (which doctors cannot say so easily), you may find that much more challenging than suing a doctor or a hospital.

        • lionkor 13 hours ago
          Are medical professionals not usually held accountable, globally speaking?
          • ipaddr 13 hours ago
            Lawsuits against medical professionals are difficult, and in many cases impossible, for the average person to win. They are held less accountable compared to other professions.
      • JKCalhoun 12 hours ago
        "…a 60-year-old man who had a “history of studying nutrition in college” decided to try a health experiment: He would eliminate all chlorine from his diet…"

        You can see already that this can easily go sideways. This guy is already exploring the nether regions of self-medication.

        It would be ideal if LLMs recognized this and would not happily offer up bromide as a substitute for chloride, but I suspect this guy would have greedily looked for other shady advice even if LLMs had never existed.

      • bwb 10 hours ago
        You have to use your head, just like online forums or with doctors :)

        I've had doctors tell me to do insane things. Some that caused lasting damage. Better to come with a trust-but-verify attitude to humans and AI.

      • wiseowise 13 hours ago
        Natural selection at work. I don’t see anything suspicious here.
      • glemion43 13 hours ago
        The man in the article did not use it as a research aid and did not verify it with experts.

        So what's your argument?

      • mhb 13 hours ago
        Did he also drive into a lake following Google Maps' driving directions?
    • JumpCrisscross 1 hour ago
      > to communicate with doctors effectively

      Did the doctors agree? I never thought of AI as a good patient navigator, but maybe that’s its proper role in healthcare.

    • datsci_est_2015 13 hours ago
      It seems like outcomes are probably K-shaped: those who are capable of critical thinking and deciding what type of information should be confirmed by a healthcare professional and what type of information is relatively riskless to consume from ChatGPT should have positive outcomes.

      Those who are prone to disinformation and misinterpretation may experience some very negative health outcomes.

    • ryan_n 13 hours ago
      That's awesome that it's helped you so much; chronic back pain is awful. Is it possible, though, that this could be interpreted as a failure of the trainer to come up with a successful treatment plan for you? "Sudden" relief after 10 years of therapy, just because you changed the program, seems like they were having you perform the wrong exercises, no?
      • narmiouh 13 hours ago
        We also have to understand that the trainer didn't get to spend 12 hours researching every minutia or run a trial-and-error study to get to where OP got. This doesn't necessarily mean the trainer failed, just that they were constrained by time, which OP wasn't. And I think that is the essence of this tech: when used wisely, it can lead to results like these, which you can't get despite having access to the best talent for a limited time. Only the wealthy can afford a full-time trainer/therapist.
        • ryan_n 13 hours ago
          Absolutely, I didn't mean any disrespect towards any of the professionals helping OP with their back issues. It can be an incredibly hard thing to treat.
      • bwb 10 hours ago
        The trainer was a godsend, got me to ~95% pain-free, and taught me all kinds of things. He is amazing.

        But in 2025 I was maybe down to 90% pain-free, and I wanted to get stronger. So I did a big rewrite of my entire workout plan and checked everything. AI wasn't perfect, but it was amazing when you already know some of the subject.

        It is still a tool I had to direct, and it took a few days of work. But I'm amazed at where it got me. It took the injury and my main sport into consideration, and built around that. In the past I tried to do this online and couldn't, given the numerous factors involved. It was not perfect, but over the course of a few days I was able to sort it out (and tested the approach with a trainer a few weeks after).

        I've been 100% pain free for 6 weeks in a way I haven't felt in a long time.

        • DrammBA 10 hours ago
          Without getting into your specific injury or sport, what was the biggest change compared to the trainer’s program?

          Was it something unexpected like "exercise this seemingly unrelated muscle group that has nothing to do with your injury but just happens to reduce pain by 75% for some inexplicable reason"?

          Or was it something more mundane like "instead of exercising this muscle every day, do it every other day to give it time to rest"?

    • rconti 11 hours ago
      It can be helpful, but also untrustworthy.

      My mother-in-law has been struggling with some health challenges the past couple of months. My wife (her daughter) works in the medical field and has been a great advocate for her mother. This whole time I've also been peppering ChatGPT with questions, and in turn I discuss matters with my wife based on this.

      I think it was generally correct in a lot of its assertions, but as time goes on and the situation doesn't improve, I occasionally revisit my chat and update it with the latest results and findings, and it keeps insisting we're at a turning point and that this is exactly what we should expect to be happening.

      6 weeks ago, I think its advice was generally spot on, but today it's just sounding more tone-deaf and optimistic. I'd hate to be _relying_ on this as my only source of advice and information.

      • codexjourneys 11 hours ago
        Totally agree, it can be a bit of an echo chamber. I had an infection post-dental-work. Bing Chat insisted I had swollen lymph nodes from a cold that would resolve on their own, then decided I had a salivary gland infection. After a follow-up with a real-world ENT, it was (probably accurately) diagnosed as a soft-tissue infection that had completely resolved on two rounds of antibiotics. The AI never raised that possibility, whereas the ENT and dentist examined me and reached that conclusion immediately.

        I do think AI is great for discussing some health things (like "how should I interpret this report or test result?"), but it's too echo chamber-y and suggestion-prone for accurate diagnosis right now.

        • bwb 10 hours ago
          Yeah, I wouldn't trust it for diagnosis at this point. But it can help get you pointed in the right direction so humans, tests, and the scientific process can try to figure out the rest.

          Doctors struggle with diagnosis as well. I have stories and I bet everyone has stories about being passed from doctor to doctor to doctor, and none of them talk to each other or work holistically.

    • atmosx 14 hours ago
      I agree. LLMs cannot and should not replace professionals, but there are huge gaps that can be filled by the information they provide, and the fact that you can dig deeper into any subject is huge.

      This is probably a field where MistralAI could use privacy and GDPR as leverage to build LLMs around.

      • phatfish 13 hours ago
        One of the big issues I have with LLMs is that when you start a prompting session with an easy question, it all goes great. It brings up points you might not have considered and appears very knowledgeable. Fact checking at this stage will show the LLM is invariably correct.

        Then you start "digging deeper" on a specific sub-topic, and this is where the risk of an incorrect response grows. But it is easy to continue with the assumption the text you are getting is accurate.

        This has happened so many times with the computing/programming-related topics I usually prompt about that there is no way I would trust a response from an LLM on health-related issues I am not already very familiar with.

        Given that the LLM will give incorrect information (after lulling people into a false sense of it being accurate), who is going to be responsible for the person who makes themselves worse off by self-diagnosing, even with a privacy-focused service?

        • atmosx 11 hours ago
          The responsibility always falls to the patient. That's true with doctors as well: you visit two doctors and they give you different diagnoses; one tells you to go for surgery, the other tells you it's not worth the hassle. Who decides? The patient does.

          LLMs are yet another powerful tool in our belt; you know they can hallucinate, so be careful. That said, even asking for specialized info about this or that medical topic can be a great thing for patients. That's why I believe it's a good thing to have specialized LLMs that can tailor responses to individual health situations.

          The problem is the framework and the implementation end goal. IMO, state-owned health data is a goldmine for any social welfare system, and now with AI they can make use of it in novel ways.

        • JKCalhoun 12 hours ago
          That's a good point—and I have probably fallen victim to it as well: the "sliding scale" of an LLM's authority.

          Like you, I fact-check it (well, search the internet to see if others validate the claims/points) but I don't do so with every response.

    • jmyeet 13 hours ago
      This kind of comment scares me because it's an example of people substituting an LLM for professional advice, when LLMs are known to hallucinate or otherwise simply make stuff up. I see this all the time when I write queries and get the annoying Gemini AI snippet on a subject I know about; often I'll see the AI make provably and objectively false statements.
      • bwb 10 hours ago
        You have to use critical thinking + it helps to have some info on the subject + it shouldn't be used to perform self-surgery :)

        I spent about 12 hours over 2 days checking, rechecking, and building out a plan. Then I did 2-hour sessions on YouTube, over several weeks, learning the new exercises with proper form (and that continues, as form is hard). That was followed by an appointment with a trainer to test my form and review the workout as a whole (which he approved of). No trainer really knows how this injury will manifest, so a lot is also helped by my having 10 years of experience.

        This isn't clicking a button and then following the LLM like a lemming. This is a tool, like Google search but better.

        I could not have done this before with just the web. I would have had to read books and research papers, then try to understand which exercises didn't heavily target certain muscle groups, etc. I just couldn't do that. The best case would have been a trainer with the same injury, maybe.

      • simianwords 13 hours ago
        You are exaggerating. LLMs simply don’t hallucinate all that often, especially ChatGPT.

        I really hate comments such as yours because anyone who has used ChatGPT in these contexts would know that it is pretty accurate and safe. People also can generally be trusted to identify good from bad advice. They are smart like that.

        We should be encouraging thoughtful ChatGPT use instead of showing fake concern at each opportunity.

        Your comment, like many others, just tries to signal pessimism as a virtue and has very little bearing on reality.

        • avalys 11 hours ago
          All we can do is share anecdotes here, but I have found ChatGPT to be confidently incorrect about important details in nearly every question I ask about a complex topic.

          Legal questions, questions about AWS services, products I want to buy, the history of a specific field, so many things.

          It gives answers that do a really good job of simulating what a person who knows the topic would say. But details are wrong everywhere, often in ways that completely change the relevant conclusion.

          • DBNO 1 hour ago
            I definitely agree that ChatGPT can be incorrect. I’ve seen that myself. In my experience, though, it’s more often right than wrong.

            So when you say “in nearly every question on complex topics", I’m curious what specific examples you’re seeing.

            Would you be open to sharing a concrete example?

            Specifically: the question you asked, the part of the answer you know is wrong, and what the correct answer should be.

            I have a hypothesis (not a claim) that some of these failures you are seeing might be prompt-sensitive, and I’d be curious to try it as a small experiment if you’re willing.

          • Jarwain 7 hours ago
            I don't think LLMs do a significantly worse job than the average human professional. People get details wrong all the time, too.
        • ipaddr 12 hours ago
          LLMs give false information often. Your ability to catch incorrect facts is limited by your knowledge and by your ability and desire to do independent research.

          "LLMs are accurate about everything you don't know but factually incorrect about things you are an expert in" is a common observation for a reason.

          • pgwhalen 12 hours ago
            As I've used LLMs more and more for fact-type queries, my realization is that while they give false information sometimes, individual humans also give false information sometimes, even purported subject-matter experts. It just turns out that you don't actually need perfectly true information most of the time to get through life.
          • simianwords 12 hours ago
            No they don’t give false information often.
            • ziml77 12 hours ago
              They do. To the point where I'm getting absolutely furious at work at the number of times shit's gotten fucked up and when I ask about how it went wrong the response starts with "ChatGPT said"
            • ipaddr 12 hours ago
              Do you double-check every fact, or are you relying on being an expert in the topics you ask an LLM about? If you are an expert on a topic, you probably aren't asking an LLM anyhow.

              It reminds me of someone who reads a newspaper article about a topic they know and says it's mostly incorrect, but then reads the rest of the paper and accepts those articles as fact.

            • tempest_ 12 hours ago
              I have them make up stuff constantly for smaller Rust libraries that are newish or don't get a lot of use.
            • mythrwy 12 hours ago
              "Often" is relative but they do give false information. Perhaps of greater concern is their confirmation bias.

              That being said, I do agree with your general point. These tools are useful for exploring topics and answers; we just need to stay realistic about their current accuracy and bias (eagerness to agree).

            • mythrwy 12 hours ago
              I just asked chatGPT.

              "do llms give wrong information often?"

              "Yes. Large language models produce incorrect information at a non-trivial rate, and the rate is highly task-dependent."

              But wait, it could be lying, and maybe they actually don't give false information often! But if that were the case, this lie alone would verify that they give false information at a non-trivial rate, because I don't ask it that much stuff.

      • travisgriggs 13 hours ago
        I have this same reaction.

        But I also have to honestly ask myself “aren’t humans also prone to make stuff up” when they feel they need to have an answer, but don’t really?

        And yet despite admitting that humans hallucinate and make failures too, I remain uncomfortable with ultimate trust in LLMs.

        Perhaps, while LLMs simulate authority well, there is an uncanny valley effect in trusting them, because some of the other aspects of interacting with an authority figure are "off".

    • DANmode 10 hours ago
      It doesn’t even have to be that well-read (although it is),

      it just has to listen to your feedback more than 11 minutes per visit,

      so it can have a chance at effectively steering you…

    • bossyTeacher 5 hours ago
      >my ability to understand health problems

      How do you know that this understanding is correct? To me, epistemologically, this is not too different from gaining your health knowledge from a homeopath or gaining your physics knowledge from a Flat Earther. You are in no position to discern the validity of your "knowledge".

      • carlosjobim 3 hours ago
        > You are in no position to discern the validity of your "knowledge".

        He is the only one who is in that position, because he is the only person who is inside his body. He is physically and mentally a hundred percent in the position to discern the validity of the advice.

      • zemvpferreira 5 hours ago
        It's a lot easier to double-check, cross-reference, or test the validity of advice given by a GPT. It has no authority, no persuasion mechanisms, and its opinions are there in plain text, ready to be picked apart. You can ask it for references and non-confrontationally challenge it on the things you're sceptical about. It generally avoids woo in my experience; though it's hardly always correct in specific advice, it can definitely point you in productive directions. Which is completely different from discussing anything health-related with a homeopath, who at best will try to get you to avoid productive treatment and at worst poison you.
    • metalliqaz 14 hours ago
      Anything you say can and will be used against you.
      • bwb 10 hours ago
        I've been on the web since it was born. When will this happen? :)

        How is this manifesting in reality?

      • JKCalhoun 12 hours ago
        You bring up an interesting point.

        How is it we have come to a place in society where we second-guess everything we type? But perhaps also where we go (with our "tracking devices" in our pockets…).

        I mean, obviously the internet is what changed everything. But it is like you have a megaphone strapped to your face whenever you connect to a site and make a comment.

        Maybe this is not a good thing.

  • brianhur 1 hour ago
    If it is genuinely beneficial, this will become an open source project that everyone is able to run with a local agent in their house that runs cold. I will make one if no one else will, but discovering how to make it ubiquitously helpful and not fraught with legal liability is challenging. I welcome a company willing to take this early risk.
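
    As a rough sketch of the shape that could take: the snippet below queries a locally hosted model over Ollama's HTTP API (assuming Ollama is installed, running on its default port, and has a model already pulled; the model name and prompt are illustrative assumptions, not a finished design). The point is that the health question never leaves the machine:

        import json
        import urllib.request

        # Ollama's default local endpoint; nothing here talks to a remote service.
        OLLAMA_URL = "http://localhost:11434/api/generate"

        def ask_local_model(prompt: str, model: str = "llama3") -> str:
            payload = json.dumps({
                "model": model,
                "prompt": prompt,
                "stream": False,  # return one complete response instead of chunks
            }).encode("utf-8")
            req = urllib.request.Request(
                OLLAMA_URL,
                data=payload,
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req) as resp:
                return json.loads(resp.read())["response"]

        # Example use; the lab values are made up for illustration.
        print(ask_local_model(
            "Explain in plain language: hemoglobin 10.9 g/dL (ref 11.7-15.5), "
            "MCV 68.1 fL (ref 80-100). What follow-up questions should I ask my doctor?"
        ))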
  • zkmon 14 hours ago
    Google did that, Facebook did that and every other company who boasted their user-base numbers did that. They sold user attention and harvested user data. Nothing new here.
  • aurareturn 14 hours ago
    Do users find value in it? That's the ultimate question. I think the answer is a resounding yes.
    • MattRix 14 hours ago
      Users found value in leaded gasoline too.
      • glemion43 13 hours ago
        Which is not a helpful argument in this discussion.

        It's too blunt and comes across as ignorant.

        • MattRix 12 hours ago
          I think the argument that “value to users is the only thing that matters” is the blunt one. My point is that most situations are much more nuanced than that.
        • PunchTornado 13 hours ago
            If you have nothing to say but insults, it's better not to post.
          • glemion43 7 hours ago
              I clearly said something.

              I transparently evaluated his argument.

        • Drunkfoowl 13 hours ago
          [dead]
  • TinyBig 3 hours ago
    A phrase I liked to describe what we're doing with LLMs is "building a personal panopticon". The benefits are immense but you're placing a huge bet on the landlord of the tower.
  • seydor 14 hours ago
    ChatGPT mostly refuses to talk about health issues, while I have found Gemini reasonably cooperative when asked about things like symptoms and treatments.

    This makes me not want to try out their new offering.

    • al_borland 1 hour ago
      My dad used ChatGPT to guide him through testing, diagnosis, preparation, and recovery from a quadruple bypass. He never mentioned it refusing to talk about anything, just how indispensable it was in the process and how it saved him months compared to relying on the medical system alone.

      For example, it told him to go out and get a test on his own before meeting with a certain specialist, so the specialist didn’t order it during the first meeting and then need to wait for a follow up after reviewing it. He did this, gave the results to ChatGPT, which sounded the alarm. He sent a message with the results to his doctor and his appointment with the specialist was moved up to the same week, instead of 3 months out.

    • pruetj 14 hours ago
      Is that foreshadowing of ChatGPT redirecting users to ChatGPT Health in some way? Base product no longer answers everything, now I need to pay an add-on fee to talk about health related things?
      • kmoser 11 hours ago
        I doubt it. It's probably a CYA thing. There are a whole bunch of hot-button topics ChatGPT won't talk about, not because OpenAI will ever try to monetize them but because they're fodder for lawsuits.
    • trollbridge 13 hours ago
      It's quite easy to "jailbreak" by asking it to discuss hypotheticals, help you write accurate information for a fictional account, etc.
      • accrual 4 hours ago
        This is my experience too. Most bots are happy to discuss health stuff in a vacuum, which works for some queries.
    • ramoz 11 hours ago
      I've had fairly complex health issues and have never had issues with ChatGPT - other than worrying about the vast majority of people in my scenario who do not understand AI.

      AI can enable very misleading analysis and misinformation when a patient drives the conversation a certain way. Something I've observed in the community I'm a part of.

      Not talking about acid reflux or back pain.

    • astura 14 hours ago
      What sorts of things did it refuse to talk to you about? I've talked with it about my health extensively and uploaded lab reports which it happily interpreted.
      • seydor 14 hours ago
        It reads the lab report, but every one of its answers is prefaced with a refusal. It does provide some information but insists on general answers.

        Example phrases it uses: "I can’t give medical advice or tell you what to do based on this report". "I will keep this general and informational, not personalized medical instructions."

        • astura 13 hours ago
          Weird, this is not my experience at all. For example, here's ChatGPT giving me an incorrect diagnosis (the anemia is due to a genetic condition, not iron deficiency).

          =============

          You said: MyChartPLUS - Test Details.pdf PDF Can you interpret these blood test results ChatGPT said: Here’s an interpretation of the blood test results you provided:

          White Blood Cell Count (WBC) Value: 9.2 (Normal: 3.8 - 10.8 Thousand/uL)

          Within the normal range, indicating no immediate signs of infection or inflammation.

          Red Blood Cell Count (RBC) Value: 5.29 (Normal: 3.8 - 5.1 Million/uL)

          Elevated (High). This could indicate dehydration, stress, or other conditions like polycythemia.

          Hemoglobin Value: 10.9 (Normal: 11.7 - 15.5 g/dL)

          Low. Suggests anemia, possibly due to iron deficiency or chronic disease.

          Hematocrit Value: 36.0% (Normal: 35.0 - 45.0%)

          Within the normal range, but on the lower side, supporting the presence of mild anemia.

          Mean Corpuscular Volume (MCV) Value: 68.1 (Normal: 80.0 - 100.0 fL)

          Low. Indicates microcytic anemia, often due to iron deficiency.

          Mean Corpuscular Hemoglobin (MCH) Value: 20.6 (Normal: 27.0 - 33.0 pg)

          Low. Suggests hypochromia (paler red blood cells), often associated with iron-deficiency anemia.

          Mean Corpuscular Hemoglobin Concentration (MCHC) Value: 30.3 (Normal: 32.0 - 36.0 g/dL)

          Slightly low. May reflect reduced hemoglobin in red blood cells, consistent with iron-deficiency anemia.

          Red Cell Distribution Width (RDW) Value: 15.8% (Normal: 11.0 - 15.0%)

          Elevated. Indicates variability in red blood cell size, common in iron-deficiency anemia.

          Platelet Count Value: 397 (Normal: 140 - 400 Thousand/uL)

          Normal, but on the higher side of the range.

          Other Key Metrics Absolute Neutrophils, Lymphocytes, Monocytes, Eosinophils, Basophils: All within normal ranges, suggesting no significant abnormalities in immune cell populations.

          Anisocytosis (1+): Indicates variation in red blood cell size.

          Hypochromasia (1+): Suggests pale red blood cells, supporting iron-deficiency anemia.

          Summary These results suggest iron-deficiency anemia, characterized by low hemoglobin, low MCV, low MCH, and high RDW. The elevated red blood cell count may reflect compensatory mechanisms or mild dehydration.

          Your provider recommends a heart-healthy, low-fat diet, which is great for overall health. However, addressing the anemia might also involve increasing dietary iron or taking iron supplements.

          If you have symptoms such as fatigue, shortness of breath, or dizziness, it’s advisable to follow up with your healthcare provider for further management.

          =============

          Later on (months later) I gave it a different lab report that contained iron as well, and without the iron deficiency red herring it guessed correctly.

          =============

          You said: lab.pdf PDF ChatGPT said: Here’s what jumps out from your lab report. lab

          Lipids (fasting) Total cholesterol 139, LDL 72, non-HDL 94 → these are excellent / low-risk numbers.

          Triglycerides 141 → normal (borderline-ish but still under 150).

          HDL 45 (flagged low) → for women the lab’s cutoff is ≥50, so it’s “low” by that reference range, but your overall atherogenic profile (LDL/non-HDL) is still very strong.

          CBC: clear microcytic anemia pattern These are the key pieces:

          Hemoglobin 10.5 (low) and Hematocrit 34.9 (low) → anemia.

          MCV 67.8 (low) + MCH 20.4 (low) → red cells are small and low-hemoglobin (microcytic, hypochromic pattern).

          RBC count 5.15 (slightly high) + RDW 15.8 (high) and morphology showing anisocytosis → this combo is important.

          Iron studies: not iron-deficiency looking Ferritin 109 (normal)

          Iron 83 (normal)

          % saturation 24 (normal)

          TIBC 343 (normal)

          That’s not the classic pattern of iron deficiency (where ferritin is usually low and iron saturation often low). lab

          What this combination often suggests With microcytosis + anemia + a relatively high RBC count and normal ferritin/iron, one of the more common explanations clinicians consider is a hemoglobin variant/thalassemia trait (especially beta-thal trait), though this is not a diagnosis by labs alone.

          There are other possibilities too (less common, but part of the “microcytic anemia with normal iron” differential), so the right next step is targeted follow-up testing rather than guessing.

          What I’d ask your clinician about next Repeat CBC (to confirm) + reticulocyte count

          Peripheral smear review (if not already done beyond the automated morphology flags)

          Hemoglobin electrophoresis (or HPLC) to evaluate for beta-thalassemia trait/hemoglobin variants

          If electrophoresis is normal but suspicion remains: alpha-thalassemia testing can be considered (often not seen on standard electrophoresis)

          If there’s any reason to suspect it: lead level (uncommon, but classic cause of microcytosis)

          When to seek urgent care vs routine follow-up Routine follow-up is usually fine, but get urgent evaluation if you have:

          chest pain, fainting, severe shortness of breath at rest

          black/tarry stools, heavy ongoing bleeding, or rapidly worsening symptoms

          If you tell me whether you’ve had heavy periods, recent pregnancy/postpartum, GI symptoms, or a known family history of anemia/thalassemia, I can help you interpret which follow-up path is most likely to be useful—but even without that, the “microcytic anemia + normal iron studies” piece is the main thing to bring to your ordering clinician.

          =============

    • levocardia 6 hours ago
      "While I can't provide medical advice..." is the 2025 version of "As a large language model trained by OpenAI..."
  • tamakiiroha 3 hours ago
    I think the consequences of hackers obtaining health data like this would be unimaginable. OpenAI is far inferior to Apple when it comes to privacy and security protection.
  • SecretDreams 13 hours ago
    LLMs for medical info are good, but they're easily abused. I've got a friend who is an anxious mom. They use GPT/Gemini to "confirm" all of their suspicions and justify far more doctor/medical visits than is at all reasonable, while also getting access to more recurring antibiotics than is reasonable. LLMs are basically giving them the gunpowder to waste doctors' time and slam an already stressed medical system when all their kids need most of the time is some rest and soup.
    • ramoz 11 hours ago
      Yea, I'm in a particular health community. A lot of anxious individuals, for good reason, end up posting a lot of nonsense they derived from self-influenced chatgpt conversations.

      That said, when used as a tool you have power over - ChatGPT has also freed up some of my own anxiety. I've learned a ton thanks to ChatGPT as well. It's often been more helpful than the doctors and offers itself as an always-available counsel.

      • accrual 3 hours ago
        Another user above described the curve as K-shaped, and that resonates with me as well. Above a certain line of knowledge and discernment, the user is likely to benefit from the tool. Below the line, the tool can become harmful.
    • hsuduebc2 13 hours ago
      Yeah, it’s a very powerful tool, and it needs to be used carefully and with intent. People on Hacker News mostly get that already, but for ordinary users it’s a full-on paradigm shift.

      It moved from: A very precise source of information, where the hardest part was finding the right information.

      To: Something that can produce answers on demand, where the hardest part is validating that information, and knowing when to doubt the answer and force it to recheck the sources.

      This happened in the space of a year or two, so I can't really assign blame. The truth machine, where you didn't need to focus much on validating answers, changed rapidly into a slop machine where, ironically, your focus is much more important.

      • JumpCrisscross 1 hour ago
        > People on Hacker News mostly get that already

        It’s super easy to stop fact checking these AIs and just trust they’re reading the sources correctly. I caught myself doing it, went back and fact checked past conversations, and lo and behold in two cases shit was made up.

        These models are built to engage. They’re going to reinforce your biases, even without evidence, because that’s flattering and triggers a dopamine hit.

      • SecretDreams 13 hours ago
        > This happened in the space of a year or two, so I can't really assign blame. The truth machine, where you didn't need to focus much on validating answers, changed rapidly into a slop machine where, ironically, your focus is much more important.

        Very much this for the general public. I view it as borderline* dangerous to anyone looking for confirmation bias.

        • hsuduebc2 12 hours ago
          Yeah. Especially with the absolute garbage that is the Google AI summary, which is just slightly worse than their "AI mode". I never saw anything hallucinate that much. It's made worse by the fact that it is included in every search and carries the Google "stamp of quality", which was usually the mark of a well-functioning product.
          • SecretDreams 12 hours ago
            It's funny, because their thinking Gemini with good prompting is solid, but the injected summaries could easily be terrible if the person doing the querying lacks a certain base knowledge of the query.
          • hsuduebc2 12 hours ago
            And the tiny text at the bottom, which shows the statement "AI responses may include mistakes" only after clicking "show more", will certainly not fix that.

            At the very least, the wording should be "is making mistakes" rather than vaguely stating that it may, in some cases, produce a mistake. A "mistake" can also be perceived as a wrongly placed link and not absolutely made-up information.

  • redmattred 13 hours ago
    The HHS is asking for recommendations on how to leverage AI for healthcare: https://www.hhs.gov/press-room/hhs-ai-rfi.html

    This is probably part of an effort to position them as a potential vendor to help the government with this.

    • aabajian 10 hours ago
      As an interventional radiologist, I want it to be easier to see images from outside hospitals. Epic has nearly solved the problem of seeing outside medical records. Yet, I still can't see the images for the CT scan you had from the hospital across the street unless I call the file room and get the images transferred.

      I imagine once data sharing is more robust, it would be easier to validate AI models (at least specifically for radiology).

    • andersonpico 9 hours ago
      > This is probably part of an effort to position them (...)

      who is "them" referring to in this sentence?

  • kmoser 10 hours ago
    This sounds like it will be one of those products that starts out as an optional service but eventually becomes required if you want to participate in the overall healthcare system.
  • liamconnell 5 hours ago
    Not mentioned in the article, but one interesting area where OpenAI could play is in participant identification and recruitment for clinical trials. In fact, ChatGPT could also help operate the clinical trials which is a highly paperwork intensive business, and therefore something that AI could add value to.

    Ultimately, pharmaceutical companies pay up to $100,000 per participant to hospital networks; for the most part, these charges must be itemized as expenses from the hospital (bounties are usually illegal). OpenAI would provide a cheap way in for pharmaceutical companies to identify participants, given that OpenAI has an incredible perspective into the physical and psychological state of its users. Imagine how much more is shared with OpenAI than with a clinical trial coordinator at a hospital when a psychiatric drug is being tested.

    This would also give OpenAI leverage in partnering with pharmaceutical companies. OpenAI executives have stated this is a goal, but otherwise they’ve made little progress on it.

    It's wild to imagine: someone with borderline personality disorder having delusional conversations with an AI chatbot for six months, receiving an offer to participate in a clinical trial, and then having their subsequent AI conversations used as evidence to analyze the efficacy of the drug. The ironic thing is, if that person had delusions about hidden forces listening to them…they'd be RIGHT!

  • glemion43 6 hours ago
    Me, and I hope they are selling me something to fix my RLS.

    I would give a lot of money for that.

  • nusl 14 hours ago
    I find it ironic that the article is warning against AI use while it uses an AI-made cover image. Surely they find the same fault with copyright issues and AI art? Right?
    • visarga 3 hours ago
      > This isn’t just a health assistant. This is infrastructure for a healthcare marketplace.

      It's an AI written article IMHO.

      • kurtoid 1 hour ago
        I interpreted that line as tongue-in-cheek, but on a second reading, I think you're right
    • tigerBL00D 13 hours ago
      There's some irony in that, sure. But at the heart of the matter is privacy. It's about what you input into the chat, not about the output.
    • atmosx 13 hours ago
      It's not against AI. It's against privacy issues arising through data mining & doublespeak.
  • hereme888 9 hours ago
    My level of trust for data:

    1) Claude

    2) OpenAI

    3) Grok

    4) Gemini

  • DudeOpotomus 14 hours ago
    Dystopian and, frankly, gross. It's amazing to me that so many people are willing to give up control over their lives and, in this case, their bodies, for the smallest inkling of ease.

    The only thing you have control of in this world is your body (men only; women have already been denied bodily autonomy in the US), so giving this to the very entities that "do harm", as opposed to those who pledge to "do no harm", is straight-up bonkers.

    It's not the data or the use of said data for the intended purpose. There is a law of sorts in life that says whatever they promise, it will be broken. The data and its intended purpose will be perverted and ultimately used as a weapon against the very people who provided the data.

    • glemion43 13 hours ago
      Blah blah blah.

      I have a more niche genetic issue, and I'm glad for you that you can think like this, but no one cares enough to do the proper research for my problem.

      If ML, massive compute, or Google/ChatGPT health products do something in this direction (let's be honest, anything), I'm glad for it.

      • DudeOpotomus 13 hours ago
        You will be denied coverage and treatment because you volunteered your personal data with zero controls over its use and your rights.
        • lotsofpulp 13 hours ago
          Denied by whom? Most developed nations will not deny you, including the US. Either way, that's a separate problem, solved by legislation or increased wealth transfers. Denying people information doesn't help.
          • DudeOpotomus 13 hours ago
            A truly naive understanding of American business and politics.
      • threetonesun 13 hours ago
        Do you believe that ChatGPT is doing the research? I'm all in favor of better access and tools to research, but at least in the US all of the research is being defunded, we're actively kicking researchers out of the country, and a bunch of white billionaires are proposing this as an alternative, based on training data they won't share.

        This is a product feature that invalidates WebMD and the like. It does not solve any health problems.

    • azan_ 14 hours ago
      > Dystopian and frankly, gross. Its amazing to me that so many people are willing to give up control over their lives and in this case, their bodies, for the smallest inkling of ease.

      I've read people with chronic conditions reporting that ChatGPT actually helped them land a correct diagnosis that doctors did not consider, so people are not just using it for an "inkling of ease".

      • DudeOpotomus 13 hours ago
        Yes, trading your privacy and autonomy for perceived ease is how they are going to steal your future and your freedom.
        • azan_ 13 hours ago
          Please read my comment again. If you lived with chronic pain that multiple doctors failed to correctly diagnose, and ChatGPT actually suggested the correct diagnosis, then you wouldn't call it just perceived ease, but something that made your life much, much better. I'm a doctor and I'm all for empowering patients (as long as they consult ChatGPT output with actual doctors). It's very easy to criticize people resorting to LLMs if you do not have a rare debilitating condition that's not correctly diagnosed.
          • DudeOpotomus 13 hours ago
            With all due respect, you are thinking like a good person, a human being who spent decades of their life to learn how to care for people. You took a pledge to Do-no-Harm. You are looking at these tools as tools.

            The owners and future owners of said data do not care about anything other than profits and exploitation. They do not care about the patient or the doctor, let alone the consequences of their doings. They took a pledge to make profits regardless of the harm: a position fundamentally opposed to that of the medical doctor.

            • cm2012 11 hours ago
              I'll take a mysterious and amorphous future risk of privacy violations any day over current chronic pain.
              • Forgeties79 11 hours ago
                It’s not mysterious and amorphous. We have seen the results with social media for what? 15 years now? This is a known issue with clear parallels. And health data is way riskier to have floating around.
                • whatevermom4 1 hour ago
                  That's obviously a take from someone who has never suffered chronic pain. If you have a life-long mystery illness that doctors don't care about, obviously you're going to give your data to ChatGPT Health, because it at least looks like it's listening...
                  • Forgeties79 1 hour ago
                    You have no clue what my medical history is and I will not be sharing it. Clearly you have an axe to grind. You refuse to try and actually discuss this topic without assuming you have the high ground that you’re depending on to spike any attempt at conversation.

                    Have a good rest of your weekend.

          • Forgeties79 13 hours ago
            What they seem to be saying is “this is how they get you,” which I agree with. Whether or not it’s immensely helpful is not being debated. There’s a very serious cost no matter what.
    • lotsofpulp 14 hours ago
      How is someone seeking a way to deal with an inherited or environmentally caused illness giving up control of their body?
      • wiseowise 13 hours ago
        Don't forget that the majority of the commenters on this platform live in a country that views suffering in pain from an incurable disease as the "god-intended way" (and a horse dose of morphine). Take it with a grain of salt.
        • ImPostingOnHN 10 hours ago
          Setting aside that you're factually incorrect, this sort of negative stereotyping of others based on their nationality (or ethnicity, or race) is inappropriate, especially on this forum. We don't need more bigotry here.
      • DudeOpotomus 13 hours ago
        You will be assigned an individualized risk figure that will determine whether or not you are given coverage and treatment. Those decisions will happen without your or any MD's involvement. You will never know it happened, and it will follow you for the rest of your life and your children's lives.
    • wiseowise 13 hours ago
      > Dystopian and frankly, gross. Its amazing to me that so many people are willing to give up control over their lives and in this case, their bodies, for the smallest inkling of ease.

      You have to be extremely privileged to say something like this.

      a) nobody is giving up control of their lives

      b) get off your high horse, son

  • Forgeties79 14 hours ago
    There's no real smoking gun here showing what they are taking, storing, and using (or how they're using it). But I do agree that nobody should dive into this until that's better understood, as it is likely they are not following HIPAA and will not respect your privacy. Definitely not when there's money on the table. Given their track record, the concern is warranted.
  • kaffekaka 14 hours ago
    Once again, glad to live in Europe.
    • _qua 13 hours ago
      If you lived in the US and didn't like this product, you could just choose not to use it. What benefit do you, as a citizen of Europe, derive from having this withheld from you?
      • trollbridge 13 hours ago
        "You can just choose not to use it", sure, until signing a consent form to use ChatGPT becomes mandatory for a doctor visit, just like all kinds of other technology (like having a cell phone to verify SMS, for example) is basically essential now to function in society.
        • gordian-mind 12 hours ago
          Doctors in Europe already use LLMs to treat you.
          • stevekemp 9 hours ago
            That sounds like the kind of hallucinated statement you might expect from ChatGPT.

            Which doctors, in which countries, are using LLMs to treat patients?

            • lurking_swe 5 hours ago
              I'm not the person you replied to, but a quick Google search is just as much effort (on your part) as replying with a sassy "this sounds like a hallucination". A low-value comment, in my opinion.

              I found this:

              https://www.who.int/europe/news/item/19-11-2025-is-your-doct...

              Quote:

              > “AI is already a reality for millions of health workers and patients across the European Region,” said Dr Hans Henri P. Kluge, WHO Regional Director for Europe. “But without clear strategies, data privacy, legal guardrails and investment in AI literacy, we risk deepening inequities rather than reducing them.”

      • b3kart 13 hours ago
        > you didn't like this product, you can just choose to not use it

        This is an over-simplification. I might like the product, but not be aware of the various ways it violates my privacy. Having laws that make it more risky for companies to do nefarious things makes me more confident that if a product is available in the EU market it doesn't do obviously bad things.

      • kaffekaka 12 hours ago
        It is not that "this product is withheld from me". It is that we have laws to protect against abusive corporations. ChatGPT Health not being launched in the EU is because OpenAI themselves realized it abuses people's privacy.
      • Forgeties79 13 hours ago
        I get that some of us here in the US have a near-allergic reaction to regulation or prohibition of any kind, but come on, man. At some point you have to acknowledge that we need the government to protect us from corporate greed, even on rare occasion. "Just don't use it" is not a real argument when basically everyone is now expected to use LLMs at work and beyond.
        • _qua 13 hours ago
          Well, I really love being protected from cookies. It has done wonders for my experience on the web.
  • simianwords 13 hours ago
    [flagged]
    • DetectDefect 12 hours ago
      > It is inconceivable for this demographic to think of a product that can enrich both the corporation and the users.

      Would you care to provide an example of such a product, to dissuade this perceived demographic cohort? I am not so certain your creative name-calling will yield much in the way of results, however.

      • maxerickson 12 hours ago
        It's literally everything that people in relatively free markets buy.

        Microsoft Windows is a great example that people will get really angry about, but, like bubble gum, it also enriches the producer, seller, and buyer.

        • ipaddr 12 hours ago
          Those are not great examples.

          Bubble gum puts the buyer in a worse dental health situation.

          Windows is a monopoly that controls the desktop market and the buyer would have been better off with a richer market with a variety of options.

          You could have said cigarettes. They provide the same pleasure benefits as gum with unhealthy outcomes for the buyer.

          • maxerickson 12 hours ago
            You are making the mistake of asserting that the buyer exactly shares your values.

            With Windows, I didn't assert that it was the value-maximizing case, merely that it benefits both parties. Which it does, most likely to the advantage of the buyers.

            • DetectDefect 11 hours ago
              By the same logic, an inmate in prison is benefiting from incarceration because they are receiving food and housing. Obviously, the costs of being imprisoned are greater than any "enrichment" from it, and this is exactly the case with Windows and almost all technology.
              • maxerickson 11 hours ago
                I would say my statement about relatively free markets addresses that.

                Businesses that expand their licensing agreements with Microsoft aren't doing it because they are being coerced.

                • DetectDefect 9 hours ago
                  "Free" market coercion is probably the only reason companies like Microsoft are still in business.
        • simianwords 12 hours ago
          It’s concerning that this was even a question!!!
      • brianf0 12 hours ago
        I mean… you don’t need to look very hard. Smart phones? Google Maps?

        Is your life not enhanced by these products?

        • DetectDefect 12 hours ago
          Not really. They provide convenience, sure, but fundamentally all technology creates dependence and limits freedom by influencing behavior in subtle ways, while disproportionately benefiting the technology owner/operator.
          • brianf0 7 hours ago
            How does, say, refrigeration create dependence and limit freedom?

            What are we “supposed” to be like?

          • simianwords 12 hours ago
            You are part of the demographic I described.
            • DetectDefect 12 hours ago
              Cool. Got anything interesting to say to us, besides expressing your disbelief at our mere existence?
      • simianwords 12 hours ago
        ChatGPT
        • DetectDefect 12 hours ago
          This "enrichment" is wildly asymmetrical to users, however.
          • simianwords 12 hours ago
            For sure, I get much, much more value out of it than I pay.
            • DetectDefect 12 hours ago
              That is not the asymmetrical power dynamic described between you and technology.
              • simianwords 12 hours ago
                “Power dynamic”: yet another metaphysical concern.
                • DetectDefect 12 hours ago
                  Concern for metaphysics is shared by many great thinkers, including Aristotle and Kant, so your unwillingness to even apprehend it is disappointing.
                  • simianwords 11 hours ago
                    Next time you buy shampoo from the store, please do think about Aristotle and Kant.