Brand MU Day

    AI Megathread

Pavel @Hobbie

      @Hobbie said in AI Megathread:

      tl;dr if you let dumb AI learn from dumb AI, AI gets dumber.

      So now I should put my poses through all the LLMs, and eventually they’ll break!

      He/Him. Opinions and views are solely my own unless specifically stated otherwise.
      BE AN ADULT

somasatori

        Some poor med student attempting to use GPT to write a paper for their ophthalmology class:

        “The patient’s orbs glistened with lacrimal fluid like morning dew…”

        "And the Fool says, pointing to the invertebrate fauna feeding in the graves: 'Here a monarchy reigns, mightier than you: His Majesty the Worm.'"
        Italo Calvino, The Castle of Crossed Destines

Aria @Hobbie

          @Hobbie said in AI Megathread:

          I genuinely hope this bubble bursts with the force of a nuke because at some point in the near future an AI will introduce a genuinely serious problem that requires human resolution and there’s no humans around who have the knowledge to fix it for them.

          tl;dr if you let dumb AI learn from dumb AI, AI gets dumber.

          I cannot “THIS!!!” this hard enough.

          My company is currently on a kick of “We’re gonna teach the product managers to code their own products! And the UXers! And the scrum leads! Everyone’s gonna vibe code and AI code and we’re going to release new features so fucking fast and it’s gonna be AWESOME!”

Meanwhile, I’m just sitting here watching this with a look of vague horror on my face, because 1) my company works in one of the most heavily regulated industries in the country, and 2) it’s large enough that a single fuck-up can and has resulted in articles in national news publications, because everyone likes to watch a giant stumble. So I keep looking at this “model of the future” thinking that we’re basically going to turn our code into a nightmare of non-functionality as hundreds of people get their sticky little fingers into it and only, like, half of them have any idea how it works. The other half will just shove whatever the newest ‘agentic coding tool’ says into production, because that’s what the computer told them and the computer must be right.

          We’re going to get slapped with the sort of regulatory fine that could pay for 20 developers for the next five years, and then everyone’s going to stand there looking surprised.

[Image: a cat sitting on a couch with its mouth open]

          I’m pretty sure we’re all just living in the plot of Wall-E now and I hate it here.

Faraday @Aria

            @Aria said in AI Megathread:

            I’m just sitting here watching this with a look of vague horror on my face

            Oh dear heavens, all the sympathy. That sounds like my worst nightmare.

I worked in FDA-regulated software for a while. I’m sure @Aria knows this well, but for non-software folks: In safety-critical, regulated industries, it is a well-known fact—learned through bitter experience, horrific recalls, and lost lives—that it is utterly impossible to test software thoroughly enough to ensure it’s safe once it’s already been built. There are just too many edge cases and permutations. Safety/quality has to be baked in through sound design practices.

AI has no concept of this. You can ask it to “write secure code” or whatever, but it fundamentally doesn’t know how to do that. I sincerely hope it does not take a rash of Therac-25-level disasters to teach the software industry that lesson again.

Aria @Faraday

              @Faraday said in AI Megathread:

              @Aria said in AI Megathread:

              I’m just sitting here watching this with a look of vague horror on my face

AI has no concept of this. You can ask it to “write secure code” or whatever, but it fundamentally doesn’t know how to do that. I sincerely hope it does not take a rash of Therac-25-level disasters to teach the software industry that lesson again.

Yeeep. I’m not in medical research anymore, but I used to work in the office that monitored clinical trials for a massive university hospital system, including training clinical research coordinators on how to maintain documentation to standard.

              You do not mess around with people’s lives, livelihoods, and life savings. If you break those, there’s really no coming back. I don’t understand why we have to keep learning this, but I guess some tech bro billionaire and all his investors that can’t actually follow along with what he’s saying need the money to upgrade their yachts.

Hobbie

                I work in fintech. We are involved with some big cranky banks. The current AI push driven by our CEO is several GDPR breaches waiting to happen and my security guy is sitting there pulling his… actually he has no hair so I suppose he’s pulling his beard out! I’m right there with him, both on the frustration and lack of hair.

                Devs, infra, solution design et al, we don’t get paid to write lines of code, we get paid to write the right lines of code. That’s why we have PRs and reviewing them is where all the productivity maybe-gained is being absolutely-lost.

                I’m not even on the dev teams, I’m in infra, and even I’m copping it from product people trying to push code to my repos now. UGH.

Third Eye @Hobbie

                  @Hobbie
LOLOLOL. I also work at a fintech, in compliance, and the direction on AI is incredibly schizophrenic. They’re like WE LOVE AI THE CEO WANTS MORE AI… but also don’t put ANY of your work into public AI, and probably not even the proprietary ChatGPT rip-off we have internally, because you’re probably gonna violate personally identifiable information agreements. I just don’t use it and am doing all right. There have been some nice internal robotics enhancements that’ve come out of this whole mess, but none of them have given me any confidence this stuff will take my job anytime soon.

                  I want something else to get me through this
                  Semi-charmed kinda life, baby, baby
                  I want something else, I'm not listening when you say good-bye

                  She/Her or They/Them

dvoraen @Hobbie

                    @Hobbie said in AI Megathread:

                    I work in fintech. We are involved with some big cranky banks. The current AI push driven by our CEO is several GDPR breaches waiting to happen and my security guy is sitting there pulling his… actually he has no hair so I suppose he’s pulling his beard out! I’m right there with him, both on the frustration and lack of hair.

                    Devs, infra, solution design et al, we don’t get paid to write lines of code, we get paid to write the right lines of code. That’s why we have PRs and reviewing them is where all the productivity maybe-gained is being absolutely-lost.

                    I’m not even on the dev teams, I’m in infra, and even I’m copping it from product people trying to push code to my repos now. UGH.

But don’t worry, the solution will be to train an “AI” to accept and deny the right PRs, so that eventually it’ll get it right, and then you can work on more important things and let “AI” fill in the rest.

… Twenty years later, when maybe something marketed as “AI” learns enough to write proper code instead of parroting it. (And after “a few” lawsuits and payouts related to GDPR and other data leaks, company implosions all over the world, etc. etc.)

                    But I’m not going to hold my breath on this.

Pavel @dvoraen

                      @dvoraen And somehow someone still needs to know COBOL.

Faraday @Pavel

@Pavel Which GenAI will almost certainly never be able to do, because there isn’t enough COBOL stuff out there for it to steal for training data.

Pavel @Faraday

                          @Faraday All COBOL knowledge is held exclusively by two men, both called Steve. They’re not allowed to travel at the same time, to avoid the risk of all worldly knowledge of COBOL being lost in the same incident.

Rathenhope @Hobbie

@Hobbie As another person who works in fintech, I have been incredibly lucky on this score: I’m the most senior tech person other than the CTO, and both of us are incredibly sceptical of LLMs, so while there has been the occasional push to Do More With AI, we’ve been able to stand firm and not bring it into general use.

                            And we both got vindicated this quarter when two potential clients (the biggest we’d have) both went “we consider any use of AI to be high risk and we don’t want client information anywhere near it” and we went “excellent that’s our philosophy too.”

That said, I’ve been banned from talking about LLMs on our all-hands calls, as it invariably turns into a 20-minute rant about why they are bad for our purposes and the calls are only meant to be 30 minutes long.
