Brand MU Day

    AI Megathread

    No Escape from Reality
    265 Posts, 46 Posters, 31.3k Views
    • Ominous @Juniper
      last edited by Ominous

      @Juniper said in AI Megathread:

      People call this “hallucination”. I think we should stop letting them assign a new name to an existing phenomenon. The LLM is malfunctioning.

      No, it is not. What the AI spits out is meaningless to the AI. It just spits out the most likely next set of words based on the input it received and what it has already generated. For it to malfunction, it would need to start writing sentences devoid of grammar. As long as what it writes is grammatically correct and a somewhat rational statement, it has succeeded.

      It is saying things that are wrong.

      Yes, but factually accurate answers are not the purpose. Grammatically correct sentences are.

      It is failing to do what it was designed to do.

      No. It is doing exactly what it is designed to do. It’s failing at doing what entrepreneurs, marketers, and other P.T. Barnum snake oil salespeople want the public to think it can do.
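To make the “most likely next set of words” point concrete, here’s a toy sketch of my own (nothing like a real model’s scale or architecture): a “language model” that just counts which word followed which in its training text, then emits the most frequent successor. The output looks grammatical; truth never enters into it.

```python
from collections import Counter, defaultdict

# Tiny invented "training corpus".
training_text = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
successors = defaultdict(Counter)
for word, nxt in zip(training_text, training_text[1:]):
    successors[word][nxt] += 1

def next_word(word):
    # Emit whatever most often followed `word` -- likelihood, not truth.
    return successors[word].most_common(1)[0][0]

word = "the"
sentence = [word]
for _ in range(4):
    word = next_word(word)
    sentence.append(word)

print(" ".join(sentence))  # grammatical-looking output, zero understanding
```

Scale that up by a few billion parameters and you get fluent paragraphs, but the success criterion is the same: plausible continuation, not correctness.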

      EDIT: To actually contribute something to the thread, about the only thing I am willing to use AI on for MU purposes is played-bys and maybe creating images on the wiki for the various venues on the grid. I grew tired of seeing Jason Momoa as the image for Seksylonewolf McRippedabs.

      Ceterum censeo Carthaginem esse delendam

      • Hobbie @Faraday
        last edited by

        @Faraday I boycott Amazon because AWS absolutely sucks.

        I’d explain more but I have to go write more Lambda functions while closing every pop-up telling me to use Q to streamline my processes.

        • Roz @Jynxbox
          last edited by

          @Jynxbox said in AI Megathread:

          @Pavel said in AI Megathread:

          @Faraday said in AI Megathread:

          The LLM doesn’t know whether something is true, and it doesn’t care.

          I know this may seem like a quibble, but I feel it’s an important distinction: It can’t do either of those things, because it’s not intelligent. It’s a very fancy word predictor, it can’t think, it can’t know, it can’t create.

          (This very obvious rant is not directed to anyone in particular but I still feel like it needed to be said…)

          Neither can a computer. It can’t think, it can’t know, it can’t create any more than an AI can. It can’t get nearly as close as AI can.

          This seems like a strange separation: AI is run on computers. A computer is simply a larger tool that you can run all sorts of smaller tools on.

          A computer can know some things, if you program it to do so. If you program a calculator, you instruct it on immutable facts of how numbers work.

          If GenAI successfully reports that 1+1=2, all it’s saying is that a lot of people on the internet have mentioned that’s probably the case. It’s searching a massive database of random shit and finding a bunch of instances where someone mentioned the text “1+1” and seeing that a bunch of those instances ended with “=2”. It’s giving you a statistically probable sentence. Due to this, it’s ridiculously, laughably easy to manipulate.

          The calculator on your computer knows that 1+1=2 because it knows what 1 is, and it knows what addition is, and it knows how to sum two instances of 1 together. Computers are very good at following strict rules and working within them when they are programmed to do so. And computers are very good at analyzing and iterating, and people have written really effective automation and AI tools (of the non-generative variety) to do that over the years.

          But yes: as you said, computers can’t produce raw creation. Which is kind of the point being made.
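The frequency-vs-rules contrast above can be sketched in a few lines (a toy of my own, not how any real system is built): one answer comes from counting what a corpus happens to say, the other from arithmetic the machine is instructed to follow.

```python
from collections import Counter

# "Statistical" answerer: scan a corpus for lines starting "1+1=" and report
# whichever continuation shows up most often. The corpus here is invented --
# stuff it with joke posts and the answer flips.
corpus = ["1+1=2", "1+1=2", "1+1=2", "1+1=3 lol", "1+1=3", "1+1=3", "1+1=3"]
votes = Counter(line.split("=", 1)[1].split()[0]
                for line in corpus if line.startswith("1+1="))
statistical_answer = votes.most_common(1)[0][0]

# Rule-following answerer: addition defined once, immune to what anyone posts.
def add(a, b):
    return a + b

print(statistical_answer)  # "3" -- the jokes outvoted the arithmetic
print(add(1, 1))           # 2, always
```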

          she/her | playlist

          • Pavel
            last edited by

            [image]

            He/Him. Opinions and views are solely my own unless specifically stated otherwise.
            BE AN ADULT

            • Faraday @Roz
              last edited by

              @Roz said in AI Megathread:

              If you program a calculator, you instruct it on immutable facts of how numbers work.

              I mean… kinda? A calculator app doesn’t really know math facts the way a third grader does. It doesn’t intuitively know that 1+1=2. It just responds to keypresses, turns them into bits, and shuffles the bits around in a prescribed manner to get an answer.

              I don’t point that out to be pedantic, but just to further contrast it with the way an LLM handles “what is 1+1”. Like you said, it’s based on statistical associations. It may conclude that 1+1=2 because that’s most common, but it could just as easily land on 1+1=3 because that’s a common joke on the internet. LLMs contain deliberate randomization to keep the outputs from being too repetitive. This is the exact opposite of the behavior you want in a calculator or fact-finder. And if you ask it some uncommon question, like 62.7x52.841, you’ll just get nonsense.
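That deliberate randomization is usually temperature sampling. Here’s a toy sketch with invented scores (not any real model’s internals): the same three candidate completions of “1+1=”, sampled cold and hot.

```python
import math
import random
from collections import Counter

# Pretend a model scored three candidate completions of "1+1=".
# Scores are invented for illustration.
logits = {"2": 5.0, "3": 3.5, "fish": 0.5}

def sample(logits, temperature):
    # Softmax with a temperature knob, then draw one token at random.
    weights = {tok: math.exp(score / temperature) for tok, score in logits.items()}
    total = sum(weights.values())
    r = random.random() * total
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # guard against float rounding

random.seed(0)
cold = Counter(sample(logits, 0.1) for _ in range(1000))  # nearly always "2"
hot = Counter(sample(logits, 2.0) for _ in range(1000))   # "3" shows up often

print(cold["3"], hot["3"])
```

Turn the temperature down and you get a repetitive but consistent answerer; turn it up and the joke answer starts winning a meaningful fraction of the time, which is exactly what you don’t want from a calculator.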

              Now sure, some GenAI apps have put some scaffolding around their LLMs to specifically handle math problems. But an LLM itself is still ill-suited for such a task. And when we understand why, we can start to understand why it’s also ill-suited for giving other accurate information.
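That scaffolding idea can be sketched like so (the routing regex and names are my own invention, not any vendor’s actual design): detect math-shaped input and hand it to a deterministic evaluator instead of the text predictor.

```python
import re

def calculator(expr):
    # Deterministic arithmetic on "a op b": strict rules, no statistics.
    m = re.match(r"\s*([\d.]+)\s*([+\-*/])\s*([\d.]+)\s*$", expr)
    a, op, b = m.groups()
    a, b = float(a), float(b)
    return {"+": a + b, "-": a - b, "*": a * b, "/": a / b}[op]

def answer(prompt):
    # Math-shaped prompts go to the tool; everything else would go to the LLM.
    if re.fullmatch(r"[\d.\s+\-*/]+", prompt):
        return calculator(prompt)
    return "<hand prompt to the language model>"

print(answer("62.7*52.841"))   # real multiplication, not a statistical guess
print(answer("what is love"))  # falls through to the text predictor
```

The accuracy comes entirely from the bolted-on calculator; the LLM part is still doing plausible-continuation, which is why the same trick doesn’t generalize to “give me accurate facts about anything”.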
