Brand MU Day

    AI PBs

    This topic was forked from PBs Tez
    • Jennkryst

      Everyone is forgetting that this forum would not exist, save for an AI PB who went ‘beep boop, respect muh authoritah’ and then banned everyone who went ‘lol, no’.

      … wait, does AI PB not mean Robotic Purring Barrister?

      Mummy Pun? MUMMY PUN!
      She/her

      • Pavel @RedRocket

        @RedRocket said in AI PBs:

        That’s why, legally speaking, it isn’t copying.

        That very much remains to be seen.

        He/Him. Opinions and views are solely my own unless specifically stated otherwise.
        BE AN ADULT

        • Faraday @RedRocket

          @RedRocket said in AI PBs:

          That’s exactly what artists do when they make art. The human artist has the ability to choose which patterns to combine into a new product but it’s all alchemy! It’s just mixing things together to get something new.

          Just because the output is the same as a human doesn’t mean that the process is the same. A human, an abacus, a calculator, Google Sheets, and an LLM (sometimes) can all calculate 8+6, but only the human actually understands math and can merge that understanding with an understanding of the actual world.

          This reminds me of something Gary Marcus said in an article about AI hallucinations, where he explains how ChatGPT hallucinated a simple fact (a birthplace) about a celebrity (Harry Shearer).

          Because LLMs statistically mimic the language people have used, they often fool people into thinking that they operate like people.

          But they don’t operate like people. They don’t, for example, ever fact-check (as humans sometimes, when well motivated, do). They mimic the kinds of things people say in various contexts. And that’s essentially all they do.

          You can think of the whole output of an LLM as a little bit like Mad Libs.

          [Human H] is a [Nationality N] [Profession P] known for [Y].

          By sheer dint of crunching unthinkably large amounts of data about words co-occurring together in vast corpora of text, sometimes that works out. Shearer and Spinal Tap co-occur in enough text that the system gets that right. But that sort of statistical approximation lacks reliability. It is often right, but also routinely wrong. For example, some of the groups that Shearer belongs to, such as entertainers, actors, comedians, musicians and so forth, include many people from Britain, so words for entertainers and the like often co-occur with words like British. To a next-token predictor, a phrase like “Harry Shearer” lives in a particular part of a multidimensional space. Words in that space are often followed by words like “British actor”. So out comes a hallucination.

          That is just not how human brains operate.
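
          To make that concrete, here is a minimal sketch of the co-occurrence-driven prediction Marcus is describing. It's a toy bigram model in Python over an invented mini-corpus (real LLMs are neural networks over subword tokens, so this is only an analogy for the statistics, not anyone's actual implementation):

          ```python
          # Toy bigram "next-token predictor" built from raw co-occurrence counts.
          # Hypothetical illustration: real LLMs are neural nets trained on vast
          # corpora, but the purely statistical failure mode is the same.
          from collections import Counter, defaultdict

          # Invented mini-corpus in which most entertainers happen to be British.
          corpus = (
              "harry shearer is an american actor . "
              "christopher guest is a british actor . "
              "michael palin is a british actor . "
              "john cleese is a british comedian ."
          ).split()

          # Count which word follows which.
          follows = defaultdict(Counter)
          for prev, nxt in zip(corpus, corpus[1:]):
              follows[prev][nxt] += 1

          def generate(word, n=4):
              """Greedily emit the most common continuation -- no fact-checking."""
              out = []
              for _ in range(n):
                  word = follows[word].most_common(1)[0][0]
                  out.append(word)
              return " ".join(out)

          # Statistically likely, factually wrong: Shearer is American.
          print(generate("shearer"))  # -> 'is a british actor'
          ```

          The model has no idea who Shearer is; "british" simply follows "is a" more often than anything else in its tiny training text, so out comes a confident hallucination.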
