Brand MU Day

    Posts by Griatch

    • RE: Installing Arxcode?

      @Cobalt

      Alas, as is outlined in the old tutorial, Arxcode is maintained by the Arx folks and not by Evennia devs.

      I wrote that tutorial as a service back then when it made sense (and it did indeed easily install at the time, promise!) … but at this point I don’t think ArxCode has kept up with Evennia development (it was using an old Evennia version already when it was created; it’s even still on Python 2).

      I’ll check in with Tehom if there are plans to update the tutorial (or Arxcode).

      posted in Game Gab
    • Evennia 4.0 released

      I realized I’ve not posted here in a while, but Evennia, the Python MU* creation system, is moving forward, now at version 4.0.0 (we follow semantic versioning).

      Release post

      Evennia 4.0.0

      March 17, 2024

      Major release. Check out the backwards-incompatible changes below.

      Version updates

      • Feature: Support Python 3.12 (Griatch). Currently supporting 3.10, 3.11 and 3.12. Note that 3.10 support will be removed in a future release.
      • Feature: Update the evennia[extra] scipy dependency to 1.12 to support the latest Python. Note that this may change which (equivalent) path is picked when following the xyzgrid contrib’s pathfinding.

      Backwards incompatible changes

      • Feature: Backwards incompatible: DefaultObject.get_numbered_name now gets the object’s name via .get_display_name for better compatibility with recog systems.
      • Feature: Backwards incompatible: Removed the (#dbref) display from DefaultObject.get_display_name, instead using the new .get_extra_display_name_info method for getting this info. The Object’s display template was extended for optionally adding this information. This makes showing extra object info to admins an explicit action and opens up get_display_name for general use.
      • Fix (partly backwards incompatible depending on your usage): DefaultObject.get_numbered_name used .name instead of .get_display_name before, which broke recog systems; see the sketch after this list.
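
      As a hypothetical sketch of what the recog compatibility enables (the RecogCharacter class and the "recogs" Attribute are made up for this example, they are not part of Evennia): since get_numbered_name now routes through get_display_name, a recog-style override like the one below also affects numbered listings.

      # Hypothetical sketch - not core Evennia code.
      from evennia import DefaultCharacter

      class RecogCharacter(DefaultCharacter):
          def get_display_name(self, looker=None, **kwargs):
              # return whatever name (if any) the looker has 'recognized'
              # this character as, falling back to the normal key
              recogs = looker.attributes.get("recogs", default={}) if looker else {}
              return recogs.get(self.key, self.key)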

      New features

      • Feature: Add ON_DEMAND_HANDLER.set_dt(key, category, dt) and .set_stage(key, category, stage) to allow manual tweaking of task timings, for example for a spell speeding a plant’s growth (Griatch). See the sketch after this list.
      • Feature: Add ON_DEMAND_HANDLER.get_dt/stages(key, category, **kwargs), where the kwargs are passed into any stage-callable defined with the stages. (Griatch)
      • Feature: Add the use_assertequal kwarg to the EvenniaCommandTestMixin testing class; this uses Django’s assertEqual instead of the default, more lenient checker, which can be useful for testing table whitespace (Griatch)
      • Feature: New utils.group_objects_by_key_and_desc for grouping a list of objects based on the visible key and desc. Useful for inventory listings (Griatch)
      • Feature: Add DefaultObject.get_numbered_name return_string bool kwarg, for only returning singular/plural based on count instead of a tuple with both (Griatch)
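
      A minimal sketch of the new timing tweaks, using the signatures from the notes above (the import path is assumed here, and the key/category names are invented for this example):

      # Sketch only - import path assumed, key/category names made up.
      from evennia import ON_DEMAND_HANDLER

      def cast_growth_spell():
          """A spell that speeds up a plant's growth."""
          # act as if ten extra minutes had already passed for the task ...
          ON_DEMAND_HANDLER.set_dt("rose_bush", "growth", 600)
          # ... or jump the task directly to a later stage
          ON_DEMAND_HANDLER.set_stage("rose_bush", "growth", "blooming")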

      Bug and security fixes

      • Fix: Removed the @reboot alias to @reset so as not to mislead people into thinking you can do a portal+server reboot from in-game (you cannot) (Griatch)
      • Fix: Refactor the Clothing contrib’s inventory command to align with Evennia core’s version (michaelfaith84, Griatch)
      • Fix: Limiting search by tag didn’t take search-string into account (Griatch)
      • Fix: SSH connection caused a traceback in protocol (Griatch)
      • Fix: Resolve a bug when loading on-demand-handler data from database (Griatch)
      • Security: Potential O(n²) regex exploit in rpsystem regex (Griatch)
      • Security: Fix potential redirect vulnerability in character page redirect (Griatch)
      • Doc fixes (iLPdev, Griatch, CloudKeeper)
      posted in Helping Hands evennia
    • RE: AI in Games

      One interesting suggestion I saw was to use a categorization LLM (so not one trained for chat, but for categorizing text input into known categories), in order to allow players to input free-form text that then gets categorized/interpreted into one of a few hand-written responses, like quest info etc.
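
      A minimal sketch of that idea, assuming the Hugging Face transformers library and made-up category labels:

      # Sketch: zero-shot classification of free-form player input into
      # known categories (the labels are invented for this example).
      from transformers import pipeline

      classifier = pipeline("zero-shot-classification")
      labels = ["quest info", "shop prices", "local rumors", "farewell"]

      result = classifier("Heard anything strange around town lately?",
                          candidate_labels=labels)
      topic = result["labels"][0]  # best-matching category
      # ... then send the hand-written response for that topic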

      posted in Game Gab
    • RE: AI in Games

      With Evennia now supporting AI NPCs, it will be interesting to see what people come up with 🙂

      posted in Game Gab
    • RE: AI Megathread

      @Rinel said in AI Megathread:

      I very strongly support the implementation of strict regulations on how the models are trained, with requirements that all training data be listed and freely discoverable by the public.

      Proprietary models like Midjourney and OpenAI’s don’t release any of this stuff, alas. But if you stick to OSS models, like Stable Diffusion, you can freely search their training data here (they also use other public sources). There are tens of thousands of LLM models for various purposes and active research on Hugging Face alone; they tend to be based on publicly available training data sets.

      posted in No Escape from Reality
    • RE: AI Megathread

      @Faraday You talk as if it’s a clear-cut thing that these models are based on “theft”. Legally speaking, I don’t think this is really established yet - it’s a new type of technology and copyright law has not caught up.

      If you (the human) were to study Pikachu (as presented in publicly available, but copyrighted, images) and learn in detail how he looks, you would not be breaching copyright. Not until you actually took that knowledge and made fan-art of him would you be in breach of copyright (yes, fan-art breaches copyright; it’s just that it’s usually beneficial to the brand, and most copyright holders seldom enforce their copyright unless you try to compete or make money off it).

      In the same way, an AI may know how Pikachu looks, but one could argue that this knowledge does not in itself infringe on copyright - it just knows how Pikachu looks, after all, similarly to you memorizing his looks by just looking.

      One could of course say that this knowledge inherently makes it easier for users of the AI to breach copyright. If you were to commission Pikachu from a human artist, both you and the artist could be on the hook for copyright infringement.

      So would that put both the AI (i.e. the company behind the AI) and the commissioning human in legal trouble the moment they write that Pikachu prompt? It’s interesting that the US Supreme Court has ruled that AI-generated art cannot be copyrighted in itself. So this at least establishes that the AI does not itself have a personhood that can claim copyright (which makes sense).

      Now, I personally agree with the sentiment that it doesn’t feel good to have my works included in training sets without my knowledge (yes, I’ve found at least 5 of my images in the training data). But my feelings (or the feelings of other artists) don’t in themselves make this illegal or an act of thievery. That’s up to the legal machinery to decide, and I think it’s not at all clear-cut.

      posted in No Escape from Reality
    • RE: AI Megathread

      @Rinel said in AI Megathread:

      @Faraday said in AI Megathread:

      Now I realize that copyright laws and internet regulations are imperfect, but imagine what the world would look like if everyone had just folded over Napster.

      More seriously, LLMs are far, far worse than Napster, which hurt recording companies way more than it hurt actual musicians. I’m not taking a stance on the ethics of pirating, but there’s a difference between people copying things that others have made and people outright displacing human creators.

      So, if I understand you right, it’s not unethical training sourcing that is the issue for you (as it seems to be for Faraday), but the societal implications of the tech itself?

      That’s a valid view. But while we can regulate and fix the ethics of training sets, we won’t realistically stop AI from being used and possibly upending a lot of people’s jobs, just as countless new technologies have done in the past.

      I’m not saying I want people to lose their jobs, I’m just saying this is something we need to learn and adapt to rather than hope that the genie can be put back in its bottle.

      posted in No Escape from Reality
    • RE: AI Megathread

      By the way, here’s an example of a smaller AI model trained using only public-domain images. The tech will move on even if regulators pull the brakes.

      Now I realize that copyright laws and internet regulations are imperfect, but imagine what the world would look like if everyone had just folded over Napster. “Oh well, data can be shared easily now; screw the musicians.” Or if YouTube had just let everyone upload every movie they owned, free for anyone to watch. “Oh well, movies can be shared easily now; screw the filmmakers.”

      You can already generate your own AI music. In a year you’ll be able to generate your own movies from a prompt, so that industry is also in for an upheaval …

      posted in No Escape from Reality
    • RE: UrsaMU, back from the project graveyard.

      @Kumakun said in UrsaMU, back from the project graveyard.:

      @Griatch Honestly? It taught me that I didn’t have to get super fancy about things. Seriously. It’s not at all a slight. Evennia isn’t overly engineered and is approachable when making customizations. It’s kept simple, and I really appreciate how pythonic the project is.
      I’m also in love with just having your business logic to worry about. I’m not a fan of having to dig through my evenv to find it, but I could say the same about digging through a node_modules folder.
      The thing that really got me moving, though, was the EJS. I silent-screamed a little when going through the web side of things. Totally started reliving EJS nightmares. I could have made a custom skin for it, sure - but the error reporting for EJS is horrible, or at least it was back when I didn’t have many other choices!
      Ultimately, when I deem this experiment successful? I’ll rewrite it in Golang, and harden it past a prototype MVP - at least that’s the plan!

      Thanks for the answer! Glad to hear Evennia’s design was a little inspirational. 😄 The webclient code was admittedly first written at a time when TypeScript was but a glimmer in the eye of some guru somewhere; it was written to be as generic as possible. One could certainly rework it in a more modern style today. Time, time …

      posted in Game Gab
    • RE: AI Megathread

      @shit-piss-love said in AI Megathread:

      @Griatch said in AI Megathread:

      But yeah, for professional artists, I fear the future will be grim unless they find some way to go with the flow and find a new role for themselves; it will become hard for companies not using AI to compete.

      This is what I expected, but the first people that made me start considering the positive effects of AI on the field of professional art are my friends who are professional artists. Most of them seem really happy with adding to their toolkit: concept art in hours that would previously take them days, or, in the same amount of time, significantly more iterations resulting in what they feel is a better final product. The takeaway I’ve got from conversations with them is that it will come down to the quality of the studio whether they use AI tools to level up the art departments, or attempt to replace them. But as one said, “An AI isn’t going to take my job. It will be a professional peer who knows how to use AI better than me.”

      That’s encouraging to hear! If your friends feel they are on top of the coming changes, all the more power to them.

      posted in No Escape from Reality
    • RE: AI Megathread

      @imstillhere said in AI Megathread:

      @Griatch said in AI Megathread:

      As for me, I find it’s best to embrace it; Digital art will be AI-supported within the year. Me wanting to draw something will be my own hobby choice rather than necessity.

      are you aware that artists for whom this is NOT a hobby are suffering from this thing you’re excited to “embrace”

      When photography displaced illustrators there was a new human art form that supported human creativity and jobs. When digital art allowed quick work in a new medium it was still human artists at work.

      Yes, I expect this will dramatically change the art industry. I can see why people are legitimately concerned. The same is true for a lot of white-collar jobs (for once, blue-collar workers may be the safest off). While I don’t work as a professional artist, I expect my own job in IT to fundamentally change or even go away too, as programmers eventually become baby-sitters of AI programmers rather than actually coding ourselves. Since I think that this is inevitable, I’m trying to learn as much as I can about it already.

      AI removes the human and removes the employment and does so by unethical sourcing of human effort.

      There are definitely discussions to be had about the ethical sourcing of the training data; OSS models (which is what I use, since I run these things locally) are already trying to shift to more ethically sourced, freely available data sets (but yes, there are still issues there, considering the size of the corpus). You can in fact look into those data sets if you want - they are publicly searchable. Companies with proprietary solutions (Midjourney is particularly bad here) will hopefully be forced to do so by lawsuits and regulation, eventually. But that said, I’d think that even an AI completely trained on public-domain images would still change the industry, so it’s not like this changes the fundamental fact of the matter: LLM processing is here to stay.

      To say that’s no different than painting in photoshop is naive at best and disingenuous at worst.

      AI image generation is only one aspect of LLMs. It on its own is certainly not the same as painting in Photoshop, and I never suggested as much. But I do expect Photoshop to have AI support to speed up your painting process in the future - for example, you sketch out a face and the AI cleans it up for you, that kind of thing (not that I use Photoshop, I’m an OSS guy. 😉 ). But yeah, for professional artists, I fear the future will be grim unless they find some way to go with the flow and find a new role for themselves; it will become hard for companies not using AI to compete.

      posted in No Escape from Reality
    • RE: AI Megathread

      @Faraday said in AI Megathread:

      @Griatch said in AI Megathread:

      But the LLMs are advancing very quickly now, and I think it’s unrealistic to think they will remain as comparatively primitive as they are now. In a year or two, you will be able to get that Pikachu image exactly like you want it, with realistic light sabers, facial expressions and proper lighting.

      The writing models are extremely unlikely to advance in this same kind of way because they lack context. They are word calculators, stringing words together without really understanding what those words mean because there is no actual intelligence behind the engines. From my understanding, the art versions work in similar ways and are therefore unlikely to make the same leaps, but admittedly I haven’t studied them as much.

      It’s indeed a limit for text generation, not so much for image generation as far as I understand. That said, I believe what will happen is that multiple agents will be working together instead, each a specialist in its field, holding its own context. This is how our brain works (if you squint a bit) and apparently how GPT-4 is designed. But note that a million-token-context research paper is already out, and it has had several follow-ups since. And considering how fast new research is coming out on LLMs, it would not surprise me if we see a few more breakthroughs sooner rather than later. 🤷

      posted in No Escape from Reality
    • RE: AI Megathread

      @Rinel said in AI Megathread:

      @Griatch said in AI Megathread:

      But the LLMs are advancing very quickly now, and I think it’s unrealistic to think they will remain as comparatively primitive as they are now. In a year or two, you will be able to get that Pikachu image exactly like you want it, with realistic light sabers, facial expressions and proper lighting.

      I think this is premised on certain beliefs about the scaling of LLM outputs with their datasets. These things struggle terribly with analogy. A human can be presented with an image of a mermaid and one of a centaur and then be told “draw a half-human/half-lion like what you just saw,” and they can do that. LLMs can’t. It’s fundamentally not how they operate.

      Fair enough, I guess we’ll see in a year or two. I’m not particularly advocating for this to happen, I just expect it to be inevitable. The cat’s out of the bag; the technological advancement will not stop.

      (I know you could accomplish the same thing with an LLM by refining the input to be something like “human from the waist up, lion from the waist down” or some other more explicit method, but that doesn’t change my underlying point. LLMs are hugely limited in their capacity to adapt on the fly.)

      Yes, they are machines basically solving matrix math. Thing is, one can argue that so are we. It’s a matter of scale and helper methods.

      @Griatch said in AI Megathread:

      As for me, I find it’s best to embrace it; Digital art will be AI-supported within the year. Me wanting to draw something will be my own hobby choice rather than necessity.

      Barring an economic revolution that is long-coming and never here, the result of this particular utopia is the collapse of widespread art as the practice reverts to only those privileged enough to spend large amounts of time on hobbies that they can’t use to help make a living. It’s difficult to fully describe how horrific this scenario is, but it’s the death of dreams and creativity for literal millions of people.

      I agree: The rise of AI will change society. Many jobs will change or be lost. I expect my day job (computer development) to be fundamentally changed in just a year or two. Not because I advocate for it, necessarily; I just think that’s the way it’ll go. You may wish you could put the genie back in the bottle, but there’s no practical way this will ever happen - the advantages of AI integration are so great that someone else will just leverage it in your stead.

      posted in No Escape from Reality
    • RE: AI Megathread

      I have done art for a long time, and I see AI generation as a very interesting thing. While prompting and handling the technical details around generating an AI image are certainly not trivial if you really dive into the details and want your own style to it, I think the process is less that of being an artist and more that of being a very picky commissioner - you are commissioning an artwork from the AI, requesting multiple revisions to get it right.

      For my own art, I’m experimenting with generating sketches this way - quickly generating a scene from different angles or playing with directions of lighting - basically to quickly play with concepts that I then use as a reference when painting it myself. In this sense, LLMs are artists’ tools. People forget that doing digital art at all was seen as ‘cheating’ not too long ago too.

      You can certainly argue about legality or ethics when it comes to building a data set. There are OSS systems that have taken more care to only use actually allowed works in the training. But I think there are some misconceptions about what is actually happening in an LLM and how similar its work process actually is to that of a human.

      A decade ago, the new hot thing for digital artists was “Alchemy”. This was a little program that let you randomly throw shapes and structures onto the canvas. You’d then look at those random shapes and let them jolt your imagination - maybe that line there could be an arm? Is that the shape of a dragon’s head? And so on. It’s like finding shapes in the clouds and then fleshing them out into a full image. David Revoy showcased the process nicely at the time.
      The interesting thing with AI generation is that it’s doing the same. It’s just that the AI starts from random noise (instead of random shapes big enough for a human eye to see). It then goes about ‘removing the noise’ from the image that surely is hidden in that noise. If you tell it what you are looking for (prompting), it will try to find that in the noise. As it ‘de-noisifies’ the image, the final result emerges. The process is eerily similar to me getting inspired by looking at random shapes. It’s ironic that ‘creativity’ is one of the things AIs would catch up on first.
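
      To make the analogy concrete, here is a schematic sketch of the denoising loop just described (not a real diffusion model; predict_noise stands in for the trained network):

      import numpy as np

      # Schematic sketch of guided denoising: start from pure noise and
      # repeatedly remove the noise a (placeholder) model predicts.
      def denoise(predict_noise, prompt_embedding, steps=50, size=(64, 64, 3)):
          image = np.random.randn(*size)      # start from random noise
          for t in reversed(range(steps)):    # walk from noisy toward clean
              predicted = predict_noise(image, t, prompt_embedding)
              image = image - predicted / steps  # remove a little noise per step
          return image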

      Does the AI understand what it is producing? Not in a sentient kind of way, but sort of. It doesn’t have all of those training images stored anywhere; that’s the whole point - it only knows the concept of how an arm appears, and looks for that in the noise. Now, the relationships between these and the holistic concept of a 3D object are hard to train - this is why you get arms in strange places, too many fingers, etc. The AI is only as good as its data set.

      We humans have the advantage of billions of years of evolution to understand the world we live in, as well as decades of 24/7 training of our own neural nets ever since we were born. But the LLMs are advancing very quickly now, and I think it’s unrealistic to think they will remain as comparatively primitive as they are now. In a year or two, you will be able to get that Pikachu image exactly like you want it, with realistic light sabers, facial expressions and proper lighting.
      I’m sure some will always dislike AI art because of what it is; but that will quickly become a subjective (if legit) opinion; it will not take long before there are no messed-up fingers, errant ears or stiff compositions making an image feel ‘AI-generated’.

      As for me, I find it’s best to embrace it; Digital art will be AI-supported within the year. Me wanting to draw something will be my own hobby choice rather than necessity. Should I want to, I could train an LLM today on my 400+ pieces of artwork and have it generate images in my style. I don’t, because I enjoy painting myself rather than commissioning an AI to do it for me. 🙂


      TL;DR: AIs have more human-like creativity than we’d like to give them credit for. Very soon we will have nothing objective to complain about when it comes to the art-technical prowess of AI-generated art.

      posted in No Escape from Reality
    • RE: AI Megathread

      @somasatori I can’t speak for Jumpscare, but I’m pretty sure no game is using this yet. It has only been in the main branch for two weeks or so.

      posted in No Escape from Reality
    • RE: AI Megathread

      BTW, the latest main-branch Evennia now supports adding NPCs backed by an LLM chat model. 🙂

      https://www.evennia.com/docs/latest/Contribs/Contrib-Llm.html

      posted in No Escape from Reality
    • RE: UrsaMU, back from the project graveyard.

      Good luck! Always good to see more options in the MU* realm. 👍

      Out of curiosity, apart from the language, was there some particular part of Evennia that inspired you, or which you aim to improve upon with your server?

      posted in Game Gab
    • RE: What is a MUSH?

      @Faraday

      Sure. It probably comes down to game styles, yes. For example, a multi-descer is completely unheard of in most other genres of MU*. In other styles, dynamic descriptions are a big deal.

      Here are some examples. The Evennia FuncParser inlines actually support nested calls (so they will be executed from the inside out, like a regular Python call), but for brevity, assume we have made some custom functions for our particular game and are using the rpsystem contrib:

      $You() has pock-marked skin and is in his $approxage(10). He wears $clothes(). He looks $health(). 
      $ifguild(thieves, He has the secret thief's guild mark on his hand.)
      $ifskill_gt(detectmagic, 10, He has a faint aura of magic around him.)
      

      Here, someone who can’t detect magic, doesn’t know me and is not in the thieves’ guild will see

      The tall man has pock-marked skin and is in his thirties. He wears a black hoodie. He looks a bit sick.
      

      Whereas if you know me (have recognized me previously), are in the thieves’ guild and have enough in the detectmagic skill, you will see

      Griatch has pock-marked skin and is in his thirties. He wears a black hoodie of the type that has a lot of hidden pockets. He looks a bit sick. 
      He has the secret thief's guild mark on his hand. 
      He has a faint aura of magic around him.
      

      This works because when the FuncParser parses the description, it is fully aware of who the description is targeted at. So we can have the $clothes() inline give some extra info (it probably checks the looker’s thieves’ guild membership inside its code), and so on.
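
      A hypothetical sketch of what such a $clothes() callable could look like (the guild tag and the caller/receiver kwarg names are assumptions for this example; see the FuncParser docs for the exact registration mechanics):

      # Hypothetical sketch - not an actual rpsystem/Evennia function.
      def clothes(*args, caller=None, receiver=None, **kwargs):
          # caller is the one being looked at, receiver the onlooker
          desc = "a black hoodie"
          if receiver and receiver.tags.has("thieves_guild", category="guild"):
              desc += " of the type that has a lot of hidden pockets"
          return desc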

      As for the stance example, here’s how it could look for Evennia:

      Director stance:

      msg = f"{char.key} changes $gender(poss) stance to $stance()."
      

      Actor stance:

      msg = "$You() $conj(change) $gender(poss) stance to $stance()."
      

      (Note: The $gender() inline is not in default Evennia, since core objects don’t have genders; it’s something deemed game-specific. But maybe we should add it to make this structure easier to build in actor stance.)

      This is called in code as

      room.msg_contents(msg, from_obj=char)
      

      And would come out as

      Faraday changes her stance to cover.  # director stance
      You change your stance to cover.         # actor stance, your view
      Faraday changes her stance to cover.  # actor stance, everyone else's view.
      
      posted in Game Gab
    • RE: What is a MUSH?

      Sorry for double-post, catching up on some stuff in the thread.

      @somasatori

      As it stands, the multidescer contrib mentioned above will probably be the easier route for a multidescer (see link above). To be honest, it was created before the FuncParser (what is called ‘code in descs’ above) existed, so it’s quite likely one could implement a multidescer using the FuncParser too; I haven’t tried.

      The FuncParser is overall considerably more than just ‘code in descs’, though. It is a full, safe parser for dynamically modifying strings based on the situation or on who sees them. You embed strings like $foo(arg, arg) and the return values of these (Python functions you implement under the hood) will replace that position in the string. Descriptions are just one use case; another is to implement ‘actor-stance emoting’, so that you can have your game systems send strings like "$You() $conj(smile) to $you(otherperson)" and everyone will see their appropriate version (You vs. your name, etc). It’s used for prototypes and in other places too.
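
      A minimal sketch of that mechanism, using evennia.utils.funcparser directly (the callable name here is made up):

      # Sketch: a custom inline replaced by a Python function's return value.
      from evennia.utils.funcparser import FuncParser

      def foo(*args, **kwargs):
          # args arrive as strings from inside $foo(...)
          return f"<foo got {', '.join(args)}>"

      parser = FuncParser({"foo": foo})
      print(parser.parse("Testing $foo(bar, baz) inline."))
      # -> Testing <foo got bar, baz> inline.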

      posted in Game Gab
    • RE: What is a MUSH?

      Unsurprisingly, I agree with @Faraday concerning softcode 🙂

      At least concerning Evennia, the FuncParser is indeed the closest we have moved to something similar to softcode, but it intentionally stops at being a way to create advanced formatting solutions rather than offering the pieces needed to be Turing-complete. As I also write in its documentation, one could in principle make some sort of LISP-like language using the FuncParser (lots of parentheses, all functional programming), but it’s not something I plan to add to core Evennia, at least.

      As for a multidescer - yes, Evennia has an optional multidescer contrib. I created one after people from another MUSH-heavy forum told me that this was one of the more common things MUSHers use softcode for. You can read about the Evennia implementation here: https://www.evennia.com/docs/latest/Contribs/Contrib-Multidescer.html

      posted in Game Gab