>>11541
What does it matter if it's .org or the government of Canada?
Basically there's only one thing we can do right now: remove that shit manually and change the rules.
I also said this in the chat:
I am just very skeptical of having a way to "code around" AI spam, because to me the only real way is to have AI do it. That's just based on what I've read about training AI to detect AI, and I'm not that well-read on the subject anyway.
I didn't really expect this to be a problem so soon, and maybe it's not the end of the world just yet, but when I was designing spamnoticer I just said let's do the best we can before I have to use AI. So yeah, a database of existing shit won't help here. There may be other approaches that aren't AI-based, but I'm not aware of them.
What I was thinking is that having a large enough dataset to fine-tune an existing large language model (like llama) would let us have a chatbot that tells you whether a post is AI or not. But that's an active area of research, needs some unknown-to-me number of examples to train on, and a lot of compute to run. It's theoretically possible but not realistic right this second.
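Just to make it concrete, here's a rough sketch of what that fine-tune could look like, assuming we already had a labeled dataset of posts (human vs. AI) in CSV files. None of this is spamnoticer code: the model name, file names, and hyperparameters are all placeholders, and a real llama-class model would need way more compute than this.

# Rough sketch: fine-tune a small classifier on labeled posts.
# Assumes posts_train.csv / posts_eval.csv with "text" and "label" columns (0 = human, 1 = AI).
# Model name, paths, and hyperparameters are placeholders, not real settings.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL = "distilbert-base-uncased"  # stand-in; a llama-scale model needs far more compute

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

data = load_dataset("csv", data_files={"train": "posts_train.csv",
                                       "eval": "posts_eval.csv"})

def tokenize(batch):
    # Truncate long posts so they fit the model's context window
    return tokenizer(batch["text"], truncation=True, max_length=512)

data = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ai-post-classifier",
                           num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=data["train"],
    eval_dataset=data["eval"],
    tokenizer=tokenizer,  # lets Trainer pad batches on the fly
)
trainer.train()

Then you'd run each new post through the classifier and flag anything above some probability threshold. The hard parts are still the same ones I mentioned: getting enough labeled examples and finding the compute.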