
Wikipedia talk:Administrators' noticeboard

Request for advice

Can I open a case at ANI based on Wikipedia talk:Did you know#Really? regarding reviewers not fulfilling their duties and pretending to be busy? As a result, my nominations were closed due to timeout through no fault of my own. These closures have undermined my efforts and hard work. I'm asking here because I'm not sure if my case is eligible for the ANI process. Thanks Hteiktinhein (talk) 05:40, 14 November 2024 (UTC)[reply]

No. We are all volunteers here, which means that if people are busy, then they're busy. We don't submit timesheets for our work on Wikipedia, and no editor is ever required to make an edit if they don't want to. Writ Keeper  14:19, 14 November 2024 (UTC)[reply]
So what does DYK stand for, then? Hteiktinhein (talk) 20:42, 14 November 2024 (UTC)[reply]
If you are only editing to get recognition at DYK then you are not here to build an encyclopedia. DYK is incidental to our work here: it is not the main point of it. Phil Bridger (talk) 20:55, 14 November 2024 (UTC)[reply]
To answer your question, yes, you could open a thread asking for there to be consequences for other users not doing what you want them to do, but the most likely result of such a thread would be a WP:BOOMERANG for you, so you probably shouldn't do that. Just Step Sideways from this world ..... today 04:17, 20 November 2024 (UTC)[reply]
No, absolutely not. This is absurd. Wikipedia is a volunteer project. We can stop editors from breaking policies, but we can't make editors actively do any tasks. What would intervention even do? Do you think warnings, blocks or topic bans are going to help increase DYK participation? Sergecross73 msg me 12:00, 20 November 2024 (UTC)[reply]
Not to mention that there appeared to be significant problems with the nomination, which is a BLP. Black Kite (talk) 15:19, 20 November 2024 (UTC)[reply]
I think generally more difficult cases get passed over, particularly with the reciprocal nature of DYK. People are naturally going to want to approve an easy one. Secretlondon (talk) 18:57, 26 November 2024 (UTC)[reply]

Are ANI notices necessary when requesting to revoke TPA?

A few times I've come to ANI to request that TPA be revoked for blocked users. I have served such users with ANI notices because the instructions say to do so, but I am wondering if that is necessary: first, because the user cannot comment at ANI, and second, because I only do so in cases of obvious promotion, vandalism, or other bad-faith edits. TornadoLGS (talk) 03:35, 20 November 2024 (UTC)[reply]

This can be useful for talk page watchers (and sometimes, if an admin watches the talk page and the request is reasonable, they can revoke TPA themselves), but of course the general efficiency is low, as with pretty much everything that happens on Wikipedia. Ymblanter (talk) 10:07, 20 November 2024 (UTC)[reply]

Speedy archive AN(I) topics about LTA MAB?

I think we should archive topics discussing the LTA MidAtlanticBaby very soon after we are done talking about them. Maybe a one-day (or 1.5-day) archival time would suit?

At the moment those threads are nothing but a magnet for that LTA, since the discussions seem to have run their course. Perhaps it would hurt a lot less to archive them early than to block off all IP and new editors for the entire time those threads linger on the noticeboards? — AP 499D25 (talk) 11:10, 25 November 2024 (UTC)[reply]

I agree. WP:RBI. Phil Bridger (talk) 13:36, 25 November 2024 (UTC)[reply]
...Assuming he doesn't edit-war to re-add them. I wouldn't put it past them. —Jéské Couriano v^_^v threads critiques 19:27, 26 November 2024 (UTC)[reply]

Remarkably consistent ChatGPT report templates

Lately AN/I seems to have been inundated with incident reports that are obviously the products of an LLM. What I find kind of confusing, though, is how consistent the report templates are, combined with how poorly suited they are to adjudicating the kinds of incidents this page is designed to address.

Is there any way we can just have a blanket operating procedure to immediately close any incidents created using these templates? It'd probably save a lot of people a lot of time. Simonm223 (talk) 18:12, 26 November 2024 (UTC)[reply]

Can you give some recent examples? Floquenbeam (talk) 18:54, 26 November 2024 (UTC)[reply]
Here's the most recent. They keep popping up.-- Ponyobons mots 18:59, 26 November 2024 (UTC)[reply]
Oh. Duh. I was looking at AN. Floquenbeam (talk) 19:02, 26 November 2024 (UTC)[reply]
(after edit conflict) It's pretty obvious which reports they are, but I won't go into details because people will just tell their favourite LLM to format things differently and they will be harder to spot. WP:BEANS may be relevant. Phil Bridger (talk) 19:07, 26 November 2024 (UTC)[reply]
ChatGPT sure has a strange idea of what an effective ANI request looks like. signed, Rosguill talk 19:19, 26 November 2024 (UTC)[reply]
I know, right? It's like the bot produces something remarkably formal, rigid and entirely lacking relevant information. Simonm223 (talk) 19:26, 26 November 2024 (UTC)[reply]
Seems like we may as well just close any of these immediately as they arise. -- asilvering (talk) 19:28, 26 November 2024 (UTC)[reply]
n.b. I ended up blocking the filing editor for sockpuppetry, as this was a clear behavioral match to the sockfarm that has been plaguing Hamis Kiggundu. signed, Rosguill talk 19:39, 26 November 2024 (UTC)[reply]
(edit conflict) "Entirely lacking relevant information" is a good description. It also describes unblock requests generated by ChatGPT and the like, of which we get a lot. Note that there are templates, Template:uw-ai1 through Template:uw-ai4, which may be helpful. I endorse just closing any such report, as per asilvering. --Yamla (talk) 19:30, 26 November 2024 (UTC)[reply]
Does ChatGPT actually know how ANI/AN work? Wikipedia is just one website among many; perhaps it assumes that what works for a petition or a job interview also works for a report on a Wikipedia noticeboard. Jo-Jo Eumerus (talk) 08:39, 27 November 2024 (UTC)[reply]
Yes, it knows the purpose, the audience and the nature of typical discussions of both ANI and AN, and the differences, with the caveat that no one currently knows what 'knows' really means for LLMs. Sean.hoyland (talk) 09:19, 27 November 2024 (UTC)[reply]

As an aside, I decided to have an LLM chatbot summarize this conversation and propose solutions:

Artificially intelligent analysis — Red-tailed hawk (nest) 20:04, 26 November 2024 (UTC)[reply]
The following discussion has been closed. Please do not modify it.

The conversation highlights an issue with incident reports on the Administrator's Noticeboard for Incidents (AN/I) that are suspected to be generated by ChatGPT or similar large language models (LLMs). These reports follow a rigid, formal template but often lack the contextual or specific information necessary for effective adjudication.

Key Points Raised:
  1. Template Consistency:
    The reports are strikingly similar in structure, making them easy to identify as potentially LLM-generated.
  2. Lack of Relevance:
    Despite their formal appearance, these reports fail to include critical details relevant to the incidents, making them unhelpful.
  3. Proposed Action:
    Several editors, including Simonm223, suggest a blanket policy to close such reports immediately upon identification, as handling them wastes time and resources.
  4. Sockpuppetry Connection:
    In at least one instance, the editor filing a report was found to be a sockpuppet, raising concerns about misuse of LLMs for evading scrutiny or advancing disruptive agendas.
  5. Existing Tools:
    The mention of templates (e.g., Template:uw-ai1 through Template:uw-ai4) suggests resources are already in place to address AI-generated contributions, but the community may need more proactive measures.
  6. Broader Context:
    Similar issues arise with unblock requests generated by LLMs, indicating this is not limited to AN/I reports but part of a larger challenge in managing AI-generated content on the platform.
Suggested Solutions
  1. Policy Development: Establish clear guidelines for handling suspected AI-generated content.
  2. Automation: Explore tools or bots to flag and review such contributions before they reach AN/I.
  3. Awareness and Training: Educate editors and administrators on identifying and addressing LLM-generated content effectively.

This issue underscores the need for a balance between leveraging AI's utility and mitigating its misuse in collaborative platforms like Wikipedia.

[FBDB]

But on a more serious note, yes; I have seen this sort of formatting. It's somewhat obvious, since nobody naturally writes complaints at ANI in the way that ChatGPT does. The problem is that the AI detection algorithms really do get quite a lot of false positives (they misclassify a good number of pieces of text as AI when they aren't); at best, we're left to our hunches. And I really can't block someone on a mere hunch.

Red-tailed hawk (nest) 20:04, 26 November 2024 (UTC)[reply]

No algo needed to observe and identify this kind of thing. Your own eyes and brain work fine. -- asilvering (talk) 20:07, 26 November 2024 (UTC)[reply]
I'm not so sure about that. I can see the same thing happening at some of the village pump pages, and the danger is that people will see it and think it's normal to format things that way and to bury valid points in loads of meaningless platitudes. If we're not careful, life could mimic art. Phil Bridger (talk) 22:31, 26 November 2024 (UTC)[reply]
Regarding that last one at ANI, if you look at the quality of English in the author's previous posts, it was fairly obvious that they didn't write it themselves. This is quite a common reason for people to use LLMs: their own language skills are not good. Black Kite (talk) 22:34, 26 November 2024 (UTC)[reply]
People aren't going to think it's normal to format things in that way if we're immediately closing them for being meaningless AI-generated platitudes. -- asilvering (talk) 01:00, 27 November 2024 (UTC)[reply]
Why do people like the formal style? This is also ChatGPT, for the same request:
  • Oi, listen up, you lot! I’ve been watchin’ the carryings-on with that Hamis Kiggundu page, and it’s a right mess, innit? Some geezer, Timtrent, has been runnin’ the show, blockin’ proper updates and throwin’ his weight around like he owns the gaff. Every edit request gets binned, even the ones that follow Wikipedia’s bleedin’ rules to a T! He’s callin’ Ugandan media “churnalism” like he’s some big-time expert, as if local news ain't good enough unless it’s got the BBC’s stamp on it. Worse yet, he’s slinged accusations left, right, and centre—sockpuppetry, paid editing, the lot—without a shred of proof, just to shut people up. Legit editors are too scared to touch the page, and now it’s a dusty relic stuck in 2020! It’s high time the admins stepped in, sorted Timtrent out, and gave this page a proper shake-up. Fair’s fair, innit?
Sean.hoyland (talk) 04:18, 27 November 2024 (UTC)[reply]
Thanks for the laugh, ChavGPT. -- asilvering (talk) 06:11, 27 November 2024 (UTC)[reply]