Have you ever heard of the Burnerverse?

You might not recognize the moniker, but you intuitively know what it is.

In brief, the Burnerverse is said to be any online social media or Internet chatter-based community that staunchly relies on anonymity. Being anonymous is the crux of the matter. Participants in a Burnerverse go out of their way to ensure that no one knows who they are.

This is an upfront proposition: rather than pretending to be some particular person and tricking others into thinking you are this or that individual, the participants openly proclaim their anonymity. Furthermore, they do not want their true identity to be known or discovered. The aim is to remain fully anonymous.

Does that seem ominous and underhanded?

Maybe, maybe not.

The counterargument is that by being anonymous, the participants can freely express themselves and not worry about societal repercussions. Sometimes the banner of truth is held high in the air, with the claim that without anonymity the postings and discussions would be marred by the urge to cave to prevailing conventions. It seems that in the real world of showcasing your name, you take mighty risks if veering outside the prevailing cultural norms.

I’d like you to consider and mull over a seemingly straightforward question on this.

Why not have a meeting place online that purports to promote openness and encourages such a vaunted aim by declaring that it is perfectly fine to be anonymous?

This stands to reason. Share as you wish. You are not bound by any threat of reputational tarnishing. You can speak your mind. No holds barred. A seemingly huge relief for our watch-your-tongue modern times.

Of course, just as we often have difficulty dealing properly with shiny new toys, the danger is that anonymity will foster the posting of falsehoods, outrageous accusations, utter nonsense, and libelous remarks. Generally, it could end up being a hot mess.

Speaking of shiny new toys, get ready for something that might be a bit of a surprise, though I would think not an out-of-this-world surprise. Turns out that generative AI and large language models (LLMs) tend to have a significant role in the Burnerverse.

Voila, that’s my reason for covering the topic.

In today’s column, I’ll explain the ways that generative AI and large language models enter the grand picture of how the Burnerverse operates. Those who know about the latest in AI being used to aid and abet the Burnerverse insist that this raises serious AI ethics and potentially AI-related legal issues. For my prior coverage on generative AI and its use in other gray areas such as the advent of deepfakes and cheap fakes, see the link here, the link here, and the link here, just to name a few.

Readers of my column asked me to comment on the matter of the Burnerverse and explain the nitty-gritty of what the kerfuffle is about. I have extensively covered the ins and outs of AI ethics and AI law, along with a range of AI trends in a wide variety of domains and uses, see the link here. As a side note, the “Burnerverse” moniker has other meanings too, such as referring to certain online games or specific online groups, but please know that is not my focus here, thanks.

Let’s jump right into the matter at hand.

Diving Deeply Into The Burnerverse

There are numerous definitions associated with the word “Burnerverse” and you can readily find a slew of them via a web search. I decided to undertake a dialogue with ChatGPT, the popular generative AI app that has reportedly over 100 million weekly active users, and asked what definition could encompass the usage I am discussing herein.

Here’s what ChatGPT came up with:

  • “The term ‘Burnerverse’ has several meanings and can include referring to a specific context in social media and online communities, particularly around the use of ‘burner accounts.’ Burner accounts are temporary, anonymous accounts that users create to post content without revealing their true identities. This can be for various reasons, such as sharing controversial opinions, discussing sensitive topics, or avoiding repercussions from their main account.”

I like that definition.

One notable aspect is the handy explanation for why the word “burner” is featured in the moniker. It is a reference to the analogous use of burner phones. I’m sure you know what a burner phone is. Many movies and TV shows depict a spy trying to hide their identity, so they buy one of those cheap mobile phones and use it to remain anonymous. The same applies in the context of an online community, namely you want to remain anonymous, so you create a so-called burner account.

You might at first glance have thought that the Burnerverse is so named because it oftentimes “burns” people in the sense of outrightly stating that this person or that person is a meany, a louse, a baddie, a rotten apple, a dirty rat, and the like (well, the actual wording is a lot harsher and cruder, if you get my drift). There admittedly is quite a bit of name-calling on the Burnerverse, no doubt about it. In any case, the burner aspects are presumably due to the use of burner accounts and not because of the brandishing of invectives per se.

Are you curious about what characteristics tend to suggest a Burnerverse is at play?

I’m glad you asked.

Once again, I continued my dialogue with ChatGPT on the topic and managed to get a refined list of key characteristics that I believe are representative.

Here it is:

  • “Anonymity: Users of burner accounts typically do not reveal their real identities, allowing them to express themselves more freely.”
  • “Ephemerality: Burner accounts are often temporary and can be discarded after serving their purpose, contributing to the fleeting nature of interactions within the Burnerverse.”
  • “Controversial or Sensitive Topics: These accounts might be used to discuss or share views on contentious issues without the fear of backlash that might occur if posted from an identifiable account.”
  • “Privacy Concerns: Users may use burner accounts to protect their privacy, especially in environments where personal data security is a concern.”
  • “Subversive Activities: In some cases, burner accounts might be used for activities that go against the rules of a platform or community, such as trolling, spamming, or spreading misinformation.”

I would say that those are the key precepts of a Burnerverse.

In my experience, and I say this with a bit of sadness, the amount of trolling, spamming, or spreading of misinformation or disinformation is customarily extraordinarily high. I wish that weren't so. Anonymity in a sense is a powerful elixir. It can be used for good, meaning that you can express truths that otherwise would be too endangering to say aloud, or it can be used to spout foul-mouthed stuff without any semblance of reasonable restriction or sensible constraint.

This gives me a chance to slide into an example of generative AI entanglement.

One good use of generative AI is that you can use the AI to serve as your filter regarding misinformation and disinformation, see my coverage at the link here and the link here.

You can set up generative AI to examine any digital content of interest, whether it be emails, texts, tweets, or content posted on, say, a Burnerverse, and have the AI try to ascertain what is misinformation or disinformation. The AI can then alert you about it or summarize it, so you don't have to read it in its raw, abrasive form, and it can perform other allied actions per your prior instructions. One potential downside that I mention in my coverage of this generative AI usage is that you can inadvertently create your own echo chamber and never see anything beyond your stated scope, perhaps leaving you stuck inside a narrow box.
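To make this concrete, here is a minimal sketch in Python of such a screening setup. The `call_llm` helper and the wording of the screening prompt are my own placeholder assumptions, standing in for whichever generative AI API and instructions you actually use.

```python
# Hypothetical helper: wire this to whichever generative AI API you actually use.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("Connect this to your LLM provider of choice.")

# Assumed prompt wording; adjust the instructions to your own needs.
SCREENING_PROMPT = """You are a content screener. For the post below, say whether it
appears to contain misinformation or disinformation, give a one-sentence reason,
and provide a neutral one-line summary so the reader can skip the raw text.

Post:
{post}
"""

def screen_post(post_text: str) -> str:
    """Ask the model to flag and summarize a single post before the user reads it."""
    return call_llm(SCREENING_PROMPT.format(post=post_text))

# Example usage over a small batch of collected posts.
if __name__ == "__main__":
    sample_posts = ["(paste a post you collected here)"]
    for post in sample_posts:
        print(screen_post(post))
```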

Be mindful of such matters.

Generative AI And The Burnerverse Entanglement

I gave you a quick taste of how generative AI gets entangled in the Burnerverse realm.

My example was pretty much something you might do with generative AI acting outside of the Burnerverse. You could feed whichever Burnerverse content you are interested in as a source of input into a generative AI app, tell the AI what you want done with the content, and then have the generative AI act as your personal screening tool. This can be immensely helpful and to some degree can keep you out of the atrocious morass and yet still keep you relatively informed overall.

I’d like to go into further uses of generative AI; some uses are directly involved in a Burnerverse, while others sit more at the edges, such as the screening tool example.

First, let’s talk in general about generative AI and large language models (LLMs), doing so to make sure we are on the same page when it comes to discussing the matter at hand.

I’m sure you’ve heard of generative AI, the darling of the tech field these days. Perhaps you’ve used a generative AI app, such as the popular ones of ChatGPT, GPT-4o, Gemini, Bard, Claude, etc. The crux is that generative AI can take input from your text-entered prompts and produce or generate a response that seems quite fluent. This is a vast overturning of old-time natural language processing (NLP), which used to be stilted and awkward to use; the field has shifted into a new era of NLP fluency of an at times startling or amazing caliber.

The customary means of achieving modern generative AI involves using a large language model or LLM as the key underpinning.

In brief, a computer-based model of human language is established, one that relies on a large-scale data structure and does massive-scale pattern-matching across a large volume of data used for initial training. The data is typically found by extensively scanning the Internet for lots and lots of essays, blogs, poems, narratives, and the like. The mathematical and computational pattern-matching homes in on how humans write, and then henceforth generates responses to posed questions by leveraging those identified patterns. It is said to be mimicking the writing of humans.

I think that is sufficient for the moment as a quickie backgrounder. Take a look at my extensive coverage of the technical underpinnings of generative AI and LLMs at the link here and the link here, just to name a few.

Back to the crux of things.

I noodled with ChatGPT to discuss the ins and outs of this topic, or shall I say the upsides and downsides of how generative AI intertwines with a Burnerverse. I decided to devise and refine a short list and thus present to you these four notable upsides as generated while using ChatGPT (other facets arise too, but this is sufficient):

  • (1) “Assisting with Anonymity: AI can help users create content that aligns with their goals while maintaining their anonymity, ensuring their writing style or digital fingerprint doesn’t reveal their identity.”
  • (2) “Generating Positive Content: AI can generate supportive, educational, and informative content that can enrich discussions in anonymous forums.”
  • (3) “Identifying Harmful Behavior: AI can be used to detect and flag harmful behavior, such as harassment or the spread of misinformation, helping to maintain a healthier online environment.”
  • (4) “Content Moderation: AI can assist moderators by automatically filtering out abusive or inappropriate content, making it easier to manage anonymous interactions.”

I’ll cover those points in a roundabout way.

You might have instantly recognized or given a familiar head-nod to the third bulleted point. The third item on the list indicates that generative AI can be used to identify harmful behavior that is posted on a Burnerverse. My example of using generative AI as a screening tool would be that kind of AI usage.

On a related point, the fourth point covers content moderation.

If an online community or social media forum has a human acting as the content moderator, they could readily similarly use generative AI to do screening. This can be significant because a human moderator is unlikely to be able to keep up with the flood of new postings that typically arise, especially when a topic or celebrity suddenly is getting trounced online. Generative AI can operate at scale, meaning that no matter what volume of postings there might be, the odds are the AI can computationally keep up with the real-time screening process.
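As a rough illustration of moderation at scale, here is a hedged sketch that streams posts through a simple flagging prompt so a human moderator only reviews the short list the model flags. Once again, `call_llm` and the prompt wording are placeholder assumptions rather than any particular platform's actual tooling.

```python
from typing import Iterable, List

def call_llm(prompt: str) -> str:
    """Placeholder for whichever generative AI API the platform uses."""
    raise NotImplementedError

# Assumed prompt wording for a simple FLAG / OK decision.
MODERATION_PROMPT = (
    "Answer only FLAG or OK. FLAG if the post is abusive, harassing, "
    "or spreads misinformation; otherwise answer OK.\n\nPost:\n{post}"
)

def triage_posts(posts: Iterable[str]) -> List[str]:
    """Return only the posts the model flags, so a human moderator reviews a short queue."""
    flagged = []
    for post in posts:
        verdict = call_llm(MODERATION_PROMPT.format(post=post))
        if verdict.strip().upper().startswith("FLAG"):
            flagged.append(post)
    return flagged
```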

That covers points three and four.

I kind of saved the first two bullet points so that we could savor them together.

Here’s the deal.

The first bullet has to do with preserving anonymity.

Assume that you believe that the role of anonymity is valid and a necessity. I know that you might have gripes about that premise, but please go along for the sake of discussion. It can be harder to preserve your anonymity than it might seem at an initial glance. Suppose you aren’t using your real name, and you think that alone will keep you anonymous.

Sorry to say that the manner of what you post can potentially give away your identity. The obvious clues come from stating something about where you live, what you’ve done in your life or career, and so on. A sharp-eyed person will use those clues to piece together a pattern of who you might be.

A subtler possibility is how you write. Your postings might have a distinguishable style. Suppose you write a bunch of postings on an anonymous basis for a Burnerverse. Meanwhile, you decide to write your own blogs or publish articles in your true name and use the same style of writing. Oops, you could get called out by simply having someone text-wise match your anonymous postings with your named postings.

As an aside, this is the sort of thing that has kept scholars debating who William Shakespeare really was, since some assert that there wasn’t such a person and that instead several people were involved, or that a specific person wrote under that name, etc. You can do pattern matching on the works of Shakespeare and compare them to other known writers. Fun stuff.
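By way of illustration, here is a crude stylometric comparison in Python, assuming you have two text samples on hand and scikit-learn installed. Real stylometry is far more sophisticated, but character n-gram similarity of this kind is a classic starting point.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def style_similarity(text_a: str, text_b: str) -> float:
    """Crude stylometric score: character n-gram TF-IDF vectors compared by cosine similarity."""
    vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(3, 5))
    vectors = vectorizer.fit_transform([text_a, text_b])
    return float(cosine_similarity(vectors[0], vectors[1])[0][0])

# Example usage: compare an anonymous post against a signed article.
anonymous_post = "text pulled from a burner account goes here"
named_article = "text published under a real name goes here"
print(f"Style similarity: {style_similarity(anonymous_post, named_article):.2f}")
```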

The point here is that generative AI can aid your desire to remain anonymous.

You merely feed in your drafted content that you wish to post at the Burnerverse and ask the AI to convert the content into something showcasing a different style. You can vary the style each time or tell the AI to come up with your preferred new fake “standard” style that you wish to use while posting content anonymously.

If you use a suitable set of prompts to do this, the generative AI will seek to remove anything of an identifying nature. See my explanation about prompt engineering and prompting at the link here. Furthermore, the AI will compose or re-compose the content in a new style that hopefully won’t be matchable to your true style of writing. I must note that this is not ironclad, and there is a chance that the masking efforts by generative AI will not be enough to prevent sleuths from using the postings to somehow figure out who you are.
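For instance, a minimal sketch of such a masking prompt might look like this; the `call_llm` helper and the prompt text are again my own placeholder assumptions, and no prompt guarantees the rewritten text cannot be traced back to you.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for whichever generative AI API you actually use."""
    raise NotImplementedError

# Assumed prompt wording; the exact instructions are up to you.
MASKING_PROMPT = """Rewrite the draft below in a plain, neutral style.
Remove or generalize anything that could identify the author, such as locations,
employers, job titles, dates, and distinctive turns of phrase.
Return only the rewritten text.

Draft:
{draft}
"""

def mask_style(draft: str) -> str:
    """Ask the model to strip identifying details and re-compose the draft in a different voice."""
    return call_llm(MASKING_PROMPT.format(draft=draft))
```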

Be wary.

That leaves the final point of the upsides, which is bulleted as the second point above.

I appreciate your willingness to let me discuss them in a catty-corner way.

The second listed point is that you can use generative AI to create content of a positive nature and then post it on Burnerverse. I suspect that some of you are laughing uproariously at this. Much of the content posted is not especially uplifting. Do you need to use generative AI to generate positive commentary for you? A cynic would say that this might be the only means for these people to compose something upbeat. A skeptic would say that no Burnerverse user would ever find any need for this facet since they never intend to be positive.

Anyway, I mention it for completeness, and you can decide the practicality considerations.

Generative AI Can Contribute To The Gloomy Side Of Things

Generative AI is generally a technology that I refer to as having a dual-use capacity. Here’s what I mean. AI can be used for good, but it can also be used for badness. The same AI that might aid in discovering a cure for cancer can be turned into an evildoing AI that undermines humankind, see my discussion at the link here.

The same semblance of duality occurs when considering how generative AI entangles with the Burnerverse. We ought to examine how generative AI regrettably can be used for some of the seedier sides of a Burnerverse. Take the good with the bad, as they say.

I did some more noodling with ChatGPT and put together this short list (more possibilities exist, but this seems sufficient here):

  • (1) “Creating Fake Content: Generative AI can be used to create convincing fake news, deepfakes, and misleading information, making it easier for malicious actors to spread falsehoods anonymously.”
  • (2) “Manipulating Public Opinion: AI-generated content can be used to manipulate public opinion or incite conflict, leveraging the anonymity of burner accounts to avoid accountability.”
  • (3) “Automating Trolling: AI can be used to automate trolling and harassment, creating large volumes of abusive content quickly and efficiently.”
  • (4) “Evading Detection: Generative AI can help users craft messages that evade detection by automated moderation systems, allowing harmful content to slip through.”
  • (5) “Impersonation: AI can generate content that mimics specific individuals or styles, making it difficult to trust the authenticity of anonymous posts.”
  • (6) “Deepfakes and Deception: The ability to create realistic deepfakes and other deceptive content can undermine trust in online interactions, especially in anonymous settings.”

I will briefly say a few words about those points.

In the upsides list, I mentioned that you can use generative AI to preserve your anonymity. Does that also potentially deserve to be listed as a downside? One supposes that if you are trying to break the shield of anonymity, you would not like the fact that someone has cleverly leaned into generative AI to make it very hard to ferret out who they are.

A similar duality exists about using generative AI to try and discover someone’s identity. You could use generative AI to gauge whether the writing of the person matches other outside writings. No need to do that by hand. Is this an upside or a downside? It all depends on your viewpoint of the sanctity or perhaps the danger of allowing anonymous postings.

Another intriguing twist is that you can use personas in generative AI to generate content that seems to be written by a selected celebrity, writer, or just about anyone who has some pattern of writing or speaking, see my explanation about personas at the link here and the link here.

How would that relate to the Burnerverse?

Easy-peasy.

You could use generative AI not simply to mask your writing style but also to go the extra mile and have the AI mimic someone else’s style. Suppose you wanted everyone to think you are person X or Y; you could train your generative AI on samples of their writing so that it picks up the pattern of their style. With a few prompts, you would then instruct the generative AI to always produce your content in that specific style.

Ingenious or insidious?

The tomfoolery can be worse than you might assume. If someone wants to essentially “incriminate” someone else, setting up an anonymous burner account and then using generative AI to mimic that person’s style is exceedingly simple to do. The problem is that others seeing the postings are likely to assume or guess that person X or Y is indeed the anonymous person doing the postings. A dirty trick, for sure.

Yet another downside is that generative AI can be used to write outstandingly compelling fakery. As I’ve covered at the link here, you can even use generative AI to craft conspiracy theories that seem to be real though they are entirely concocted by the AI. The upshot is that even if you aren’t much of a writer, and if you want your postings to draw people in, just have generative AI re-compose your drafts and they will be infused with alluring indications that will undoubtedly get you increased views.

I’ll leave things to you to examine the other points made on the above downsides list.

For those of you who have heartburn about the Burnerverse and those anonymous postings, I’m sure that the list of ways to use generative AI is going to get your goat. Maybe grab a glass of wine and allow yourself a moment of quiet reflection after reviewing the list.

Conclusion

When I give presentations about the varied ways that generative AI is being used, and when I get to topics like the Burnerverse, the immediate reaction by some is that we ought to disallow such usage of generative AI. Ban the use of generative AI for these uses. Do not let people use generative AI to do nasty things like hiding behind anonymity as goosed up via generative AI.

I hear you.

I recently posted in my column my analysis of whether there perhaps should be warning labels associated with the use of generative AI, see the link here. I did so because the U.S. Surgeon General called for the use of warning labels concerning social media. If you want a warning for social media, the next likely candidate for an akin warning would seem to be generative AI.

Seeking to ban specific uses of generative AI is a rough proposition and not something that is easily implemented, see my explanation of why at the link here. Counterviewpoints are that you are likely to stop the upsides in your pursuit of curtailing the downsides. Might this undercut innovation that can end up aiding humankind? All in all, complexities abound from an AI ethics and AI law perspective.

I trust that you now at least have a better understanding of what the Burnerverse is and how generative AI can be used or entangled with these online communities and social media postings. I imagine that some of you might decide to discuss the topic while on the Burnerverse. Others might employ generative AI to figure out who those people are.

A never-ending cat-and-mouse gambit, now super-charged via the advent of modern-day generative AI.
