Ex Machina

Biological Hazards in a Digital Age: The Implications of ChatGPT and Modern AI

by AJ Milla

Artificial intelligence has been responsible for some of the worst–and also the best–content on the internet lately. Whether you’re mindlessly scrolling through TikTok, or equally mindlessly scrolling through Reddit, you’ve undoubtedly been exposed to some form of artificial intelligence. And depending on how many of your followers are libertarian, you either love it or hate it.

If you’re a college student, you’ve almost certainly heard of ChatGPT. But for the uninitiated: ChatGPT is an AI tool that generates human-like text in response to a prompt. And while educators are having some success countering its misuse in the classroom, the limits of AI remain an open question.

Of course, MIRA Safety isn’t in your inbox today to tell you whether you should love it or hate it. We are, however, here to inform you about one aspect of AI that any savvy survivalist should be aware of: AI-generated bioterrorism. And while you may not need to ask “What is bioterrorism?”–that much is self-evident–the phrase “AI-generated bioterrorism” is almost certainly new to you.

It’s a worrisome combination of words, to be sure–one made worse by the infusion of Edge AI.

But what does any of it mean? And what can you do to prepare?

Let’s begin.

AI generated photo of “bioterrorism” (Image courtesy of AJ Milla)

Table of Contents

  • 01. Aum Shinrikyo: Homebrewed Bioterror
  • 02. Baking for Terrorists
  • 03. The “Artificial Intelligence Threat” Explained
  • 04. ChatGPT: Not Just for Memes
  • 05. Cybersecurity Is Failing Us
  • 06. Why Biosecurity Matters
  • 07. AI Tools Could Spell Disaster
  • 08. Keep Your Head On A Swivel
  • 09. Final Words
  • 10. Frequently Asked Questions

Aum Shinrikyo: Homebrewed Bioterror

In the same way that Jets fans were left stunned and confused last Monday evening, you too may find yourself somewhat befuddled at the phrase “AI-generated bioterrorism.”

Chemical and biological terrorism, for its part, has been a recognized threat ever since the introduction of chemical weapons during World War I. In one notable example, the Aum Shinrikyo, or “Supreme Truth” cult, operating out of Japan, injured more than 1,000 civilians in its 1995 nerve agent attack on the Tokyo subway.

Aum Shinrikyo. (Image courtesy of Ben Braun and Chiaki Yanagimoto)  

Indeed, from 1990 to 1995, Aum Shinrikyo attempted to carry out multiple chemical and biological attacks against Japanese, Chinese, and U.S. citizens and military personnel. To this end, the cult deployed various weapons systems, depending on target vulnerability and available attack vectors.

Deadly nerve agents like sarin gas, for example, were dispersed through plastic bags inside of the Tokyo subway–and it would ultimately be discovered that Aum Shinrikyo possessed VX, mustard gas, and tabun, too.

At the beginning of this reign of terror, in 1990, the cult targeted the U.S. naval base at Yokosuka. Horrifyingly, Aum Shinrikyo attempted to formulate and disperse botulinum toxin–the agent that causes botulism–in a yellow liquid medium. Had this attack succeeded, it could have caused untold chaos.

In 1994, their attacks took a deadly turn, however, when a sarin attack on a residential neighborhood in Nagano Prefecture led to eight deaths and more than 200 injuries.

Their activities would escalate the following year when, on March 20, 1995, they released sarin gas on the Tokyo subway. As a result of this attack, thirteen people were killed, roughly fifty were severely injured, and more than a thousand others were sickened.

Tsukiji station in Tokyo, 1995, following the Aum Shinrikyo sarin nerve gas attack. (Image courtesy of Asahi Shimbun)

Baking for Terrorists

Clearly, Aum Shinrikyo was more than capable of causing grievous harm.

Yet their track record of attacks was inconsistent in its outcomes. Why is this? Why, for example, did their 1990 attack on U.S. Naval Arsenal Yokosuka fail?

First, it’s important to note that botulinum toxin–the cult’s biological weapon of choice in this instance–is one of the most hazardous biological substances on the planet, with effects ranging from musculoskeletal paralysis to respiratory arrest.

And while the bacteria behind botulinum toxin are frighteningly easy to obtain, actually producing and delivering an effective weapon is another matter. Consider the following: if you were handed a complex and challenging baking assignment by your spouse, where’s the first place you’d go? Hold your sneers–you’re not going to like where this is going.

Take macarons for example: they typically take years to master. As such, you’re probably going to consult the Internet for step-by-step instructions, rather than the odd cookbook. And even so, you’re probably going to fail your first few tries–they’re hard to make, after all.

But what if you had an AI bot to help you speed through the learning phase? To give you the insider tips, usually learned through years of trial and error?

What if you could have the ingredients, process, and proper equipment available to you right off the bat? Potentially, you could skip right to perfect macarons.

Or, if you were trying to formulate a highly complex and difficult-to-manufacture chemical weapon, you’d have it done before those macarons finished cooling.

It’s a lot to take in, we know.

Now, while you take the time to answer that nagging question, “Why does talking about cookies and chemical weapons hit a nostalgia nerve?” let's line up some questions that need to be knocked down.

Some nostalgia fuel from Dexter’s Lab. (Image courtesy of Boyariffic via Fandom)

First off, what is Edge AI? What is it capable of? And who’s governing this mess?

What about biological engineering–what is it? What is biosecurity, for that matter? And most importantly: what can you do to keep yourself and the people you protect safe?

The “Artificial Intelligence Threat” Explained

Abstract AI render.  (Image courtesy of Mr.Core_Photographer via Getty Images)

First and foremost: what is AI? At its most basic level, it’s a very advanced searching and sorting system. Anyone who has sat through a CompSci 101 course will find that idea familiar, since writing simple searching and sorting algorithms is a staple homework assignment. (The classic “Tower of Hanoi” puzzle, another CS 101 favorite, gives a feel for how these step-by-step procedures work, though it’s a recursion exercise rather than a sorting one.)

Essentially, through searching and sorting, items–or words in this case–are called up and utilized in various applications.

There’s a “learning” aspect to AI, too–though it’s rather complicated, and a little bit outside the scope of this article. For the sake of simplicity, the key takeaways are that AI software “learns” to sort, and then calls up searched items in various sequences from a database. Most often, this is done so quickly and so accurately that it almost appears that the AI tool is operating like a human would.
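
To make that concrete, here is a minimal Python sketch of the “search and sort” loop described above. Everything in it (the snippets, the scoring, the query) is invented for illustration; real AI systems are vastly more sophisticated, but the basic pattern of scoring items, sorting them, and returning the best matches is the same.

```python
# Toy illustration only: a tiny "database" of snippets, searched and sorted by
# how many words they share with a query. Not how a real AI works, just the
# search-and-sort idea at its most basic.

SNIPPETS = [
    "macarons need finely ground almond flour",
    "rest the piped shells before baking",
    "fold the batter until it flows like lava",
    "preheat the oven before the shells go in",
]

def search_and_sort(query, snippets, top_n=2):
    """Score each snippet by shared words with the query, then sort by score."""
    query_words = set(query.lower().split())
    scored = [(len(query_words & set(s.lower().split())), s) for s in snippets]
    scored.sort(reverse=True)  # best matches first
    return [s for score, s in scored[:top_n] if score > 0]

if __name__ == "__main__":
    print(search_and_sort("macarons almond flour tips", SNIPPETS))
```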

Here, we would be remiss not to mention the “diffusion model” as well. What is a diffusion model? Oversimplified, it’s a type of generative AI that starts from random noise and refines it, step by step, into a coherent result, whether that’s an image, a snippet of audio, or (as we’ll see later) a protein structure. Under the hood, each of those refinement steps boils down to a long chain of simple decisions.

The humblest version of that kind of decision-making is something you’ve probably already used: the “IF” function in Microsoft Excel. Essentially, the formula uses the command “IF” to say, “IF this condition is true, then do THIS next.” You could say, for example, “IF this amount in dollars is less than $5, flag THIS field in red.”

Or, “IF I quit this job right this second, will my parents let me move back in, and will they charge me rent IF I volunteer to do all my own dishes?”

Pretty useful on a small scale–now imagine it being used millions of times a second in higher AI operations.
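
And for the curious, here is a deliberately tiny Python sketch of that “refine it step by step” idea. It is not a real diffusion model, just a toy loop that starts from random noise and nudges each value toward a known target, one small IF-style decision at a time.

```python
# Toy illustration only: iterative refinement from noise toward a target.
# Real diffusion models learn how to "denoise" from data; here the target is
# simply given, so the step-by-step idea is easy to see.

import random

def toy_refine(target, steps=50, step_size=0.1):
    """Start from random noise and nudge each value toward the target."""
    current = [random.uniform(-1.0, 1.0) for _ in target]
    for _ in range(steps):
        # each pass makes one simple decision per value:
        # IF we're above the target, step down; otherwise, step up
        current = [
            c - step_size if c > t else c + step_size
            for c, t in zip(current, target)
        ]
    return current

if __name__ == "__main__":
    target = [0.2, -0.5, 0.9, 0.0]
    print("target: ", target)
    print("refined:", [round(x, 2) for x in toy_refine(target)])
```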

While there will be a few readers whose hair is likely smoking at this Neanderthal-level simplification, we’re going to press on, starting with the following question: If AI is a technology that’s able to understand, plan, and utilize information to create and perform critical thinking tasks, what is not AI?

Well, that’s simple: Any technology that requires a human to operate it, make decisions, and control parameters doesn’t count as AI.

Think of your YouTube-suggested video algorithm. It learns what you like, it learns what you want to see (sometimes), and it will search and then sort those recommendations directly to your face. This would count as an independent technology operating without human intervention.

Conversely, a calculator will do the math for you, but it requires direct input from an operator, therefore: NOT AI.
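
Here is a toy Python sketch of that recommendation idea. The video titles, tags, and scoring are made up for illustration and have nothing to do with YouTube’s actual system; the point is simply that the machine learns your preferences (here, a set of tags), then searches and sorts on its own.

```python
# Toy illustration only: rank "videos" by how many tags they share with what
# the viewer has already watched. The catalog and tags are invented.

WATCH_HISTORY_TAGS = {"baking", "survival", "gear"}

CATALOG = {
    "Perfect Macarons in 10 Steps": {"baking", "dessert"},
    "Top 5 Bug-Out Bag Mistakes":   {"survival", "gear"},
    "Cat Compilation #47":          {"cats", "funny"},
}

def recommend(history_tags, catalog):
    """Sort every video by tag overlap with the viewer's history, best first."""
    ranked = sorted(
        catalog.items(),
        key=lambda item: len(item[1] & history_tags),
        reverse=True,
    )
    return [title for title, tags in ranked if tags & history_tags]

if __name__ == "__main__":
    for title in recommend(WATCH_HISTORY_TAGS, CATALOG):
        print(title)
```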

With that distinction out of the way, let’s address the fanfare surrounding AI.

Futurists, for their part, are ablaze with manic and terrifying optimism about the future of AI. In their view, the seemingly limitless possibilities are exciting.

Meanwhile, some computer science professionals are warning that AI is advancing too quickly, while others are focusing on specific problems that need to be solved. These experts stress that though the perils of AI may be different from what we initially imagine, they’re still a problem.

Simply put, while AI is able to perform wondrous calculations and amaze us with its machine-learning capabilities, most experts aren’t especially worried that a world with Skynet running the show is imminent… for now, anyway.

ChatGPT: Not Just for Memes

There’s something to be said for the “human touch” in relation to AI.

Let’s talk specifically about ChatGPT, for example. ChatGPT is a tool that falls into the class of “language models.” These AI models, notably, learn how to sound human by digesting enormous amounts of text gathered from various sources and working out what sounds convincing to us.

A stylized ChatGPT interface. (Image courtesy of Getty Images)

Keep in mind that someone programmed it to do that. In other words, it didn’t just wake up one day and decide to try and trick us. As a matter of fact, sometimes it does a really bad job of convincing anyone at all.
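
If “learning what sounds convincing” feels abstract, here is a deliberately tiny Python sketch of the idea. It builds a word-follows-word table from a few sentences, then generates text by repeatedly picking a likely next word. This is only a toy stand-in; models like ChatGPT do something conceptually similar at an unimaginably larger scale, with far more sophisticated math.

```python
# Toy illustration only: a miniature "language model" that learns which word
# tends to follow which, then generates text by weighted random choice.

import random
from collections import Counter, defaultdict

TRAINING_TEXT = (
    "stay safe and stay informed stay ready and stay calm "
    "keep your gear ready and keep your head on a swivel"
)

def train(text):
    """Count, for every word, which words follow it and how often."""
    table = defaultdict(Counter)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        table[current][nxt] += 1
    return table

def generate(table, start, length=8):
    """Repeatedly sample a likely next word, starting from a prompt word."""
    word, output = start, [start]
    for _ in range(length):
        followers = table.get(word)
        if not followers:
            break
        choices, weights = zip(*followers.items())
        word = random.choices(choices, weights=weights)[0]
        output.append(word)
    return " ".join(output)

if __name__ == "__main__":
    model = train(TRAINING_TEXT)
    print(generate(model, "stay"))
```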

So, who invented ChatGPT? Believe it or not, Elon Musk did not in fact develop ChatGPT, as some have claimed. The confusion stems from his original role as co-chair of OpenAI, a position he vacated in 2018.

More accurately, ChatGPT was a collaborative effort by the OpenAI engineering team. And though much of the credit goes to OpenAI Chief Scientist Ilya Sutskever, it’s important to emphasize that no single person built it. Sam Altman, for example–OpenAI’s CEO–is also credited as a major contributor to the project.

Don’t worry though–real-life Tony Stark is tinkering with his own project. Maybe it’ll save Twitter? Or is it X?

But we digress.

ChatGPT and AI seem fairly dangerous, but not in an existentially threatening kind of way, right? Though, for example, AI-generated videos of world leaders saying insane things are worrying, we can usually recognize those as nonsense.

What’s not so innocuous, however, is a recent development in which an AI drug-discovery program generated designs for 40,000 potential chemical weapons in under six hours.

Profound stuff. (Image courtesy of ChatGPT)

That could be the last sentence of this blog post, and we wouldn’t have to say much more–it’s absolutely horrifying. And yet, it’s horrifying not just because of the findings themselves, but because of what AI could do to spread that kind of information and research.

Cybersecurity Is Failing Us

Significantly, leading companies are beginning to identify critical security loopholes in their data protection systems.

But you may wonder: Who has access to these tools? Where are the databases based, and what is their security level?

These, of course, are all common cyber security concerns, and there’s a fairly strong infrastructure in place at many major data centers. The rub, as it were, comes in the form of AI penetration testing.

This first part is not so unfamiliar, as we all live in a cyber-connected world. We understand, for example, that we shouldn’t connect to public Wi-Fi, share our passwords, etc., because hackers can steal our personal information. That’s not a hard pill to swallow.

The next part, though… Well, this might keep you up at night.

Computer security architecture. (Image courtesy of Kjpargeter via Freepik)

Hacking a database is a deliberate, malicious act. To defend against it, a database can be reinforced–hardened against even the most advanced AI-driven hacking tools. But AI itself, the very structure of its programming, is highly vulnerable.

So far, we’ve (thankfully) been able to safely protect many chemical and biological secrets with high levels of security. But with AI in the fold, that information may be much more easily exposed.

Note that many of these vulnerabilities are baked into the models themselves, as there isn’t yet a good way for the AI to operate without them. The bugs, in other words, come bundled with the features. And if the AI itself can be compromised, retrieving its data may be even easier than attacking a traditional server.

Consider, for example, the potential of a rogue operator gaining control of an AI algorithm used for, say, traffic lights in New York City. What if AI-managed programmable logic controllers at a utility station were infiltrated, and, consequently, water supplies were shut off across multiple states? You can see the problem, we’re sure.

Now, we may not like this next sentence, but it’s a necessary evil: government regulation. Who’s going to lead the charge? Well, at the moment, everyone, as multiple U.S. officials have called for establishing guidelines, policies, restrictions, etc.

Remember the libertarian comment? It wasn’t a jab. One of the main arguments and concerns about this form of regulation, after all, pertains to internet freedom.

In the meantime, many solutions and bills are flying around to try and stymie the potential for AI misuse and abuse. But, of course, we survivalists know one important thing about laws…

Criminals don’t really care about whatever regulations, safe spaces, or gun-free-zone signs you post. And while these regulations and guidelines may someday make corporations and businesses happy, they don’t feel like they’re doing much right now to actually protect us from malicious actors.

Why Biosecurity Matters

Biosecurity collage.  (Image courtesy of sharafmaksumov via Bigstock.com)

Yes, computer security is a well-known issue in 2023. This isn’t exactly groundbreaking news–but what about biosecurity?

Many of us may not consider the level of precaution required to protect life and property from biological hazards. Nevertheless, biosecurity is a very real and very necessary field, encompassing the full suite of measures used to protect us from a wide range of biological threat vectors.

More specifically, biosecurity protects us from biological hazards in our food, plants, and animals, and from invasive alien species. Okay, so maybe not those aliens, but there are still plenty of alien threats that don’t come from beyond the stars–and we need protection from them.

As such, government agencies, legislators, and private industry all have a part to play when it comes to biosecurity. Laboratories, too, play a large role in biosecurity. They are, after all, where biological engineering takes place–making biosecurity a natural extension of their work.

Now, the best job–clearly–is that of a writer. But biological engineering is pretty interesting stuff, too. Biological engineers have, for example, developed an ingestible pill that tracks your key biological markers from the inside. This, essentially, would facilitate faster and easier diagnoses, as doctors could monitor those metrics in real time.

But on second thought… Maybe that’s not something we want.

It’s this sort of trepidation surrounding our health information that brings us to the crux of this article. If AI and biological engineering data are easily compromised, and rife with problems, what does that mean for biological hazards like weaponized botulinum toxin?

AI Tools Could Spell Disaster

Biological design tools like RFdiffusion and AlphaFold–which draw on wide-ranging databases of information, not all of it readily accessible to the general public–are now commonly used by laboratories as reference and formulation tools.

With that said, there are open-source versions that are widely available to everyone and anyone.

These AIs specialize in predicting and designing new biological structures. This is significant because, in the fields of medicine and research, the ability to create and synthesize new compounds opens the door to endless possibilities–new medicines, new cures, new therapies, and new peptides.

AlphaFold in action. (Image courtesy of Jumper, J., Evans, R., Pritzel via Nature.com)

This all sounds fantastic, unless you take into account everything we’ve talked about. For example:

  • New Edge AI tools are able to distill thousands of textbooks’ worth of biological information into an easily understandable package.

  • These tools are potentially rife with security issues like zero-day vulnerabilities.

  • Regulation is lacking, and if any contributor to the biosecurity model fails, it leaves a hole for bad actors to access this information.

  • If a bad actor receives this information, they may hold untold power to create, formulate, and develop new biological or chemical weapons.

  • With the power of AI, they can create these weapons quickly and with shocking accuracy.

One may well wonder: What type of information is out there currently? Can you use ChatGPT to formulate a chemical weapon at this very moment?

Limitations abound.  (Image courtesy of ChatGPT)

No, thankfully. BUT… can we use an advanced AI like AlphaFold to research the structure of anthrax toxin?

We certainly can. It should be noted, though, that this particular protein structure is only one piece of creating something dangerous. Indeed, the research notes specifically state: “[Lethal factor] is not toxic by itself and only acts as a lethal factor when associated with protective antigen (PA).”

When we back out and scroll through the results, though, we find a listing for “protective antigen.” Note that this writer is not a chemist, a biologist, or a scientist; in fact, mixing iced-tea packets properly sometimes presents a challenge. Still, that kind of information seems worrying.

So, what if someone with the right tools and knowledge stumbled across such information? And what if they were ill-intentioned?

Experts are pushing back, saying that the information listed isn’t the lynchpin some may believe it to be.

Well… let’s hope so.

Keep Your Head On A Swivel

We prepare for a potential nuclear conflict because Russia telegraphs its potential moves with threats and posturing.

We know, too, that wildfires are just over the horizon, so we ready ourselves for potential smoke inhalation.

We are also aware that the threat of chemical and biological weapons is far from small. Yet, some may feel as though there’s time to wait. After all, we have yet to experience Novichok in the U.S., and we consider ourselves more vigilant following the COVID-19 pandemic. As such, we wash our hands, we change our passwords regularly, and we stay informed.

Except, the time to act is now. If there’s an unhinged domestic threat brewing close to home, they may be moving swiftly and unimpeded. Indeed, with the help of AI, they could be moving quicker than anyone could ever expect.

Your neighbor’s basement, maybe. (Image courtesy of Ben Leuner/AMC)

For these reasons, we cannot allow ourselves to be outmaneuvered.

This begins with you staying alive and thriving with the MIRA Safety CM-6M Tactical Gas Mask. Your first move in any situation involving an airway-compromising agent, after all, is to protect your face.

CM-6M Tactical Gas Mask

Trusted by governments and militaries worldwide, the CM-6M tactical gas mask gives you tactical flexibility and the guaranteed protection you need to respond to and counter any threat. As such, you should make this the cornerstone of your bug-out bag.

MOPP-1 CBRN Protective Suit

When a biological hazard is on the horizon and you’ve got mere moments to respond, the MIRA Safety MOPP-1 CBRN Protective Suit is your key play. Thanks to its semi-permeable construction, you can rest assured that you won’t find yourself sloshing around in your own sweat while operating in a contaminated environment.

Remember: with the ever-changing dangers AI-generated biological threats present, you could find yourself battling a hazard in any kind of setting. As such, it is prudent to make sure you’re protected by the finest equipment available on the market.

The NBC-77 Filter

With your CBRN suit and mask sealed in your kit, make sure you have the NBC-77 SOF filter, too. Note that the NBC-77 utilizes 40mm threads: the NATO standard. This ensures maximal compatibility with a variety of our gas masks.

If your ChatGPT enthusiast neighbor generates a bootleg batch of VX toxin instead of a sweet picture of a Warrior Bear, you’ve got to be ready. Not because you were planning to use that Warrior Bear picture as your Discord avatar (it’s pretty cool, ngl), but because it’s time to get out of dodge. Thankfully, a twenty-year shelf life and unparalleled protection against all known CBRN agents guarantee you every chance to make it through any treacherous scenario.

The Drop-Leg Survival Kit 

Now, before you hunker down with your (hopefully) chemical-free baked goods, check to make sure you’re strapped with the MIRA Safety Military Gas Mask & Nuclear Survival Kit. Equipped with various countermeasures and safety supplies, this kit includes your mask and filter–but also packs a canteen and ThyroSafe tablets.

And you get all of this in an easy-to-carry leg mount. Made of durable rip-stop nylon, this tactical bag will stay effective in the fight while you gather your wits and make moves.

Final Words

Some survivalists may struggle to appreciate the danger or complexity of this subject. After all, we like to tackle threats head-on or evade them entirely.

It’s a new world, though, and something like this doesn’t exactly show up on the radar too often. Thankfully, that’s what MIRA Safety is here for–to keep you safe and informed.

At the end of the day, the reality is that AI tools could make it simple for terrorists (foreign and domestic) to bring devastation to our doorstep. Accordingly, it’s best to stay informed and ready to respond with MIRA Safety’s line of high-quality equipment and weekly newsletters.

Oh, and the secret to great macarons is to grind your own almonds–store-bought ones suck.

Warrior Bear.  (Image courtesy of Hotpot.AI)

Frequently Asked Questions

What is Edge AI?
What is RFdiffusion?
What is bioterrorism?
Who invented ChatGPT?