Wednesday, February 21, 2024

AI is the newest revolution in the information technology space and has already proven itself a very useful tool for lots of people doing lots of things, including the most repetitive, often boring, tasks that employees are asked to do on a daily basis. Pew Research Center published a graphic (right) illustrating which types of work are coming into contact with AI: all of them digital-heavy jobs that involve a lot of computer use. This poses a threat, though, to people whose jobs are largely composed of these kinds of tasks, including many who work in the video game industry, meaning that while AI might offer some good, it also comes with some bad and certainly some ugly.

The Good:

AI offers some benefits to video game developers, such as improving graphics and implementing adaptive gameplay. AI can generate very complex textures and environments that enhance the realism of gameplay, like foliage or water that reacts realistically to a player moving through it, as well as hair, a notoriously difficult aspect of animation. Additionally, AI can be integrated into non-playable character (NPC) behaviors in order to create a truly personalized player experience. Adaptive features of AI NPCs include enemy learning, which would allow enemies to learn from player fighting tactics and adapt accordingly for a more difficult, realistic battle; friendly tailoring, which would make friendly NPCs more helpful to the player; and storyline tailoring, in which the game learns which aspects of the plot interest the player most and then tailors the plot to be as engaging as possible for any given player.

A prominent example of AI-enhanced NPCs comes from a Rockstar leak suggesting that Grand Theft Auto VI will include NPCs armed with AI in order to make them more interesting and give them new abilities, such as being able to react to the player's smell. An added bonus of these features is that AI would be able to balance difficulty with entertainment, making the game just hard enough to keep players motivated but not so hard that they become discouraged and stop playing.

The Bad:

The drawbacks to implementing AI, however, are numerous. On the industry side, although large studios may be looking to cut costs by replacing salaried designers and writers with algorithms, the upkeep of video game AI may well be more expensive once the requisite highly skilled labor and pricey software and hardware are taken into account. Beyond cost, there is the ethical question of whether to put developers out of jobs, especially considering the human element that would be lost in doing so. Much like AI art and writing, which always feel a little ‘off,’ AI video games will fall into the same trap. Artistic generative AI is trained on existing material, making it incapable of being truly innovative; even though it takes in human art and can emulate it, it will never possess the human context that goes into the creative process and therefore, by my definition, cannot actually create art of any kind. Video game enthusiasts look for and appreciate the human element that elevates the story of a game, and studios are mistaken if they think no one will notice when it’s gone.

AI learning in video games also involves vast amounts of private data collection. In order to create a truly personalized gaming experience, the algorithm collects player data and synthesizes it into a personality to be catered to, meaning that player data profiles are being stored and could be used with questionable intentions or sold to third parties. AI algorithms, because of the way they’re built, are also susceptible to new kinds of cheat programs that would threaten the integrity of the game.

Lastly, although computers generally imply objectivity and true randomness, they are still built by people, who possess neither. AI programs written by people who hold explicit or even implicit biases against certain groups are extremely likely to carry traces of those biases. We’ve seen this happen in recruitment scenarios, where applicants of a certain race or gender are preferred by the algorithm, and in generative art AI, which has been shown to be biased in its training and, therefore, in its depictions of Black people as opposed to white people. In video games, AI could perpetuate harmful stereotypes in similar ways, making gameplay a potentially destructive experience.

AI depictions of Black women smiling from Google Colab, 2020

The Ugly:

Generative AI seems to aim to replace, or at the very least devalue, the important work of artists, including screenwriters, digital artists, video game designers, and other professionals who rely on unique creative talents, making it an ethical question and even a legal one. Training generative AI on unlicensed content scraped from the web, then emulating it without credit to the artists, is an intellectual property issue and could be a copyright violation. Midjourney, DeviantArt, Stability AI, and Runway AI are in the middle of a lawsuit right now over exactly this issue: artists allege that the AI art programs used art found online for training without the artists’ consent, and argue that artists must be credited for the use of their work, even by AI. A statement from the artists demonstrates the strong sentiment fueling their legal battle: “Though Defendants like to describe their AI image products in lofty terms, the reality is grubbier and nastier. AI image products are primarily valued as copyright-laundering devices, promising customers the benefits of art without the costs of artists.”

Friday, February 16, 2024


“Privacy” has become something of a buzzword over the last decade or so, as more and more scandalous data leaks and online threats appear in the news. I was born right around the time the internet was hitting that exponential curve really hard and everything was becoming increasingly internet-based. I grew up without any idea of what life without this handy tool would be like, and why would I want to? My fact-checker tendency could never.

me, age 8, messing with dad's stuff

When I started running across these Terms of Service contracts as a kid, I would always ask my dad; he works in IT, and I always deferred to his judgment before clicking anything I didn’t understand. He’d usually give me some version of, “Um, *sigh,* yeah, just agree,” and I’d click confidently onward, comforted by the go-ahead of the smartest person I knew. Little did I know that his response was an over-consolidated product of his knowledge about our online privacy, or lack thereof, which I wouldn’t learn about until much later.

Being the apathetic consumer that I’ve thus far established myself to be, even after discovering some of the disturbing facts of the Terms of Service issue, I picked my battles in a similar fashion to my father. My need or desire to use an online service almost always outweighs my hesitation to sign a contract I’m not likely to understand and even less likely to read. I check the box and move on, choosing not to think about it in an effort to preserve what little ignorance-born bliss I still have. In doing so, I’ve likely opened myself to many an online threat, but, as I see it, if a massive medical records storage company can lose all my data to “unknown entities” and just send me a mass announcement effectively saying, “oops,” then how am I expected to protect myself in a meaningful manner?

Luckily, this issue is very well known, and there are resources to help lazy people like me, such as Terms of Service; Didn’t Read, which breaks down Terms of Service agreements for popular sites and rates them on their relative shadiness, and haveibeenpwned.com, which lets you know if your email address has been involved in data breaches (this site sounds like a potential scam itself, but it’s actually safe and endorsed by many reputable sources, including the New Jersey Cybersecurity & Communications Integration Cell (NJCCIC) and the good people of Reddit).

The TED Talks included in the class material as well as the ones I watched for further data collection left me with a few distinct messages:

  1. We do not have control of our data and therefore do not have control of our lives.
  2. It should be the responsibility of corporations providing internet and online services to make their Terms of Service linguistically accessible and more respectful.
  3. Since these corporations aren’t going to voluntarily respect our data, it’s fallen to us to take steps to protect ourselves.

Our data didn’t use to belong to every service with which we interacted. In their TED Talk, Kade Crockford used the analogy of curtains, fences, and door locks for how we used to protect our privacy. Online, we aren’t offered the luxury of digital curtains, and anyone can peek through our windows or, worse, plaster posters around town sharing our private information with strangers. In her Talk, Darieth Chisolm discusses her experience with ‘revenge porn,’ which she more accurately describes as ‘digital domestic violence.’ Her agonizing struggle to have the materials taken down and to see just punishment for the worthless waste of space who posted her intimate photos online was far more complex, exhausting, and expensive than it should have been.

Chisolm’s description of the lack of legal measures in place for our government to handle these kinds of situations is infuriating and only underscores the growing sense that not only are we not in control of our data, but the government has zero interest in helping us protect it. In fact, it profits from our lack of online control: Christopher Soghoian explains in his Talk that our phones were, back in the day, wired first and foremost for wiretapping. Since the first days of the telephone, the government has had its ear to the door of civilians’ private lives, regardless of criminal suspicion.

I consume an unholy amount of true crime and consider myself pretty familiar with common tactics and tools used by local police, as well as the FBI, in criminal investigations. Sometimes a cell tower dump, pinging a cellphone location, or old-school wiretapping is the key to bringing a missing person home or the linchpin of the case against a violent criminal. In those cases, I always find myself glad to know that police are using these tools responsibly to put horrible people away. In some way, it almost feels dutiful, patriotic even, to know that my privacy is an illusion for the sake of catching murderers and rapists. Crockford and Soghoian both touched on this, emphasizing the message that safety and privacy are not mutually exclusive. We’ve been led to believe that in order to keep us safe, the government needs us to forfeit our privacy, and that by doing so we serve a greater good. They both insist that this is not true, and Soghoian even recommends specific apps, such as FaceTime, iMessage, and WhatsApp, for a more encrypted communication experience.