Gillian Smith is an assistant professor at Northeastern University in the art+design and computer science departments. Tanya Short is Creative Director at Kitfox Games and co-coordinates Pixelles Montreal. Amanda Phillips is the IMMERSe Postdoctoral Fellow at the University of California, Davis. Michael Cook is a Senior Research Fellow at Falmouth University.
Procedural content generation (PCG) is often said to hold a great deal of promise in game design and development, especially in indie and experimental games. The ability to have the computer act as an on-demand game designer affords exciting new gameplay possibilities: games that change in response to the player’s actions, that provide new material for players to experience, and even software that thinks, creates, and reflects about game design all on its own. This promise, perhaps combined with the simple joy of “making something that makes something”, has led to a growing community spanning industry and academia: novice and student designers, developers from studios of all sizes, and academic researchers. PCG work spans a wide variety of technical approaches and design roles, from relatively simple algorithms such as template text generation in Twine to complex and experimental artificial intelligence systems (Smith 2014, Togelius et al. 2011).
When building systems that share or even entirely adopt the role of a designer for a game, however, the capability to reason about cultural context is entirely lost. At best, it sits implicitly in the code and the data; at worst, it goes entirely ignored and communicates an idea at odds with the maker’s intent. Though the human designer may have their own intent for the kinds of content or games their system should generate, it is challenging to fully express the constraints, rules, and context needed to generate content that is sufficiently varied for the overall game, valid such that it is even playable, and consistent with the messaging its creator desires. Designing generative systems can require human designers to deeply confront their own implicit biases and understand how to formally express, in code, the full generative space of acceptable content the system should create. For example, consider a character generator whose names are assembled from a gender-partitioned list of constituent name parts. This simple act, born from the common PCG method of specifying the valid subcomponents of what should be built, partitioning them such that their recombination will always be valid, and then randomly piecing those parts together at runtime, communicates the implicit biases of the maker (including a declaration of the gender binary and a statement that names should conform to those genders), and those biases are then cashed out in every character the system generates.
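To make this concrete, a minimal sketch of such a name generator might look like the following. The name parts, the two-way partition, and the coin-flip are our own illustrative assumptions, not drawn from any particular game:

```python
import random

# A minimal sketch of the gender-partitioned name generator described above.
# The name parts and their partitioning are illustrative assumptions.
MASCULINE_PARTS = (["Bor", "Gar", "Thal"], ["ik", "und", "on"])
FEMININE_PARTS = (["Ael", "Mir", "Syl"], ["a", "wen", "ia"])

def generate_character():
    # The gender binary is baked in at the representation level: every
    # character is coin-flipped into one of exactly two partitions.
    gender = random.choice(["male", "female"])
    prefixes, suffixes = MASCULINE_PARTS if gender == "male" else FEMININE_PARTS
    # Recombination within a partition is "valid" by construction, so every
    # generated name silently conforms to its assigned gender.
    return {"name": random.choice(prefixes) + random.choice(suffixes),
            "gender": gender}
```

Nothing in this code announces a position on gender, yet every character it ever produces will enact one.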
PCG systems convey meaning whether we intend them to or not. Much of the power and compelling nature of PCG lies in emergent behavior: relatively simple rules or heuristics that lead to complex, emergent results. The creation of a PCG system is equivalent to building a formal model of design theory. These models will, necessarily, prioritize some modes of thinking and ignore others. They are a cleaner approximation of the messy and murky reality of game design, and an opportunity for designers to think more deeply about their own commitments and formalize them.
For the remainder of this article we present four separate, but complementary, viewpoints on the relationship between computers and generative systems, their authors, and feminism. Each viewpoint is from one of the authors of this article, based upon the remarks they each gave at the Different Games conference in 2015. Each viewpoint answers the question: can computers be feminist?
Lady Brains (Amanda Phillips)
Of course computers can be feminist! We are just limited by imagination. As the cultural theorist in this group (rather than a computer scientist), I am mainly interested in how our ability to imagine computers (in, for example, fiction) informs the way we understand what computers can do. Our design of hardware, software, and computational systems is limited by both imagination and material constraint.
Historically, computers have been number crunchers, extremely skilled at making calculations such as those needed for military operations. Computers perform more complicated operations by following computational models that break the operations down into simpler, concrete tasks. To model climate change, for example, a designer might instruct the computer in different tasks related to carbon emission rates, deforestation, population growth, and more. Complex simulations rely on humans to determine what types of behaviors are important for the machine to model. In the case of intelligence, one of the first standards for computer mimicry was put forth by Alan Turing: can a computer, in conversation with a human, convince them that it is a human?
It’s easy to assume that because a computer is doing the calculations, the simulation must be correct. But think about all the assumptions that lie underneath such a seemingly simple question. Alison Adam unpacks some of this, suggesting that the leisure activities of computer scientists, such as chess and logic puzzles, likely shaped how they interpreted “intelligence.” This, in turn, shaped the questions they asked and the approaches they took to the computational problem of simulating intelligence. It may be difficult to recognize something like computer intelligence as masculinized by default, but we know a feminine AI when we see one.
I’m interested in how the gendering of computer intelligence gets explored in our cultural texts. GLaDOS from Portal is a good example. As a feminine artificial intelligence who is also a scientist, she has flipped the script on two rather masculinized domains. The result? She is a sadistic monster who tortures and kills humans for fun. The character draws heavily from HAL 9000, but their gendered voices have more than an aesthetic impact on the text. Take, for example, how they die: HAL calmly pleads for his life as he slowly drifts into unconsciousness, while GLaDOS cycles through wildly animated emotional states, lying, humiliating, and intimidating in an effort to stop the gamer from pulling her apart. Femininity renders this artificial intelligence irrational and incoherent rather than truly intelligent.
What seems like a simple deviation in personality indicates, to me, an anxiety about feminine intelligence that Jack Halberstam expressed decades ago when writing about the entwinement of Turing’s life and sexuality: “The fear of artificial intelligence, like the fear of homosexuals infiltrating the secret service, was transformed into a paranoid fear of femininity” (444). GLaDOS embodies this fear, but also serves as a convenient narrative device to contain it. Perhaps we can understand monsters a bit better than the implications of feminine intelligence, after all.
Make Gender Matter (Tanya Short)
Most simulations that involve humans gender their characters, and some also address race and ethnicity, but almost all of them seem to decide that sexism and racism either do not exist or are not meaningful factors of interest to their observers/players. Gender and race are treated, at best, as purely a matter of appearance: different kinds of characters are used, and players are given customization options so that a character can look like them, but not necessarily act like them.
Games can involve generating people at varying levels of detail: from names of incidental characters that flash across the screen, to simple pictorial avatars, to fully realized characters with names, bodies, actions, and dialog. These characters are typically created with a gender, even if it is only implied through clothing choice and appearance, but that gender rarely makes a difference for how the player will interact with them. Gender is typically a Boolean variable that impacts appearance and the use of gendered pronouns, but has no meaningful impact on the underlying system or simulation. Any difference in behavior must be “read in” by the player, rather than acted out by the character. For example, Mass Effect’s Commander Shepard can be set to male or female (impacting primarily body composition and voice), but the choice makes no systemic difference in the game beyond a change of pronouns in dialog. The player can also choose a skin color for their character, but this is a far cry from a racial and cultural identity that affects how other characters interact with Shepard (except, as Kinzel notes, occasional changes in dialog).¹
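In code, the pattern described above, gender as a single cosmetic bit, might be sketched like this. The class, field, and function names are our own illustrative assumptions:

```python
import random
from dataclasses import dataclass

# A schematic sketch of "Boolean gender": one bit that feeds cosmetic
# lookups (pronoun, character model) but never the simulation itself.
@dataclass
class NPC:
    name: str
    is_female: bool  # gender reduced to a single cosmetic flag

    @property
    def pronoun(self) -> str:
        return "she" if self.is_female else "he"

    @property
    def model(self) -> str:
        return "body_female" if self.is_female else "body_male"

def negotiate(npc: NPC, player_charisma: int) -> bool:
    # The underlying system never consults npc.is_female: flipping the bit
    # swaps the pronoun and the mesh, but never changes an outcome.
    return random.random() < player_charisma / 10
```

Any behavioral difference between two NPCs that differ only in that flag exists solely in the player’s head.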
Gender is seen as “important” in games, as a key identifier for a character, but it is rarely actually important to how the game plays. The Sims has deep simulations of social interaction, yet there is no difference between male and female Sims aside from women’s ability to become pregnant. Male and female Sims can dress identically, behave identically, and have the same interactions with and reactions from the surrounding world. The Sims depicts a post-sexist society that does not match our own, leaving human players to “fill in the gaps” of gender through the stories they create in the game.
Designers of generative and simulation systems dance around questions of race and gender, treating them more often as window dressing than as meaningful contributors to identity. In doing so, the designer “cheats” their way out of hard decisions, forcing the player’s knowledge, assumptions, and societal context to do the hard work rather than making the system more informative and meaningful. We need a future in which a system never generates elements, especially ones as important as gender identity, sexuality, and race, that do not significantly influence events in the system.
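By contrast, a sketch of the alternative, a generated identity trait that feeds back into the simulation, might look like the following. The factions, traits, and numbers are illustrative assumptions, not a claim about any existing game:

```python
import random

# A sketch of a generated identity trait with systemic consequences.
# The factions and price modifier are illustrative assumptions.
def generate_npc() -> dict:
    return {
        "gender": random.choice(["woman", "man", "nonbinary"]),
        "faction": random.choice(["matriarchal_guild", "open_guild"]),
    }

def trade_price(npc: dict, base_price: float) -> float:
    # Here identity changes outcomes: the guild's politics are simulated,
    # so an NPC's gender is read by the system, not just by the player.
    if npc["faction"] == "matriarchal_guild" and npc["gender"] != "woman":
        return base_price * 1.2  # outsiders pay a premium
    return base_price
```

Whether such rules are desirable is precisely the hard design decision that the window-dressing approach avoids.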
The Need for Empathy and Intent (Gillian Smith)
Computers cannot, as we currently conceptualize them, be “feminist”—they are entirely reliant upon their human programmers to tell them what to do and how to act. When a system creates “bad” content and surprises the human who created that system, it is not sufficient to shrug our shoulders and blame the machine. It is instead the responsibility of the human creator to create a generative system that can be perceived as feminist by others.
Any generative system defines two formal theories: the first, of how to perform design; the second, of the nature of the product that will be generated and how the player should receive and interact with it. The ways in which a computer designs content for games are quite varied. The simplest form involves controlling the knowledge representation layer: specifying the bits that get re-configured into larger bits. This form of content generation embeds a theory that creativity and intelligence are simply a matter of recombining already-known elements, and it often requires that the computer also take on the role of a critic who can judge whether or not the produced content is desirable. It risks human players finding patterns in the generated content and breaking the illusion of creativity. A more convincing, but more complex, set of methods reduces the number of pre-authored pieces and instead focuses on embedding design knowledge about the domain into the algorithm so that it can make intelligent choices during construction. There are also systems that rely upon machine learning and “big data” to create content from examples, yet these systems typically have no semantic information to fall back on: they can create, but not understand what they have created.
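The simplest, recombination-plus-critic form described above can be sketched in a few lines. The room templates and the acceptance test are our own illustrative assumptions:

```python
import random

# A minimal generate-and-test sketch: pre-authored pieces are recombined
# at random, and a "critic" filters the results.
ROOM_TEMPLATES = ["entrance", "corridor", "treasure", "trap", "boss"]

def generate_level(length: int = 5) -> list:
    return [random.choice(ROOM_TEMPLATES) for _ in range(length)]

def critic(level: list) -> bool:
    # The critic encodes the designer's theory of desirability: here, a
    # level must open at an entrance and contain exactly one boss room.
    return level[0] == "entrance" and level.count("boss") == 1

def generate_acceptable_level() -> list:
    # Resample until the critic approves. Everything the system can ever
    # express is fixed by the templates and this acceptance test.
    while True:
        candidate = generate_level()
        if critic(candidate):
            return candidate
```

The generative space here is exactly the set of room sequences the critic will accept; the system can surprise us only within it.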
Meaning in games emerges when players interact with these procedural systems, when a player not only sees what has been created but interprets that content, makes choices about it, and produces a narrative to explain what they are seeing. When we create generative systems, we are actually creating meaning-making systems, with players interpreting both the process the system follows (especially if it involves player interaction to co-create the content, as in Spore) and the products it creates. Yet the interpretations made by players are typically not intentionally produced, or even understood, by the machine. Whereas the human author of a book may make intentional choices to influence reader interpretation of underlying messages, a machine is not capable of making such choices because it does not understand the broader cultural context its players inhabit. Machines lack the ability to empathize with the human player, and with it the ability to communicate messages based on shared experiences. Any semantic information about the actual nature of the content being created is typically tied up in the mind of the creator, who must consciously reflect upon it and find ways to imbue the system with it.
Consider an avatar generator that can randomly create a broad range of pixelated characters. It must be told by its creator what it means to be a human-like avatar: for example, the creator must explicitly represent to the machine that avatars have heads, arms, legs, bodies, clothes, hairstyles, and skin colors (and indeed, the machine likely does not understand what a “head” is, semantically, only that it sits atop the body). Yet this decision in itself conveys a message that avatars represent only able-bodied people. The generative space of the system is entirely controlled by the decisions of its creator, and it is highly sensitive to changes: it is easy to unintentionally create a system that generates very few people of color relative to those who read as white, or that creates more characters in feminine clothes than masculine ones.
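That sensitivity is easy to reproduce: a single weight buried in an asset table skews the entire generative space. The palette and weights below are illustrative assumptions:

```python
import random
from collections import Counter

# An illustration of how one quiet authoring decision skews demographics.
# Suppose the lighter tones happened to get three asset variants each and
# the darker tones one; nobody explicitly "decided" the resulting ratios.
SKIN_TONES = ["pale", "light", "tan", "brown", "dark"]
WEIGHTS = [3, 3, 1, 1, 1]

def generate_avatar() -> dict:
    return {"skin": random.choices(SKIN_TONES, weights=WEIGHTS)[0],
            "hair": random.choice(["short", "long", "curly"])}

counts = Counter(generate_avatar()["skin"] for _ in range(10_000))
print(counts)  # the two lightest tones dominate roughly two to one
```

No single line of this program states a racial politics; the skew is a property of the system as a whole.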
Creators of generative systems are performing a careful magic trick: convincing the player who receives and interacts with the content that it was made by a human-level intelligence, when it was actually made by an explainable algorithm. They must build the system to act as though it has empathy and intent that are actually absent. Indeed, they have a responsibility to create an artificial intelligence that is simultaneously the least harmful and the most convincing possible: to ensure that the content created by their system is appropriate, and also to convince users of that system that it was deliberately and intentionally designed.
Viewer Interpretation Matters (Michael Cook)
Whether or not a computer is feminist is irrelevant: what is important is human perception. Our relationship with intelligent software is a strange mix of reality and fiction—we like to pretend that software is intelligent, to talk to it and personify it, to read into it reasons for why and how it acts. This is true even if the piece of software doesn’t claim to be intelligent: a word processor or an on-screen calculator can be seen to have a personality, based on its design, reliability, performance, and purpose.
When software is intended to be perceived as creative and as having some measure of autonomy, as procedural generators are, its author is letting it finish a piece of art that they started themselves, complete with making politically charged decisions. Whether the author of the system, or even the system itself, recognizes a decision as political is irrelevant: if a player reads politics into it, even knowing that the system is a machine acting outside of human cultural context, then the politics are present. The viewer will read intention and belief into the system even if the system has none.
The software we create today can last a long time, albeit with maintenance needs that depend on the platform it was built for, and we are facing a future in which the creative software we author can go on creating content for games and other artworks for as long as it has hardware to run on: far longer than a human typically could, or would even want to. We usually read a human artist’s body of work against the cultural context in which it was originally made. We are approaching a future in which machine artists continue their work far into the future while remaining embedded forever in the cultural context they were originally created in. Software can travel the world, spreading the ideas given to it by its creator far and wide, and can continue to do so long after the creator’s death: not only through the continued circulation of existing work, but by continuing to act as an extension of the creator in perpetuity.
Diversity in software comes, in part, from diversity in the creators of that software. Yet generative software, a powerful tool for expression, is often locked away behind a large technological barrier. The PCG community needs not only to ensure that its systems communicate what their makers intend, but also to make this form of communication accessible to people without a background in software development. The power of PCG should not be restricted to those with backgrounds in programming and artificial intelligence.
Works Cited
Adam, A. 1998. Artificial Knowing: Gender and the Thinking Machine. Routledge.
Belman, J., Nissenbaum, H., Flanagan, M. and Diamond, J. 2011. Grow-A-Game: A Tool for Values Conscious Design and Analysis of Digital Games. Proceedings of the Digital Games Research Association Conference (2011).
Bogost, I. 2010. Persuasive Games: The Expressive Power of Videogames. The MIT Press.
Halberstam, J. 1991. Automating Gender: Postmodern Feminism in the Age of the Intelligent Machine. Feminist Studies 17.3 (Autumn): 439-460.
Smith, G. 2014. Understanding Procedural Content Generation: A Design-Centric Analysis of the Role of PCG in Games. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Toronto, Canada, Apr. 2014).
Togelius, J., Yannakakis, G.N., Stanley, K.O. and Browne, C. 2011. Search-Based Procedural Content Generation: A Taxonomy and Survey. IEEE Transactions on Computational Intelligence and AI in Games 3, 3 (Sep. 2011), 172–186.
¹Artist Ronald Wimberly has also written on the relationship between skin color and race in comic books. https://thenib.com/lighten-up-